---
license: mit
task_categories:
  - text-generation
pretty_name: MelodyHub
size_categories:
  - 1M<n<10M
tags:
  - music
---

Dataset Summary

MelodyHub is a curated dataset used to train MelodyT5, containing 261,900 melodies formatted in ABC notation and sourced from public sheet music datasets and online platforms. It includes folk songs and other non-copyrighted musical scores, ensuring diversity across traditions and epochs. The dataset covers seven melody-centric tasks: cataloging, generation, harmonization, melodization, segmentation, transcription, and variation. These tasks yield over one million task instances, providing a comprehensive resource for symbolic music processing. Each task is presented in a score-to-score format with a task identifier included in the input data. MelodyHub's curation process ensures high-quality, consistent data suitable for developing and evaluating symbolic music models.

ABC Notation

ABC notation is an ASCII-based plain text musical notation system that is commonly used for transcribing traditional music and sharing sheet music online. It provides a simple and concise way to represent musical elements such as notes, rhythms, chords, and more.
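As a quick illustration of the format (a constructed example, not a tune from the dataset), a piece opens with header fields such as X: (reference number), T: (title), M: (meter), L: (default note length), and K: (key), followed by the tune body, where letters denote pitches, digits denote durations, and | marks barlines:

```
X:1
T:Example Tune
M:4/4
L:1/8
K:G
GABc dedB | dedB dedB | c2ec B2dB | A2A2 A2z2 |
GABc dedB | dedB dedB | c2ec B2dB | A2BG A4 |]
```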

For those looking to interact with ABC notation in various ways, there are several tools available:

  1. Online ABC Player: This web-based tool allows you to input ABC notation and hear the corresponding audio playback. By pasting your ABC code into the player, you can instantly listen to the tune as it would sound when played.

  2. ABC Sheet Music Editor - EasyABC: EasyABC is a user-friendly software application designed for creating, editing, and formatting ABC notation. Its graphical interface enables you to input your ABC code, preview the sheet music, and make adjustments as necessary.

To learn more about ABC notation, please see ABC Examples and the ABC Standard.

Melody Curation

The MelodyHub dataset was curated from publicly available sheet music datasets and online platforms, with source files in formats such as ABC notation, MusicXML, and Humdrum. The data curation process included the following steps:

  1. Exclusion of Copyrighted Entries: Entries featuring explicit copyright indicators such as "copyright" or "©" symbols were excluded.

  2. Format Standardization: All data was first converted to MusicXML format for standardization purposes. Subsequently, it was transformed into ABC notation to ensure consistent formatting across the dataset.

  3. Filtering by Musical Complexity: Melodies consisting of fewer than eight bars were omitted from the dataset to maintain adequate complexity and musical richness.

  4. Removal of Non-Musical Content: Lyrics and non-musical content (e.g., contact information of transcribers and URL links) were removed to focus solely on musical notation.

  5. Trimming Rest Bars: Leading and trailing bars of complete rest were removed from each piece to refine the musical content.

  6. Verification of Barlines: Each piece underwent verification for the presence of a final barline. If absent, a barline was added to ensure completeness and consistency.

  7. Deduplication: Entries were deduplicated to prevent redundancy and ensure each melody is unique within the dataset.

Together, these steps ensured the quality and consistency of the MelodyHub dataset, yielding a collection of 261,900 uniformly formatted melodies suitable for training and evaluating symbolic music models like MelodyT5.
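As a rough illustration, several of these steps (copyright screening, bar-count filtering, lyric removal, and final-barline verification) could be approximated directly on raw ABC text along the following lines. This is a minimal sketch under assumed heuristics, not the actual MelodyHub pipeline; the function names and the bar-counting rule are illustrative only.

```python
import re

def passes_basic_filters(abc_text: str, min_bars: int = 8) -> bool:
    """Approximate the copyright and bar-count filters on a raw ABC tune."""
    # Step 1: drop entries with explicit copyright indicators.
    if "copyright" in abc_text.lower() or "\u00a9" in abc_text:
        return False
    # Step 3: require at least `min_bars` bars, estimated by counting barlines
    # in the tune body (i.e., lines that are neither information fields like
    # "K:G" nor % directives/comments).
    body = "\n".join(
        line for line in abc_text.splitlines()
        if not re.match(r"^[A-Za-z]:", line) and not line.startswith("%")
    )
    return body.count("|") >= min_bars

def clean_abc(abc_text: str) -> str:
    """Approximate the lyric-removal and final-barline steps."""
    lines = [
        line for line in abc_text.splitlines()
        # w:/W: fields hold lyrics in ABC notation; drop them.
        if not line.lstrip().lower().startswith("w:")
    ]
    cleaned = "\n".join(lines).rstrip()
    # Step 6: append a final barline if the tune does not already end with one.
    if not cleaned.endswith(("|", "|]", ":|")):
        cleaned += " |]"
    return cleaned
```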

Task Definition

Following the curation of the melody data, the MelodyHub dataset was organized into seven tasks, each presented in a score-to-score format with input-output pairs. Every input in MelodyHub begins with a task identifier (e.g., %%harmonization) specifying the intended task. Below are the definitions of these tasks:
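For instance, a harmonization input begins with its identifier, followed by the chord-free score. The snippet below is a constructed illustration of this layout, not an actual dataset entry:

```
%%harmonization
X:1
L:1/8
M:4/4
K:G
GABc dedB | g2G2 G4 |]
```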

  • Cataloging: This task selects melodies whose metadata includes music-related information fields such as titles, composers, and geographical origins (e.g., C:J.S. Bach, O:Germany). In the input, these information fields are kept but their specific values are removed and their order is randomized; the output contains the corresponding metadata without the musical score.

  • Generation: Here, the input solely consists of a task identifier (i.e., %%generation), while the output comprises comprehensive musical scores. Following TunesFormer, control codes are affixed to all melodies as information fields to denote musical structure information. These codes, namely S:, B:, and E:, signify the number of sections, bars per section, and edit distance similarity between every pair of sections within the tune.

  • Harmonization: This task involves melodies containing chord symbols. The chord symbols are removed from the input, while the original data is retained as the output. An additional information field denoting edit distance similarity (E:) is appended to the output, indicating the similarity between the input and output on a scale from 0 to 10 (no match at all to exact match). Lower similarity values suggest the need for more chord symbols (a sketch of how such input-output pairs can be constructed is given at the end of this section).

  • Melodization: In contrast to harmonization, this task operates inversely and also employs melodies containing chord symbols. The notes in the original score are replaced with rests, and adjacent rest durations are combined. The resultant score, comprising rests and chord symbols, serves as the input. Similar to harmonization, an E: field is added at the outset of the output, with lower values facilitating the generation of more intricate melodies.

  • Segmentation: This task uses melodies in Humdrum format (i.e., from KernScores and the Meertens Tune Collections) that contain curly braces indicating segmentation, as well as voices from the JSB Chorales dataset (four-part compositions) that carry fermatas. These markers are converted into breath marks. The input omits all breath marks, while the output adds an E: field at the beginning to guide the generation of breath marks, with lower values implying that more breath marks need to be added.

  • Transcription: ABC notation is first converted to MIDI and then converted back to ABC. The ABC produced by this round trip loses substantial score information, such as the distinction between enharmonic equivalents, and drops musical ornaments (e.g., trills). The MIDI-converted ABC serves as the input, while the original ABC, with an added E: field, constitutes the output. Lower E: values denote greater discrepancies between the transcribed and input scores, particularly due to missing repeat symbols.

  • Variation: This task centres on data from The Session, wherein each ABC notation file may contain multiple variants of the same tune. Tunes with two or more variations are selected, and every possible pair of variants is used as input and output. The output begins with an E: field indicating the extent of differences between the input and output scores, with lower values suggesting substantial variation between them.

Together, these tasks encompass 1,067,747 instances, spanning analytical to generative challenges in Music Information Retrieval (MIR). This comprehensive dataset serves as a valuable resource for developing and evaluating symbolic music models like MelodyT5.
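To make the score-to-score format more concrete, the sketch below shows how a harmonization input-output pair could be assembled from a chord-annotated ABC tune, as referenced in the Harmonization item above. It is an illustrative approximation rather than the authors' code: the exact similarity measure behind the E: field, its placement, and the helper names are assumptions.

```python
import re
from difflib import SequenceMatcher

def strip_chord_symbols(abc_tune: str) -> str:
    """Remove ABC chord symbols, which are written inside double quotes, e.g. "G7"."""
    return re.sub(r'"[^"]*"', "", abc_tune)

def similarity_field(a: str, b: str) -> str:
    """Map a 0-1 similarity ratio onto the 0-10 scale of the E: field.
    (The dataset uses edit distance similarity; SequenceMatcher is a stand-in.)"""
    return f"E:{round(10 * SequenceMatcher(None, a, b).ratio())}"

def make_harmonization_pair(abc_with_chords: str) -> tuple[str, str]:
    """Build one %%harmonization instance: chord-free input, chord-annotated output."""
    melody_only = strip_chord_symbols(abc_with_chords)
    task_input = "%%harmonization\n" + melody_only
    task_output = similarity_field(melody_only, abc_with_chords) + "\n" + abc_with_chords
    return task_input, task_output

# Hypothetical usage on a two-bar fragment:
fragment = 'X:1\nL:1/8\nM:4/4\nK:G\n"G"GABc "D7"dedB | "G"g2G2 G4 |]'
task_input, task_output = make_harmonization_pair(fragment)
print(task_input)
print(task_output)
```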

Copyright Disclaimer

This dataset is for research use only and not for commercial purposes. We believe all data in this dataset is in the public domain. If you own the copyright to any musical composition in the MelodyHub dataset and have concerns, please contact us at [email protected]. We will address your concerns and take appropriate action if needed.

BibTeX Citation

@misc{wu2024melodyt5unifiedscoretoscoretransformer,
      title={MelodyT5: A Unified Score-to-Score Transformer for Symbolic Music Processing}, 
      author={Shangda Wu and Yashan Wang and Xiaobing Li and Feng Yu and Maosong Sun},
      year={2024},
      eprint={2407.02277},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2407.02277}, 
}