Exploring musical data sources

One of the key requirements for our social music analysis widget will be access to a large collection of music data. Such a requirement raises a number of important questions:

  • How should we define a datum of music? What kind of items do we want to find in a music database?
  • How should an item of music be represented? Sound? Notation?
  • What counts as 'large'? What size of collection would be large enough?
  • From where can music data be sourced?
  • Who owns music data? And will they let us use it?
  • How can we access any available music data from a software system such as this widget?
  • How can our users reliably identify the music datum on which they want to comment?

Having already identified the PRAISE project's Music Circle system as a platform for developing this widget, we're now exploring integration of music data sources. For the PRAISE team, the datum of music is not too complicated: their users upload tracks which are recordings of themselves performing a piece. Then their peers comment on that performance by attaching comments to the track. These tracks are distinct and top-level entities; they don't need to have any relationship to each other (even if, for example, they are performances of the same piece) and they don't need to be arranged in any kind of hierarchical structure.

In our implementation of Music Circle, instead of requiring users to upload tracks, we'd like to allow them to comment on musical works (as an abstract entity, for example Beethoven's Fifth Symphony), or on specific performances of musical works (for example, Roger Norrington's 1988 recording of Beethoven's Fifth Symphony), or on specific editions of musical works (for example, Jonathan Del Mar's 1996 edition of Beethoven's Fifth Symphony). Enabling this will require being able to capture works, performances, and editions as entities, something for which a project called the Music Ontology may be useful. The Music Ontology was conceived by Yves Raimond and extends the International Federation of Library Associations' Functional Requirements for Bibliographic Records (FRBR), defining musical equivalents for some key bibliographic concepts:

  • mo:MusicalWork extends frbr:Work, the abstract notion of an intellectual creation, for example Beethoven's Fifth Symphony
  • mo:MusicalExpression extends frbr:Expression, a particular way of realising an intellectual creation, for example a printed score of Beethoven's Fifth Symphony, or a performance of Beethoven's Fifth Symphony
  • mo:MusicalManifestation extends frbr:Manifestation, the physical embodiment of an expression, for example the recording of Norrington's 1988 performance of Beethoven's Fifth Symphony
  • mo:MusicalItem extends frbr:Item, which represents a single, real-world exemplar such as an actual copy of a book or a CD, for example my CD of the recording of Norrington's 1988 performance of Beethoven's Fifth Symphony

So the entities on which we would like our users to be able to comment are mo:MusicalWorks and mo:MusicalManifestations. However, making a timed comment on an mo:MusicalWork requires that the work be renderable, so that a user can select the part of it on which to comment. So even a user who is not concerned with commenting on a particular mo:MusicalManifestation (edition or performance) will still need to be presented with some mo:MusicalManifestation (call it the "default"). Furthermore, a database of mo:MusicalManifestations in practice has to contain an mo:MusicalItem for each mo:MusicalManifestation anyway, because the mo:MusicalManifestation itself remains an abstract entity. Our data requirement then becomes: all the mo:MusicalWorks our users want to discuss(!); at least one default mo:MusicalManifestation for each work, plus any specific mo:MusicalManifestations our users want to discuss; and an mo:MusicalItem for each mo:MusicalManifestation.
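One way to picture this data requirement is as a minimal object model. This is a hypothetical sketch in Python; the class and field names are our own shorthand for the mo:/frbr: concepts above, not part of any real Music Ontology library:

```python
from dataclasses import dataclass, field

@dataclass
class MusicalItem:
    """A concrete exemplar (mo:MusicalItem), e.g. an audio file we can render."""
    uri: str

@dataclass
class MusicalManifestation:
    """A specific edition or recorded performance (mo:MusicalManifestation)."""
    label: str            # e.g. "Norrington, 1988"
    item: MusicalItem     # every manifestation needs at least one item

@dataclass
class MusicalWork:
    """The abstract work (mo:MusicalWork), e.g. Beethoven's Fifth Symphony."""
    title: str
    manifestations: list = field(default_factory=list)

    @property
    def default_manifestation(self):
        # The renderable fallback used when a user comments on the work itself.
        if not self.manifestations:
            raise ValueError("a work needs at least one manifestation")
        return self.manifestations[0]

# Usage sketch:
fifth = MusicalWork("Beethoven's Fifth Symphony")
fifth.manifestations.append(
    MusicalManifestation("Norrington, 1988",
                         MusicalItem("file:///norrington-1988.flac")))
print(fifth.default_manifestation.label)  # -> Norrington, 1988
```

The point of the `default_manifestation` property is exactly the constraint argued above: a work with no renderable manifestation cannot host a timed comment at all.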

The question then is, from where can we source this kind of musical data?

Paul Lamere has recently published a list of online music APIs, many of which are APIs over data sets. The coverage of these data sets is quite skewed towards popular music, which has the advantage that mo:MusicalWorks generally have just one canonical (recorded performance) mo:MusicalManifestation. This makes the identity of mo:MusicalWorks a more straightforward issue. Even where art music is covered, it's generally at the mo:MusicalManifestation level; we find databases of recordings. These databases don't provide uniquely identifiable musical works. For example, browsing MusicBrainz's works beginning with 'S', at around page 498 (as of today), we find lots of works which are really movements from Haydn symphonies, and then often the symphony itself (e.g. "Military") listed as a work alongside its movements. There are even things such as excerpts (e.g. from "The Clock") being listed as musical works; perhaps it's possible to make a case for this? Or perhaps such an excerpt would be better classified as an mo:MusicalExpression? (It should be noted, of course, that MusicBrainz does allow editing of its data and there are many enthusiastic contributors to its database.) What's clearly lacking here is a reliable source of mo:MusicalWorks.
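For a flavour of what's available today, MusicBrainz does expose its works through its version-2 web service, which supports JSON responses. A minimal sketch of building a work-search request (the search term is purely illustrative, and a real client should also send a descriptive User-Agent and throttle its requests, as MusicBrainz asks):

```python
from urllib.parse import urlencode

MUSICBRAINZ_WS = "https://musicbrainz.org/ws/2"

def work_search_url(term: str, limit: int = 10) -> str:
    """Build a MusicBrainz work-search URL asking for a JSON response."""
    params = urlencode({"query": term, "limit": limit, "fmt": "json"})
    return f"{MUSICBRAINZ_WS}/work?{params}"

print(work_search_url("Symphony No. 5 Beethoven"))
```

Fetching that URL returns matching work entries with MusicBrainz identifiers, which is exactly the kind of stable identity a comment would need to hang off; the problem described above is not access but the inconsistent granularity of what counts as a "work".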

If we can reliably turn a user's search terms into a musical work identity, the next question is from where we can source an appropriate representation of that work. Ideally we'd be able to get a waveform, and it would be fantastic if we could get notation (although notation for a whole work raises problems of how to render it in a manageable way). We'd also ideally have audio playback, and random access would be great. Amongst Lamere's list of audio content APIs is Tomahawk, which is reasonably good at resolving strings identifying musical works into links to audio sources for recordings of those works from a variety of sources. One possibility for waveforms might be rendering them in the browser using something like Peaks.js. Lamere doesn't list musescore.com, a database of scores uploaded by users of the open source MuseScore notation editor. The coverage is quite small, but it's continually growing. For the core classical canon, we can check out MuseData, published by the Center for Computer Assisted Research in the Humanities at Stanford.
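The resolution step itself, turning a work identity into a playable source, essentially amounts to trying a chain of candidate sources in order until one answers, which is the Tomahawk-style pattern. A hypothetical sketch of that pattern (the source functions and URLs here are invented stand-ins, not any real API):

```python
from typing import Callable, Optional

# Each resolver takes a work title and returns a URL to audio, or None.
Resolver = Callable[[str], Optional[str]]

def resolve(work_title: str, resolvers: list) -> Optional[str]:
    """Return the first audio URL any resolver in the chain can supply."""
    for resolver in resolvers:
        url = resolver(work_title)
        if url is not None:
            return url
    return None

# Illustrative stand-ins for real per-source resolvers:
def local_library(title):
    return None  # nothing in the (empty) local collection

def streaming_source(title):
    # Invented URL scheme, for illustration only.
    return f"https://example.org/audio/{title.replace(' ', '_')}.mp3"

print(resolve("Symphony No. 5", [local_library, streaming_source]))
```

Ordering the chain encodes a preference (e.g. prefer a locally hosted default mo:MusicalItem, fall back to an external stream), which fits the "default manifestation" requirement discussed earlier.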

Even once we've made decisions about data sources, we'll still have a lot of software engineering to do in order to manage ingestion and representation of the data. We'll be exploring hypermedia API design to this end soon...