MuseNet, developed by OpenAI, is a deep neural network that can generate 4-minute musical compositions with up to 10 different instruments. It is trained on a variety of data from sources like ClassicalArchives, BitMidi, and the MAESTRO dataset. Training consists of predicting the next token in a sequence, the same objective used by GPT-2.
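To make the next-token objective concrete, here is a deliberately tiny sketch: a bigram counting model (not MuseNet's transformer) trained on hypothetical note tokens, illustrating what "predict the next note in a sequence" means.

```python
from collections import Counter, defaultdict

# Toy illustration, not MuseNet's actual model: learn, for each note
# token, which token most often follows it, then predict that successor.
def train_bigram(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the most frequent successor of `token` in the training data.
    return counts[token].most_common(1)[0][0]

# Hypothetical melodies; the pitch names are just illustrative tokens.
melodies = [["C4", "E4", "G4", "C5"], ["C4", "E4", "G4", "E4"]]
model = train_bigram(melodies)
print(predict_next(model, "E4"))  # → G4
```

A transformer does the same thing with a learned probability distribution over the whole vocabulary instead of raw successor counts, which is what lets it capture long-range musical structure.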
The tool treats every combination of notes played simultaneously as a 'chord' and assigns each one a token. During training, composer and instrument tokens are prepended to each sample, which lets users steer the type of music generated: MuseNet can blend different styles and instruments while maintaining the structure of a piece.
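A minimal sketch of this encoding scheme, using made-up token names rather than MuseNet's real vocabulary: simultaneous notes collapse into one chord token, and composer and instrument tokens are prepended as conditioning signals.

```python
# Toy sketch of conditioned tokenization (assumed format, not MuseNet's
# actual vocabulary): composer and instrument tokens lead the sequence,
# then each set of simultaneous notes becomes a single chord token.
def encode(composer, instrument, chords):
    tokens = [f"composer:{composer}", f"instrument:{instrument}"]
    for notes in chords:
        # Sort so the same set of notes always yields the same token.
        tokens.append("chord:" + "+".join(sorted(notes)))
    return tokens

seq = encode("chopin", "piano", [{"C4", "E4", "G4"}, {"D4", "F4"}])
print(seq)
# → ['composer:chopin', 'instrument:piano', 'chord:C4+E4+G4', 'chord:D4+F4']
```

Because the conditioning tokens are ordinary vocabulary items seen during training, supplying them at sampling time nudges the model toward the requested style without any change to the architecture.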
What sets MuseNet apart is its ability to smoothly combine different musical styles into novel compositions. Users can choose a composer or style, start from a snippet of a famous piece, or let the AI lead the way entirely. Instrument choices, however, are strong suggestions rather than guarantees: MuseNet generates each note by computing probabilities across all possible notes and instruments, so other instruments can still appear. The system works best with styles and instruments that naturally pair well; unusual combinations are more challenging.
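One plausible way to picture why an instrument request is a suggestion rather than a constraint: the request biases the model's output distribution instead of masking it. The sketch below assumes this mechanic for illustration (it is not OpenAI's code); boosted tokens become more likely, but every token keeps nonzero probability.

```python
import math, random

# Toy sketch (assumed mechanics, not MuseNet's implementation): sample a
# token from softmax(logits), optionally adding a bonus to tokens whose
# name starts with the requested prefix. Hypothetical token names.
def sample(logits, boost=None, bonus=2.0, rng=random):
    if boost is not None:
        logits = {t: l + (bonus if t.startswith(boost) else 0.0)
                  for t, l in logits.items()}
    z = sum(math.exp(l) for l in logits.values())
    r, acc = rng.random() * z, 0.0
    for tok, l in logits.items():
        acc += math.exp(l)
        if r <= acc:
            return tok
    return tok  # guard against floating-point rounding at the boundary

logits = {"piano:C4": 0.5, "violin:C4": 0.4, "piano:E4": 0.1}
rng = random.Random(0)
plain = sum(sample(logits, rng=rng).startswith("violin") for _ in range(1000))
boosted = sum(sample(logits, boost="violin", rng=rng).startswith("violin")
              for _ in range(1000))
# `boosted` should be noticeably higher, yet piano tokens still occur.
```

Because the bonus only tilts the distribution, a requested violin becomes more probable while piano notes remain possible, matching the "suggestion, not guarantee" behavior described above.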