Meta has released its own AI-powered music generator and, unlike Google, has made it openly accessible. Meta's MusicGen tool can turn a text description into roughly 12 seconds of audio, for example an '80s driving piece with heavy drum beats and synth pads. It can also be steered with reference audio, such as an existing song, in which case it tries to follow both the text description and the reference melody.
Meta says MusicGen was trained on 20,000 hours of music, including 10,000 licensed tracks and 390,000 instrument-only tracks. The company has made pre-trained models available to anyone with the necessary hardware, chiefly a GPU with around 16 GB of memory.
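For readers who want to try the released checkpoints themselves, the sketch below shows one way to generate a short clip from a text prompt using Meta's audiocraft library. It is a minimal example rather than an official recipe; the model name, prompt, and output filename are illustrative choices.

```python
# Minimal text-to-music sketch using Meta's audiocraft library (pip install audiocraft).
# Model name, prompt, and output filename are illustrative, not fixed requirements.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pre-trained checkpoint; larger variants need correspondingly more GPU memory.
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=12)  # seconds of audio to generate

# Generate audio from a text description.
descriptions = ["An '80s driving pop song with heavy drums and synth pads"]
wav = model.generate(descriptions)  # tensor of shape [batch, channels, samples]

# Write each clip to disk with loudness normalization.
for i, clip in enumerate(wav):
    audio_write(f'musicgen_sample_{i}', clip.cpu(), model.sample_rate, strategy='loudness')
```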
So how does MusicGen sound? Its output is pleasant enough to listen to, with a quality reminiscent of chiptune music, and arguably slightly better than comparable generators, but not good enough to eliminate the need for live musicians, and the results won't win any awards. Generative music systems such as Riffusion, Dance Diffusion, and OpenAI's Jukebox represent real advances, but they raise ethical and legal questions. Musicians and users alike are uneasy about AI learning from existing recordings and improvised solos to produce similar-sounding output.
"We present MusicGen: a simple and controllable music generation model. MusicGen can be prompted by both text and melody. We release code (MIT) and models (CC-BY-NC) for open research, reproducibility, and for the music community: https://t.co/OkYjL4xDN7" (Felix Kreuk, @FelixKreuk, June 9, 2023)
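As the announcement notes, MusicGen can be conditioned on a melody as well as text. The sketch below shows one plausible way to do this with the melody-conditioned checkpoint in audiocraft; the model name, reference file, and prompt are assumptions made for illustration.

```python
# Melody-conditioned generation sketch with audiocraft; 'reference.mp3' and the
# prompt are placeholder assumptions, not files or settings from the article.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=12)

# Load the reference audio whose melody should guide the generation.
melody, sample_rate = torchaudio.load('reference.mp3')

# Generate audio that tries to follow both the text description and the reference melody.
wav = model.generate_with_chroma(
    descriptions=["A driving '80s pop track with heavy drums and synth pads"],
    melody_wavs=melody[None],          # add a batch dimension: [1, channels, samples]
    melody_sample_rate=sample_rate,
)
audio_write('musicgen_melody_sample', wav[0].cpu(), model.sample_rate, strategy='loudness')
```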
Homemade tracks that use generative AI to produce recognizable sounds have been gaining popularity. Music publishers have been flagging these tracks to streaming partners, citing intellectual-property concerns, and have often prevailed. It remains unclear, however, whether such "synthetic media" music violates the copyright of performers, record companies, and other rights holders. Legal disputes could shape the future of music-generating AI, with potential consequences for artists' rights, particularly where no consent was given.
Meta claims the music MusicGen was trained on is covered by legal agreements with rights holders, including a deal with Shutterstock, and that these agreements do not limit the tool's usage.