How Meta’s AI Generates Music Based on a Reference Melody

MusicGen by Meta

On June 13th, 2023, Meta (formerly Facebook) made waves in the music and AI communities with the release of its generative music model, MusicGen. The model not only surpasses Google’s MusicLM, launched earlier in 2023, in capability, but is also trained on licensed music data and open-sourced for non-commercial use.

This means you can not only read the research paper (https://arxiv.org/abs/2306.05284) or listen to demos (https://ai.honu.io/papers/musicgen/), but also copy the code from GitHub (https://github.com/facebookresearch/audiocraft) or experiment with the model in a web app on Hugging Face (https://huggingface.co/spaces/facebook/MusicGen).

In addition to generating audio from a text prompt, MusicGen can generate music based on a given reference melody, a feature known as melody conditioning. In this blog post, I will demonstrate how Meta implemented this functionality in their model. But before we delve into that, let’s first look at how melody conditioning works in practice, as sketched in the example below.
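To make this concrete, here is a minimal sketch of both modes, text-only generation and melody conditioning, using the audiocraft library, adapted from the usage examples in the GitHub README. The model identifier and parameter names reflect the README at the time of writing and may change in later versions; the audio file path is a placeholder you would replace with your own reference melody.

```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the melody-conditioned variant of MusicGen.
model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=8)  # generate 8 seconds of audio

# Mode 1: plain text-to-music generation.
wav = model.generate(['an upbeat 80s synth-pop track'])

# Mode 2: melody conditioning — guide generation with a reference melody.
# Placeholder path; point this at any audio file containing the melody.
melody, sr = torchaudio.load('./assets/reference_melody.wav')
wav = model.generate_with_chroma(
    descriptions=['an upbeat 80s synth-pop track'],
    melody_wavs=melody[None],  # add a batch dimension: [1, channels, samples]
    melody_sample_rate=sr,
)

# Save each generated sample as a WAV file with loudness normalization.
for idx, one_wav in enumerate(wav):
    audio_write(f'output_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```

The melody-conditioned call takes the same text description as before; the reference melody steers the harmonic and melodic contour of the output while the prompt controls style and instrumentation.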
Read more: https://medium.com/towards-data-science/how-metas-ai-generates-music-based-on-a-reference-melody-de34acd783

Tags: Melody