
Artificial intelligence and music (AIM) is a common subject in the International Computer Music Conference, the INFORMS Computing Society Conference,[1] and the International Joint Conference on Artificial Intelligence. The first International Computer Music Conference (ICMC) was held in 1974 at Michigan State University.[2] Current research includes the application of AI in music composition, performance, theory, and digital sound processing.

A key part of this field is the development of music software programs which use AI to generate music.[3] As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment.[4] Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control.
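Computer accompaniment of the kind described above hinges on score following: aligning what a performer plays with a position in the written score so the accompaniment can stay in sync. A minimal sketch in Python (purely illustrative; not the accompaniment system cited above):

```python
def follow(score, performance):
    """Naive score follower: align each performed pitch (MIDI number)
    to the next matching score position, tolerating skipped notes."""
    pos, positions = 0, []
    for note in performance:
        # Scan forward until the performed note is found in the score.
        while pos < len(score) and score[pos] != note:
            pos += 1
        positions.append(pos if pos < len(score) else None)
        if pos < len(score):
            pos += 1
    return positions

score = [60, 62, 64, 65, 67]
# The performer skips one note (64); the follower realigns.
print(follow(score, [60, 62, 65, 67]))  # [0, 1, 3, 4]
```

Real accompaniment systems use probabilistic or dynamic-programming matchers that also handle wrong notes and tempo changes, but the alignment idea is the same.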

Erwin Panofsky proposed that all art contains three levels of meaning: primary meaning, or the natural subject; secondary meaning, or the conventional subject; and tertiary meaning, the intrinsic content of the subject.[5][6] AI-generated music engages only the first of these, creating music without the "intention" that usually lies behind it, leaving composers who listen to machine-generated pieces unsettled by the lack of apparent meaning.[7]


Artificial intelligence finds its beginnings in music with the transcription problem: accurately recording a performance into musical notation as it is played. Père Engramelle's schematic of a "piano roll," a mode of automatically recording note timing and duration in a way which could be easily transcribed to proper musical notation by hand, was first implemented by German engineers J.F. Unger and J. Hohlfield in 1752.[8]

In 1957, the ILLIAC I (Illinois Automatic Computer) produced the "Illiac Suite for String Quartet," a completely computer-generated piece of music. It was programmed by composer Lejaren Hiller and mathematician Leonard Isaacson.[9]

In 1960, Russian researcher Rudolf Zaripov published the world's first paper on algorithmic music composition, using the "Ural-1" computer.[9][10]

In 1965, inventor Ray Kurzweil developed software capable of recognizing musical patterns and synthesizing new compositions from them. The computer first appeared on the quiz show I've Got a Secret.[9]

By 1983, Yamaha Corporation's Kansei Music System had gained momentum, and a paper on its development was published in 1989. The software used music information processing and artificial intelligence techniques to essentially solve the transcription problem for simpler melodies. Higher-level melodies and other musical complexities are still regarded as difficult deep learning tasks, and near-perfect transcription remains a subject of research.[8][11]

In 1997, an artificial intelligence program named Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music to imitate the style of Bach.[12] EMI would later become the basis for a more sophisticated algorithm called Emily Howell, named for its creator.

In 2002, the music research team at the Sony Computer Science Laboratory Paris, led by French composer and scientist François Pachet, designed the Continuator, an algorithm uniquely capable of resuming a composition after a live musician stopped.[9]

Emily Howell continued to advance musical artificial intelligence, releasing its first album, "From Darkness, Light," in 2009, and its second, "Breathless," in 2012. Since then, various groups have published many more AI-generated pieces.[9]

In 2010, Iamus became the first AI to produce a fragment of original contemporary classical music in its own style: "Iamus' Opus 1." Housed at the Universidad de Málaga (University of Málaga) in Spain, the computer can generate a fully original piece in a variety of musical styles in the span of eight minutes.[9]

Software applications

Interactive scores

Multimedia scenarios in interactive scores are represented by temporal objects, temporal relations, and interactive objects. Examples of temporal objects are sounds, videos, and light controls. Temporal objects can be triggered by interactive objects (usually launched by the user), and several temporal objects can be executed simultaneously. A temporal object may contain other temporal objects: this hierarchy makes it possible to control the start or end of a temporal object by controlling the start or end of its parent. Hierarchy is ever-present in all kinds of music: pieces are often organized into movements, parts, motives, and measures, among other segments.[13][14]
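The nesting of temporal objects can be sketched as a simple tree in Python, where stopping a parent propagates to every object it contains (class and field names are illustrative, not taken from the cited formal semantics):

```python
from dataclasses import dataclass, field

@dataclass
class TemporalObject:
    """A sound, video, or light cue with a start time and duration (seconds)."""
    name: str
    start: float
    duration: float
    children: list = field(default_factory=list)

    def add(self, child):
        # A child's timeline is nested inside its parent's interval.
        self.children.append(child)
        return child

    def stop(self, events):
        # Stopping a parent recursively stops every nested temporal object.
        events.append(f"stop {self.name}")
        for child in self.children:
            child.stop(events)

piece = TemporalObject("piece", 0.0, 120.0)
movement = piece.add(TemporalObject("movement 1", 0.0, 60.0))
movement.add(TemporalObject("cello motif", 5.0, 10.0))

events = []
piece.stop(events)  # stops the movement and the motif as well
```

This mirrors how ending a movement in a score implicitly ends the motives and measures inside it.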

Computer Accompaniment (Carnegie Mellon University)

The Computer Music Project at Carnegie Mellon University develops computer music and interactive performance technology to enhance human musical experience and creativity. This interdisciplinary effort draws on music theory, cognitive science, artificial intelligence and machine learning, human computer interaction, real-time systems, computer graphics and animation, multimedia, programming languages, and signal processing.[15]


ChucK

Main article: ChucK

Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language.[16] By extracting and classifying the theoretical techniques it finds in musical pieces, the software is able to synthesize entirely new pieces from the techniques it has learned.[17] The technology is used by SLOrk (Stanford Laptop Orchestra)[18] and PLOrk (Princeton Laptop Orchestra).


Jukedeck

Main article: Jukedeck

Jukedeck was a website that let people use artificial intelligence to generate original, royalty-free music for use in videos.[19][20] The team started building the music generation technology in 2010,[21] formed a company around it in 2012,[22] and launched the website publicly in 2015.[20] The technology used was originally a rule-based algorithmic composition system,[23] which was later replaced with artificial neural networks.[19] The website was used to create over 1 million pieces of music, and brands that used it included Coca-Cola, Google, UKTV, and the Natural History Museum, London.[24] In 2019, the company was acquired by ByteDance.[25][26][27]


MorpheuS

MorpheuS[28] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows for the integration of a pattern detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
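Variable neighborhood search itself can be illustrated with a toy Python example that perturbs a pitch sequence until a simple "tension" measure approaches a target value; MorpheuS's actual tension model, neighborhoods, and constraints are far richer:

```python
import random

def tension(piece):
    # Toy tension measure: mean absolute interval between consecutive pitches.
    return sum(abs(a - b) for a, b in zip(piece, piece[1:])) / (len(piece) - 1)

def cost(piece, target):
    return abs(tension(piece) - target)

def vns(piece, target, neighborhoods=(1, 2, 4), iters=2000, seed=0):
    """Variable neighborhood search: try small moves first, widen on failure."""
    rng = random.Random(seed)
    best = list(piece)
    k = 0
    for _ in range(iters):
        step = neighborhoods[k]
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.choice([-step, step])      # perturb one pitch
        if cost(cand, target) < cost(best, target):
            best, k = cand, 0                     # improved: back to smallest neighborhood
        else:
            k = (k + 1) % len(neighborhoods)      # stuck: widen the neighborhood
    return best

template = [60, 62, 64, 65, 67, 65, 64, 62]      # a template melody (MIDI pitches)
morphed = vns(template, target=5.0)              # morph it toward the target tension
```

The alternation between neighborhood sizes is what distinguishes VNS from plain hill-climbing: widening the move set lets the search escape local optima that small moves cannot leave.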


AIVA

Main article: AIVA

Created in February 2016 in Luxembourg, AIVA is a program that produces soundtracks for any type of media. The algorithms behind AIVA are based on deep learning architectures.[29] AIVA has also been used to compose a rock track, "On the Edge,"[30] as well as a pop tune, "Love Sick,"[31] in collaboration with singer Taryn Southern,[32] for the creation of her 2018 album "I am AI."

Google Magenta

20-second music clip generated by MusicLM using the prompt "hypnotic ambient electronic music"

Google's Magenta team has published several AI music applications and technical papers since its launch in 2016.[33] In 2017, it released the NSynth algorithm and dataset,[34] along with an open-source hardware musical instrument designed to make the algorithm accessible to musicians.[35] The instrument was used by notable artists such as Grimes and YACHT on their albums.[36][37] In 2018, the team released a piano improvisation app called Piano Genie. This was later followed by Magenta Studio, a suite of five MIDI plugins that allow music producers to elaborate on existing music in their DAW.[38] In 2023, its machine learning team published a technical paper on GitHub describing MusicLM, a private text-to-music generator it had developed.[39][40]


Riffusion

Generated spectrogram from the prompt "bossa nova with electric guitar" (top), and the resulting audio after conversion (bottom)

Riffusion is a neural network, designed by Seth Forsgren and Hayk Martiros, that generates music using images of sound rather than audio.[41] It was created as a fine-tuning of Stable Diffusion, an existing open-source model for generating images from text prompts, on spectrograms.[41] This results in a model which uses text prompts to generate image files, which can be put through an inverse Fourier transform and converted into audio files.[42] While these files are only several seconds long, the model can also use latent space between outputs to interpolate different files together.[41][43] This is accomplished using a functionality of the Stable Diffusion model known as img2img.[44]
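The inverse-transform step can be sketched in Python with NumPy, assuming a magnitude-only spectrogram and zero phase for every bin (Riffusion itself recovers phase more carefully before inverting):

```python
import numpy as np

def spectrogram_to_audio(mag, hop=128):
    """Rough audio reconstruction from a magnitude spectrogram.

    Assigns zero phase to every bin, inverts each frame with an
    inverse real FFT, and overlap-adds the frames -- the crudest
    version of the spectrogram-to-audio conversion described above.
    """
    n_bins, n_frames = mag.shape
    n_fft = 2 * (n_bins - 1)                     # FFT size implied by the bin count
    audio = np.zeros(hop * (n_frames - 1) + n_fft)
    for t in range(n_frames):
        frame = np.fft.irfft(mag[:, t], n=n_fft)  # zero-phase inversion of one column
        audio[t * hop : t * hop + n_fft] += frame  # overlap-add successive frames
    return audio

# A toy "spectrogram image": one strong bin produces a near-sinusoidal tone.
mag = np.zeros((129, 16))
mag[10, :] = 1.0
audio = spectrogram_to_audio(mag)
```

Discarding phase this way produces audible artifacts, which is why practical systems use iterative phase-recovery algorithms instead of a single inverse transform.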

The resulting music has been described as "de otro mundo" (otherworldly),[45] although unlikely to replace man-made music.[45] The model was made available on December 15, 2022, with the code also freely available on GitHub.[42] It is one of many models derived from Stable Diffusion.[44]

Riffusion is classified within a subset of AI text-to-music generators. In December 2022, Mubert[46] similarly used Stable Diffusion to turn descriptive text into music loops. In January 2023, Google published a paper on their own text-to-music generator called MusicLM.[47][48]


Copyright

Further information: Artificial intelligence and copyright

The question of who owns the copyright to AI music outputs remains uncertain. When AI is used as a collaborative tool as a function of the human creative process, current US copyright laws are likely to apply.[49] However, music outputs solely generated by AI are not granted copyright protection. In the Compendium of U.S. Copyright Office Practices, the Copyright Office stated that it would not grant copyrights to "works that lack human authorship" and that "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[50] In February 2022, the Copyright Review Board rejected an application to copyright AI-generated artwork on the basis that it "lacked the required human authorship necessary to sustain a claim in copyright."[51]

Recent advances in artificial intelligence by groups such as Stability AI, OpenAI, and Google have prompted numerous copyright claims against generative technology, including AI music. Should these lawsuits succeed, the machine learning models behind these technologies would have their training datasets restricted to the public domain.[52]

Musical deepfakes

A more recent development of AI in music is the application of audio deepfakes, which cast the lyrics or musical style of a preexisting song in the voice or style of another artist. This has raised many concerns regarding the legality of the technology, as well as the ethics of employing it, particularly in the context of artistic identity.[53] It has also raised the question of who holds authorship of these works. As AI cannot hold authorship of its own, current speculation suggests that there will be no clear answer until further rulings are made regarding machine learning technologies as a whole.[54]

See also


References

  1. ^ INFORMS Computing Society Conference: Annapolis: Music, Computation and AI. Archived 2012-06-30 at the Wayback Machine. Retrieved 2010-12-22.
  2. ^ International Computer Music Association - ICMC. (2010-11-15). Retrieved on 2010-12-22.
  3. ^ D. Herremans; C.-H. Chuan; E. Chew (2017). "A Functional Taxonomy of Music Generation Systems". ACM Computing Surveys. 50 (5): 69:1–30. arXiv:1812.04186. doi:10.1145/3108242. S2CID 3483927.
  4. ^ Dannenberg, Roger. "Artificial Intelligence, Machine Learning, and Music Understanding" (PDF). Semantic Scholar. S2CID 17787070. Archived from the original (PDF) on August 23, 2018. Retrieved August 23, 2018.
  5. ^ Erwin Panofsky, Studies in Iconology: Humanistic Themes in the Art of the Renaissance. Oxford 1939.
  6. ^ Dilly, Heinrich (2020), Arnold, Heinz Ludwig (ed.), "Panofsky, Erwin: Zum Problem der Beschreibung und Inhaltsdeutung von Werken der bildenden Kunst", Kindlers Literatur Lexikon (KLL) (in German), Stuttgart: J.B. Metzler, pp. 1–2, doi:10.1007/978-3-476-05728-0_16027-1, ISBN 978-3-476-05728-0, retrieved 2024-03-03
  7. ^ "Handbook of Artificial Intelligence for Music" (PDF). SpringerLink. doi:10.1007/978-3-030-72116-9.pdf.
  8. ^ a b "Research in music and artificial intelligence". doi:10.1145/4468.4469. Retrieved 2024-03-06.
  9. ^ a b c d e f Verma, Sourav (2021). "Artificial Intelligence and Music: History and the Future Perceptive". International Journal of Applied Research. 7 (2): 272–275.
  10. ^ Zaripov, Rudolf (1960). "Об алгоритмическом описании процесса сочинения музыки (On algorithmic description of process of music composition)". Proceedings of the USSR Academy of Sciences. 132 (6).
  11. ^ Katayose, Haruhiro; Inokuchi, Seiji (1989). "The Kansei Music System". Computer Music Journal. 13 (4): 72–77. doi:10.2307/3679555. ISSN 0148-9267.
  12. ^ Johnson, George (11 November 1997). "Undiscovered Bach? No, a Computer Wrote It". The New York Times. Retrieved 29 April 2020. Dr. Larson was hurt when the audience concluded that his piece -- a simple, engaging form called a two-part invention -- was written by the computer. But he felt somewhat mollified when the listeners went on to decide that the invention composed by EMI (pronounced Emmy) was genuine Bach.
  13. ^ Mauricio Toro, Myriam Desainte-Catherine, Camilo Rueda. Formal semantics for interactive music scores: a framework to design, specify properties and execute interactive scenarios. Journal of Mathematics and Music 8 (1)
  14. ^ "Open Software System for Interactive Applications". Retrieved 23 January 2018.
  15. ^ Computer Music Group. Retrieved on 2010-12-22.
  16. ^ ChucK => Strongly-timed, On-the-fly Audio Programming Language. Retrieved on 2010-12-22.
  17. ^ Foundations of On-the-fly Learning in the ChucK Programming Language
  18. ^ Driver, Dustin. (1999-03-26) Pro - Profiles - Stanford Laptop Orchestra (SLOrk), pg. 1. Apple. Retrieved on 2010-12-22.
  19. ^ a b "From Jingles to Pop Hits, A.I. Is Music to Some Ears". The New York Times. 22 January 2017. Retrieved 2023-01-03.
  20. ^ a b "Need Music For A Video? Jukedeck's AI Composer Makes Cheap, Custom Soundtracks". 7 December 2015. Retrieved 2023-01-03.
  21. ^ "What Will Happen When Machines Write Songs Just as Well as Your Favorite Musician?". Retrieved 2023-01-03.
  22. ^ Cookson, Robert (7 December 2015). "Jukedeck's computer composes music at touch of a button". Financial Times. Retrieved 2023-01-03.
  23. ^ "Jukedeck: the software that writes music by itself, note by note". Wired UK. Retrieved 2023-01-03.
  24. ^ "Robot rock: how AI singstars use machine learning to write harmonies". March 2018. Retrieved 2023-01-03.
  25. ^ "TIKTOK OWNER BYTEDANCE BUYS AI MUSIC COMPANY JUKEDECK". 23 July 2019. Retrieved 2023-01-03.
  26. ^ "As TikTok's Music Licensing Reportedly Expires, Owner ByteDance Purchases AI Music Creation Startup JukeDeck". 23 July 2019. Retrieved 2023-01-03.
  27. ^ "An AI-generated music app is now part of the TikTok group". 24 July 2019. Retrieved 2023-01-03.
  28. ^ D. Herremans; E. Chew (2016). "MorpheuS: Automatic music generation with recurrent pattern constraints and tension profiles". IEEE Transactions on Affective Computing. PP(1). arXiv:1812.04832. doi:10.1109/TAFFC.2017.2737984. S2CID 54475410.
  29. ^ [1]. AIVA 2016
  30. ^ [2] AI-generated Rock Music: the Making Of
  31. ^ [3] Love Sick | Composed with Artificial Intelligence - Official Video with Lyrics | Taryn Southern
  32. ^ [4] Algo-Rhythms: the future of album collaboration
  33. ^ [5] Welcome to Magenta. Douglas Eck. Published June 1, 2016.
  34. ^ Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". arXiv:1704.01279. ((cite journal)): Cite journal requires |journal= (help)
  35. ^ Open NSynth Super, Google Creative Lab, 2023-02-13, retrieved 2023-02-14
  36. ^ "Cover Story: Grimes is ready to play the villain". Crack Magazine. Retrieved 2023-02-14.
  37. ^ "What Machine-Learning Taught the Band YACHT About Themselves". Los Angeleno. 2019-09-18. Retrieved 2023-02-14.
  38. ^ [6] Magenta Studio
  39. ^ [7] MusicLM on Github. Authored by Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, Christian Frank. Published January 26, 2023.
  40. ^ [8] Understanding What Makes MusicLM Unique. Published January 27, 2023.
  41. ^ a b c Coldewey, Devin (December 15, 2022). "Try 'Riffusion,' an AI model that composes music by visualizing it".
  42. ^ a b Nasi, Michele (December 15, 2022). "Riffusion: creare tracce audio con l'intelligenza artificiale".
  43. ^ "Essayez "Riffusion", un modèle d'IA qui compose de la musique en la visualisant". December 15, 2022.
  44. ^ a b "文章に沿った楽曲を自動生成してくれるAI「Riffusion」登場、画像生成AI「Stable Diffusion」ベースで誰でも自由に利用可能". GIGAZINE.
  45. ^ a b Llano, Eutropio (December 15, 2022). "El generador de imágenes AI también puede producir música (con resultados de otro mundo)".
  46. ^ "Mubert launches Text-to-Music interface – a completely new way to generate music from a single text prompt". December 21, 2022.
  47. ^ "MusicLM: Generating Music From Text". January 26, 2023.
  48. ^ "5 Reasons Google's MusicLM AI Text-to-Music App is Different". January 27, 2023.
  49. ^ "Art created by AI cannot be copyrighted, says US officials – what does this mean for music?". MusicTech. Retrieved 2022-10-27.
  50. ^ "Can (and should) AI-generated works be protected by copyright?". Hypebot. 2022-02-28. Retrieved 2022-10-27.
  51. ^ Re: Second Request for Reconsideration for Refusal to Register A Recent Entrance to Paradise (Correspondence ID 1-3ZPC6C3; SR # 1-7100387071) (PDF) (Report). Copyright Review Board, United States Copyright Office. 2022-02-14.
  52. ^ Samuelson, Pamela (2023-07-14). "Generative AI meets copyright". Science. 381 (6654): 158–161. doi:10.1126/science.adi0656. ISSN 0036-8075.
  53. ^ DeepDrake ft. BTS-GAN and TayloRVC: An Exploratory Analysis of Musical Deepfakes and Hosting Platforms
  54. ^ AI and Deepfake Voice Cloning: Innovation, Copyright and Artists’ Rights

Further reading