How fitting to participate in a workshop on AI and music on the same day as the release of Spotify’s 2025 Wrapped! What could be a better reminder of how the lines between creativity and efficiency are increasingly blurred with the incorporation of artificial intelligence in the music industry?

Musicians, teachers, students, and engineers gathered in Dodge Hall on Wednesday, December 3rd, for a seminar-style workshop led by Ása Ólafsdóttir. We discussed the ethics of AI in music as well as the copyright issues arising from its increasing implementation. Much of our time was also spent considering how AI might transform the intention behind music. Throughout the workshop, we asked ourselves: are we looking to create music that simply fills silence, or to craft sounds that evoke feeling?

From analog to digital techniques, the way we create and listen to music has been adapting and progressing throughout history. At our current moment, one so heavily shaped by AI, our discussion group wondered whether earlier technological shifts provoked similar conversations about creativity and production. Part of the argument against AI use in music concerns the way it values efficiency, often sacrificing meticulousness in the process. If the beauty of music lies, in part, in the careful steps taken to create it, what is different about the transitions from older technology like cassettes to CDs, or even to modern streaming services like Spotify? Could these be argued to stifle creativity as well, representing an increased distance between the artist's labor and the listener's consumption of their work?

To unpack the dilemma presented by these ideas, our discussion began with a breakdown of the different ways AI can be used in the music-making process. The consensus among the seminar participants was that fully AI-generated music is far more stifling to creativity than simply using AI to assist in making music. The former involves no human in developing the sounds that become the finished piece, while the latter uses AI tools to supplement a process still spearheaded by the human artist. It is undeniable that AI tools can make the time-consuming and expensive processes of splicing, sampling, and mixing substantially quicker—but does quicker necessarily mean better? As one participant mentioned, mixing and mastering are often the most expensive part of production, which could lean in favor of the idea that AI promotes accessibility. However, we seemed to align most as a group around the idea that AI will always lack that "human touch": the ability to create sounds that resonate with human emotion. There is something more meaningful in the imperfections of human creation than in the smoothed-out sounds generated by a computer.

The presentation Ólafsdóttir put together featured questions to move the conversation along and keep us centered on our core themes. I appreciated this structure, since it was easy to find ourselves going off on tangents about technology and creativity in music more generally. I especially enjoyed the discourse around what the purpose of music should really be, introduced with a quote from Stanley Cavell's 1965 article "Music Discomposed": "We approach such objects not merely because they are interesting in themselves, but because they are felt as made by someone." Members of the seminar agreed that although music can be reproduced by AI, there is a level of experience and emotion that cannot be replicated by technology, and that this is precisely what makes music so appealing. The sentiments in the classroom that day were heavily concerned with the sense of purpose and feeling stimulated by human-made music, as opposed to AI-assisted or AI-generated sound.

In the same vein, intentionality in music was a prominent topic of the discussion. We wondered: if we feed prompts into AI software to generate music for us, what does that do to the intention behind the artwork? I thought about how people often describe music as saying things beyond words, meaning there is great value in the implicit intentions within it. If everything is made explicit through pre-written prompts for AI-generated songs, one could conclude that this changes the nature of the song completely. Ólafsdóttir did make an interesting point, though, that it also depends on what consumers are looking for. Using autotune as a comparison, we talked about how the removal of vocal nuances with autotune might be similar to the smoothing and retouching done by AI-assisted tools. For some listeners, it could be frustrating to hear a song that sounds generic, but that doesn't necessarily make it wrong. What sounds good to a listener depends on the kind of experience that individual is looking for in their music.

There are, however, less subjective issues that we discussed as arising from AI use in the music industry. For one, AI music-generation software must be trained on existing music to meet the requests of the user. This raises the question: can such training be ethically sound if someone else's music is being distorted, often without permission, to create new music? As an example, Ólafsdóttir shared a summary of a lawsuit filed against Suno, an AI music-generation company, by Koda, a Danish rights organization. The claim is that Suno has stolen music from Danish artists, wholly without permission or compensation. This is the first lawsuit by a Danish rights organization against an AI company, historic in itself for representing the increasingly present tension between AI and respect for artistry. Just as notably, the lawsuit underscores AI's threat to the Danish economy: Danish music revenue is projected to lose 28% by 2030 if AI-generated music continues to proliferate unchecked. The seminar participants and I found this particularly troubling, and it drew most of us further away from supporting AI use in music.

We brought our thoughtful discussion to a close by considering streaming services like Spotify that use algorithms and formulas to craft playlists and mixes for each user. I would argue that these technologies are more commonly encountered today than AI-generated music itself. Although less extreme than many of the other examples we discussed, they were especially relevant to one of our guiding topics: intentionality. Intentionality lies not just in how we create music, but in how we consume it. Do we get recommendations from friends and family, or do we listen to formulated playlists that match our typical "vibe?" Do we listen to albums with respect to the artist's intent, or is it okay to make playlists that fulfill our individual purposes? With the release of Spotify Wrapped each year, we find ourselves enthralled by "listening ages" and "minutes listened." Instead of asking ourselves which listening habits and methods of creation are "right" or "wrong," we should be wondering whether the descent into generative technology is even possible without some sacrifice of intentionality.

Header via Bwarchives