TLDR
Creating a machine learning model that can predict the mood of Taylor Swift songs.
Outline
Intro
Objective
Dataset
Data cleaning
Model training
Evaluation
Deploying API
Conclusion
Intro
Singer-songwriter Taylor Swift needs no introduction. Her first album, Taylor Swift, was released in 2006, and since then she has gone on to produce 8 more original albums, along with 2 re-recordings of her past work: Fearless and Red. With songs that appear to be lifted directly from her diary, it is said by many, or at least by me, that Swift has a song for every mood a relationship can bring. Across every album, Swift’s songwriting puts the feelings of love and heartbreak into words.
With a collection of over 150 original songs, it takes a true Swiftie to be able to categorize every song by the mood it evokes–thankfully, I am one. But still, I wondered if there was a better way to categorize songs into a collection of playlists which represent the moods associated with love gained and love lost.
I landed on creating a machine learning (ML) model to build a statistically-backed playlist. Equipped with only a dataset of Taylor Swift songs’ musical properties and very little data science and software development knowledge, I’ll be using Mage for its low-code model building ability.
To group songs based on their attributes, I’ll be using a classification model. Categorization, also called classification, is the ML practice of predictive modeling where a class is predicted for a given input.
Objective
The popularity of Spotify gave rise to a new and interactive form of music consumption: playlisting–which is the act of grouping and labeling songs based on semantic meaning. Playlists have become an integral part of music listening, as songs with the same themes, beats, or emotional value can be grouped in one place.
Creating a playlist that is hyper-specific to a particular mood is a great way to wallow in whatever emotion you’re feeling in the moment, and who better to create that playlist with than Taylor Swift? The goal for creating this model is to deepen an interest in machine learning with an accessible data source and model deployment. If this model is successful, we’ll be able to make predictions on how to categorize new Taylor Swift songs based on mood before giving them a deep listen.
The main question we’ll be answering with this model is which attributes of a Taylor Swift song have the greatest influence on the mood it conveys.
Dataset
The dataset used for this model was built from Daghony’s Kaggle dataset, which was created by extracting track data from the Spotify Web API.
This included every song from the deluxe version of Taylor’s first 8 albums, with the re-recording Fearless (Taylor’s Version) replacing the original Fearless. The dataset didn’t include recorded concerts, featured songs, movie soundtracks, or stand-alone singles, and it was created before the release of Red (Taylor’s Version). Unfortunately, that means Taylor’s magnum opus, All Too Well (10 Minute Version), is left out of model training (but once our model is trained, we can still run a prediction on this track). The albums featured in this dataset are:
Taylor Swift (2006)
Fearless (Taylor’s Version) (2021)
Speak Now (Deluxe Package) (2010)
Red (Deluxe Edition) (2012)
1989 (Deluxe) (2014)
reputation (2017)
Lover (2019)
Folklore (deluxe version) (2020)
Evermore (deluxe version) (2020)
The dataset describes each Taylor Swift song with the following columns:
name–Name of song
album–Name of album
artist–Name of artist/s involved
release_date–Release date of the album
length–song length in milliseconds
popularity–Percent popularity of the song based on Spotify’s algorithm (possibly the number of streams at a certain period of time)
danceability–How suitable a track is for dancing based on a combination of musical elements, including tempo, rhythm, stability, beat strength, and overall regularity
acousticness–How acoustic a song is
energy–A perceptual measure of intensity and activity
instrumentalness–How likely the track is to contain no vocals (higher values mean fewer vocals)
liveness–Probability that the song was recorded with a live audience
loudness–Overall loudness of the track in decibels (dB)
speechiness–Presence of spoken words in a track (above 0.66 the track is probably made entirely of spoken words, between 0.33 and 0.66 it likely mixes music and speech, and below 0.33 it likely contains little or no speech)
valence–A measure of how happy or sad the song sounds
tempo–Beats per minute
In addition to the 15 columns listed above, I created a 16th column, mood, by labeling each song with the particular mood it conveys. This column is subjective and based on my interpretation of each song. The goal of this project is to create a playlist that is personal to one’s own interpretation of a song’s mood, and while there will likely be similarities in interpretation, I encourage those interested to make and label their own dataset with how they hear the emotions of a song.
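To make the labeling step concrete, here is a minimal sketch of how that 16th column could be added with pandas. The file name and the example mood assignments are my own placeholders, not part of the Kaggle dataset.

```python
import pandas as pd

# Load the 15-column Kaggle dataset (file name assumed here)
songs = pd.read_csv("taylor_swift_spotify.csv")

# Hand-labeled moods, keyed by track name; these entries are only examples
mood_labels = {
    "Love Story": "Love–new love",
    "Lover": "Love–already fallen",
    "All You Had To Do Was Stay": "Heartbreak–mad",
    # ...one entry per song, based on your own interpretation
}

# Add the 16th column; any song you haven't labeled yet stays empty (NaN)
songs["mood"] = songs["name"].map(mood_labels)
songs.to_csv("taylor_swift_labeled.csv", index=False)
```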
For those more experienced with data science and machine learning, using natural language processing (NLP) to create a model that can identify a song’s mood based on its lyrics would be a great way to build a more statistically-grounded mood column.
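As a rough illustration of that idea, the sketch below scores a lyric snippet with NLTK’s VADER sentiment analyzer. The lyrics variable and the mood cutoffs are placeholders; a real version would need a licensed lyrics source and a more nuanced mapping than positive/negative sentiment.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the sentiment lexicon
sia = SentimentIntensityAnalyzer()

lyrics = "..."  # lyric text would have to come from a licensed source

# VADER's compound score runs from -1 (most negative) to +1 (most positive)
score = sia.polarity_scores(lyrics)["compound"]

# Placeholder cutoffs mapping sentiment to a coarse mood label
mood = "Love" if score > 0.3 else "Heartbreak" if score < -0.3 else "It's complicated"
print(score, mood)
```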
Here are the classes the songs were categorized into:
Love–new love: Songs that express the feelings of a new relationship, or new feelings towards someone. Think of this as the honeymoon phase. While scattered throughout Swift’s catalog, her early work has a higher concentration of songs showing this emotion.
Love–already fallen: Songs that express a deeper experience with love. As Swift herself has grown, her writing style and experience with serious relationships has as well.
Heartbreak–sad: Songs that deal with the lows of heartbreak. These songs fit the emotions that match the depression, denial, and bargaining stages of grief.
Heartbreak–mad: While also dealing with the lows of heartbreak, these songs fall more in line with the anger and accusatory emotions that can follow a breakup.
It’s complicated–wanting love: Emotions displayed in these songs express a longing to love or be loved.
It’s complicated–losing love: Emotions displayed in these songs express the feelings of the inevitable or potential loss of a relationship.
Songs that didn’t express emotions tied to a relationship were put into these classes:
Nostalgic
Fun with friends
Generally sad
Story
Misc.
Once the dataset was complete, we were left with 16 columns and 166 rows. The data can then be exported from Excel or Google Sheets as a CSV and uploaded to a Mage workspace.
Once uploaded, we selected the column that we wanted to categorize: “mood”, the unique feature ID: “name”, and the timestamp: “release_date.”
You can download the complete dataset I used here.
Data Cleaning
Once you’ve imported your data into Mage and selected the features you’d like to base your prediction on, Mage will give you suggestions on how to improve your model’s performance.
Applying model suggestions, or cleaning your data, is an essential part of performing ML operations. Having a clean dataset will make your model’s predictions much more accurate, as the model isn’t being influenced by patterns in your data that aren’t conducive to making an accurate prediction.
In the case of this categorization model, we will only be applying a few data cleaning actions. The first is removing the “artist” column. Since every song names Taylor Swift as the artist, the column carries no useful signal, and we don’t want it to influence our model’s final prediction. Keep in mind that in playlists with a greater number of artists, this feature may be more important to the model’s prediction.
The second action is removing classes that don’t contribute to the main objective of the project. We will only be focusing on predicting the mood a song represents in regard to a relationship. This filters out classes that don’t contribute to our objective and/or have fewer than 10 songs: ‘nostalgic’, ‘fun with friends’, ‘generally sad’, ‘story’, and ‘misc.’.
One common suggestion given by Mage was to remove rows with duplicate values for a musical feature (such as acousticness, danceability, liveness, etc.). These cleaning suggestions can be ignored; differences and similarities in the songs’ properties may contribute to the final prediction.
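Mage applies these actions through its UI, but the same cleaning could be written in pandas roughly like this, continuing from the labeled CSV above. The column and class names follow the dataset described earlier, and the final check mirrors the 10-song threshold.

```python
import pandas as pd

songs = pd.read_csv("taylor_swift_labeled.csv")  # labeled dataset from the previous step

# Drop the "artist" column: every row is Taylor Swift, so it adds no signal
songs = songs.drop(columns=["artist"])

# Keep only the relationship-mood classes; this also drops the small classes
relationship_moods = [
    "Love–new love", "Love–already fallen",
    "Heartbreak–sad", "Heartbreak–mad",
    "It's complicated–wanting love", "It's complicated–losing love",
]
songs = songs[songs["mood"].isin(relationship_moods)]

# Sanity check: no remaining class should have fewer than 10 songs
print(songs["mood"].value_counts())
```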
Model training
Once your data has been cleaned, it’s ready to be trained! Mage will take your data through the process of model training; in this case, Mage will learn how to categorize the mood of a song from the labeled examples in your data.
The algorithm splits your dataset into train and test sets, known as a train-test split. The training data is used to learn how to make predictions, while the test set is held back to check whether those predictions are correct.
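Mage handles this step for you, but a minimal scikit-learn sketch of the same idea, using the cleaned dataframe from the previous section, an assumed 80/20 split, and a random forest (not necessarily the algorithm Mage chooses), looks like this:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Musical attributes used as inputs; "mood" is the class we want to predict
features = ["acousticness", "danceability", "tempo", "valence",
            "energy", "length", "speechiness"]
X, y = songs[features], songs["mood"]

# Hold out 20% of songs so the model is tested on tracks it never saw in training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```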
Evaluation
Our model came back with excellent performance! This means that our predictions were better than the baseline (guessing the most common value) at predicting the mood for each song.
The top 3 features that influenced our model’s predictions were: acousticness, danceability, and tempo. In the “top features” section, Mage offers an explanation for why each feature affected the prediction. For example, under acousticness, “a higher value influences the model’s mood output of Love–already fallen,” and for danceability, “a higher value influences the model’s mood output of Love–new love.”
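Outside of Mage, the same two checks, beating the most-common-class baseline and ranking feature importance, could be sketched like this, continuing from the random forest above:

```python
import pandas as pd
from sklearn.dummy import DummyClassifier

# Baseline: always guess the most common mood seen in the training data
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_test, y_test))
print("Model accuracy:   ", model.score(X_test, y_test))

# Which musical attributes mattered most to the trained model?
importances = pd.Series(model.feature_importances_, index=features)
print(importances.sort_values(ascending=False).head(3))
```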
Deploying your API
Now that the model has been successfully trained, we can deploy our API by exporting to an external app, embedding the code, or making further predictions in Mage’s playground feature. Access to any of these features will be under “predict.”
The playground feature lays out the defining features of our model, that is, the features that influenced the model’s predictions the most. To make a prediction for the mood of a new Taylor Swift song, we just need to enter new data points into the playground. Here’s how to find the new data points based on the ones used in the model:
Go to the Spotify for Developers site
Go to the Dashboard
Log in to, or create, a Spotify account
Click “Create app”
Next, go to Spotify’s web API console for the track audio features endpoint
Hit (> Get token), (> Request token), (> Agree)
To find the track ID, go to a song on Spotify, and copy the digits following “track/” in the URL.
Paste it into the “id” field
Hit “try it”
The page will generate a list of properties needed to predict a new track with your Mage model.
Copy and paste the new track’s information into the Mage playground workspace, and hit “predict” to generate a prediction of what mood the track fits into. In our case, we ran predictions on acousticness, danceability, tempo, valence, energy, length, and speechiness.
To test our model’s ability to categorize songs by mood, we’ll be using the track features from “All Too Well (10 Minute Version) (Taylor’s Version) (From The Vault)”. After inputting the relevant data into the playground and hitting predict, Mage predicted that the mood associated with this song was “Heartbreak–sad.” I consider this to be an extremely accurate prediction.
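If you’d rather skip the web console, the spotipy library can pull the same audio features programmatically. The credentials and track ID below are placeholders, and the prediction here uses the scikit-learn sketch from earlier rather than the Mage model itself.

```python
import pandas as pd
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials from your Spotify developer app
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

track_id = "TRACK_ID_COPIED_FROM_URL"  # the digits following "track/" in the song's URL

# audio_features returns danceability, energy, tempo, valence, etc. for the track
track = sp.audio_features(track_id)[0]
track["length"] = track["duration_ms"]  # match the dataset's column name

new_song = pd.DataFrame([{f: track[f] for f in features}])
print(model.predict(new_song))  # e.g. ['Heartbreak–sad']
```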
Conclusion
The success of our model shows that ML can be used to categorize songs into playlists based on the mood they evoke. Until Spotify offers a feature that automatically groups songs into playlists by mood, creating your own model that can predict which songs fit a certain mood is not only useful to the way people consume music, but also a great introduction to using ML.
For more advanced developers, or to learn how to categorize songs based on other attributes, like genre, you can follow this tutorial.
For new developers, or people who just want to generate hyper-specific playlists based on their own mood, you can get started here.
You can also check out:
We just launched our new product for building and running data pipelines that transform your data!
Join our Slack channel! Come chat and collaborate with our community.