When computers listen to music, what do they hear?

On his blog Music Machinery, Paul Lamere's analysis shows that a song recorded

From The Boston Globe:

Soon after the release of the first iPhone five years ago, an astonishing new ritual began to be performed in cafes and restaurants across the country. It centered on an app called Shazam. When the phone was held up to a radio, Shazam would almost instantly identify whatever song happened to be on, causing any iPhone skeptics in the vicinity to gulp in bewilderment and awe.

There was something unspeakably impressive about a machine that could listen to a snippet of a random hit from 1981, pick out its melody and beat, and somehow cross-reference them against a database that seemed to contain the totality of all recorded music. Seeing it happen for the first time was revelatory. By translating a song into a string of numbers, and identifying what made it different from every other song ever written, Shazam forced us to confront the fact that a computer could hear and process music in a way that we humans simply can’t.

That insight is at the heart of a new kind of thinking about music—one built on the idea that by taking massive numbers of songs, symphonies, and sonatas, turning them into cold, hard data, and analyzing them with computers, we can learn things about music that would have previously been impossible to uncover. Using advanced statistical tools and massive collections of data, a growing number of scholars—as well as some music fans—are taking the melodies, rhythms, and harmonies that make up the music we all love, crunching them en masse, and generating previously inaccessible new findings about how music works, why we like it, and how individual musicians have fit into mankind’s long march from Bach to the Beatles to Bieber.

Computational musicology, as the relatively young field is known within academic circles, has already produced a range of findings that were out of reach before the power of data-crunching was brought to bear on music. Douglas Mason, a doctoral student at Harvard, has analyzed scores of Beatles songs and come up with a new way to understand what Bob Dylan called their “outrageous” use of guitar chords. Michael Cuthbert, an associate professor at MIT, has studied music from the time of the bubonic plague, and discovered that during one of civilization’s darkest hours, surprisingly, music became much happier, as people sought to escape the misery of life.

Meanwhile, Glenn Schellenberg, a psychologist at the University of Toronto at Mississauga who specializes in music cognition, and Christian von Scheve of the Free University of Berlin looked at the composition of 1,000 Top 40 songs from the last 50 years and found that over time, pop has become more “sad-sounding” and “emotionally ambiguous.”
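
The "string of numbers" the Globe describes is an audio fingerprint. As a rough illustration only (not Shazam's actual system), here is a minimal Python sketch of landmark-style fingerprinting, loosely modeled on published descriptions of the technique: pick prominent spectrogram peaks, hash nearby peak pairs, and match a snippet against a song by hash overlap. The parameters and the simple per-column peak picking are assumptions chosen for brevity.

```python
"""Toy landmark-style audio fingerprint (illustrative sketch, not Shazam's algorithm)."""
import numpy as np
from scipy import signal


def fingerprint(samples, sample_rate=44100, fan_out=5):
    # Short-time Fourier transform: turn the waveform into a
    # time-frequency grid (the "string of numbers" stage).
    freqs, times, spec = signal.spectrogram(samples, fs=sample_rate, nperseg=4096)
    spec = np.log(spec + 1e-10)

    # Keep prominent peaks: per time slice, the loudest frequency bin that
    # also exceeds the global mean. Real systems do 2-D peak picking.
    peaks = []
    threshold = spec.mean()
    for t_idx in range(spec.shape[1]):
        f_idx = int(np.argmax(spec[:, t_idx]))
        if spec[f_idx, t_idx] > threshold:
            peaks.append((t_idx, f_idx))

    # Hash nearby peak pairs as (anchor freq, target freq, time delta), so a
    # short snippet can be matched against a database by hash lookup,
    # independent of where in the song the snippet starts.
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add(hash((f1, f2, t2 - t1)))
    return hashes


if __name__ == "__main__":
    # Synthetic two-tone "song" and a one-second excerpt of it.
    rate = 44100
    t = np.linspace(0, 3, 3 * rate, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
    snippet = tone[rate:2 * rate]

    song_prints = fingerprint(tone, rate)
    snippet_prints = fingerprint(snippet, rate)
    overlap = len(song_prints & snippet_prints) / len(snippet_prints)
    print(f"share of snippet hashes found in the song: {overlap:.0%}")
```

In a full system the hashes for millions of songs sit in an index keyed by hash value, and a query snippet is identified by which song accumulates the most matching hashes at a consistent time offset; the sketch above only shows the fingerprinting step.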

Continue reading the story at The Boston Globe