"All new music sounds the same."
How often have you heard these words? And how often have you heard them uttered by someone over the age of 35? And further, how often have you immediately assumed that the person offering up such an overly generalized statement as if it were indeed fact was simply old and out of touch?

Perhaps folks who are prone to slamming "new" music are drifting toward the whole "Hey, you kids, get the hell off my lawn!" world view. Or perhaps not.

In late July, Reuters ran a story tracing the doings of a team of scientists in Spain, whose members employed an archive known as the Million Song Dataset to examine music recorded between 1955 and 2010. The Million Song Dataset "breaks down audio and lyrical content into data that can be crunched," according to the Reuters piece.

So what did the team, led by Joan Serra, an artificial intelligence specialist with the Spanish National Research Council, conclude?

"We found evidence of a progressive homogenization of the musical discourse," Serra told Reuters. The team also noted that recordings have become increasingly louder in recent years, that compositions have become more "bland" in terms of chords, notes and general harmonic information, and that sounds represented on these recordings have grown much more confined in terms of timbre.

Which is a fancy way of saying that music has become much louder while simultaneously being dumbed down.

Wow! Who knew?

Of course, this study and its results are begging to be made fun of – and many wasted no time doing as much, embracing the whole "these are old people who just don't get it" approach. Unfortunately, that argument is far too lame to withstand scrutiny.

The fact is, most popular music recorded over the past decade does indeed sound the same; is waaay louder than what was previously deemed "normal"; employs a paucity of harmonic information; and generally sounds really crappy. Most people who care about music already know this. But now – hallelujah! – we have science to back us up!

Let's define the concept of "loudness" before we go any further. When it is suggested that new recordings are incredibly loud, this does not mean only that there is greater amplitude, nor should it imply a certain "heaviness," stylistically speaking. What we really mean when we say new recordings are loud – or, let's face it, too loud – is that they have been overly compressed, so that dynamics, or the soft-to-loud spectrum in a recording, have been squashed to the point of eradication.

It's the sonic equivalent of bulldozing a mountain range until it's as flat as a pancake. And then building a cheesy condominium on it.

Revered and well-seasoned producer Eric Sarafin, who ran an anonymous blog known as "The Daily Adventures of Mixerman" before revealing his identity in 2009, has written extensively about this tendency toward overcompression, which is normally a result of computer-based recording and overly aggressive action in the mastering process.

"These days," Sarafin-as-"Mixerman" writes, "the mastering engineer views his job as one of placing an identifiable sonic imprint on the record. More importantly, his job is to be sure that the record is 'loud.' So the mastering engineer proceeds to stamp out every last bit of dynamic range by using what is called a brick wall limiter.

"Imagine what would happen to you were you to be stopped at a high rate of speed by a brick wall. SPLAT! Well, it's no different when music hits a brick wall. The music becomes flattened, all depth is removed, and all changes in volume eradicated."
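For the curious, the flattening Mixerman describes can be demonstrated in a few lines of code. The sketch below is a deliberately crude caricature – real mastering limiters use look-ahead and smoothed gain reduction rather than bare hard-clipping – but it shows the basic move: crank the gain, slam everything into a ceiling, and watch the gap between peak level and average level (the "crest factor," one rough proxy for dynamic range) shrink. The 440 Hz "verse" and "chorus" tones are invented for illustration.

```python
import math

def crest_factor_db(samples):
    # Peak-to-RMS ratio in dB: a rough stand-in for dynamic range.
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

def brick_wall(samples, gain=4.0, ceiling=1.0):
    # Crude brick-wall limiting: boost everything, then hard-clip at the ceiling.
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# A toy "recording": a quiet verse followed by a loud chorus (440 Hz sine tones).
rate = 8000
quiet = [0.1 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
loud = [0.8 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
track = quiet + loud

before = crest_factor_db(track)
after = crest_factor_db(brick_wall(track))
print(f"crest factor before: {before:.1f} dB, after: {after:.1f} dB")
```

Run it and the "after" number comes out several dB smaller than the "before" number: the loud chorus has been squared off against the ceiling, the quiet verse has been dragged upward, and the soft-to-loud spectrum has collapsed – exactly the SPLAT described above.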

Note that this isn't some cranky self-appointed pundit jabbering away on a topic he knows nothing about. This is a guy who makes records for a living. He's made an awful lot of popular ones, too.

The reason for this tendency to compress the life out of recordings? Record companies love homogenized product, because it's easier to sell. They've trained a whole generation of listeners to accept this practice as normal. The music suffers greatly, of course. Like the record companies care.

Take a look at this week's list of top iTunes downloads, and you'll find all the evidence you need. In the top 5 alone are Taylor Swift's "We Are Never Ever Getting Back Together" (snappy title, no?), Flo Rida's "Whistle," Maroon 5's "One More Night" and Justin Bieber's "As Long As You Love Me."

Putting aside the rather dubious artistic merits of this collection of tunes, we see (and hear) that all are guilty of every finding in the Spanish scientists' study. The songs have little, if any, dynamic range; they employ tired and worn chord progressions, when they employ any at all; the variety of timbre, or "sound color," is virtually nil; and they are loud as hell, some to the point of digital "clipping," or distortion. Oh, joy.

As an analog, think of a painting on canvas displayed before you in all its textural glory. Then imagine that same painting shot by a digital camera, color-corrected via a digital program, flattened in Photoshop, and then printed on cheap paper through a middling laser printer.

Oh, you'd prefer the actual painting-on-canvas? Really? Well, we don't care what you think, pal, so take your cheap Photoshop knockoff and hit the road!

The end result of this for the listener is most likely ear fatigue and ultimately, one supposes, a certain amount of ennui. There are certainly contemporary recordings being made that don't follow this dumbed-down template, but they are increasingly the exception, not the rule. That's not good for our ears. It's even worse for our culture.