Craig Anderton’s Open Channel: Has Music Technology Made Us…Lazy?


Craig Anderton.

Well, I’ll just ask ChatGPT to write a column on the subject.

Kidding! ChatGPT is useless for writing a column like this. And let’s be clear:

Technology making our lives easier is not necessarily the same as making us lazier. Still, consider a question asked by an attendee at a seminar: “What’s the best way to do Automatic Double Tracking (ADT) for doubling vocals?” Well, it’s…singing the part again. Automatic, electronic double-tracking mimics a non-automatic, physical process. By definition, a vocal processed with ADT is inauthentic. Sure, you can always insert a plug-in and say all is good. But is that about choosing the best possible option for the part, or being lazy?

It can be tempting to take the easy way out. Hey, why play a part more than once if you don’t need to? Why record a new bass part for the second verse when you can copy the first verse and paste it in? No one will notice anyway. Yeah, that’s the ticket.

Except it’s not.

Of course, copy-and-paste makes sense for parts that are meant to be repetitive. However, in the example above, it’s worth playing that bass part again for the second verse. It will be slightly different, even if playing the same notes. If a choir sound needs 10 unison background vocals, then sing all 10 vocals.

These recommendations aren’t just touting the old-school way as the best way. The point is that with digital technology, getting lazy can unwittingly allow the technology to attenuate emotional cues like dynamics, pitch differences and tempo changes. When technology gives permission to take shortcuts, that can change the music. Whether consciously or subconsciously, listeners sense inauthenticity just as easily as dogs smell fear.

Check out social media comments and you’ll find plenty of complaints from all generations about the sameness of today’s music. Listeners cite pre-computer classic recordings as “better music”—but were the musicians and engineers better, or was the recording process a factor? Back then, every song started with blank tape. No canned beats. No templates. You never knew exactly where the recording process was going to go.

Today, some producers start with anything but a blank canvas. The end is defined almost as soon as the process begins. They use a DAW template that already exists, add tracks to a beat that already exists, play through keyboard presets that already exist, download samples that already exist (ones you can even hear in other people’s music), run guitars through amp sims with presets that already exist…and then use the same mixing and mastering plug-ins as everyone else so that the music sounds “modern.”

And predictable.

That’s only one step removed from the kind of audio AI generates with the “look and feel” of music. According to Deezer, around 30% of its uploads are AI-generated, yet AI content accounts for only about 0.5% of what listeners stream. Maybe AI’s best use case isn’t for listeners, but for people who want to do more with music than just listen—the same way it’s great that people join a company softball team instead of always watching baseball on TV. Meanwhile, I’m glad Suno changed its marketing pitch to “make any song you can imagine,” rather than imply that AI will write hit after hit. It won’t. The people streaming AI music on Deezer seem to agree.

Still, professionals use AI song programs to give them ideas. And AI is a godsend for pro songwriters who lack mixing or multi-instrumentalist chops. They feed their tracks into a program, prompt it to fill in the missing bits, and voilà—a serviceable demo. That’s cool, but handing over creative autonomy for the sake of expedience risks losing the value of finding ideas from happy accidents, stretching your personal limits and collaborating with humans instead of algorithms.


And why not seek out others? It’s never been easier to do remote collaboration, whether you only need an overdub added to a file or choose to use a collaboration platform like Sessionwire. Collaboration results in living, breathing, human music, something more than “just a demo.”

Or suppose someone plays synthesizer and needs a bagpipe sound, but doesn’t play bagpipes. If the project is a soundtrack for a VisitScotland commercial that will play back just above the noise floor, sure, go to Splice and grab bagpipe samples. But for other projects, making the effort to dive into a synth, come up with a sound that fulfills the same musical slot bagpipes would, and carve out a unique sonic niche means that the music won’t have the same bagpipe sounds as anyone else. When listeners play the music, they’ll hear something innovative instead of imitative.


AI can only draw from what already exists. Yet listeners crave hearing something new, not just a shuffled version of a generic musical genre. AI has issued a challenge to musicians. Fortunately, we have a secret weapon AI doesn’t have: creativity. But we can’t squander it, or be lazy about it. Don’t just think outside the box; make a bigger box. Use different materials. Decorate it, and light up the insides.

The bottom line? If we don’t make the effort to come up with newer and more creative music and mixes than what’s generated by an algorithm doing statistics-based pattern-matching on a database of certified pre-owned music, then we probably deserve to have AI obsolete us. If you find that concept too scary, there’s an easy solution: Simply do something that’s never been done before—because AI can’t do that.
