
Moments in Music: 10 Artists that Changed How We Think about Vocals

  • marshallsoundz
  • Sep 24, 2016
  • 7 min read

The history of vocal effects is a surprising one.

When you open your DAW and throw an effect on a vocal track, you might not suspect the history that lies behind that process. For instance: Did you know that the vocoder was invented for the military? I can picture it now…

But like many audio technologies, it wasn’t until artists got a hold of it that it became a groundbreaking creative tool.


Nowadays vocal effects (both hardware and VSTs) are the bread and butter of music production – whether it’s top commercial radio hits or experimental tape releases.

So here is a timeline with 10 artists pushing the boundaries of vocal processing.

Vocal Effects Are Everywhere

As early as 2010, Future Music magazine editor Daniel Griffiths roughly estimated that pitch correction – one of the most widely used vocal effects – had been used on 99% of recorded music.

Like most newly introduced musical technologies, digital pitch correction attracted controversy and hate at first. Researcher Owen Marshall pointed out: “The list of indictments typically includes the homogenization of music, the devaluation of ‘actual talent,’ and the destruction of emotional authenticity.” Y’know, all the old-fashioned Luddite stuff.

However, things get most interesting when artists and producers don’t try to hide vocal processing but do exactly the opposite: when they embrace and assert those technologies as a way to push the envelope of what we call music – and what we think of as the human voice.

Alvino Rey – St. Louis Blues (1944)



The earliest use of what came to be known as the ‘talk box’ is attributed to Alvino Rey and his wife Luise King in the movie Jam Session (1944).

The couple made a puppet appear to sing by directing the sound of Rey’s steel guitar to a microphone placed on King’s throat (she is hidden behind a curtain in the video). King then shaped her mouth to ‘vocalize’ the sound of the instrument back into a microphone.

Fun fact: Alvino Rey is the grandfather of Win and Will Butler from Arcade Fire.


This effect was later used by Peter Drake, Jeff Beck, Stevie Wonder, Daft Punk and many others.

But it was Peter Frampton who popularized it in 1970s classic rock, with performances like Do You Feel Like We Do.


Notice the tube in his mouth (about 7 minutes into the video). It feeds the sound of his guitar into his mouth, where he shapes its tone by mouthing and vocalizing along with it.

If you’re into ‘talking guitars’ check out this article.

Annette Peacock – Pony (1972)



Pony is a track from Annette Peacock’s second studio album. It features her production, vocal and instrumental talents (including acoustic and electric pianos, synthesizers, and the electric vibraphone).

Mostly recorded live, the album blends jazz, rock, blues and electronic experiments. The album is best known for how Peacock used a Moog synthesizer as a vocal effect:


What you’re hearing is the resonance of a Moog filter responding as Peacock plays with the volume of her voice. The effect is a captivating blend of human voice and instrument, intermingling as they modulate each other.

Laurie Anderson – O Superman (1982)



Laurie Anderson is an American artist, musician and writer who produced an astonishing variety of multimedia artworks. She explores the themes of technology, music, politics and beyond.

O Superman is a track on Anderson’s debut album Big Science from 1982 (reissued in 2007). The album is based on her multimedia performance art project entitled United States Live.


As Joshua Klein wrote on Pitchfork: “Anderson’s ingenious move, musically, was utilizing the vocoder not as a trick but as a melodic tool.”

The use of the vocoder in this track is also particularly relevant to the vocal effect’s historical link to warfare—O Superman is a poignant and emotionally charged critique of American militarism.

Bobby McFerrin – Encore from Tokyo (1984)



You’ve most likely sung along to Bobby McFerrin’s 80s megahit Don’t Worry Be Happy at some point.

But McFerrin’s vocal talents shine best when he sings unaccompanied.

His vocal style is exceptionally fluid. It jumps from pitch to pitch with great ease. He can sing perfect arpeggios as though his voice were a flute.

In this famous performance, Encore from Tokyo, McFerrin’s voice is processed through a very wet reverb and a delay, creating an incredible echo effect that emphasizes his agile vocal arpeggios.
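That echo is easy to approximate digitally. Here’s a minimal feedback delay line in Python – purely illustrative; the delay time, feedback and mix values are my own placeholder settings, not anything from the actual recording:

```python
def echo(samples, sr, delay_s=0.35, feedback=0.5, mix=0.6):
    """Feedback delay line: each repeat is fed back into the line at
    reduced gain, producing a trail of steadily decaying echoes."""
    d = int(delay_s * sr)              # delay length in samples
    total = len(samples) + 6 * d       # leave room for the echo tail
    line = [0.0] * total               # the circulating delay line
    out = [0.0] * total
    for n in range(total):
        dry = samples[n] if n < len(samples) else 0.0
        wet = line[n - d] * feedback if n >= d else 0.0
        line[n] = dry + wet            # this is what gets delayed again
        out[n] = dry + mix * wet       # dry signal plus the echoes
    return out

# A single clap (a unit impulse) comes back as a series of echoes,
# each one `feedback` times quieter than the last:
tail = echo([1.0], sr=1000, delay_s=0.1)
```

Every repeat arrives one delay period later at half the previous level – the same basic structure sits inside every delay pedal and plugin.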


McFerrin also uses his vocals and body as percussion, breaking into impressive polyrhythms and improvised scats. Watch him sing with a full arena:


No wonder he’s got ten Grammy Awards under his belt! Although McFerrin’s vocal talent is definitely the star of the show, this demonstrates how an effect used deliberately can take vocals to new heights.

Paul Lansky – More Than Idle Chatter (1994)



Paul Lansky is an American electroacoustic music composer. He used Linear Predictive Coding, granular synthesis and other stochastic music techniques to make the album More Than Idle Chatter.

None of these methods are vocal effects per se, but since we’re on the subject of mind-blowing uses of vocals, I simply had to include More Than Idle Chatter.


Linear Predictive Coding (LPC) is a signal processing method that models speech by predicting each sample from the ones before it. It is widely used in speech analysis and linguistics.
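To get a feel for the ‘predictive’ part, here’s a minimal Python sketch of LPC analysis using the classic Levinson-Durbin recursion – a textbook illustration only, nowhere near the scale of Lansky’s actual software:

```python
def lpc(signal, order):
    """Fit coefficients a[1..order] so that each sample is predicted
    from the previous ones: x[t] ~ -(a[1]*x[t-1] + ... + a[p]*x[t-p]).
    Uses the Levinson-Durbin recursion on the autocorrelation."""
    n = len(signal)
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]       # autocorrelation lags 0..order
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                    # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1 - k * k                  # remaining prediction error
    return a, err

# A decaying exponential obeys x[t] = 0.9 * x[t-1] exactly, so LPC
# recovers a[1] close to -0.9 and leaves a[2] near zero:
coeffs, _ = lpc([0.9 ** t for t in range(200)], order=2)
```

Speech synthesis with LPC runs this backwards: drive the fitted predictor with a buzz or noise source and it ‘speaks’ – the robotic talking textures you hear all over Lansky’s piece.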

As for granular synthesis, it’s “a method by which sounds are broken into tiny grains [of 1-50 milliseconds] which are then redistributed and reorganised to form other sounds.” Various parameters of the grains can be modified (pitch, volume, phase, etc.). The result resembles a cloud of sound or an ambient texture.

Producing his piece in the 90s, Lansky used a DEC MicroVAX II – a computer that is long obsolete today. Nowadays granular synthesis is only a plugin away, with tools like Ableton’s Granulator.
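Under the hood, a granulator is surprisingly simple. Here’s a toy Python sketch that chops a sine tone into windowed grains, shuffles them, and overlap-adds the result – the grain size, overlap and Hann window are arbitrary choices for illustration, not how Granulator (or Lansky’s software) actually works:

```python
import math
import random

def granulate(samples, grain_len=512, overlap=0.5, seed=1):
    """Break a signal into short grains, reorganise them, and
    overlap-add them back with a Hann window to avoid clicks."""
    hop = int(grain_len * (1 - overlap))
    grains = [samples[i:i + grain_len]
              for i in range(0, len(samples) - grain_len, hop)]
    random.Random(seed).shuffle(grains)          # redistribute the grains
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]         # Hann window
    out = [0.0] * (hop * len(grains) + grain_len)
    for g, grain in enumerate(grains):
        for n, s in enumerate(grain):
            out[g * hop + n] += s * window[n]    # overlap-add
    return out

# Granulating half a second of a 440 Hz sine at 44.1 kHz turns a
# steady tone into a shimmering cloud of ~12 ms fragments:
sr = 44100
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 2)]
cloud = granulate(tone)
```

Swap the shuffle for pitch-shifted or time-stretched grain playback and you have the basis of most modern granular effects.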

Cher – Believe (1998)



The most famous deliberate use of Auto-Tune is without a doubt Cher’s 1998 techno-pop banger Believe – produced by Mark Taylor and Brian Rawling.



The effect heard here is made by setting Auto-Tune to ‘the zero setting.’

As Andy Hildebrand (inventor of the Auto-Tune software) explains: “The “zero” setting causes instantaneous transitions in pitch. It’s an extreme setting. […] We didn’t think anybody would do that, but apparently it’s a popular thing nowadays.”
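The quantization at the heart of that setting is simple to sketch. A real pitch corrector has to detect the pitch of a full vocal signal first, but the ‘hard snap’ itself amounts to rounding to the nearest equal-tempered semitone with no glide time – a toy Python illustration, assuming A4 = 440 Hz:

```python
import math

A4 = 440.0  # reference tuning (standard concert pitch; an assumption)

def snap_to_semitone(freq_hz):
    """Map a detected frequency to the nearest equal-tempered
    semitone instantly -- zero glide time, hence the audible jumps."""
    semitones = 12 * math.log2(freq_hz / A4)   # distance from A4
    return A4 * 2 ** (round(semitones) / 12)   # round, convert back

# A slightly sharp 445 Hz note snaps down to A4 (440 Hz), while a
# quarter-tone-ish 460 Hz snaps up to A#4 (about 466.16 Hz):
print(snap_to_semitone(445.0), snap_to_semitone(460.0))
```

A gentler retune speed would glide toward the target over tens of milliseconds instead of jumping – that glide is exactly what the zero setting removes.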

When Believe came out, people were confused. Was this a deliberate effect, or a mistake? Auto-Tune has since become a staple of vocal recording sessions.

Auto-Tune has also changed the economics of recording studios, as Hildebrand points out:

“Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.”

Not to mention that it has become a deliberate stylistic effect that has taken over modern hip hop and rap – with T-Pain, Kanye West and Lil Wayne to name but a few.

Quasimoto – Low Class Conspiracy (2000)



Quasimoto (‘Quas’) is an alias of West Coast hip hop producer Madlib from the Stones Throw Records roster. Quas is famous for his cartoonish high-pitched vocal style.

The story goes that Madlib didn’t like the sound of his own voice – he thought it was too deep. So he hit on the idea of rapping at a slow pace into a slowed-down tape recorder, then speeding the tape back up on playback.


The result is a clever use of pitch shifting, playing delivery speed against playback speed to get the effect of having ‘inhaled helium’ before rapping.
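The same trick works on any sampled audio: changing playback speed shifts pitch and duration together. Here’s a bare-bones Python sketch using linear interpolation (the 1.5x factor is an arbitrary stand-in, not Madlib’s actual tape setting):

```python
import math

def resample(samples, speed):
    """Play `samples` back at `speed` times the original rate using
    linear interpolation. speed > 1 shortens the audio and raises
    its pitch, just like running a tape machine faster."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out

# A second of a 110 Hz 'deep voice' tone played back 1.5x faster
# becomes a ~165 Hz tone lasting two thirds of a second:
sr = 8000
low = [math.sin(2 * math.pi * 110 * t / sr) for t in range(sr)]
high = resample(low, 1.5)
```

Madlib’s insight was to compensate in advance: rap slowly at the low speed so that, once sped up, the delivery sounds natural while the pitch stays comically high.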


For Quasimoto’s return on Yessir Whatever (2013) Madlib used the contrast between the voices of his alter egos – Quas and Madlib – to great effect.

Burial – Archangel (2007)



Almost a decade has passed since Burial’s second album Untrue was released, so it feels like the perfect time to revisit it.

In 2007 the identity of William Bevan hadn’t been revealed yet and rumours were flying left and right. Burial was one of the most mysterious acts around.

Aside from the deep bass tones, much of the dark atmosphere of Burial’s emotionally charged dubstep is owed to his use of vocal effects.


Burial’s combination of pitch-shifting, time-stretching and reverb on his vocal samples – together with an attuned sense of timing, rhythm and melody – is what makes his tracks so amazing.

Forums abound with producers obsessing over the exact combination of effects Burial used on Untrue. The mystery continues, it seems – on a musical level at least!

Holly Herndon – Movement (2012)



Holly Herndon has a lot to say about vocal processing in electronic music – both through music and writing.

She wrote her Master’s Thesis at Mills College on this very topic (download and read it here). In it, Herndon argues that “electronically processed voice in live music performance may illustrate an understanding of computer music performance as an embodied experience.”

The relationship between human bodies and technologies is the central theme of her debut album Movement.


By using heavy processing on her voice, she reflects on how our interactions with technologies are not as straightforward as we think – they’re sometimes blissful, other times unsettling.

To achieve the vocal effects on Movement, Herndon used Max/MSP – a visual programming language – to create custom instruments and vocal processors.

Laurel Halo & Hatsune Miku (2017)



To crown this (very incomplete) timeline of vocal processing, it only makes sense to talk about Hatsune Miku, whose name is Japanese for ‘the first sound of the future’.

Hatsune Miku is a virtual pop star made with Yamaha’s Vocaloid voice synthesis technology. “Just put in a melody and lyrics and your virtual singer will sing for you” says the Vocaloid website.

For next year’s CTM festival in Berlin, electronic music producer Laurel Halo announced that she will be collaborating with Hatsune Miku.

The project is entitled Still Be Here and will be scored by Halo. She and Hatsune Miku will be joined by visual artists and choreographers as well.


Watch a sneak peek:


Laurel Halo’s incredible 2015 album In Situ was released on London’s Honest Jon’s Records. 

Nothing is Natural About Recorded Music

We’ve all become accustomed to hearing processed vocals on the radio and pretty much everywhere else there’s music.

We’re even used to our own voice being chopped and messed with through the various devices that we use to communicate. Auto-tune apps and vocoder VSTs are but a few clicks away.

So it’s worth looking back at how artists have used such processes in unconventional ways, blending the boundaries between human voices and machines.

Whenever I’m told that some digital audio process is the demise of talent, it’s worth remembering this:

“there is nothing natural about recorded music. Whether the engineer merely tweaks a few bum notes or makes a singer tootle like Robby the Robot, recorded music is still a composite of sounds that may or may not have happened in real time.” – Sasha Frere-Jones for The New Yorker

Plus, doesn’t all this make you want to grab a mic and fire up your DAW? I’m already in Ableton Live putting some grain delay on a cappella samples, let me tell you.

Thank you to the friends who helped me put together this list via Facebook. You know who you are.

