
On mimicry and mapping

One of the things I've learned, as I've started accompanying various local choirs and community theater groups, is that many of the singers aren't actually reading the music.

They're listening to recordings, on their own, and then joining the group and trying to imitate what they remember.

Which, on its own, is not a terrible thing. One should not be required to read music to sing in a choir.

But it's started to become the thing that people do instead of reading the music, even if they know how to read music. When I was in college in the early 2000s, listening to a track in lieu of studying a score wasn't an option; when I sang with Revival Theatre Company and Chorale Midwest in the late 2010s, we were still tasked with learning our parts from the score even though most of us had access to streaming music services on our phones. It wasn't until Larry and I sang with Orchestra Iowa in 2023 that we were given specific instructions to learn our part by listening to it, and now choirs and community theaters regularly hand out links to recordings and/or rehearsal tracks.

Which means we're training people to listen and mimic instead of read and interpret.

This may actually be a terrible thing.

Why?

For starters, it limits whatever you're doing to whatever someone else has done before – and in many cases, "someone else" is either a post-produced popstar or a pre-programmed SaaS synth, neither of which is a workable model. To understand how a voice can sound in a room, you need to listen to real voices sing in real rooms; to understand how an ensemble can make music, you need to listen to singers who are listening to each other in real time.

This is where I would launch into the giant Mapping Essay I keep wanting to write, about the importance of knowing how to read a piece of music (for example) and rewrite it in your head so that you can not only replicate it but also reinvent it, and about the rewards that come when you know how to map vs. the frustration that comes from (for example) playing Zork with someone else's walkthrough in another tab, sort of skimming and glancing, and then finding yourself in a twisty little maze with a lantern that's about to go out.

Instead I'll let you write your own Mapping Essay, since you can probably draw plenty of inferences from the above paragraph, and tell you what I'm worried about with MELISANDE:

Larry asked me the other day if we should make sample tracks using artificial singers.

Obviously there are reasons to do this, one being that you can give people something to listen to before you have them over to your house to sing through the score, another being that you can listen to what you've written without having to work around anyone else's schedule, and a third being that many entities that review new musicals accept artificially-created concept albums, all of which add up to expedience.

Except I went ahead this morning and found a site that lets you turn MusicXML files into artificially-performed renderings (artificially-rendered performances?), and then I signed up for the free trial and ran one of my songs through it, and here it is:

[Audio: "No 6 I Want" rendered in Cantamus, 1:27]

And yes, Cantamus got the job done and I would recommend it to anyone who wants to test how the job might get done.

But as soon as I heard that file I thought to myself "anyone who listens to this is going to mimic it."

Larry and I have this problem with our piano students, me more than him because he's taught himself not to do it: if you try to help them understand a tricky rhythm but speed it up as you demonstrate, they play it back as fast as you talked them through it.

And of course anyone who's worked with any theater group knows that everyone wants to imitate their favorite cast album. (Have fun coaching someone to sing "My Favorite Things" without speaking half the lines the way Julie Andrews does in the movie.)

So the point is that if Larry and I were to make a concept album, we'd not only have to get a robot to sing the thing, we'd also have to make sure the robot performed in a way we wouldn't mind someone else mimicking.

The robot would have to give the definitive performance, in other words, and I'm not sure an AI voice can do that.

(I can hear you all saying "yet." Okay, when we get a robot that understands a piece of text well enough to interpret it in a way that becomes iconic, an intellectual and emotional touchstone for everyone who has always wanted to say what the robot just sang, then, well... um... I'll coach that robot. Until then I prefer to coach humans.)

I also don't want the role to be created before anyone has the chance to create it. Theater is in fact a collaborative art, and if I plopped out a concept album with Vocaloid (or whatever) I would be taking that process away from our first group of collaborators.

You understand where I'm going with this, so I don't need to go any further with it –

except to say –

I think I may need to pitch a class called "Reading Music for Adults," followed by a weekly meeting of the Sight Singing Club.