Uncanny Valley: charting a new era of musical creativity

(Photo: Courtesy of Google Creative Lab, Sydney, Australia)

In 2010, Australian singer-songwriter Charlton Hill and music technologist Justin Shave joined forces to set up Uncanny Valley, a Sydney-based progressive technology company at the cutting edge of the music industry. Charlton Hill, who is also head of innovation at Uncanny Valley, discusses the company’s ambitions to speed up, democratize and re-shape music production through the use of artificial intelligence (AI). In 2020, Uncanny Valley and colleague Caroline Pegram formed Team Australia and won the first-ever AI Song Contest, a Eurovision-inspired competition.

The term “uncanny valley” usually refers to the uneasy feeling humans have about things that are almost, but not quite, human. How did you come to call your company Uncanny Valley?

My co-founder Justin Shave came up with it. After unpacking its meaning, I embraced the fact that we were destined to be a progressive music tech company in an industry we both knew well.

Justin is a classically trained pianist and a music technologist with a computer science background and I am a songwriter and a singer. We both have a strong interest in innovation. There were shifting sands in the music industry in 2010, when we established the company, so it made sense to work with a forward-looking partner. We have always had an open approach to collaborators and have not confined them to traditional musicians and producers. I think we have grown into the name. You could say that we are trying to cross the uncanny valley in the field of music, which is probably one of the most interesting challenges of our time.

Tell us about your business model.

We have two revenue streams. One is commissions to create original music or remix music (where you take a known, licensed song and recreate it with a new vocalist); the other is the royalties that come to us when these programs are broadcast. In Australia, we work on a range of projects, including, for example, Australian Survivor, which needs a lot of music to drive it along. These revenues fund the company’s day-to-day operations and our more progressive AI and machine learning pursuits.

Tell us about your work on augmented creativity.

It’s incredibly exciting. It started formally in 2019, when we collaborated with Google’s Creative Lab and emerging Australian artists on an experiment using machine learning to build some progressive tools they could use in their songwriting process. Their feedback during the design phase was invaluable.

In general, they enjoyed the process but were quite vocal when they felt the tools were stepping on their toes. For example, our AD LIBBER app, which is designed to spark lyrical ideas, was welcomed by one artist who struggled with lyrics but did not appeal to another who had a talent for phrasing. Another app, Demo Memo, allowed the artists to hum or whistle a melody and transform it into an instrument of their choice, significantly speeding up the demo process. They all appreciated that.
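
Hill does not go into Demo Memo’s internals, but the hum-to-melody idea can be illustrated with off-the-shelf tools. The Python sketch below is a hypothetical re-creation of the concept rather than the app’s actual code: it pitch-tracks a hummed take with librosa’s pYIN and re-renders the melody as a plain sine tone standing in for the “instrument of choice” (the file names are placeholders).

```python
import numpy as np
import librosa
import soundfile as sf

def hum_to_tone(in_path: str, out_path: str) -> None:
    # Load the hummed or whistled take as mono audio.
    y, sr = librosa.load(in_path, sr=22050, mono=True)
    # pYIN pitch tracking: f0 is NaN on unvoiced frames.
    f0, _, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    hop = 512                          # librosa's default hop length for pyin
    f0 = np.nan_to_num(f0)             # unvoiced frames -> 0 Hz (silence)
    f0_per_sample = np.repeat(f0, hop)[: len(y)]
    # Integrate frequency to phase and synthesize a sine "instrument".
    phase = 2 * np.pi * np.cumsum(f0_per_sample) / sr
    tone = 0.3 * np.sin(phase) * (f0_per_sample > 0)
    sf.write(out_path, tone, sr)

# Hypothetical usage; replace the paths with a real recording.
hum_to_tone("hummed_take.wav", "rendered_melody.wav")
```

A real product would swap the sine oscillator for a sampled or synthesized instrument, but the pitch-track-then-resynthesize shape is the same.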

The experiment was a great opportunity to push and pull at these concepts. We’ve continued to develop them through our music engine, MEMU, which is an ongoing accumulation of our research. With MEMU’s architecture, we believe we can crack the quantification of music and emotion.

Can you explain that further?

Our interest lies in understanding and quantifying the emotional response that music generates and the processes associated with writing melodies and songs. It’s not about cracking the formula for a hit song; it’s deeper than that. We are exploring the juxtaposition of particular lyrics, melodies and chord sequences and the way they make you feel, to better understand the musical fingerprint of a piece of music. It’s the idea of feeling happy/sad and explaining that to a computer. It’s pretty complex. It’s mind-bending that we now have the computing power and smarts to analyze the lyrics and melodies of an artist’s entire body of work and can generate new ideas that might turn into new songs or represent the forward movement of that person’s work.
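
As a concrete, if deliberately crude, illustration of “explaining happy/sad to a computer”: music-emotion research often places a track on a two-dimensional valence/arousal plane. The sketch below maps two easy-to-compute audio features onto that plane; the feature choices and scaling constants are my assumptions for illustration, not MEMU’s model.

```python
import numpy as np
import librosa

def rough_emotion_coordinates(path: str) -> tuple[float, float]:
    """Map a track onto a crude (valence, arousal) pair in [0, 1]."""
    y, sr = librosa.load(path, mono=True)
    # Arousal proxy: faster and louder material tends to feel more energetic.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.squeeze(tempo))
    loudness = float(np.mean(librosa.feature.rms(y=y)))
    arousal = min(1.0, (tempo / 200.0 + loudness * 5.0) / 2.0)
    # Valence proxy: brighter spectra loosely correlate with "happier" sound.
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    valence = min(1.0, brightness / 4000.0)
    return valence, arousal
```

Real systems learn such mappings from labeled listener data across lyrics, melody and harmony rather than hand-tuning two features, which is exactly the “pretty complex” part Hill describes.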

Tell us more about MEMU.

MEMU is a powerful engine for real-time mixing and mash-up of artists’ work. It’s really exciting. It heralds a new era in music production. We see it as an evolving ecosystem of contributors and collaborators that will allow artists to be discovered and to track and be paid for any broadcast of their work. MEMU’s ability to understand and mix an endless flow of music in real time is really quite remarkable.

How are people reacting to MEMU?

Some people find it amazing but are concerned that we’re going to put musicians out of work. That’s not our intention. We see MEMU as a powerful engine to democratize production, by speeding up the process and making it more affordable. Just as Spotify is pursuing the best playlist ever, MEMU is pursuing the best music-scape ever.

How did you develop the software?

It was an interesting process that involved data scientists and creative technologists working with musicians, music producers and a broader team of academics.

At first, we trained MEMU with our own proprietary material. We then dabbled in using copyright-protected material, but to avoid the risk of inadvertent copyright infringement, we began drawing on the works of an extended community of users, including record labels. This enabled us to push and pull at the notion of copyright and remixing. We discovered a sliding scale of reactions depending on how well-known the artist was.

When artists enter the MEMU universe, they agree to allow it to do wonderful and extraordinary things with their art. MEMU tracks the micro-contributions of each artist and how they are used. It is a powerful way to ensure artists are remunerated.
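
The interview doesn’t detail MEMU’s accounting, but the bookkeeping behind tracking micro-contributions can be sketched as a usage ledger that splits a royalty pool pro rata by how much of each artist’s material was used. The class and names below are hypothetical, one plausible shape for such a ledger rather than MEMU’s design.

```python
from collections import defaultdict

class ContributionLedger:
    """Toy ledger: log per-artist usage, then split royalties pro rata."""

    def __init__(self) -> None:
        self._seconds_used: dict[str, float] = defaultdict(float)

    def log_use(self, artist_id: str, seconds: float) -> None:
        # Record that `seconds` of this artist's material was broadcast.
        self._seconds_used[artist_id] += seconds

    def payouts(self, royalty_pool: float) -> dict[str, float]:
        # Split the pool in proportion to each artist's logged airtime.
        total = sum(self._seconds_used.values())
        if total == 0:
            return {}
        return {a: royalty_pool * s / total for a, s in self._seconds_used.items()}

ledger = ContributionLedger()
ledger.log_use("artist_a", 90.0)  # 90 s of artist A's stems aired
ledger.log_use("artist_b", 30.0)  # 30 s of artist B's stems aired
print(ledger.payouts(100.0))      # {'artist_a': 75.0, 'artist_b': 25.0}
```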

Where we needed to, we used open-source material to train MEMU, but we typically developed our own proprietary solutions to create MEMU’s bespoke architecture, simply because the solutions we needed weren’t available on the market.

Can you explain the different channels of MEMU?

MEMU is malleable and now has a variety of channels that enable us to isolate universes. For example, if we ask a record label for the forthcoming releases of two of their artists for MEMU to mix, we can create a closed universe for that collaboration.

MEMU’s different channels are built into its architecture. At first, we released focused channels to teach MEMU about certain genres, emotions and the Aeolian mode, which underpins much of pop music. The technology is evolving rapidly and enabling us to adapt the contributions we receive across genres. For example, MEMU may take a work that naturally sits on a chill-out channel and process it for a high-energy channel.
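
As a rough illustration of that kind of cross-channel processing (a sketch of the generic technique, not MEMU’s pipeline), a track from a chill-out channel can be nudged toward a high-energy one by time-stretching it to a faster tempo and lifting its level:

```python
import numpy as np
import librosa
import soundfile as sf

def move_to_high_energy(in_path: str, out_path: str, target_bpm: float = 128.0) -> None:
    y, sr = librosa.load(in_path, mono=True)
    # Estimate the track's current tempo, then stretch toward the target.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    rate = target_bpm / float(np.squeeze(tempo))    # rate > 1 speeds the track up
    y_fast = librosa.effects.time_stretch(y, rate=rate)
    y_fast = 0.9 * y_fast / np.max(np.abs(y_fast))  # peak-normalize for loudness
    sf.write(out_path, y_fast, sr)
```

A production engine would also rework arrangement, instrumentation and dynamics, but tempo and level are the simplest levers for shifting a track’s perceived energy.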

We are working to speed up the mechanics of music production, improve the trackability and use of music and open up the notion of what a song is so that it can be enjoyed in all sorts of ways. AI can help build that broad landscape.

How does this help musicians?

MEMU offers musicians the opportunity for their music to be expressed across different modes of emotion and mediums. Artists looking to be discovered may allow us access to some of their work so it is heard in different ways and leads people back to their catalogue. What artist would not want their music used on all these extraordinary platforms and in all these ways?

MEMU also democratizes the music production process. It has the ability to take musical works and mash them up in a way that we have never really seen before, and to remunerate artists. There is a ridiculous hunger for music to complement content in all its forms, old and new. MEMU helps meet that demand.

The experiences of Twitch and other platforms show the industry is in a “don’t allow” mode. The future of music, which MEMU represents, is “to allow, attract and remunerate” so everyone wins and can go forward.

What impact do you think AI will have on musicians?

AI tools can democratize the way artists engage with the industry and enable them to generate new revenues from their work. The tools we, and others like us, are developing are designed to integrate progress and technology in an ethical and artist-centric way.

AI complements the tools available to musicians and can break down entry barriers by speeding up the production process and enabling musicians to express themselves in chart-sounding ways.

AI allows people who do not otherwise have the means to engage with music as a form of expression. That’s probably the most exciting thing that AI can do in the music industry.

Can AI-based tools make music that really moves people?

Yes. AI can certainly help create songs that humans feel, but humans will always be involved in that process. We are not trying to recreate a human performance, even if what we do leans on a human performance, turns it into data and translates it into another performance. The notion of an artist avatar or performance transfer is already a reality.

I am convinced that one of the things AI will do is allow humans to be more human and to write better music.

In which fields do you think we will see early uptake and adoption of AI music?

Experimental artists have been dabbling with AI for a long time. AI is steadily moving into the mainstream of music. For example, LifeScore, Abbey Road’s AI music software, recently launched a prototype with Bentley for in-car music, which uses data points like speed and GPS location. That’s very encouraging.

At the end of the day, humans are just looking for interesting, helpful and entertaining ways to engage with life. Music is a big part of that and AI speeds up the music production process. That’s why we use it. AI will certainly augment human performance but it will struggle to replace it.

What’s fueling the growing interest in AI in the music tech industry?

First, the fear of missing out and second, a desire to correct past wrongs. There is a sense that AI’s power can get it right for us and can open the door to pro rata remuneration for artists.

How would you like to see the copyright system evolve?

At times, we have pushed and pulled at copyright, especially in the earlier stages of MEMU’s development, but our current thinking is, “if it ain’t broke,” keep rolling with it. So, we’ll keep playing by the rules until the rules change.

Is there any particular area in which you would like to see the rules change?

I think something needs to be done about the use of an artist’s body of work to generate new art or new revenue streams, particularly when technology is so capable of taking it and using it in a valuable way.

I am quite torn on the subject because I don’t think we suddenly deserve the right to take an artist’s entire back catalogue and make new works with it just because we have the technology to do so. Maybe there is another way: something along the lines of allowing such use in return for contributing to a common pool of funds to support aspiring musicians.

What are your plans for the future?

We gave ourselves one year from winning the AI Song Contest to prove that we have a valid tool for musicians and songwriters. There’s a lot of interest in what we’re doing, and we are genuinely trying to find the right collaborators to develop something that supports the company and the broader music community. We are also helping to establish Australia’s first music AI hub, which brings together academics, commercial partners, scientists and emerging artists.

And the future of MEMU is to create new and exciting music while generating new revenue streams for artists. If we succeed in that, we will have created a centralized hub for a community of artists to continue the AI and music conversation.

Source: article by Catherine Jewell, Information and Digital Outreach Division, WIPO.