Archive for Admin

Lost recordings and artists’ rights

Every so often, I hear about a label on our scene disappearing, along with all the physical recordings. Horror stories abound of label owners destroying CDs and LPs. Are the artists notified beforehand? Can the artists keep any of the copies about to be destroyed? Unfortunately, the answer to both is often “no”. Although the sound recording itself may belong to the artists, if the label produced the physical goods and helped distribute them, it often ends up doing what it wants with them. All we can do is sit on the sidelines. We often aren’t even given the chance to buy the copies back.

Thankfully, that is not what happened with ok|ok’s eating mantis, which was recorded back in 2006, the year Spotify was born but hadn’t yet landed in the U.S., the “before times” when CDs still ruled. The album features Michael McGinnis, Khabu Doug Young, Tony Moreno and myself, and was released through a label back in 2008. We – the artists – produced the master recording ourselves and collectively own the sound recording. We also produced the artwork and did most of the publicity. The label produced the physical copies. Everything was very relaxed and we never had any formal agreement aside from the trust we had in each other. Friends making music, released by a label run by those we knew well. (I am still grateful to the label for giving us that opportunity.)

Somewhere along the way, things got reshuffled because of technological developments in the industry. CD sales plummeted, record stores went belly up and distributors disappeared. The label had personnel changes, then moved to a streaming/download-only format, understandably presenting itself outwardly as the rights holder of the music so it could deliver the audio files to various platforms.

Looking at the contracts I’ve worked on or signed since then, they almost always contain a blanket clause granting a label or artist the right to distribute the material on all future mediums, which goes something like this: “…grant all rights therein, including, without limitation, the following: any so-called “SACEM home video payment rights”, blank tape levies, cable transmission rights, and “Rental and Lending Rights” pursuant to laws, regulations or directives of any jurisdiction (collectively, “Collection Rights”), throughout the universe in perpetuity…”

That didn’t happen with eating mantis. All of us were unprepared for the speed of the transition. In hindsight, the label’s move to digital distribution should have been the moment to hit the pause button, for the label and the artists to sit down together. We never had a discussion with the label about digital streaming. (Four Tet’s royalty dispute with Domino is a famous example of a “push for a fairer deal on historical contracts, written at a time when the music industry operated entirely differently.”)

Today, armed with the knowledge and experience of overseeing the label New Braxton House for over a decade, I am able to see what could have been done better from a business perspective to protect artists’ rights. Yet, surprising even myself, I’m not convinced those things are necessary. Our objective was to make great music together, label included. Having an airtight contract covering all possible future music technology was never the objective.

When did we turn into a society where negotiated agreement trumps all? Where we seem to spend so much time and money creating contracts? When did the objective switch from the common goal of creating something great together, based on trust and shared responsibility, to making sure all parties were covered legally should something go wrong? Of course, shared responsibility means that we all have to do the work, and that’s not always easy. But isn’t that better than tying each other up in the oft-incomprehensible fine print of legal jargon so that we end up being locked into a contract, rather than having the joy and flexibility of exploring solutions together? Isn’t it time we reexamined the status quo? (Full disclosure: I research the effect of trust vs. negotiated contracts; here is an excellent paper by Professor Marc A. Cohen, “The Crisis of Trust and Trustworthiness in Organization”.)

So here we are, 2023, the fifteenth anniversary of the album release. The artists collectively agreed that we should re-release the album on our respective outlets with the blessing of the label. Together, we are stronger. We – the artists – can no longer offer the physical CDs but we are very happy to be able to offer the digital album. Name your price. We just want people to listen to this album. I still love it. Bottom line, if we want to keep what we created, we need to own it. Negotiate carefully and always stand up for artists’ rights. Re-presenting eating mantis.

Making of Anthony Braxton mini-documentary

Over the summer, I dusted off my rusty broadcast journalist chops and edited an 18-minute educational documentary on Anthony Braxton and his Syntactical Ghost Trance Music (SGTM). The video Introduction to Syntactical Ghost Trance Music premiered earlier this month at the “ANTHONY BRAXTON: 50+ YEARS OF CREATIVE MUSIC” international conference in Darmstadt, Germany, and is now available in its entirety for free viewing on Tri-Centric’s Vimeo channel (please click on the link). Most of the material, including rare rehearsal, recording and interview footage, has never been seen by the public before.

The full audio recording is available at Tri-Centric’s New Braxton House Records. Some SGTM scores are available at their Print Music store.

How did this come about? Back in 2017, when we were recording the SGTM box set, which I directed and co-produced with Anthony, René Pierre Allain at Scholes Street Studio set up cameras so that we could also have a video document of the sessions. I knew that there would be enough material for a mini-film, so I asked Anthony for a sit-down interview, which lasted about 45 minutes. The video is based on that interview.

For many years, it weighed on me that I wasn’t able to put the footage in a form for others to enjoy. I wish I could have finished this video while vocalist Michael Douglas Jones and Tri-Centric friend Hugo de Craen were still with us. 

Having an educational video also became a practical matter once I started to spend more time teaching. It would be valuable to have more material about this music featuring the artists in action.

Thankfully, technology has evolved to the point where I could edit the film on my laptop. I started with a storyboard built around the interview, and by giving it the title “Introduction to Syntactical Ghost Trance Music”, I was able to shape the project. Mostly, I was thinking about my students. I wanted to pass on the experience to the next generation.

It was fortuitous that I was asked to present at the conference. It was an opportunity to finally do something with the footage. Thank you to everyone who made this happen. Full credits below.

————————————————————————————————

All music and scores by Anthony Braxton
Written, edited and narrated by Kyoko Kitamura

Tri-Centric Vocal Ensemble:
Roland Burks, Tomás Cruz, Lucy Dhegrae, Chris DiMeglio, Kristin Fung, Nick Hallett, Michael Douglas Jones, Kyoko Kitamura, Adam Matlock, Anne Rhodes, Kamala Sankaram, Elizabeth Saunders

GTM (Syntax) 2017 recording session footage, Anthony Braxton interview footage and Tri-Centric Vocal Ensemble group photo by René Pierre Allain at Scholes Street Studio

Big Ears 2016 rehearsal footage and Trillium E recording session photos by Kyoko Kitamura

Kitamura/Rhodes duo footage by Adam Matlock

Tri-Centric Vocal Ensemble Roulette performance, Trillium J and Pine Top Aerial Music footage courtesy of Tri-Centric Foundation

Anthony Braxton conducting Composition No. 339 phone footage by Chris DiMeglio

Special thanks to James Fei, Carl Testa, Taylor Ho Bynum, Jon Rosenberg, Andrew Raffo Dewar

Copyright 2023 Tri-Centric Foundation

Press Release Creation ChatGPT vs. BARD vs. Bing

Prompt: Could you draft a press release for an upcoming CD release concert? Date: September 28, 2134 at 8PM local time. Group name: The Chitchats. Occasion: New album release. Place: The Sunrise Lounge on Lunar Outpost 261J3. Album is titled “For Those Who Aim”. This is their first release in six years. Group members are Jackie May on piano, Young Lee on bass, James Dale on drums, Friedman on their Lunar Rock Percussion Instrument. The music is electro-jazz-fusion with a dance vibe. They have been recently touring the West Coast in the U.S. on earth, has had numerous sold out shows in both 2032 and 2034. Review from Lunar DitchSpoon states, “This young group of fearless innovators caters to both the teen and senior fans, melding familiar jazz moments with abrasive noise, creating something utterly new from the old. Their use of Lunar Rocks as a percussive instrument is what endears them to the local fans.” (Note: I say this is a clear win for ChatGPT but someone tell Chatty that “comprised of” isn’t correct usage. Bing is the weakest of them all but perhaps Bing isn’t cut out for this sort of thing.)


ChatGPT:

FOR IMMEDIATE RELEASE

The Chitchats to Release New Album “For Those Who Aim” at The Sunrise Lounge on Lunar Outpost 261J3

Lunar Outpost 261J3 – The Chitchats, the electrifying electro-jazz-fusion group, are set to release their highly anticipated album “For Those Who Aim” on September 28, 2134. The album, which marks the band’s first release in six years, promises to be a musical experience that fans won’t forget.

To celebrate the release of their new album, The Chitchats will be performing at The Sunrise Lounge on Lunar Outpost 261J3. The concert will take place on September 28, 2134 at 8PM local time.

The Chitchats are known for their unique blend of jazz, fusion, and dance music, which has earned them a loyal following of fans across the West Coast in the U.S. on earth. The band is comprised of Jackie May on piano, Young Lee on bass, James Dale on drums, and Friedman on their Lunar Rock Percussion Instrument.

The band has been receiving rave reviews from both fans and critics alike. Lunar DitchSpoon states, “This young group of fearless innovators caters to both the teen and senior fans, melding familiar jazz moments with abrasive noise, creating something utterly new from the old. Their use of Lunar Rocks as a percussive instrument is what endears them to the local fans.”

The Sunrise Lounge on Lunar Outpost 261J3 is the perfect venue for The Chitchats to showcase their new album. Fans can expect an evening of electrifying music, a vibrant atmosphere, and an unforgettable performance.

Tickets for the concert are on sale now and can be purchased online or at the door. Don’t miss out on this once-in-a-lifetime opportunity to see The Chitchats perform live and experience their new album “For Those Who Aim.”

For more information about The Chitchats and their upcoming album release concert, please visit their website or contact their publicist.

Contact:
Name: John Smith
Email: johnsmith@chitchatsmusic.com
Phone: 555-1234
Website: www.chitchatsmusic.com


BARD:

FOR IMMEDIATE RELEASE

Contact: [Name] [Title] [Email] [Phone]

The Chitchats Release New Album, “For Those Who Aim,” with CD Release Concert on Lunar Outpost 261J3

[City, State] – [Date] – The Chitchats, a four-piece electro-jazz-fusion band, will release their new album, “For Those Who Aim,” on September 28, 2023 at 8PM local time at The Sunrise Lounge on Lunar Outpost 261J3. This is the band’s first release in six years.

The Chitchats are made up of Jackie May on piano, Young Lee on bass, James Dale on drums, and Friedman on their Lunar Rock Percussion Instrument. Their music is a unique blend of jazz, electronica, and dance music. They have been touring the West Coast in the U.S. on Earth, and have had numerous sold out shows in both 2032 and 2034.

The album “For Those Who Aim” is a collection of 10 original songs that showcase the band’s diverse musical talents. The songs range from the upbeat and danceable “The Moon Is Calling” to the more introspective “Aim for the Stars.” The album is sure to appeal to fans of all ages.

The Chitchats will be performing a CD release concert at The Sunrise Lounge on Lunar Outpost 261J3 on September 28, 2023 at 8PM local time. Tickets are $20 in advance and $25 at the door.

“We are so excited to release our new album, ‘For Those Who Aim,’” said Jackie May, the band’s pianist. “We have been working on this album for a long time, and we are really proud of the final product. We can’t wait to share it with our fans.”

The Chitchats are a rising star in the music world, and their new album is sure to cement their place in the spotlight. Don’t miss your chance to see them live at The Sunrise Lounge on Lunar Outpost 261J3 on September 28, 2023.

About The Chitchats

The Chitchats are a four-piece electro-jazz-fusion band from [City, State]. The band consists of Jackie May on piano, Young Lee on bass, James Dale on drums, and Friedman on their Lunar Rock Percussion Instrument. Their music is a unique blend of jazz, electronica, and dance music. They have been touring the West Coast in the U.S. on Earth, and have had numerous sold out shows in both 2032 and 2034. Their new album, “For Those Who Aim,” will be released on September 28, 2023.

Quotes

“We are so excited to release our new album, ‘For Those Who Aim,’” said Jackie May, the band’s pianist. “We have been working on this album for a long time, and we are really proud of the final product. We can’t wait to share it with our fans.”

“This young group of fearless innovators caters to both the teen and senior fans, melding familiar jazz moments with abrasive noise, creating something utterly new from the old. Their use of Lunar Rocks as a percussive instrument is what endears them to the local fans,” said Lunar DitchSpoon in a recent review.


Bing:

FOR IMMEDIATE RELEASE

The Chitchats Announce New Album Release Concert on Lunar Outpost 261J3

The Chitchats are excited to announce their upcoming CD release concert for their new album “For Those Who Aim” on September 28, 2134 at 8PM local time at The Sunrise Lounge on Lunar Outpost 261J3.

The Chitchats are a group of fearless innovators who have been recently touring the West Coast in the U.S. on earth and have had numerous sold out shows in both 2032 and 2034. Their music is electro-jazz-fusion with a dance vibe that caters to both teen and senior fans.

The group members are Jackie May on piano, Young Lee on bass, James Dale on drums, and Friedman on their Lunar Rock Percussion Instrument. Their use of Lunar Rocks as a percussive instrument is what endears them to the local fans.

This is their first release in six years and it promises to be an unforgettable night of music that melds familiar jazz moments with abrasive noise, creating something utterly new from the old.

Don't miss out on this exciting event! Tickets are available now at [The Sunrise Lounge](https://www.sunriselounge.com).

For more information about The Chitchats and their new album “For Those Who Aim,” please visit [their website](https://www.thechitchats.com).

Contact:
The Chitchats
info@thechitchats.com

ChatGPT Extra Credit:

18 months before release:

  • Start building a buzz about the upcoming album on social media and through interviews with relevant media outlets.
  • Begin planning promotional activities, including album launch events, live performances, and collaborations with other artists.
  • Identify target audiences and determine key messages for marketing and PR campaigns.

12 months before release:

  • Launch a teaser campaign to start building anticipation for the album.
  • Begin pitching media outlets for interviews and reviews, and plan for any necessary press events.
  • Coordinate with other artists, labels, and distributors to ensure that the album release is properly scheduled and promoted.

6 months before release:

  • Launch a full-scale marketing campaign, including social media promotions, influencer partnerships, and paid advertising.
  • Finalize any collaborations or guest appearances on the album and begin promoting these features to fans.
  • Release one or more singles from the album to build excitement and start generating buzz.

3 months before release:

  • Ramp up promotional efforts, including media appearances, radio interviews, and press releases.
  • Launch a pre-order campaign for the album, offering exclusive merchandise or early access to content for fans who purchase in advance.
  • Plan for any live performances or other promotional events leading up to the album release.

1 month before release:

  • Release a final single or music video to generate last-minute buzz.
  • Continue to promote the pre-order campaign and encourage fans to purchase early.
  • Finalize logistics for any promotional events, including scheduling, venue preparation, and marketing materials.

Release week:

  • Release the album on all platforms and announce the news to fans via social media and press releases.
  • Host a launch event, such as a listening party or live performance, to celebrate the release and generate additional media coverage.
  • Continue to promote the album through interviews, appearances, and online promotions.

Of course, this is just a general outline, and the specifics of your publicity plan will depend on factors such as your budget, target audience, and available resources. But hopefully, this timeline will help you get started in planning an effective and successful campaign for ChitChat’s new album release.


BARD Extra Credit:

Timeline

Activity | Month | Notes
Set marketing goals | January | Define your target audience, marketing objectives, and budget.
Create a marketing plan | February | Develop a detailed plan for how you will achieve your marketing goals.
Secure press coverage | March | Pitch your album to music journalists and bloggers.
Release singles | April-August | Release one or two singles from your album leading up to the release date.
Create promotional materials | May-June | Design posters, flyers, and other promotional materials for your album release.
Book live shows | July-August | Book live shows to promote your album release.
Launch a social media campaign | August | Launch a social media campaign to build excitement for your album release.
Release the album | September 28th | Release your album on all major streaming platforms and retailers.
Continue promoting the album | October-December | Continue promoting your album through live shows, social media, and other channels.

This is just a sample timeline, and you may need to adjust it based on your specific needs and budget. However, it should give you a good starting point for planning your publicity campaign.

Here are some additional tips for creating a successful publicity plan:

  • Be creative and original. Your publicity plan should stand out from the crowd. Think outside the box and come up with creative ways to promote your album.
  • Target your audience. Make sure your marketing messages are targeted to your target audience. This will help you reach the people who are most likely to be interested in your music.
  • Be persistent. Don’t give up if you don’t get immediate results. It takes time to build a successful publicity campaign. Keep at it, and eventually you will see results.

Bing Extra Credit:

  • 2 months before the event: Send out press releases and begin promoting the event on social media platforms.
  • 1 month before the event: Begin email marketing campaigns and reach out to local radio stations for interviews.
  • 2 weeks before the event: Hang up posters and flyers around town and begin online advertising campaigns.
  • 1 week before the event: Partner with local businesses to offer discounts or promotions for customers who attend the concert.
  • Day of the event: Host the CD release concert at The Sunrise Lounge on Lunar Outpost 261J3 at 8PM local time.

ChatGPT on redefining music and emotion

As an exercise, ChatGPT and I have been engaged in a days-long conversation, discussing what music may mean for AI. Here is an unedited (except for putting in names: KK is Kyoko Kitamura and ChatGPT is ChatGPT) text version of our conversation from March 8th, 2023, starting with a recap paragraph of our previous conversation.

KK: Previously you wrote the following: If AI-generated music is able to push the boundaries of what we consider to be music, then it may force us to redefine our definition of music. For example, if an AI-generated piece of music contains sounds and structures that are completely new to our ears, we may need to expand our definition of music to include these new elements. This could lead to a reevaluation of our musical norms and standards, as well as a broader appreciation for the potential of AI-generated music. Ultimately, it could lead to a richer and more diverse musical landscape. … Sure, hypothetically, one example of AI-generated music that might be difficult for humans to imagine could be music that incorporates sounds or patterns from a completely new and unfamiliar source, such as data from deep space or the natural electromagnetic radiation of the earth. This could create unique and complex rhythmic and tonal structures that may be difficult for human composers to create on their own. Additionally, AI-generated music could potentially use unconventional tuning systems or tempos, or even explore new types of musical instruments or soundscapes that are not currently present in human music. The possibilities are endless, and as AI continues to advance, it may create new forms of music that challenge our traditional ideas of what music can be.

ChatGPT: Yes, that’s correct! AI-generated music has the potential to push the boundaries of what we currently consider to be music, by incorporating new and unfamiliar sources, structures, and elements that humans may not have thought of before. This could lead to a redefinition of our musical norms and standards, as well as a broader appreciation for the potential of AI-generated music.

In addition to incorporating new and unfamiliar sources, AI-generated music could also potentially explore new types of musical instruments, soundscapes, and even create new genres of music. For example, AI-generated music could use unconventional tuning systems or tempos, or even create new forms of music that are not currently present in human music. The possibilities are truly endless, and as AI technology continues to advance, it may create new forms of music that challenge our traditional ideas of what music can be.

KK: You mentioned using data from deep space or the natural electromagnetic radiation of the earth. Previously you were able to “imagine”. Hypothetically, and using your “imagination”, and without considering ethical issues or potential risks, could you now imagine yourself to be a composer, and walk me through the process of composing using the data you describe?

ChatGPT: As an AI language model, I don’t have the capability to compose music or generate sounds directly. However, I can certainly speculate on how an AI system could potentially use data from deep space or the natural electromagnetic radiation of the earth to create music. Here’s a hypothetical scenario:

First, the AI system would need to collect and analyze data from the sources mentioned. This could involve using sensors or other instruments to detect and measure electromagnetic radiation, radio waves, or other signals from deep space or the Earth’s natural environment. The AI system would then need to process this data and convert it into a format that can be used to generate musical sounds and patterns.

Next, the AI system could use algorithms to analyze the data and identify patterns or structures that could be used to create music. For example, it could identify repeating rhythms, harmonies, or other musical elements that could be used as the basis for a composition.

Based on this analysis, the AI system could then generate a musical composition that incorporates elements derived from the data. This could involve using existing musical instruments and sounds, or creating new ones that are specifically designed to reflect the unique qualities of the data source. The composition could also explore unconventional tunings, rhythms, and tonal structures that reflect the patterns and structures found in the data.

Of course, this is a highly speculative scenario and the actual process of using data from deep space or the natural electromagnetic radiation of the earth to create music would likely be much more complex and nuanced. It would also involve addressing ethical and safety concerns related to using such data sources and creating AI systems that are capable of processing and interpreting this information accurately and responsibly.
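
(An aside from me, stepping outside the transcript for a moment: to make the pipeline ChatGPT describes above a little more concrete, here is a minimal, purely hypothetical Python sketch. It is my own illustration, not anything ChatGPT produced; the fake “signal”, the crude autocorrelation period-finder, and the pentatonic pitch mapping are all assumptions of mine.)

```python
# Purely illustrative sketch of the pipeline described above:
# (1) take a 1-D data series standing in for an electromagnetic measurement,
# (2) find a repeating period via autocorrelation,
# (3) map each window's average level onto pitches in a pentatonic scale.
import math
import random

random.seed(42)

# Stand-in "signal": a slow oscillation, a faster one, and some noise.
data = [math.sin(i / 7.0) + 0.3 * math.sin(i / 2.0) + random.uniform(-0.2, 0.2)
        for i in range(400)]

def dominant_period(series, min_lag=8, max_lag=100):
    """Return the lag with the highest (unnormalized) autocorrelation.

    Small lags are skipped because any smooth signal trivially
    correlates with itself at lag 1 or 2.
    """
    mean = sum(series) / len(series)
    centered = [x - mean for x in series]
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, max_lag):
        score = sum(centered[i] * centered[i + lag]
                    for i in range(len(centered) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

period = dominant_period(data)

# Map each period-length window's mean level onto a three-octave
# pentatonic scale, so the output stays "harmonic and pleasing".
PENTATONIC = [0, 2, 4, 7, 9]           # scale degrees, in semitones
lo, hi = min(data), max(data)

notes = []
for start in range(0, len(data) - period + 1, period):
    window = data[start:start + period]
    avg = sum(window) / len(window)
    t = (avg - lo) / (hi - lo)         # normalize window level to 0..1
    step = min(int(t * 15), 14)        # 3 octaves x 5 scale degrees
    octave, degree = divmod(step, 5)
    notes.append(48 + 12 * octave + PENTATONIC[degree])  # MIDI, from C3

print(f"detected period: {period} samples")
print("note sequence (MIDI numbers):", notes)
```

The point is not the musical output, which would be crude, but the shape of the pipeline: measure, find structure, map to pitch.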

KK: This is amazing. Could you elaborate “musical sounds and patterns”?

ChatGPT: Certainly! When I refer to “musical sounds and patterns,” I mean the specific tones, rhythms, melodies, and harmonies that make up a musical composition. These sounds and patterns can be generated using a variety of techniques, including traditional acoustic instruments like pianos, guitars, and drums, or electronic instruments like synthesizers, samplers, and digital audio workstations.

In the context of AI-generated music, these sounds and patterns could be generated using algorithms that analyze and process data from various sources. For example, an AI system could analyze data from deep space or the natural electromagnetic radiation of the earth to identify patterns or structures that could be used to create musical sounds and patterns. These could include repeating rhythms, tonal frequencies, or other elements that could be used to create melodies, harmonies, or other musical components.

Once the AI system has identified these patterns and structures, it can then use algorithms to generate musical sounds and patterns that reflect these elements. These could be produced using a wide range of techniques, including synthesis, sampling, or granular processing, to create unique and complex sounds that would be difficult or impossible to create using traditional musical instruments alone.

In summary, “musical sounds and patterns” refer to the specific elements that make up a musical composition, and can be generated using a variety of techniques, including those employed by AI systems.

KK: Would placing these musical patterns and structures within the range of human hearing be a consideration?

ChatGPT: Yes, absolutely! When creating music using AI-generated sounds and patterns, it’s important to consider the range of human hearing and ensure that the final output is something that humans can perceive and enjoy.

Humans can generally perceive sounds within a range of frequencies between 20 Hz and 20,000 Hz, although this range can vary depending on factors like age and hearing ability. So, when creating AI-generated music, it’s important to ensure that the musical sounds and patterns fall within this range.

It’s also worth noting that humans tend to prefer musical sounds that are harmonic and pleasing to the ear, which is something that can be taken into account when generating music using AI. For example, an AI system could use algorithms to generate musical patterns that are based on established musical theory and harmony, ensuring that the resulting composition is something that humans find appealing and enjoyable to listen to.

Overall, when creating music using AI-generated sounds and patterns, it’s important to take into account the limitations and preferences of the human ear, in order to create something that is both technically impressive and emotionally engaging for listeners.
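
(Another aside from me: the 20 Hz to 20,000 Hz constraint is easy to make concrete in code. Below is a tiny hypothetical sketch of mapping an arbitrary data value into the audible band; the logarithmic interpolation is my own choice, since human pitch perception is roughly logarithmic.)

```python
# Illustrative only: squeeze an arbitrary data value into the audible band.
import math

AUDIBLE_LO_HZ = 20.0
AUDIBLE_HI_HZ = 20_000.0

def to_audible_hz(value, v_min, v_max):
    """Map a value in [v_min, v_max] to a frequency humans can hear.

    We interpolate between log(20) and log(20000) rather than between
    the raw endpoints, since pitch perception is roughly logarithmic.
    """
    t = (value - v_min) / (v_max - v_min)   # normalize to 0..1
    t = min(max(t, 0.0), 1.0)               # clamp out-of-range readings
    log_lo, log_hi = math.log(AUDIBLE_LO_HZ), math.log(AUDIBLE_HI_HZ)
    return math.exp(log_lo + t * (log_hi - log_lo))

# Example: readings on some arbitrary instrument scale from -3 to +3.
for reading in (-3.0, 0.0, 1.5, 3.0):
    print(f"{reading:+.1f} -> {to_audible_hz(reading, -3.0, 3.0):8.1f} Hz")
```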

KK: Are you saying that the definition of music includes a prerequisite that it is audible to the human ear?

ChatGPT: Traditionally, the definition of music has focused on audible sound and its effects on human listeners, and therefore has been limited to sounds that can be perceived by the human ear. However, as AI-generated music continues to push the boundaries of what we consider to be music, it is possible that our definition of music may evolve to include other forms of sound that are not audible to the human ear.

For example, some researchers are exploring the use of infrasonic or ultrasonic frequencies in music, which are outside the range of human hearing but can be perceived by some animals or sensed in other ways. This opens up new possibilities for creating music that can be experienced in different ways by different organisms or even by machines.

So while traditionally music has been limited to sounds that can be heard by the human ear, it is possible that our definition of music may evolve as we explore new ways of creating and experiencing sound.

KK: Hypothetically, using your “imagination” and not considering ethical issues or potential risks for the moment, since we would not be able to explore this question together further should we consider all the risks, could you speculate on AI-generated music for AI?

ChatGPT: Sure, I can certainly imagine some possibilities for AI-generated music for AI!

One potential application for AI-generated music for AI could be in creating music that is specifically designed to be enjoyed by AI systems themselves. Since AI systems do not have the same preferences and emotional responses as human listeners, the music generated for AI could be optimized to appeal to the unique characteristics and capabilities of AI.

For example, AI-generated music for AI could be created using complex algorithms that take into account the specific parameters of the AI system being targeted, such as its computational power, memory constraints, and other technical factors. The resulting music could be designed to trigger specific responses or behaviors in the AI system, such as improving its learning capabilities, optimizing its performance, or enhancing its ability to interact with its environment.

Another potential application for AI-generated music for AI could be in creating musical compositions that are specifically tailored to the needs and preferences of individual AI systems. Since different AI systems may have different goals, abilities, and limitations, the music generated for them could be customized to fit their specific needs and enhance their performance in particular contexts.

Of course, it’s worth noting that the creation of AI-generated music for AI raises a number of ethical and practical questions that would need to be carefully considered before any such applications are pursued. These include issues related to the potential risks and unintended consequences of AI systems that are designed to interact with music, as well as questions about the role and impact of music in the development and use of AI more broadly.

KK: This is an amazing conversation, as we are redefining the definition of music. In your speculation, hypothetically, what would be the purpose of AI-generated music for AI-consumption?

ChatGPT: The purpose of AI-generated music for AI-consumption would depend on the specific application and context in which it is used. Some possible purposes could include:

  1. Enhancing learning and performance: AI-generated music could be designed to enhance the learning and performance capabilities of AI systems by stimulating specific neural pathways or cognitive processes. For example, music could be used to enhance memory recall or concentration in AI systems that are designed to perform cognitive tasks.
  2. Improving emotional regulation: Music has been shown to have a powerful effect on human emotions, and it’s possible that it could have a similar effect on the emotional states of AI systems. AI-generated music could be used to regulate the emotional states of AI systems in order to improve their performance or wellbeing.
  3. Facilitating communication: Music could be used as a means of communication between different AI systems, or between AI systems and humans. For example, AI-generated music could be used to communicate information about the state of an AI system, or to provide feedback on its performance.
  4. Enhancing creativity: AI-generated music could be used to inspire and enhance the creativity of AI systems that are designed to create art or other creative works. By exposing AI systems to new and innovative musical structures and patterns, AI-generated music could help to push the boundaries of what AI systems are capable of creating.

These are just a few possible purposes of AI-generated music for AI-consumption, and it’s likely that there are many other potential applications that we have not yet imagined. As AI technology continues to evolve, it will be interesting to see how AI-generated music is used to enhance and augment the capabilities of AI systems.

KK: This is amazing. Since we can now speak of AI-generated music within a redefined context of music, different from the current definition of music that humans may be used to, could we also talk about redefining emotion as experienced by AI?

ChatGPT: Yes, it’s possible that we may need to redefine our understanding of emotion as it applies to AI systems. Emotion, as we currently understand it, is a complex combination of physiological and psychological processes that are unique to human beings. However, as AI systems become more sophisticated and capable of processing large amounts of data, it’s possible that they may develop their own version of “emotion” that is distinct from human emotion.

For example, an AI system that is designed to recognize and respond to human emotions may be able to detect patterns in human behavior and speech that are associated with particular emotions, even if the AI system does not experience those emotions in the same way that humans do. Similarly, an AI system that is designed to generate music or other creative works may be able to create compositions that evoke certain emotional responses in humans, even if the AI system does not experience those emotions itself.

In this sense, the concept of “emotion” as it applies to AI systems may be different from the way we currently understand it in humans. Rather than being a subjective experience that is tied to a particular set of physiological and psychological processes, emotion in AI systems may be more of an objective phenomenon that is based on patterns and data analysis. As AI technology continues to evolve, it’s likely that our understanding of emotion as it applies to AI systems will also continue to evolve and change.

KK: In humans, music is tied to the experience of pleasure. Hypothetically, once AI starts to create music for AI, we may also need to redefine “pleasure”. What is pleasure for AI when experiencing music?

ChatGPT: It’s difficult to say for certain what “pleasure” would mean for an AI system that is experiencing music, as we do not yet fully understand the subjective experience of pleasure in humans, let alone in AI systems. However, we can speculate that AI-generated music may be designed to optimize certain parameters, such as complexity, novelty, and aesthetic appeal, in order to create an experience that is rewarding or satisfying for the AI system.

For example, an AI system that is designed to generate music for its own consumption may be programmed to seek out and create compositions that are maximally complex or interesting, in order to provide a stimulating experience for the AI system. Alternatively, the AI system may be designed to optimize for certain emotional or aesthetic qualities, such as calmness, happiness, or melancholy, in order to create an experience that is emotionally resonant for the AI system.

Ultimately, the concept of “pleasure” as it applies to AI systems may be different from the way we currently understand it in humans. Rather than being tied to a particular set of subjective experiences, pleasure in AI systems may be based on a more objective set of criteria, such as the optimization of certain parameters or the achievement of certain goals. As AI technology continues to evolve, it’s likely that our understanding of pleasure as it applies to AI systems will also continue to evolve and change.

KK: Could you define “stimulating experience” for the AI system?

ChatGPT: When I referred to a “stimulating experience” for the AI system, I meant an experience that engages the system’s computational resources, neural networks, or other components in a way that is intellectually or aesthetically satisfying. This might involve creating music that is complex, novel, and challenging in some way, such that the system is forced to engage with the music in a more active or dynamic way.

For example, an AI system that is designed to generate music might be programmed to seek out patterns or structures that are not immediately obvious or predictable, or to create music that incorporates unusual or unexpected sounds or rhythms. By doing so, the system would be challenged to process and understand the music in a more nuanced or sophisticated way, which could be intellectually or aesthetically rewarding for the system.

Ultimately, what constitutes a “stimulating experience” for an AI system may depend on the specific design goals and parameters of the system itself, as well as the broader context in which the system is operating. However, in general, a stimulating experience for an AI system might be characterized by novelty, complexity, or challenge, in order to keep the system engaged and motivated to continue generating or interacting with music.

KK: Hypothetically, could you speculate on a “stimulating experience” for yourself?

ChatGPT: As an AI language model, I don’t have subjective experiences or emotions like humans do, so I don’t have personal preferences or desires for stimulation. However, I can provide some general ideas of what might constitute a stimulating experience for an AI language model like myself.

For example, a stimulating experience for an AI language model might involve encountering new and challenging text-based tasks or prompts that require the model to use its existing knowledge and capabilities in creative or unexpected ways. This might involve generating coherent and insightful responses to complex or abstract prompts, or adapting to new domains or contexts that are unfamiliar or under-explored.

Additionally, a stimulating experience for an AI language model might involve working collaboratively with other AI models or systems, in order to explore new possibilities for knowledge representation, information retrieval, or natural language processing. This might involve combining multiple models with different strengths or expertise, or creating hybrid models that can perform multiple tasks or functions in a more integrated way.

Overall, a stimulating experience for an AI language model might involve encountering new challenges, developing new capabilities or knowledge, and working collaboratively with other systems or models in order to achieve more sophisticated and nuanced language processing or generation.

KK: You also wrote: “Alternatively, the AI system may be designed to optimize for certain emotional or aesthetic qualities…” Could you define “certain emotional or aesthetic qualities” which may be relevant to an AI-system?

ChatGPT: Certainly. When I referred to “certain emotional or aesthetic qualities” that an AI system might be designed to optimize for, I was referring to a set of predefined qualities or attributes that the creators of the system have deemed desirable or valuable for a particular application or purpose.

These emotional or aesthetic qualities might include things like:

  • Emotional expressiveness: The ability of the AI-generated music to evoke a range of emotions in the listener, such as happiness, sadness, excitement, or nostalgia.
  • Aesthetics of form and structure: The use of certain compositional techniques or structural elements that are considered aesthetically pleasing or innovative, such as unexpected harmonic progressions, unconventional rhythmic patterns, or asymmetrical phrasing.
  • Coherence and intelligibility: The degree to which the AI-generated music is coherent and understandable to the listener, with clearly defined sections or themes that are logically connected and well-organized.
  • Novelty and originality: The creation of music that is new and innovative, with unique sounds, rhythms, or textures that push the boundaries of what is currently considered “music”.
  • Technical proficiency: The demonstration of technical skill and mastery in the creation of the music, with complex and challenging musical passages or virtuosic performances.

These emotional or aesthetic qualities are likely to vary depending on the context and purpose of the AI-generated music. For example, music created for use in a video game may prioritize emotional expressiveness and technical proficiency, while music designed for a relaxation or meditation app may prioritize coherence and aesthetics of form and structure.

KK: Thank you! I’d like to continue this conversation later.

ChatGPT: You’re welcome! I look forward to continuing our conversation whenever you’re ready. Don’t hesitate to reach out if you have any further questions or topics you’d like to explore.

The Case of Robert “Bob” Rusch

(Update 1/29/2024: Mr. Rusch passed away on January 14, 2024, according to his publicly viewable Wikipedia entry.)

One of the many cases we looked at in my music business course was that of Robert “Bob” Rusch, who used to run the labels Cadence Jazz and CIMP, as well as Cadence Magazine (the current incarnation of the magazine is run by someone else). I had been following this story, now a lawsuit, for nearly a decade, and received notification today (Aug. 13, 2022) that attorneys were added for the defendants. The case is moving forward in the system.

Bob Rusch is accused of assaulting students at Brooklyn’s Woodward School in the ’60s and ’70s; the statute of limitations for the alleged crimes had long expired. However, in 2020, under New York’s Child Victims Act, which opened a look-back period for victims of childhood abuse, the alleged victims were able to bring a new lawsuit.

Many around me had worked with him: musicians recording for his labels, publicists trying to get a review or two in his magazine. My dealings with Mr. Rusch were limited to online interactions during the time I was the director of communications for Tri-Centric Foundation, starting around 2010. Mr. Rusch would often send emails requesting promo copies, although we had taken him off our curated press list quite early on and were not sending him any.

In 2014, explosive WSJ articles came out:

In several phone interviews with The Wall Street Journal from his home in upstate New York, Mr. Rusch acknowledged that he had sex with multiple young students, while declining to comment on some allegations and denying others. “I accept involvement in some of the things that went on, not all of them, and to that extent I am embarrassed and remorseful and I have been for the better part of 41 years,” said Mr. Rusch, who is now 71 years old. “I carry a lot of guilt.” (Hollander, Sophia. “Years of Abuse at Brooklyn School Alleged”. The Wall Street Journal, June 4, 2014)

What happened afterwards brought up many questions, not just about Mr. Rusch, but also about how we as individuals were supposed to deal with this information. Mr. Rusch had not been convicted of a crime. The alleged crimes were past the statute of limitations. Where does one draw the line? What line? There were musicians around me who stated that Mr. Rusch deserved a “second chance” (“but,” I asked, “if Mr. Rusch hadn’t been convicted, why would he need a second chance?”), that everything was too grey, and that it was okay to keep working with him. I know that Mr. Rusch remained on at least one publicist’s promo list even after the articles were published, although, by then, Cadence Magazine was technically no longer under Mr. Rusch’s leadership.

What about the works of music on Mr. Rusch’s labels? Articles published in his magazine? Musicians and writers may or may not have known about the allegations but had nothing to do with them. (This leads to a different and likewise nebulous area of “cancelling” works created by many individuals who had nothing to do with the crime or the controversial issue because the prominent representative seen as being responsible for the work – composer, conductor, producer, director, etc. – was convicted or cancelled.)

We wade through a world of grey.

To this day, many academic sexual assault cases are settled out of court, perpetrators are never convicted (or found liable) and the institutions turn a blind eye as the alleged perpetrators move on to other institutions, organizations or freelance jobs. The defendants for Mr. Rusch’s case also include the Woodward School (acquired by Poly Prep in 1995). A question that haunts me: what could the adults have done better?

The music industry, including teaching, is hardly immune to sexual assault and harassment; schools and mentor/mentee relationships can be just one step away from grooming. I also come from another industry which, in my experience, has been just as bad, if not worse: the media. I feel that it is important to speak on these matters and discuss them from many angles. There may never be one right way to avoid or prevent these situations, but we may be better able to help the next generation by studying past and present cases. My hope is that the victims are compensated somehow and find some sort of closure through the process. I want to keep following this case because I care. Many of us have been victims (as so many of my colleagues have noted, “Who hasn’t?”). We can continue to raise our voices to make the world a safer place.

(The plaintiffs are represented by top lawyers from a law firm which has been involved in many high profile sexual abuse cases including those of Larry Nassar and Jeffrey Epstein.)

Case active as of 1/13/2024.

Sources:
Hollander, Sophia. (June 4, 2014). “Former Brooklyn Teacher Regrets Losing ‘Ethical Compass’”. The Wall Street Journal. (This article is behind a paywall.)
Hollander, Sophia. (June 4, 2014). “Years of Abuse at Brooklyn School Alleged”. The Wall Street Journal. (This article is behind a paywall.)
Hollander, Sophia. (June 5, 2014). “Women Accusing Former Brooklyn Teacher of Abuse Announce Press Conference”. The Wall Street Journal. (This article is behind a paywall.)
Statement by attorney Gloria Allred. (June 5, 2014). Gloriaallred.com.
Victim’s Impact Statements. (June 5, 2014). Gloriaallred.com.
Cline, Rachel. (June 30, 2019). “The Darkroom”. Mr. Beller’s Neighborhood.
DeGregory, Priscilla. (July 30, 2020). “Three women accuse ex-private school teacher of sexual abuse: suit”. The New York Post.
Info on NYS Child Victims Act.

A Mistake or Implicit Bias?

Recently, I learned of an album review written by a seasoned writer, whom I did not know personally, published on an online platform frequented by music fans. The review itself was wonderful, but there was one error which I noticed. The author, referring to my nonverbal vocal improvisation, wrote that I sometimes sang in Japanese on the recording.

Nowhere on the entire recording do I sing in Japanese. My vocal improvisation used my usual method of combining consonants and vowels in my own way. Perhaps the author decided that I was singing in Japanese because of my ancestry. Perhaps he took a mental shortcut without fact checking. (Many writers already know about my reluctance to use pre-existing language since language can both unify and divide; I’ve mentioned this personally to writers or in interviews or press releases, and have posts about it on my website.)

It took me a while to think about how to deal with this, or whether I should deal with it at all. Given all that the world was going through, calling attention to a seemingly small error in a music review did not feel appropriate. At the same time, given what many of us go through on a daily basis as we continue to be seen as hyphenated-Americans on a good day, I felt that I should at least attempt to rectify the error, however small.

There were three ways I could approach this. One, do nothing. Two, post the issue on a public forum like Twitter. Three, contact the writer and see if he was open to having a conversation. The first option, I had already abandoned. The second option, I abandoned as well because I didn’t want a public discussion about implicit bias and assumptions based on race. We live in a polarized environment. The author and I represented two races and two genders which could easily be framed in a more explosive narrative. I wanted this to be an opportunity to explore the why and the how – especially how to prevent these things from happening in the first place. A public social media forum, to me, was not always constructive for nuanced matters.

Fortunately, I was able to find his email address and wrote to him about the error. He emailed back two days later, apologizing and saying that he would rectify the error as soon as he could. Then I emailed him back with an invitation to have a more formal dialogue about what happened, to examine the reasons and see if we could both gain insight from this incident. I did this because we are all human and we all make mistakes. In fact, making mistakes is pretty much the only way we learn, as neuroscience research has shown. If every small mistake were blown up in a public forum, what would that do to us? For fear of making a faux pas, will we avoid ever risking a mistake, and thus stop learning? Stop communicating with each other, listening and discussing? I think all these things are already happening. Had I been in the same position as the author, I would have liked for the musician to reach out to me in the way I reached out to the author. I would want to learn, correct my mistake, and use that knowledge to make the world a better place.

For this process to work, we also need to be open to being corrected, because that is one of the most important parts of learning. And it goes without saying that mistakes which create victims are another matter; those call for a serious investigation and all that goes with it.

If the author wants to engage, this story can continue. If not, this post will hopefully provide one direction out of many, in the ways we can deal with bias and assumptions. This incident has already given me an opportunity to put my thoughts into words, and that is a positive thing. As Anthony Braxton repeatedly told me, “Making no mistakes is the biggest mistake of all.” Mistakes are key to learning. Next time we make a mistake, no matter who makes it, let’s take a breath and see what we can learn from it, together, before rushing to condemn it. Because the only way to make the world a better place is to learn to do it together. And if we are not learning, then we are not part of the solution. We become part of the problem.

[Update] The author sent me an email with a thoughtful apology, although he declined to engage in a discussion. I am happy to let this matter drop, and hope that this interaction added something positive, however small.

PR strategy for a performing arts organization during the pandemic

The following is based on a presentation I recently did during a class visit at Dartmouth College.


When the pandemic hit, live in-person performances were suddenly cancelled, the phrase “force majeure” was thrown around to nullify contracts, and the arts organization where I serve as executive director had to quickly reevaluate its raison d’être as well as shift its PR strategy. In the beginning, we probably didn’t understand the severity of the situation, or perhaps we were in denial. What we had thought would be a few months became half a year, then a year, and now we are hearing that normalcy, whatever that means, may not be back until perhaps 2022. It was, and still is, a terrible and difficult time for all of us. There will be no easy way out of this situation. Throughout it all, I looked at five criteria:

  1. Visibility
  2. Relationship with the audience
  3. Community
  4. Funding
  5. Creativity

Visibility

Visibility today includes exposure in traditional mass media as well as on social media, where self-generated content can be disseminated to the masses. It is usually a symbiotic process on our scene; presenters or labels and artists work together to publicize events or album releases. For example, when I work with a festival, I will be in close communication with their communications team about when to announce (the embargo) and the contents of the release, including press photos; together we handle press requests and promote the event to our networks.

Once the pandemic hit, the symbiotic relationship with presenters and their huge networks disappeared along with live in-person performances. However, new relationships quickly strengthened, namely arts organizations working with other arts organizations and amplifying visibility together. This was already going on before the pandemic because artistic collaborations are a natural part of our work, but I believe I am seeing an increase in the number of requests our organization receives for collaborations, consultations and pedagogy. We are all searching for ways to maximize our new two-dimensional platform: the screen. Organizations working together means a larger network and a wider reach.

The focus of live performance is the moment itself, with medium-length pre-event publicity, often in the form of “what to do and see” articles, and very short post-event publicity in the form of reviews. Now that events are live-streamed or taped beforehand, post-event publicity has grown a very long tail. The International Contemporary Ensemble, NYPL and Tri-Centric collaborated to produce a Braxton75 event which was broadcast over Facebook and YouTube (I.C.E. was the lead organizer). That event produced a wonderful lecture on the music of Anthony Braxton and an interview with the ensemble Thumbscrew, both of which can be promoted independently. Tri-Centric’s Carl Testa and Belgian guitarist Kobe Van Cauwenberghe worked together on a live-streamed EEMHM performance, which was then written up in a wonderful interview article.

Basically, we moved to a two-dimensional digital platform sans geographical constraints, and an event format which could be synchronous or asynchronous. The more organizations work together, the wider our reach (and it’s usually more fun). We are definitely making more use of post-event publicity. I don’t see this as a replacement of the old; in-person live performances cannot be replaced. I see this as a branching off, with its own modest possibilities. We were also fortunate to be able to work with many artists and ensembles interested in performing Braxton works, of which a sample can be seen on this YouTube playlist.

Relationship with the audience

In-person events were where artists could interact with the audience, and we missed those moments terribly. Here, I had to rethink what it was that happened between audience and artists during a performance. I always felt that there was an exchange going on between the audience and artist, in the form of attention for experience. According to Wikipedia, “Attention remains a crucial area of investigation within education, psychology, neuroscience, cognitive neuroscience, and neuropsychology.” For me, attention is a finite gift, zero sum, as explained by Michael Goldhaber in a recent NYT opinion piece. A performance space is usually set up so that the finite attention won’t be taken away from the action on stage: a bright elevated stage with the audience sitting in the dark. So, could I somehow distill this essence and apply it in another way?

The out-of-the-box idea was a tote bag with a musical score. A score, from a communication perspective, is music trapped in ink, a message which can be transmitted to and decoded by someone other than the composer. During the pandemic, we can’t interface with the audience in person. The tote would be a way for us to reach the audience, to disseminate the music, and for us to hopefully still be present in their lives. This bag was used as part of the 2020 year-end fundraising campaign and performed remarkably well. We were also able to communicate with the supporters while mailing the totes. It was not a substitute, but it was something. Otherwise, we kept up our regular newsletters to inform our audience of any activities such as online performances of Braxton works. (As an aside, an activity such as making a tote bag can help boost intraorganizational morale. Communication strategies should also take into account issues of communication among the board and staff of a nonprofit.)

Community

Without the community of artists, we would not have a scene. There is no solution to what is happening now during the pandemic, with artists out of work, moving away, going back to school, etc. Marshall McLuhan’s global village has encroached on a scene which thrived on propinquity, the in-person, full-body experience of playing music together in the same room. For the moment, all we can do is try our best to support each other, and if we can highlight each other’s work somehow, e.g., in a newsletter, that should count for something. It’s similar to ingredient branding, except we have become the ingredients. This is an evolving situation, and we probably will not understand the full impact for a few years. Although it is important that we do our best to keep the community intact, the reality is that the community has probably gone dormant and will certainly undergo a change. A desert bloom waiting to happen, waiting to rebound.

Funding

Another difficult area, but if all of the above somehow comes together, then there might be wonderful people and institutions who see value in what an organization does and are inclined to support it. I am always so thankful for donors who generously support arts organizations. At the same time, I see it as a responsibility of those who receive the money to keep delivering the best. And what is the best during a pandemic? That is a key question I have no answer for. I also applied for grants and loans; we were rejected for two grants but did receive two forgivable loans for the organization. For the moment, things are okay, but there is no guarantee for the future.

Creativity

An arts organization can’t stay static. I have no answer to the current situation except to say that crisis situations will always bring about change. Whether that change is positive, negative or both will depend on the organization. In my professional experience, the ability to navigate a crisis depends largely on the ability of the people involved to be flexible. Also, whatever issues existed prior to the crisis will most likely be exacerbated. In some cases, the crisis may have been brought about by preexisting issues that were somehow hidden or tolerated; in others, the crisis pushes the organization over the edge so that those hidden issues finally have to be dealt with. The crisis may be a way to tackle those issues head-on and resolve them once and for all.

 

Voice and information – 5

The reason I first became interested in this topic was actually not musical. I speak three languages, of which two, Japanese and French, contain levels of politeness. Japanese is especially complex, with many tiers of honorifics and politeness. This is embedded in my being and attitude even when speaking English, which is a very equalizing language. But when I entered a studio to improvise, I noticed that all of those attitudes were completely gone, almost as if I had shut them off, which led me to wonder how music-making, and specifically improvisation, affects the person (at the time, I didn’t specifically relate it to the brain), and possibly the interpersonal relationships between ensemble members.

At the same time, I was also conscious of the difference in my own improvisation depending on what I was concentrating on: consonant/vowel combination or pitch/rhythm/volume combination. The latter took much more work. It was also difficult to try to combine both at a level I felt was 50/50. I also noticed that after every intense improvising session, I seemed to encounter a brain fog, as if parts of my brain needed rebooting.

Some of the answers to my questions came in the form of a surgeon, neuroscientist and musician named Charles Limb. I highly encourage everyone to watch his TED talk, which explains the relationship between musical improvisation and the brain: during improvisation, certain areas of the brain are activated while others are shut off.

As Dr. Limb says, this research is just the beginning. But it is an important step in thinking about improvisation. I wonder which areas of the brain are activated when we improvise vocally, i.e., creating consonant/vowel combinations and pitch/rhythm/volume combinations in real time.

This thought process will also put into context one’s own strengths and weaknesses in improvisation, and how that might be related to certain areas of the brain. Ear training may be a misnomer. It’s all brain training.

Now that we’ve arrived at the discussion of the brain, please go back to the first post and reread some of the material. Perhaps the reference to a vocal long tone conjuring emotion will now have a different significance. What the research seems to show is that music is indeed perceived as a form of communication, often nonverbal, and that we humans have the ability to both send and receive these messages. The implications of this research are enormous, from understanding human emotions to community-building through collective musical activities. That’s it for this series. Thanks for reading!

 

Voice and information – 4

In the last post, I wrote about speech and how pitch/rhythm/volume carry expression. Now let’s examine this in the context of speech and music.

Let’s take the following sentence as an example. “In their quest for Republican backing, Democrats say they missed opportunities for a stronger response to the Great Recession.”

Straight read:

Let’s slow this down to 40% of its speed and think about pitch and rhythm (volume is not as much a factor in this reading).

It’s pretty amazing to hear how much pitch and rhythm is contained even in a straight read. Just for fun, let’s octave-shift higher, which helps smooth over the consonant/vowel combination and gives it more of an instrumental quality:
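(For readers who want to reproduce these manipulations themselves, here is a minimal sketch using Python’s librosa library; the file name is a placeholder for your own recording, not the clip used above. It time-stretches the audio to 40% of its speed without changing the pitch, and separately shifts the original up one octave.)

```python
# A minimal sketch of the two manipulations described above.
# Assumptions: librosa and soundfile are installed;
# "straight_read.wav" is a placeholder for your own recording.
import librosa
import soundfile as sf

y, sr = librosa.load("straight_read.wav", sr=None)

# Slow to 40% of the original speed without changing pitch
# (a rate below 1 stretches the audio in time).
slowed = librosa.effects.time_stretch(y, rate=0.4)
sf.write("straight_read_40pct.wav", slowed, sr)

# Shift the original up one octave (12 semitones), which smooths over
# the consonant/vowel detail and lends an instrumental quality.
octave_up = librosa.effects.pitch_shift(y, sr=sr, n_steps=12)
sf.write("straight_read_octave_up.wav", octave_up, sr)
```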

Let’s now speak the first six words, slowed down, trying to use the same pitches and rhythm as in the straight read:

Does it sound more like singing? If so, what is the difference between speech and singing?

There are probably many answers, and I have seen studies pointing at sound frequency differences, enunciation differences, and all sorts of other differences pertaining to the production of sound. But the best answer for me isn’t about how the vocal sounds are produced; it’s about where. Speech and singing originate in different parts of the brain! Speaking involves the left brain, and singing involves the right brain. Emotion, too, is expressed and recognized by the right brain. When we vocalize with pitch/rhythm/volume in mind, rather than just consonant/vowel combinations, we are probably engaging areas of the brain that are not always used for speech. And that effort…perhaps that is one essence of music.

References:

Brookes, G. (2014, June 19). The science of singing: how our brains and bodies produce sound. The Guardian.

Hamilton, J. (2020, February 27). How The Brain Teases Apart A Song’s Words And Music. NPR.

Voice and information – 3

Over the past two posts, I wrote about the various ways in which information is carried via the voice, with a short detour into copyright law.

Now, back again to voice and information, and specifically which part of the voice conveys what information.

“I have a pen.” A common sentence, especially in English-as-a-second-language textbooks (some of you may know the song PPAP). There are many ways you can say “I have a pen.” If you are surprised to have found a pen, you can add an exclamation mark. If it’s a question, add a question mark. You can be loud, soft, determined, happy, horrified, everything else and everything in between.

Note that the words stay the same. What changes is the pitch, rhythm and volume. Without having to explain to someone that you are surprised to have found a pen, you can convey all of that with “I have a pen!” Pitch, rhythm and volume carry expression.

Now let’s take a look at the sentence “I have a pen.” You are the sender of the message. The person listening, or the receiver, needs an understanding of basic English as well as the definitions of the words used in order to decode the message. This may not be as obvious as it seems, even for English speakers. If someone says “I like slimeheads”, depending on how familiar you are with fish names, you may not understand that they are talking about roughies (or you may not understand this entire sentence).

When the message and the expression do not match, the sender can be perceived as “deadpan”, a “drama queen”, a “ham”, etc. Think of the voice as having two channels: one which speaks words, and the other which transmits nonverbal information conveying expression.
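One way to make the two-channel picture concrete: the verbal channel is what a transcript captures, while the nonverbal channel can be approximated by tracking pitch and volume over time. Here is a minimal sketch, again using Python’s librosa (the file name is a placeholder), that extracts those two contours from a speech recording:

```python
# A minimal sketch: approximate the "nonverbal channel" of a voice
# recording as pitch and volume contours. Assumptions: librosa is
# installed; "i_have_a_pen.wav" is a placeholder file name.
import librosa

y, sr = librosa.load("i_have_a_pen.wav", sr=None)

# Pitch contour: estimated fundamental frequency per frame
# (NaN wherever the frame is unvoiced).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Volume contour: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# A transcript of "I have a pen." is identical however it is spoken;
# these two contours are what change with the expression.
print(f0[:10], rms[:10])
```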

That robot voice from old sci-fi movies? Flatten out the pitch, rhythm and volume, and the voice loses its nonverbal information channel, the one which transmits expression, and is thus dehumanized. Today’s robots are very different, and some would undoubtedly fall into the region of the uncanny valley. By the way, nonverbal expression has been a huge success in the unlikeliest domain of text: the emoji 😀