James Earl Jones, known for providing the legendary voice of Darth Vader, is stepping away from the iconic character, but we’ll still get to hear those recognizable tones in future Star Wars media thanks to the magic of AI.
Vanity Fair reports that 91-year-old Jones has given his blessing to artificial recreations of the Darth Vader voice, a role he has occupied for 45 years. The technique was used in the Obi-Wan Kenobi series, in which the Ukrainian startup Respeecher recreated Jones’ voice to sound as it did in 1977. The company first teamed up with Lucasfilm on The Book of Boba Fett to recreate the voice of a young Luke Skywalker.
Respeecher uses “archival recordings and a proprietary A.I. algorithm to create new dialogue with the voice of performers from a long time ago.” Lucasfilm supervising sound editor Matthew Wood said the company also has an “elusive human touch.” Respeecher says it trains text-to-speech machine learning models on recordings of actors who can no longer play their parts.
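Conceptually, this kind of voice recreation learns a mapping from one set of acoustic features to another, trained on paired recordings. The toy sketch below is purely illustrative and hypothetical (it bears no relation to Respeecher’s proprietary system): one-dimensional “features” and a least-squares linear fit stand in for spectral frames and a deep neural model.

```python
# Toy illustration of voice conversion as a learned feature mapping.
# Real systems map high-dimensional spectral frames through neural nets;
# here a 1-D least-squares linear fit stands in for that model.

def fit_linear(xs, ys):
    """Least-squares fit y ≈ a*x + b over paired training frames."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def convert(frames, a, b):
    """Apply the learned mapping to new source-speaker frames."""
    return [a * x + b for x in frames]

# Synthetic paired "archival" frames: target voice = 2*source + 1.
source = [0.0, 1.0, 2.0, 3.0]
target = [1.0, 3.0, 5.0, 7.0]
a, b = fit_linear(source, target)
print(convert([4.0, 5.0], a, b))  # new frames rendered in the "target" voice
```

The real systems differ in nearly every detail, but the shape is the same: fit a transform on paired data from the archival voice, then apply it to newly recorded performances.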
Respeecher has been carrying out the work while its home country fights back against the Russian invasion. “Certainly my main concern was their well-being,” Wood added. “There are always alternatives that we could pursue that wouldn’t be as good as what they would give us. We never wanted to put them in any kind of additional danger to stay in the office to do something.”
Wood says the last time he recorded Jones’ voice was for a brief line of dialogue in 2019’s The Rise of Skywalker, at which point Jones said he was looking to wind down the character. Wood showed Respeecher’s work to Jones, who signed off on using his archival recordings for Vader’s voice.
Wood credits Jones for helping guide Vader’s recreation in Obi-Wan Kenobi, calling him a “benevolent godfather.”
Using AI in this way means we’ll be hearing the perfectly recreated voice of Jones emanating from Darth Vader’s helmet in current and future Star Wars projects. But while this is an example of artificial intelligence being put to good use, the technology will always raise concerns about its ability to put fake words in people’s mouths, à la deepfakes.
Another example of voice synthesis and alteration can be found in the recent Top Gun: Maverick. Director Joseph Kosinski said the voice of Iceman was digitally altered for clarity due to actor Val Kilmer’s medical condition.