“Fear not! For I shall lead ye to riches beyond your wildest dreams!”
Inside a tiny recording booth in downtown Los Angeles, John Peck waited for a verdict from the voice-over engineer: Did the line sound pirate-y enough?
Try again, the engineer suggested, perhaps with more throaty emphasis on “wildest.” It might make the animated character Mr. Peck was voicing — a buccaneer with a peg leg — a tiny bit funnier.
Mr. Peck, 33, cleared his throat and gave it a whirl, prompting chuckles from the production team. A couple of clicks on a laptop later, and an artificial intelligence tool synced Mr. Peck’s voice with a cartoon pirate’s mouth movements. The character was destined for an episode of “StEvEn & Parker,” a YouTube series about rapscallion brothers that attracts 30 million unique viewers weekly.
Just a few years ago, lip-syncing a minute of animation could take up to four hours. An animator would listen to an audio track and laboriously adjust character mouths frame by frame. But Mr. Peck’s one-minute scene took 15 minutes for the A.I. tool to sync, including time spent by an artist to refine a few spots by hand.
Toonstar, the start-up behind “StEvEn & Parker,” uses A.I. throughout the production process — from honing story lines to generating imagery to dubbing dialogue for overseas audiences. “By leaning into the technology, we can make full episodes 80 percent faster and 90 percent cheaper than industry norms,” said John Attanasio, a Toonstar founder.
“This is how you build the next generation of hot intellectual property,” Mr. Attanasio added excitedly.