Testing my new (still virtual) replacement mouth set on the original audio... The sync work was done in Dragonframe, and the video was then built from the still images using VirtualDub (a nice little free program).
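VirtualDub does the stills-to-video assembly through its GUI; for reference, the same step can be scripted with ffmpeg. This is just an illustrative sketch, not the workflow used here: the numbered frame names and the 24 fps rate are assumptions, and the first command only synthesizes a dummy test-pattern sequence to stand in for real captured frames.

```shell
# Synthesize a one-second numbered still sequence as a stand-in for captured
# frames (real frames would come from the animation capture; names are an assumption).
ffmpeg -y -f lavfi -i testsrc=duration=1:rate=24 frame_%04d.png

# Assemble the numbered stills into a video at 24 fps -- the same job
# VirtualDub performs through its GUI.
ffmpeg -y -framerate 24 -i frame_%04d.png -pix_fmt yuv420p out.mp4
```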
Looks great! Is there some degree of automation in the lip syncing, or do you need to choose the mouth for each point on your own?
It all starts with DragonFrame's lip-sync tool. You import your audio file, then place the picture of the right mouth on a timeline. You can then play it back and fine-tune.
At animation time, you rely on the generated "dope sheet", which tells you exactly which mouth piece goes on which shot number.
There are specialized software tools, like Magpie, that automate the process further with a voice-recognition engine that picks the right mouth for you. But I don't know how reliable the algorithm is...
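The timeline-to-dope-sheet step described above can be sketched in a few lines. This is a hypothetical illustration, not Dragonframe's actual data format: mouth keyframes are given as (start time in seconds, mouth name) pairs, and the sketch works out which mouth picture to hold on each frame at a given frame rate.

```python
# Hypothetical sketch of the timeline -> dope sheet step (not Dragonframe's format).
# Each keyframe says: from this time onward, hold this mouth shape.

def dope_sheet(keyframes, duration, fps=24):
    """Return a list mapping each frame number (1-based) to a mouth name.

    keyframes: list of (start_time_in_seconds, mouth_name), sorted by time.
    duration:  clip length in seconds.
    """
    total_frames = round(duration * fps)
    sheet = []
    idx = 0
    for frame in range(1, total_frames + 1):
        t = (frame - 1) / fps
        # Advance to the latest keyframe that has started by time t.
        while idx + 1 < len(keyframes) and keyframes[idx + 1][0] <= t:
            idx += 1
        sheet.append((frame, keyframes[idx][1]))
    return sheet

# Example: a closed "M" mouth, then "EH", then "OH", over half a second.
keys = [(0.0, "M"), (0.10, "EH"), (0.30, "OH")]
for frame, mouth in dope_sheet(keys, duration=0.5, fps=24):
    print(frame, mouth)
```

At animation time you would read the printed sheet top to bottom, swapping in the named mouth piece before capturing each frame.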