In my previous tutorial, "A Video to Animation Workflow for Character Animator," I took a video clip and, through ManyCam, created animation and lip sync in Character Animator. In this tutorial, I run only the video through ManyCam to create the animation, extract the audio from the video clip using Adobe Media Encoder, import that audio into Character Animator, and use the "compute lip sync from scene audio" feature to create the lip sync.
I've been playing with the preview version of Adobe Character Animator for a month or so. Character Animator is designed to create animation from your live movements in front of a webcam and lip sync from your live voice into a microphone. But it can be hard to get everything just right when you have to perform it live. I found out on the Adobe forum (https://forums.adobe.com/message/8400368#8400368) that you can also create animation and lip sync in Character Animator from a video clip. Then you can either render the animation directly to video, for example using Adobe Media Encoder, or bring the animation into After Effects and continue working on it there.
You can download the "puppet" here: