Hi gang, this is Ozan, and this is my first blog post. I used to work as an animator in film. One of the things I’m trying to do here at HF is make live avatars look really amazing, as close as we can get to what we see in animated films today.
This is a big challenge: we have to do everything in a fraction of a second, without an animator (like me!) being able to ‘post-process’ the motion capture results. So I’ve been working on the ‘rigging’: how a live 3D camera and a motion capture package like Faceshift can ‘puppeteer’ an avatar. Because the live data is less accurate, we have to be clever about things like how we move the mouth, capturing the phonemes that make up speech in a more simplified way. We’re making some pretty good progress, though, as the clip above shows. This is a live, unedited session with Emily and me (playing guitar).
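If you’re curious what that simplified phoneme step can look like, here’s a rough sketch in code. To be clear, this isn’t our actual rigging code; the viseme names, blendshape channels, and smoothing constant are assumptions for illustration. The idea is simply to collapse the detected phonemes into a handful of mouth shapes and smooth between them, so noisy live data doesn’t make the mouth jitter.

```python
# Illustrative sketch only: viseme names, blendshape channels, and the
# smoothing constant are assumptions, not High Fidelity's actual rig.

# A small set of visemes (visual phoneme groups) mapped to blendshape weights.
# Real speech has dozens of phonemes; with noisy live data we collapse them
# into a few mouth shapes that still read clearly as speech.
VISEME_TO_BLENDSHAPES = {
    "AA": {"JawOpen": 0.7, "MouthWide": 0.3},    # "ah" as in "car"
    "EE": {"JawOpen": 0.2, "MouthWide": 0.8},    # "ee" as in "see"
    "OO": {"JawOpen": 0.4, "MouthPucker": 0.8},  # "oo" as in "you"
    "MM": {"JawOpen": 0.0, "LipsClosed": 1.0},   # "m", "b", "p"
    "REST": {},                                   # silence / neutral
}

SMOOTHING = 0.35  # fraction of the gap closed each frame, hides capture jitter


def update_mouth(current_weights, detected_viseme, alpha=SMOOTHING):
    """Blend the avatar's blendshape weights toward the viseme detected this frame."""
    target = VISEME_TO_BLENDSHAPES.get(detected_viseme, {})
    new_weights = {}
    for channel in set(current_weights) | set(target):
        cur = current_weights.get(channel, 0.0)
        tgt = target.get(channel, 0.0)
        # Move only part of the way toward the target each frame, so a single
        # misdetected frame doesn't make the mouth pop open or snap shut.
        new_weights[channel] = cur + alpha * (tgt - cur)
    return new_weights


# Example: a few frames of live capture detecting "AA", then "MM", then rest.
weights = {}
for viseme in ["AA", "AA", "AA", "MM", "MM", "REST"]:
    weights = update_mouth(weights, viseme)
    print(viseme, {k: round(v, 2) for k, v in weights.items()})
```

The smoothing is the important part for live work: it trades a little lip-sync crispness for stability, which matters when there’s no animator cleaning up the data afterward.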
Lots still to do, but this is fun work, and we’re clearly able to capture some of the emotion of a real performance. Look for more posts on how things are going, along with bigger and better performance samples like this, coming soon.
In the meantime, if you want to see another test, check out this clip of me lip syncing to Bohemian Rhapsody. I was paying special attention to the “A”s and the “M”s so the avatar would have clear mouth shapes. Not perfect, but getting there. By the way, I’m thinking about which track to do next. Let me know if you have any suggestions.