Online Audio Spaces Update: New Features for Virtual Event Organizers
It’s been about 8 weeks since we launched High Fidelity’s new audio spaces in beta. We really appreciate all the support, particularly if you have ...
A key part of our research direction at High Fidelity is breathing life into the avatar: capturing movement, facial, and gaze data from as many channels and devices as we can, and streaming that data, along with very high quality audio, at low latency and high frame rates (FPS).
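To give a rough sense of why streaming sensor data at a high FPS is cheap compared to the audio channel it rides alongside, here is a minimal sketch of one possible packet layout. The field choices and the 120 Hz rate are assumptions for illustration, not High Fidelity's actual wire format.

```python
# A hypothetical sensor packet: timestamp plus raw gyro and accel readings.
import struct
import time

# One sample: microsecond timestamp, 3 gyro rates (rad/s), 3 accel axes (g).
SAMPLE_FORMAT = "<Q3f3f"              # 8 + 12 + 12 = 32 bytes per sample
SAMPLE_SIZE = struct.calcsize(SAMPLE_FORMAT)

def pack_sample(gyro, accel):
    """Serialize one IMU reading with its capture timestamp."""
    timestamp_us = int(time.time() * 1_000_000)
    return struct.pack(SAMPLE_FORMAT, timestamp_us, *gyro, *accel)

# At 120 samples/s this stream costs 120 * 32 = 3,840 bytes/s, a small
# fraction of a typical voice stream, which is why pushing the sensor
# rate well above camera frame rates adds almost no bandwidth cost.
packet = pack_sample(gyro=(0.01, -0.02, 0.0), accel=(0.0, 0.0, 1.0))
assert len(packet) == SAMPLE_SIZE
```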
This video clip is an early example of our work in that direction. We built a simple test device consisting of gyros and accelerometers (the same parts found in cell phones, the Nike FuelBand, Google Glass, and many other devices) that sends its data with very low latency to a computer rendering an avatar. As you can see, when the sensors are captured at a high FPS and synced tightly to the audio, the effect is impressive.
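A common way to turn raw gyro and accelerometer readings into a head orientation is a complementary filter: integrate the fast-but-drifting gyro rates, and use the gravity direction from the accelerometer to correct the drift. The sketch below assumes that approach and a fixed sample rate; the gain and the pitch/roll conventions are illustrative, not taken from the demo hardware.

```python
# A minimal complementary filter fusing gyro and accelerometer samples.
import math

class ComplementaryFilter:
    """Integrate fast gyro rates; use gravity (accel) to correct drift."""

    def __init__(self, gain=0.98):
        self.gain = gain          # weight on the integrated gyro estimate
        self.pitch = 0.0          # radians
        self.roll = 0.0

    def update(self, gyro, accel, dt):
        # Short term: integrate angular rates from the gyro (high FPS, drifts).
        pitch_gyro = self.pitch + gyro[0] * dt
        roll_gyro = self.roll + gyro[1] * dt
        # Long term: gravity direction from the accelerometer (noisy, no drift).
        pitch_acc = math.atan2(accel[1], math.sqrt(accel[0]**2 + accel[2]**2))
        roll_acc = math.atan2(-accel[0], accel[2])
        # Blend: trust the gyro over short intervals, the accel over long ones.
        self.pitch = self.gain * pitch_gyro + (1 - self.gain) * pitch_acc
        self.roll = self.gain * roll_gyro + (1 - self.gain) * roll_acc
        return self.pitch, self.roll

# A 120 Hz loop: each sample updates the avatar's head pose immediately,
# which is what keeps the rendered motion tightly synced to the audio.
f = ComplementaryFilter()
pitch, roll = f.update(gyro=(0.05, 0.0, 0.0), accel=(0.0, 0.0, 1.0), dt=1 / 120)
```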
The difference between the sort of face tracking you can do with a camera (low FPS) and what you can get from raw gyros (60+ FPS) is pretty clear. Our goal is to build a platform in which data from many different devices can be captured and streamed simultaneously to create a very compelling avatar, one that communicates with a higher degree of bandwidth and emotional impact than anything we have seen before.
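One plausible way to combine those two kinds of channels is to let the high-rate gyro fill in motion between low-rate camera frames, while the camera's absolute measurement periodically cancels the gyro's accumulated drift. The names, rates, and correction weight below are assumptions for illustration, not a description of the actual platform.

```python
# A hedged sketch: ~15 FPS camera track anchors absolute head yaw,
# while 120 FPS gyro samples fill in the motion between camera frames.
class HeadYawEstimator:
    def __init__(self):
        self.yaw = 0.0            # radians, current avatar-facing estimate

    def on_gyro(self, yaw_rate, dt):
        # High-rate path: integrate between camera frames so the avatar
        # never waits ~66 ms for the next camera update.
        self.yaw += yaw_rate * dt
        return self.yaw

    def on_camera(self, yaw_absolute, weight=0.3):
        # Low-rate path: pull the integrated estimate toward the camera's
        # absolute measurement to cancel accumulated gyro drift.
        self.yaw += weight * (yaw_absolute - self.yaw)
        return self.yaw

est = HeadYawEstimator()
for _ in range(8):                       # eight 120 Hz gyro ticks (~66 ms)
    est.on_gyro(yaw_rate=0.5, dt=1 / 120)
est.on_camera(yaw_absolute=0.05)         # one 15 FPS camera correction
```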