Online Audio Spaces Update: New Features for Virtual Event Organizers
It’s been about 8 weeks since we launched High Fidelity’s new audio spaces in beta. We really appreciate all the support.
On February 10th, we scaled a virtual reality server to 110 concurrent users. Here is how we did it:
High Fidelity’s architecture allows a single domain to be distributed across a group of networked server components, each of which manages different aspects of the domain environment and the functions needed to create a sense of presence for users.
The load on each component varies based on different factors. Sometimes the work for a component is driven by the amount of content in a domain or the scripts running on it. The load on other components scales directly with the number of concurrent users present and active in a space (e.g., the audio and avatar mixers’ work increases as audio from more user sources has to be mixed).
Of course, the load on all components increases with overall concurrency. Our work has focused on identifying the combinations of factors that cause aspects of the domain experience to stall or fail. As we’ve increased concurrency, we’ve been able to identify bottlenecks in each component and remedy them, either through additional optimization or by increasing the capacity of the hardware running that component. Separating each aspect of the VR experience onto its own server component has made this optimization work significantly easier.
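To make that scaling intuition concrete, here is a small back-of-the-envelope model, a sketch of our own rather than High Fidelity code. The base costs and exponents are illustrative assumptions; the one grounded idea is that a mixer doing per-listener work for every other user grows roughly quadratically with concurrency, while content-driven load is largely flat in user count.

```python
# Back-of-the-envelope model of per-component load versus concurrency.
# The base costs and exponents below are illustrative assumptions, not
# measurements from High Fidelity's servers.

def relative_load(users: int, base: float, exponent: float) -> float:
    """Relative load for a component serving `users` concurrent users."""
    return base * users ** exponent

components = {
    # content-driven: load mostly independent of user count
    "entity server": (100.0, 0.0),
    # concurrency-driven: each user's state/mix fans out to every other
    # user, so work grows roughly with users^2
    "avatar mixer": (0.05, 2.0),
    "audio mixer": (0.02, 2.0),
}

for users in (10, 50, 110):
    print(f"--- {users} concurrent users ---")
    for name, (base, exponent) in components.items():
        print(f"  {name:13s} relative load: {relative_load(users, base, exponent):8.1f}")
```

Under these assumptions the mixers, nearly idle at 10 users, dominate total load well before 110 users, which matches the observation that concurrency-driven components are where scaling work concentrates.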
During our test on February 10th, we ran a setup of eight AWS-hosted instances, each running Ubuntu 16.04.
Our own setup uses Ansible to make deployment to multiple systems quick and flexible, although this is only one of many ways the servers could be deployed.
Deployment for each instance involves distributing the appropriate binary: the domain-server binary on the domain server, and the assignment-client binary on the others. The domain server is then configured and the model content is added to the entity server. Finally, each additional server process is set up with the information it needs to connect to the domain server.
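As a simplified illustration of those steps, here is a short Python sketch that pushes the binaries out over SSH and starts them, roughly what an Ansible playbook would automate. The host names, install paths, and especially the `--domain-address` flag are hypothetical placeholders, not the binaries’ documented command-line interface.

```python
# Simplified deployment sketch: copy each binary to its instance and start it.
# Host names, paths, and the --domain-address flag are hypothetical
# placeholders, not the binaries' actual documented options.
import subprocess

DOMAIN_HOST = "domain.example.com"                               # runs domain-server
WORKER_HOSTS = [f"worker{i}.example.com" for i in range(1, 8)]   # the other seven instances

def push_and_start(host: str, binary: str, args: str = "") -> None:
    """Copy a binary to a host via scp, then launch it detached via ssh."""
    subprocess.run(["scp", binary, f"{host}:/opt/hifi/{binary}"], check=True)
    subprocess.run(
        ["ssh", host, f"nohup /opt/hifi/{binary} {args} > /var/log/{binary}.log 2>&1 &"],
        check=True,
    )

# One domain server; every other instance runs an assignment-client
# configured to connect back to it.
push_and_start(DOMAIN_HOST, "domain-server")
for host in WORKER_HOSTS:
    push_and_start(host, "assignment-client", args=f"--domain-address {DOMAIN_HOST}")
```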
We host our own configuration in AWS us-west-2 (Oregon), which makes the total setup above cost about $1.60/hour, plus another $6.40/month for storage (assuming a gp2 EBS volume of 8 GB for each instance).
Depending on the number of visitors, bandwidth ends up making up the bulk of the event cost. For example, for our 1-hour-15-minute event with 110 participants on February 10th, with the number of concurrent visitors ranging between 90 and 110, the total bandwidth cost came to $11.34 for ~126 GB of outgoing traffic. Assuming you stop your domain when you are not hosting an event, this means you could host 100-person events at a rate of $10.70 per hour, plus the $6.40/month storage fee.
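To make the arithmetic explicit, here is a short sketch reproducing the figures above. The per-GB rates are assumptions based on publicly listed us-west-2 pricing at the time (gp2 EBS at $0.10/GB-month, internet egress at $0.09/GB); the $10.70/hour figure quoted above is this calculation rounded up slightly.

```python
# Reproducing the cost figures quoted above. The gp2 and egress rates are
# assumptions based on public us-west-2 pricing at the time; the instance
# total ($1.60/hour for all eight instances) is taken directly from the post.
INSTANCES    = 8
COMPUTE_RATE = 1.60        # $/hour for all eight instances combined
EBS_GB       = 8           # gp2 volume size per instance
EBS_RATE     = 0.10        # assumed $/GB-month for gp2 in us-west-2
EGRESS_RATE  = 0.09        # assumed $/GB for traffic out to the internet

storage_monthly = INSTANCES * EBS_GB * EBS_RATE
print(f"storage:   ${storage_monthly:.2f}/month")      # -> $6.40/month

event_hours = 1.25                                     # 1 hour 15 minutes
egress_gb   = 126
bandwidth   = egress_gb * EGRESS_RATE
compute     = COMPUTE_RATE * event_hours
print(f"bandwidth: ${bandwidth:.2f}")                  # -> $11.34
print(f"per-hour event cost: ${(bandwidth + compute) / event_hours:.2f}")  # -> ~$10.67
```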
We’ll continue to share details on how to set up deployments designed to operate at this scale. Stay tuned for more on how you can run a large-scale VR server of your own.