
Hey everyone!  Sam here.  I’ve spent this summer interning at High Fidelity with all these awesome developers and here’s some of the stuff I’ve done:

To make controllers easier to add and maintain, I moved multiple devices, like the Hydras, PlayStation controllers, and Xbox 360 controllers, over to the new UserInputMapper, which provides a set of default actions that work without any extra setup. These actions include things like YAW_LEFT, LATERAL_FORWARD, VERTICAL_UP, and so on. Adding a new type of device is a lot easier now, and you can look at any of the provided managers for examples.

I exposed all of the input mappings and action states to JavaScript. This means you can change the mappings from a script (which will eventually be used to create a key-binding screen) and check the current state of each action. There are also a few actions that don’t do anything by default, which you can use freely in scripts: ACTION1, ACTION2, and SHIFT. For examples, check out hmdControls.js, the new, slightly modified squeezeHands.js, and toybox.js (which may not have been checked in yet and is still in development). As a proof of concept, I created mouseLook.js, which lets you look around without right-clicking. It is definitely still under development but might eventually become the default camera mode.
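Here’s a rough idea of what reading an action state from a script might look like. Treat it strictly as a sketch: the Controller.findAction and Controller.getActionValue names below are assumptions for illustration, so check hmdControls.js for the real calls.

    // Sketch only: findAction/getActionValue are assumed names for illustration --
    // see hmdControls.js for the actual Controller calls.
    var yawLeft = Controller.findAction("YAW_LEFT");     // look up an action by name (assumed)

    Script.update.connect(function (deltaTime) {
        var value = Controller.getActionValue(yawLeft);  // read its current state (assumed)
        if (value > 0.5) {
            print("YAW_LEFT is active this frame: " + value);
        }
    });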

I began work on a system for menu and world interaction with hand controllers via the LEFT_HAND_CLICK and RIGHT_HAND_CLICK actions. These trigger left clicks; combine either of them with the SHIFT action to trigger a right click. Together they make it possible to click menus and interact with the world (like with edit.js) using any hand controller!

This new system is controller agnostic, which means it won’t depend on any specific type of device.  For example, toybox.js and squeezeHands.js are now usable with any hand controllers without any extra work!

I also messed around with some camera features that users had requested. I added zooming and a Center Player In View mode, and condensed some menu options into the View->Camera Modes menu, which lets you switch modes. Independent mode untethers the camera from the avatar so a script can move it on its own. While in independent mode, your avatar won’t move (unless you move it with a script).


I set up and worked extensively with our new Vive! I got the hand controllers working and did extensive testing of the (recently merged!) “plugins” branch, as well as helping out on some related sprints with both teams. As part of the work on the plugins branch, I created the concepts of “InputDevices” and “InputPlugins.” InputDevices communicate with the UserInputMapper, and InputPlugins are toggleable managers of those devices. You can find your available InputPlugins under Avatar->Input Modes. If you don’t have any of the optional SDKs installed, you’ll just see the keyboard. If you’re trying to use Hydras or gamepads, make sure the corresponding option is listed there and is checked.
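To make that split concrete, here’s a purely conceptual sketch of the relationship. The real InputDevice and InputPlugin classes live in the C++ application; the names and methods below are illustrative only, not actual API.

    // Conceptual sketch only -- the real classes are C++; names and methods are illustrative.

    // An InputDevice turns raw hardware state into UserInputMapper actions.
    function ExampleDevice(mapper) {
        this.update = function (deltaTime) {
            // Read the hardware here, then hand standardized actions to the mapper,
            // e.g. something like mapper.deltaActionState("LATERAL_FORWARD", stickY);
        };
    }

    // An InputPlugin is the toggleable manager you see under Avatar->Input Modes:
    // it owns a device and only polls it while the plugin is activated (checked).
    function ExamplePlugin(mapper) {
        var device = new ExampleDevice(mapper);
        var active = false;
        this.activate = function () { active = true; };
        this.deactivate = function () { active = false; };
        this.pluginUpdate = function (deltaTime) {
            if (active) {
                device.update(deltaTime);
            }
        };
    }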

It’s been an absolutely amazing summer getting to work on this stuff.  Unfortunately I have to head back to school soon, but I still have a long To Do list of things to work on, including support for the Perception Neuron suit and the Oculus Touch, a better and customizable system for keyboard shortcuts that will warn you if you try to add a duplicate mapping from a script, continuing to develop mouseLook.js and toybox.js, and tons of other stuff.  I’ll be offline for the next couple weeks but don’t hesitate to message me with any questions or suggestions about any of this (or anything at all)!


Hello! I’m Bridget, and I’ve been interning at High Fidelity this summer, working to build some JavaScript content in HF. As a math and computer science major, I had the opportunity to hone my programming skill set, learning from Hifi’s superb team of software engineers and design-minded innovators.

So here’s the culmination of my work this summer: a virtual orbital physics simulation that provides an immersive, interactive look at our solar system.


The goal: demonstrate what can be built with the JavaScript API, while experimenting with the potential for building educational content in High Fidelity. This project was excellent exposure to coding in JS (as well as some C++), and a very cool glimpse into the many capabilities of building for a virtual reality platform.

The simulation uses real gravitational physics to model planets orbiting a sun. Each planet is positioned at a set radius from the sun, with every radius scaled accurately relative to Earth’s. I fix reference values for the orbital period, the large and small body masses, and the gravitational constant (a simulation value, not the real universal gravitational constant). Then, using GM/r^2, I compute the acceleration needed to keep each planet in a stable orbit. Following the equations of orbital motion, the script updates each planet’s motion every frame to trace its orbital trajectory around the sun. The paths are drawn using line entities (a relatively new system feature).
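For a rough idea of that update step, here is a simplified, self-contained 2D sketch (not the actual script – the real version uses High Fidelity’s Vec3 and Entities APIs and scaled reference units):

    // Simplified 2D sketch of the orbit update described above.
    var GM = 2500;                    // reference value for G * (mass of the sun)
    var r0 = 10;                      // orbital radius of a test planet

    // Start the planet on a circular orbit: circular-orbit speed is sqrt(GM / r).
    var planet = { x: r0, y: 0, vx: 0, vy: Math.sqrt(GM / r0) };

    function update(dt) {
        var r = Math.sqrt(planet.x * planet.x + planet.y * planet.y);
        var a = GM / (r * r);         // acceleration magnitude, GM / r^2
        // Point the acceleration from the planet toward the sun at the origin.
        var ax = -a * planet.x / r;
        var ay = -a * planet.y / r;
        // Integrate velocity, then position (a simple Euler step each frame).
        planet.vx += ax * dt;
        planet.vy += ay * dt;
        planet.x += planet.vx * dt;
        planet.y += planet.vy * dt;
    }

    // Stepping with a small dt keeps the planet on a (nearly) stable circular orbit.
    for (var i = 0; i < 1000; i++) { update(0.01); }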

While the simulation uses a somewhat simplified model (the planets’ orbits are treated as circular rather than elliptical), it can be extended to account for additional effects, such as the planets’ gravitational pull on one another (the n-body problem).
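As a sketch of what that extension might look like, the acceleration step above would change from “the sun pulls each planet” to a sum over every other body:

    // Hypothetical n-body variant: every body pulls on every other body.
    function accelerations(bodies, G) {
        return bodies.map(function (body, i) {
            var ax = 0, ay = 0;
            bodies.forEach(function (other, j) {
                if (i === j) { return; }
                var dx = other.x - body.x;
                var dy = other.y - body.y;
                var r = Math.sqrt(dx * dx + dy * dy);
                var a = G * other.mass / (r * r);   // pull from this one body
                ax += a * dx / r;
                ay += a * dy / r;
            });
            return { ax: ax, ay: ay };
        });
    }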

Similar to cellscience, the virtual cell classroom that Ryan and co. have built, the solar system simulation offers an interactive model that educators could use, in a high school physics classroom for example, to demonstrate gravity and orbital physics.


Another fun aspect of the project was implementing UI to create possibilities for exploration and experimentation within the simulation. A panel with icons lets you:

  • Pause the simulation and show labels above each planet revealing its name and current speed
  • Zoom in on each planet
  • Play a “Satellite Game” (think Lunar Lander, but with a satellite around the earth), where you attempt to fling a satellite into stable orbit
  • Adjust gravity and/or the “reference” period, and see what happens!

Here’s the script.


High Fidelity received many wonderful submissions for its STEM VR grant challenge and has selected two recipients. They are, in no particular order:

TCaRs – An awesome racing game where you interact with JavaScript to customize your car’s handling, create unique power-ups, and optimize performance by editing the program code through the Blockly API.


Planet Drop – A networked multiplayer game that leverages the benefits of social VR through “cooperative asymmetrical gaming”: the players share the virtual environment, but each one has specific information related to his or her chosen STEM specialty, provided by an individualized HUD. Unlike a traditional “information asymmetry” game such as a card game, however, the goal is not to use unshared information to one player’s advantage, but to share it as quickly and effectively as possible so the team can solve challenges and advance through a story arc of increasingly impressive accomplishments.

[Image: Planet Drop, by FTL Labs]

Thank you to all who submitted proposals. We look forward to playing and sharing these games when they are delivered, and to doing something like this again in the future.


A fun first today: the HTC Vive and Oculus Rift live in the same virtual space – touching hands. And a third person on desktop, for added greatness. The Razer Hydra is doing the hand tracking for the Oculus and desktop users, hopefully soon to be replaced by a Touch. And some pictures of our new office and crypt-like Vive cave. Next up will be getting those virtual bodies moving correctly to follow the controllers.


“The Metaverse is a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the internet.” – Wikipedia

 

The Quick Answer: If all the desktop computers with broadband access in the world today were linked together to create a single virtual world, that place would already be the size of Earth, able to provide concurrent access to every living internet user, and doubling in size every two years. What we need to realize this goal is the software to create the network (that’s what we are working on) and lots of HMDs to provide easy, immersive access.

The Longer Explanation:

First, how many PCs: Gartner estimates PC sales of about 400M units per year from 2011 to 2014, or roughly 1.6 billion machines. Of these modern, fast machines, let us conservatively assume that 30% are in use and connected to broadband internet, for a total of about 500 million devices. Of course there are many additional tablet and mobile devices that are connected to power and Wi-Fi/broadband (often at night), but for this analysis we will consider only the PCs.

Available bandwidth: A number of surveys of broadband internet speeds, like this one, suggest an average global download speed of 20-30 Mbps and an upload speed of about 10 Mbps. The upload speed is the more important number, since this analysis assumes we are using these PCs both as clients and as servers of the virtual space.

There are about 3 billion internet users. This means each of these PCs would need to provide streaming data for about 6 people at once. At that 10 Mbps uplink speed, each connected user could receive about 1.5 Mbps of data from the virtual world – more than enough to provide a deeply immersive experience, and about the same amount of streaming data as watching an HD movie.
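A quick back-of-the-envelope check of those figures:

    // Back-of-the-envelope check of the figures above.
    var users = 3e9;                              // ~3 billion internet users
    var pcs = 500e6;                              // ~500 million broadband-connected PCs
    var uplinkMbps = 10;                          // average upload speed per PC

    var usersPerPC = users / pcs;                 // = 6
    var mbpsPerUser = uplinkMbps / usersPerPC;    // ~1.7 Mbps, i.e. roughly 1.5 Mbps each
    console.log(usersPerPC + " users per PC, ~" + mbpsPerUser.toFixed(1) + " Mbps each");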

So what about the size of the space we’d all be together inside? The Earth covers 197 million square miles, of which about 70% is water, leaving roughly 60 million square miles of land. With 500 million PCs to handle that space, each one would be responsible for about 1/10th of a square mile. To put that in perspective, 1/10th of a square mile is about 64 acres, or roughly 10 city blocks. If you’ve played and explored a modern open-world 3D video game, you know how much area can be covered at a very high level of detail. Even though a virtual world has somewhat higher detail and storage requirements than a video game, each PC would be called on to store, serve, and simulate a much smaller area than a typical video game covers. So, for example, we would easily have enough capacity to cover the entire land area of Earth with a single unimaginably large city; or, conversely, the open forests we could explore could be detailed down to the bugs and individual blades of grass.
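The same quick check for the land-area figures:

    // Land area per PC, from the numbers above.
    var landSqMi = 197e6 * 0.30;        // ~60 million square miles of land
    var pcCount = 500e6;                // ~500 million PCs
    var sqMiPerPC = landSqMi / pcCount; // ~0.12 sq mi, rounded above to ~1/10 sq mi
    var acresPerPC = sqMiPerPC * 640;   // 640 acres per square mile => about 76 acres
    console.log(sqMiPerPC.toFixed(2) + " sq mi per PC, ~" + Math.round(acresPerPC) + " acres");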

So, just like Earth, this Metaverse would already be so vast that we could not begin to hope to explore it all. Even with the power of teleportation (as compared to plane flight), you couldn’t come close to seeing it all in a lifetime: if you lived to be 80 years old, you’d get less than one second to examine each of those city blocks. And, unlike Earth, our Metaverse is expanding: both internet bandwidth per user and device compute speed are conservatively doubling every two years (yes, bandwidth grows a little more slowly, but only a little). Ten doublings over 20 years is a factor of about 1000, meaning that 20 years from now the Metaverse will be 1000 times the size of Earth. So the fictional Oasis from Ready Player One, with thousands of detailed virtual planets floating in digital space, is something we can actually build – right now.