STRANGVR THINGS

OTHER TEAM MEMBERS

Brendan Cecere | Sally Xia | Cookie Nguyen

 

Roles: UX Designer, 3D Artist

Skills Demonstrated: Storyboarding, Documentation Creation, Experience Design, Unity level editing and asset creation, 3D Modeling

 

A VIRTUAL REALITY EXPERIENCE SET IN THE WORLD OF THE NETFLIX TV SHOW "STRANGER THINGS"

While working in Janet Murray's Project Studio class, we were given the chance to explore the world of Virtual Reality (VR) by reading about it, talking with experts in the field (such as Unity and Oculus employees and VR researchers), and gathering in teams to build VR prototypes. My team decided to extend the experience of the TV show "Stranger Things" by letting the user go through an event that happens before the show's timeline starts and is not shown on screen: Eleven's escape from the laboratory.


FIRST STEPS

RESEARCH AND PREPARATION

We started by learning how VR experiences are made, from both a design standpoint and a technical standpoint. We learned about design principles for VR by reading about VR design (both in published papers and online articles) and by experiencing VR through Google Cardboard, Gear VR, Oculus, and Vive.

As a team, we also started to formulate what our experience was going to be. We quickly settled on making an experience for "Stranger Things", as we thought the storyline would lend itself to interesting mechanics to explore in VR (because of the telekinesis, the parallel universes, and the unease of being in the same environment as its monster).

After many discussions and a long time thinking about it, we came up with an experience with a narrative set in the show's world and specific tasks for the user to accomplish.

 

STORYBOARD


 

PREPARATORY DOCUMENTATION

Using screenshots from the TV show, I created mood boards showing the general colors, shapes, and details found in the show's laboratory, to help me recreate the environment in 3D.

I also made a map of the environment to know where to position the elements.


FIRST PROTOTYPE

UNITY PROTOTYPE FOR THE VIVE

We learned how to create prototypes for the Vive through local workshops organized by GA Tech, online classes and resources, and openly available example scenes for Unity VR developers. I was mostly responsible for blocking out the scene and environment and for the level design.

 

FIRST BLOCK-OUT

 

FIRST PROTOTYPE

In our first prototype, we focused on the main mechanic that we wanted users to experience: the telekinesis. We created the level and placed very blocky versions of our final objects in the environment. We also tested some motion illusions (mainly the room extender, which gives the impression of walking further than you really are) that we wanted to use to solve problems around locomotion in VR, the main one being: how do we extend the relatively small playspace of the Vive so that the experience doesn't feel limited to a small room?
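For reference, here is a minimal sketch of how a room-extender style translation gain can be set up in Unity, assuming the script sits on the camera rig's root and the tracked HMD camera is assigned to head. The component name, field names, and gain value are illustrative placeholders, not our actual implementation.

using UnityEngine;

// Minimal sketch of a "room extender" translation gain (illustrative, not the
// project's actual code). Physical head movement on the ground plane is
// amplified by `gain`, so the user appears to walk further than they really do.
public class RoomExtender : MonoBehaviour
{
    public Transform head;          // tracked HMD camera under this rig (assumed setup)
    [Range(1f, 3f)]
    public float gain = 1.5f;       // 1 = no amplification

    private Vector3 lastHeadPosition;

    void Start()
    {
        lastHeadPosition = head.position;
    }

    void LateUpdate()
    {
        // Horizontal head displacement since the last frame.
        Vector3 delta = head.position - lastHeadPosition;
        delta.y = 0f;

        // Shift the whole rig by the extra (gain - 1) portion, so total
        // virtual movement = physical movement * gain.
        transform.position += delta * (gain - 1f);

        // Record the post-offset head position so only new physical
        // movement is amplified next frame.
        lastHeadPosition = head.position;
    }
}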

 
 

 

TESTING AND FEEDBACK

We had the opportunity to test our prototype with a variety of users at GA Tech's GVU Demo Day, when the college opens its doors for anyone to come discover current research projects.

Seeing users go through our experience taught us a lot about what was and wasn't working in our current prototype.

Users greatly enjoyed the telekinesis and the general qualities that Room-Scale VR brings (body movement equals digital movement), but we also learned about many issues with designing in VR, including how to give directions, some motion sickness caused by our locomotion solution, and how to attract the user's attention to important elements.

We came up with solutions to some of these issues as we refined the prototype's elements further for a more advanced prototype.


SECOND PROTOTYPE

REFINING THE UNITY PROTOTYPE FOR THE VIVE

In order to iterate on the prototype, we used the information we had gathered while making and testing the rough version. Our largest problem centered on locomotion: the room extender was difficult to explain and understand, and the ability to toggle it on and off confused people. We decided to make its use mandatory so we could plan the level size more easily, and we got rid of the hallway part of the experience. We redesigned the interactions to all happen within the laboratory, with the elevator arriving directly inside it. We also simplified the interactions to fit the scope of what we could accomplish in the given time.

 

NEW MAP

 

I made a new level map in which the walkable area is smaller than the full level, so the room feels larger than the space the user can physically move in. With telekinesis, users can still interact with objects that are out of their reach.
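Since the telekinesis is what lets users reach beyond the walkable area, here is a minimal sketch of how such a pull could be scripted in Unity, assuming the script lives on a tracked controller transform. The button binding, range, and speed values are illustrative placeholders rather than the project's actual code.

using UnityEngine;

// Minimal sketch of a telekinesis pull (illustrative, not the project's actual
// code). While the button is held, a ray from the controller picks a Rigidbody,
// even one outside arm's reach, and pulls it toward the hand.
public class TelekinesisHand : MonoBehaviour
{
    public KeyCode pullButton = KeyCode.Space;  // stand-in for a Vive trigger binding
    public float maxRange = 10f;                // objects can be grabbed beyond arm's reach
    public float pullSpeed = 3f;

    private Rigidbody target;

    void Update()
    {
        if (Input.GetKeyDown(pullButton))
        {
            // Pick whatever Rigidbody the controller is pointing at.
            if (Physics.Raycast(transform.position, transform.forward,
                                out RaycastHit hit, maxRange))
            {
                target = hit.rigidbody;
            }
        }

        if (Input.GetKeyUp(pullButton))
        {
            target = null;  // release: the object falls back under gravity
        }
    }

    void FixedUpdate()
    {
        if (target == null) return;

        // Drive the object toward the hand with velocity so it still
        // collides with the environment on the way.
        Vector3 toHand = transform.position - target.position;
        target.velocity = toHand.normalized *
            Mathf.Min(pullSpeed, toHand.magnitude / Time.fixedDeltaTime);
    }
}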

 

NEW STORYBOARD

 

FINAL PROTOTYPE

In our final prototype, we managed to implement all the interactions in the revised storyboard. I created more refined 3D assets for the prototype, which worked well at conveying the general aesthetic. My teammates found sound effects for most interactions, and although we were not able to find an animated monster like the Demogorgon of "Stranger Things", we found a good placeholder. We also found assets for the various particle effects in the Unity Asset Store. This gave us a solid prototype for testing how well we could create an experience that extends the world of "Stranger Things". Brendan was responsible for most of the programming integration, and I still got a good introduction to scripting in Unity and C#.

Unity Screenshots


 
 

FINAL TESTING AND FEEDBACK

We were again able to test our prototype on a fairly large pool of participants (about 20) at Janet Murray's research project presentations. We learned a lot in that last round of testing, and we now have a good idea of our prototype's successes and failures.

The largest challenges we encountered when designing for Room-Scale VR so far revolved around:

  • Locomotion: what are the best ways to move the user around, given that their movement is limited by the physical playspace of their setup?
  • Focus: how do we get the user to pay attention to the element that matters? Rules of composition depend heavily on the picture frame, so visual design for VR is a whole new beast: users can look wherever they want, and we need to direct their gaze without a frame.
  • Learning: how do we teach the user what they need to do? The whole medium is new, users aren't used to any of it, and they can't look at instructions once they're in the virtual world.

All these problems were interesting and challenging. It was fascinating to explore the different ways people have attempted to solve them and to try our own hand at existing and new methods. The room extender wasn't a very successful solution for increasing the playspace: although many users didn't mind it, some got wildly disoriented and others were bothered by it. Lighting was our most successful method for attracting the user's attention; we also tried shiny, bright objects, but they were not as consistently effective. As for explaining the controls, we tried to make them as simple as possible, but discovering them was definitely more difficult than expected. Many users had a hard time memorizing the buttons, and it's impossible to explain further once they are in the virtual world. It was a good lesson in the importance of making all necessary information available within the virtual world.
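As an illustration of the lighting cue, here is a minimal sketch of a pulsing attention light in Unity, assuming a Light component sits on or near the object we want the user to notice. The component name and values are placeholders, not our actual implementation.

using UnityEngine;

// Minimal sketch of a pulsing light used to draw the user's gaze toward an
// important object (illustrative, not the project's actual code).
[RequireComponent(typeof(Light))]
public class AttentionLight : MonoBehaviour
{
    public float baseIntensity = 1f;
    public float pulseAmount = 1.5f;   // how much brighter the peak is
    public float pulseSpeed = 2f;      // pulses per second (roughly)

    private Light attachedLight;

    void Awake()
    {
        attachedLight = GetComponent<Light>();
    }

    void Update()
    {
        // Smoothly oscillate between base and base + pulseAmount.
        float t = (Mathf.Sin(Time.time * pulseSpeed * Mathf.PI * 2f) + 1f) * 0.5f;
        attachedLight.intensity = baseIntensity + pulseAmount * t;
    }
}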

I greatly enjoyed designing an experience for VR, and I expect to continue exploring similar projects in the future.
