Optimizing desktop VR for mobile the hard way

By Landon Butterworth

It's been about two months since we launched Yana Virtual Relaxation on the Samsung Gear VR. But before that, Yana was originally made for more powerful desktop VR setups and had to go through an extensive optimization process to work on mobile devices. Of course, at the outset we didn't realize just how extensive that process would be.

We chose to port over this experience not only because people seemed to like the desktop version, but also because we were looking ahead to our Lost Cities VR project (now in full swing!) and thought that optimizing a desktop VR experience for mobile would teach us everything we needed to know about the mobile development process.

I wanted to write a brief overview to give other developers and users some insight into what it took to produce this type of experience, learn from our mistakes, and share some tricks that we came up with along the way.

A little bit about the old Yana project

Yana was the first virtual reality project The Campfire Union created; it actually predates the company, having been started by Les's former company and completed under Campfire. It was one of the first virtual relaxation apps on Oculus Share and currently has over 3,000 downloads and some really positive reviews.

All of this is well and good, but it doesn't change the fact that it was an old project that no one had looked at for about a year: it had been created through trial and error, it had very little documentation, and almost no attention had been given to efficiency. I dusted off the old Plastic SCM repository, pulled down a copy, and started poking around to see if this was even possible. To break it down for you:

Old project specs:

  • 3.7 million tris
  • 1700 draw calls
  • Dynamic reflection
  • Transparency on everything
  • Day/night cycle

New maximum specs:

  • 100k tris
  • 50 draw calls
  • No reflection
  • Minimal transparency
  • Only 1 dynamic light allowed

Ignorance is bliss

Like anyone else attempting to do this for the first time, I began researching and found several videos and blog posts that other developers had made after completing their own porting process. Almost everyone said that you shouldn't port your game over; you should start from scratch instead.

Naturally, I took the other developers' advice with a grain of salt and treated the recommended maximum specs as rough guidelines rather than hard rules; otherwise the project would have been dead right then and there. I thought the other developers were exaggerating and that we would be able to push it. I thought we would be able to keep Unity's standard water and the transparency if we just cut back enough in other places.

The beach pre-optimization

The beach post-optimization

Jumping in

After backing up the old project and creating a new branch in the repository, I converted the Unity project from desktop to Android and started removing everything that wasn't essential. I switched over to the Oculus Mobile SDK and imported the recommended project settings. I replaced all the shaders that I could with their mobile counterparts and did a build to see where we were at.
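
I don't have our exact pass written down anymore, but the shader swap was essentially a bulk find-and-replace across every material in the scene. A rough editor-script sketch of that kind of pass could look like this (the shader-name pairs are just examples, not our actual list):

// MobileShaderSwap.cs (illustrative sketch, not our actual tooling)
using UnityEngine;
using UnityEditor;
using System.Collections.Generic;

public class MobileShaderSwap
{
    [MenuItem("Tools/Swap To Mobile Shaders")]
    static void Swap()
    {
        // Example desktop-to-mobile shader pairs (assumed, not our real list).
        Dictionary<string, string> replacements = new Dictionary<string, string> {
            { "Diffuse", "Mobile/Diffuse" },
            { "Bumped Diffuse", "Mobile/Bumped Diffuse" },
            { "Particles/Additive", "Mobile/Particles/Additive" }
        };

        foreach (Renderer r in Object.FindObjectsOfType<Renderer>())
        {
            foreach (Material m in r.sharedMaterials)
            {
                if (m == null) continue;

                string mobileName;
                if (replacements.TryGetValue(m.shader.name, out mobileName))
                {
                    Shader mobileShader = Shader.Find(mobileName);
                    if (mobileShader != null)
                        m.shader = mobileShader;
                }
            }
        }
    }
}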

This was the first time there were any red flags. At one point the app chugged so hard it was like someone hit the 'F' key on the old OVRPlayerController: total freeze frame. There really wasn't any simulator sickness caused by the experience, because it was less an immersive 360-degree animation and more a series of still images.

How could we take something that was rendering around 5 frames per second to 60 frames per second?

The teardown

I started exploring the scene more in depth and cutting things that I had originally thought we would be able to get away with keeping (like the seagulls). I turned refraction off on the water so it was no longer transparent, but kept reflection on.

I took out more than half of the trees on the beach and realized their shadows were still present when I ran it in the editor. It turned out the scene was using projectors to create the shadows that move across the beach. I removed the projectors and turned cast and receive shadows off on every game object in the scene.
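
Doing that by hand for every object would have been painful; a one-shot script along these lines (a sketch, not our actual code) does the job:

// DisableAllShadows.cs (illustrative sketch)
using UnityEngine;
using UnityEngine.Rendering;

public class DisableAllShadows : MonoBehaviour
{
    void Start()
    {
        // Turn off shadow casting and receiving on every renderer in the scene.
        foreach (Renderer r in FindObjectsOfType<Renderer>())
        {
            r.shadowCastingMode = ShadowCastingMode.Off; // on older Unity versions: r.castShadows = false
            r.receiveShadows = false;
        }
    }
}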

I removed all lighting except the one light that follows the rotation of the sun and changes from day to night. I did another build to check where we were at... there's the sim sickness I was looking for! We weren't using ADB yet to get an accurate frames-per-second (FPS) reading, but my guess is it was around 20-25 FPS and still spiking at points.

Time to set up a proper test environment

Although we could see progress from the changes we were making, we didn't have a reliable metric to compare with previous tests and we had no clue how much further we had to go.

I started researching the best way to remotely access the stats on the phone and came across a few possible solutions. The one that seemed to be the most popular was ADB wireless by Henry. We had some problems with it initially, and sometimes there are still problems connecting to the mobile device, but for the most part it's a great tool.

After setting that up, we could just type adb logcat -s "UnityPlugin" in the command prompt to get an accurate readout of the current FPS. We also used the wireless ADB connection to attach the Unity profiler.
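
For anyone setting this up themselves, the wireless part generally boils down to two standard ADB commands: adb tcpip 5555 while the phone is still plugged in over USB, then adb connect <phone-ip>:5555 once it's unplugged (the exact workflow varies a little depending on the tool you use).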

The whole scene pre-optimization

The whole scene post-optimization

Diving into the scripts

At this point we had trimmed all the excess stuff that was easy to take care of and it was time to start evaluating the efficiency of our scripts.

One thing that was particularly interesting was a NodRecognizer script they had used for user interaction in the desktop version. When I started profiling I saw that Physics was taking up a huge chunk of the CPU usage, but only sporadically, which didn't make sense to me because I thought I had removed all the Rigidbodies.

It turns out that when a nod was recognized, a bunch of raycasts were fired immediately afterward to determine where the user was looking. Thankfully, because we were switching the menu navigation to "tap to continue", I just scrapped this script, and we saw a major improvement because of it.

When we turned on the deep profiler we saw some scripts that should only run every once in a while taking a small percentage of CPU every frame. It turned out to be empty Update() methods within those scripts. Even though they don't execute any code, they are still called every frame, and each call takes a little bit of time. This made me go through every script we had and delete any empty Update() and Start() methods.
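
To make the problem concrete, here's an illustrative example (WaveTrigger is a made-up name): the empty Update() does nothing useful, but Unity still calls it every frame on every instance, so stubs like it got deleted outright.

// WaveTrigger.cs (illustrative example only)
using UnityEngine;

public class WaveTrigger : MonoBehaviour
{
    void Update()
    {
        // empty - delete me
    }
}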

Trimming the excess

I realized that because the Gear VR doesn't have positional tracking we could use that to our advantage. Anything that you can't see from the position of the camera doesn't need to be included in the scene at all. Since the water was no longer showing refraction, anything under the level of the water could also be erased. I stripped out the water walls, cut the bottom, back, and sides off of the sand, and completely cut the back out of the stone archway as well as the part that was below the water level.

I did some of the trimming manually, but I also used a tool called Simplygon to reduce the number of tris and for some of the more complicated processes. This got us all the way up to 42 frames per second! We were well on our way, but on the other hand we were running out of things we could get rid of.

Necessary sacrifices

Early on I had cut out the seagulls and while we had originally intended on bringing back one or two it was becoming obvious that it wasn't going to happen. Somewhere along the path of optimizing for mobile you have to come to terms with the fact that you can't have everything you want. Sacrifices must be made!

For us those sacrifices really began adding up. We had to ditch the dark colouring around the water on the sand because it was using a diffuse-detail shader and the resolution on the detail texture was killing our texture memory.

We also had to ditch more than half of the trees as well as the animations on the trees behind the user, and switch them from skinned mesh renderers to mesh renderers.

Even after these sacrifices we still weren't fast enough. We racked our brains on what else we could cut, combing through all the shaders and scripts. We reduced all the textures in the scene to the lowest resolution we could get away with and created texture atlases wherever we could. No matter how hard we tried the highest FPS we hit at this point was 51.

We finally accepted the fact that the water had to go.

It just wasn't going to work on mobile. Sure, the mobile version of the water ran fine, but without the reflection the experience dropped below the minimum level of quality we were willing to accept.

Creating water from wine(ing about the limitations of the Gear VR)

We knew we needed to mimic the effect the water had as closely as possible and we knew that dynamic reflection of any kind would kill our frame rate.

We started by exploring the Unity Asset Store for a solution. Most of the options we found still used dynamic reflection. Then we tried using a camera in the same position as the camera rig but reflected over the (x, z) plane to render what would be the reflection into a render texture. Not only did that not work very well, it also destroyed our texture memory, so every frame was basically spent paging (kinda, sorta, not really) for textures, which ate up time and killed our frame rate.
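
For the curious, that abandoned attempt was roughly shaped like the sketch below (the texture size and the _ReflectionTex property name are assumptions, not what our water actually used):

// ReflectionCamera.cs (sketch of the approach we abandoned)
using UnityEngine;

public class ReflectionCamera : MonoBehaviour
{
    public Camera mainCamera;        // the rig's camera
    public Material waterMaterial;   // water material that samples the reflection
    public float waterLevel = 0f;    // y position of the water surface

    Camera reflectionCam;
    RenderTexture reflectionTexture;

    void Start()
    {
        // Render the mirrored view into a texture the water material can sample.
        reflectionTexture = new RenderTexture(512, 512, 16);
        reflectionCam = new GameObject("ReflectionCam").AddComponent<Camera>();
        reflectionCam.targetTexture = reflectionTexture;
        waterMaterial.SetTexture("_ReflectionTex", reflectionTexture); // assumed property name
    }

    void LateUpdate()
    {
        // Mirror the main camera across the water plane and flip its pitch.
        Vector3 p = mainCamera.transform.position;
        reflectionCam.transform.position = new Vector3(p.x, 2f * waterLevel - p.y, p.z);
        Vector3 e = mainCamera.transform.eulerAngles;
        reflectionCam.transform.eulerAngles = new Vector3(-e.x, e.y, e.z);
        reflectionCam.fieldOfView = mainCamera.fieldOfView;
    }
}

Every frame that second camera was filling an entire render texture, which is where the time and texture memory went.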

We were out of ideas, and I spent a couple of days poking around other elements in the scene, trying to cut and slash to no avail. Then I came up with the idea of just duplicating all the objects in the scene that would be reflected and flipping them upside down: basically faking the reflection by having another physical object in the scene instead!

Right after I had that eureka moment I did a quick Google search and realized that's what they used to do on old gaming systems and I might not be as much of a genius as I thought.

So there it was, a tried-and-true method of faking reflections that worked on systems with way tighter restrictions than the Gear VR. I duplicated and flipped all the objects that should appear in the reflection (the arch, the sky sphere, and the tree to the left) and built it. 60 FRAMES PER SECOND!!!
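
If it helps to picture the setup, a minimal sketch of the mirroring trick might look something like this (the object list, water height, and tint are placeholder values, and we ended up handling the tint differently, as described further down):

// FakeReflections.cs (sketch of the mirrored-duplicate trick)
using UnityEngine;

public class FakeReflections : MonoBehaviour
{
    public Transform[] objectsToReflect;   // arch, sky sphere, tree, etc.
    public float waterLevel = 0f;          // y position of the water surface
    public Color reflectionTint = new Color(0.6f, 0.7f, 0.8f, 1f);

    void Start()
    {
        foreach (Transform source in objectsToReflect)
        {
            // Duplicate the object and mirror it across the water plane.
            Transform copy = (Transform)Instantiate(source, source.position, source.rotation);
            Vector3 p = source.position;
            copy.position = new Vector3(p.x, 2f * waterLevel - p.y, p.z);
            Vector3 s = source.localScale;
            copy.localScale = new Vector3(s.x, -s.y, s.z);

            // Tint the duplicate darker so it reads as water, not a mirror.
            foreach (Renderer r in copy.GetComponentsInChildren<Renderer>())
                r.material.color *= reflectionTint;
        }
    }
}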

The sun sets on the celebration

I was so proud, we had finally done it! It was running at a solid 60 FPS with a reflection that looked convincing. A bit glassy, but convincing.

I was thinking about how we could improve the look of the reflection by blending a darker colour into the shader of the reflected objects when I noticed the sun was setting and the sky had started to turn pink. As I watched the sky switch from day to night and the horizon glow disappear, the sun was still visible well below the horizon line.

Then I realized I hadn't duplicated the sun... or the moon... or the shooting stars... or the catamaran... or even the horizon glow.

Not only that, how do you make a sun set when there is nothing for it to set behind? The shooting stars and horizon glow are easy: just duplicate and scale their game objects by -1 and voila, but how do you fake a sunset?

I wrestled with the idea of deforming the meshes of the sun and moon objects as they approached the horizon line but I decided that would be too complicated and probably wouldn't look right.

Into the weeds

I came up with the idea of creating a mask that would sit in front of the sun or moon and show whatever was behind it, effectively masking just that object. I could simply stagger the original animation and the reflected animation and manually change the order of the render queue.

Using a very simple custom shader, we created the appearance of a sun setting over water. The mask just below the water line would hide the "real" sun as it sets downward, and the mask just above the water line would hide the reflected sun as it rises upward. And we could simply repeat the same thing later on for the rising moon as well as the sunrise.

Here's the actual code involved:

// DepthMask.shader
// Writes to the depth buffer but leaves the frame buffer untouched
// (Blend Zero One keeps whatever colour is already there), so anything
// behind the mask that renders later in the queue is hidden.
Shader "Custom/DepthMask" {
    SubShader {
        Pass {
            Blend Zero One
        }
    }
}

// setRenderQueue.cs
using UnityEngine;

public class setRenderQueue : MonoBehaviour {
    // Queue position to assign to this object's material (set in the inspector).
    // Must be above 2000 (opaque geometry) but below the object being masked.
    public int renderNumber;

    void Start () {
        // Force the material into the chosen render queue so the mask draws
        // before the sun or moon it is hiding.
        GetComponent<Renderer>().material.renderQueue = renderNumber;
    }
}

The Unity Mask Inspector - Note that the mask's renderNumber must be above 2000 so that it's rendered with the rest of the geometry, but below the renderNumber of the object being masked.

Almost there!

We were finally running at 60 FPS for the entire experience except for three sections: the sunset, the sunrise, and the shooting stars.

The problem with the shooting stars was that I had simply duplicated the animation and flipped it, meaning there were twice as many particles as before. I went through and reduced the number of particles on each of the shooting stars and removed one or two entirely, because there were three stars in the sky simultaneously (which of course meant six with the reflection).

We were also still using transparency for the horizon glow during the sunset, and now that the masks were blending Zero One in their shader on top of it, that meant a lot of pixels affected by transparency.

The horizon glow on the old project was a band that stretched around the entire horizon, so I reduced it to occupy just enough space surrounding the sun to make it convincing.

One last push

At this point, everything that was being reflected was a separate object from the original and used a separate shader, because we needed to tint the reflected objects a little to make them look like water. That meant that every object we were reflecting (the arch, the sky sphere, the tree to the right, and the rock to the left) added an additional draw call.

Ryan Hill just happened to stop by to talk about becoming a member of the Campfire team and he recommended combining the meshes and using vertex colours to darken the reflected part. That would mean each object and its reflection would be one mesh with one material and therefore one draw call.

As I was figuring out where to start and looking through similar scripts online, I received an email with a script attached and the subject line "here, use this". I threw it into Unity, read the README.txt, combined all the reflected objects with their original objects, and voila! A solid 60 frames per second through the entire experience!
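
The script we were sent isn't mine to share, but the core idea can be sketched roughly like this (the references and tint are placeholders, and I'm assuming CombineMeshes keeps the vertices in order):

// CombineWithReflection.cs (sketch of the idea, not the script we were sent)
using UnityEngine;

public class CombineWithReflection : MonoBehaviour
{
    public MeshFilter original;      // e.g. the arch
    public MeshFilter reflection;    // its flipped duplicate
    public Color reflectionTint = new Color(0.6f, 0.7f, 0.8f, 1f);

    void Start()
    {
        // Merge the object and its reflection into a single mesh.
        CombineInstance[] combine = new CombineInstance[2];
        combine[0].mesh = original.sharedMesh;
        combine[0].transform = original.transform.localToWorldMatrix;
        combine[1].mesh = reflection.sharedMesh;
        combine[1].transform = reflection.transform.localToWorldMatrix;

        Mesh combined = new Mesh();
        combined.CombineMeshes(combine);

        // Darken only the vertices that came from the reflected half
        // (assumes the combined vertices keep their original order).
        Color[] colours = new Color[combined.vertexCount];
        int originalCount = original.sharedMesh.vertexCount;
        for (int i = 0; i < colours.Length; i++)
            colours[i] = (i < originalCount) ? Color.white : reflectionTint;
        combined.colors = colours;

        GetComponent<MeshFilter>().mesh = combined;

        // One mesh, one material, one draw call; the material's shader needs
        // to multiply in the vertex colour for the tint to show.
        original.gameObject.SetActive(false);
        reflection.gameObject.SetActive(false);
    }
}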

Yana's performance stats pre-optimization

Yana's performance stats post-optimization

Getting Yana submission-ready

We had hit our 60 FPS mark; now all that was left to do was get the experience ready for submission. This actually took a considerable amount of time too, because it was the first app we had ever submitted.

We added a stereoscopic logo screen with a little animation, a stereoscopic loading screen, and gentle fade-ins and fade-outs on scene changes, attached the standard back button functionality and the universal menu, changed the Android manifest to the required settings, and created a single-screen version so we could take the necessary screenshots.

My part was done! I handed the project off to our Chief Creative Officer Rachael Hosein to create the description and branding for the experience.

In the end

Yana has been doing better than we ever expected. We have received messages from people who are using it for their daily naps, people using it for pain and anxiety therapy, and other people who say it's one of the experiences they show others as an introduction to VR.

Yana has also been in the top ten downloads on WEARVR for four weeks in a row now and we could not be happier! It was a grueling process porting it from desktop to the Gear VR and Google Cardboard, but the response from people has made it all worthwhile.

- by Landon Butterworth

 