A few minutes after Google CEO Sundar Pichai finished speaking to the crowd at a Google developer conference one sunny June morning in 2014, Jon Wiley made his way over to a newly unveiled booth inside San Francisco’s Moscone conference center, where the event was being held. He’d been struck by something near the end of Pichai’s remarks. Pichai had mentioned, almost in passing, that everyone in the audience would be getting something called Cardboard. It had something to do with virtual reality.
This was the first Wiley had heard of the project, or anything having to do with Google and VR. You might think he’d get some early notice, since he’s one of the company’s most celebrated and long-tenured designers, perhaps the person most responsible for the Overall Google Aesthetic. But you’d be wrong. So he waded through the scrum at the Cardboard booth, grabbed a headset, and tried a few demos. He doesn’t remember now why he had a friend take a video, but he’s glad that he did. After a few minutes, he walked away and kept doing what he was actually at I/O to do: extol the virtues of Google’s new material design guidelines. But Wiley couldn’t stop thinking about Cardboard.
For the several years leading up to this, Wiley’s job had been designing the company’s search products, like voice search and the iconic Omnibox. One thing he always loved was how well search hid its underlying complexity: A simple, straightforward interface on top of massively complex computer science and engineering. Something about Cardboard struck him the same way. “When you look at Cardboard, it’s very unassuming,” he says. “I mean, it’s literally made of cardboard!” But having played around with VR during its last almost-moment in the ’90s, he knew how complex the system really was. His mind raced with the design possibilities.
A few months after the I/O conference, a fledgling group of VR junkies within Google approached Wiley and asked if he wanted to work with them on the side. VR tech was improving fast, but no one was really thinking through how a virtual world should actually look and feel. Maybe it could be his 20-percent project, that infamous Google-sanctioned side hustle. Wiley spent a weekend thinking it over and said no. He wanted to do it full time. In early 2015 he became Google’s director of immersive design and set about figuring out how billions of people will use VR both now and in the decades to come.
After years at the helm of one of Google’s most mature products, Wiley is now working on one of its newest and most experimental. There are few agreed-upon rules or norms for VR, at least other than “don’t make people puke.” And everything that works on paper looks, well, flat in this immersive new place. So Wiley has hired architects and sculptors alongside his engineers. Together, the team has worked to rid themselves of everything they know in order to figure out the true way forward. If they get it right, Google’s VR designers are in a position to help usher in the most immersive, natural, human version of computing ever conceived.
The User as Interface
The VR user experience begins well before any screens turn on. It really begins the moment you pick up a headset. Right now, headsets are universally weird: bulky face-huggers that at best look ridiculous and at worst give you back problems. Google couldn’t fix all that, at least not yet, but it did take the experience in a decidedly more comfortable direction. The Daydream View headset is made of fabric, not plastic, and it’s much lighter and simpler than an HTC Vive or an Oculus Rift or even a Samsung Gear VR. As a result, the View is far less powerful and customizable than some headsets, but it feels less like an alien object and more like something you might actually want to wear.
Google’s emphasis on comfort over technology comes from a principle the Daydream team developed early on, as they began to comprehend the massive responsibility that comes with asking users to put on a headset that literally tricks their brain into believing something fake is real. It sounds almost obvious: You have to make the user comfortable. “You’re going to be in this environment,” Wiley says. “You’re totally surrounded by it, and also you’re wearing this thing on your head. What do you wear? And where do you dream of being?”
Jessica Brillhart, Google’s principal VR filmmaker, has taken to calling people “visitors” rather than “viewers,” as a way of reminding herself that in VR, people aren’t watching what you’ve created. They’re living it. Which changes things. Over time the Daydream team became less and less interested in space stations and Holodecks and more excited by the idea of starting their VR exploration in a much more realistic place.
Making VR that feels like the real world might sound unambitious, or even pointless. (If you want the woods, here’s a novel idea: Go outdoors, doofus.) But Google’s designers maintain that it’s the right way to start. Because VR is so immersive that your brain truly believes what you’re seeing is real, the penalty for getting things wrong is radically higher. And wrong doesn’t just mean laggy or vomit-inducing. It can mean emotionally uncomfortable, confusing, even scary. One of Google’s own VR experiences used to include a door in the background, to make it feel like part of a bigger world, but people kept anxiously turning around, worried something was coming through the door. An early version of the Photos app set everything in an attic, as if you were combing through family records. It felt … wrong, somehow. “Everyone imagines the attic differently,” says Joshua To, the team’s design manager. “And you end up feeling like you’re encroaching on someone else’s space. It doesn’t feel like your space.” Everything that’s happening in VR is brand new, so the least Google (or anyone) can do is make parts of it a little more comfortable.
When you first put on a View headset and turn on the software, you’re dropped into Daydream Home, which is really more like Daydream Woods Picnic or Daydream Outdoor Nap Spot. It’s a pristine, quiet forest scene, with roaming critters and a babbling brook. This virtual world is a lot like the real one, except the weather’s always perfect and nobody ever gets bug bites. When you pick a movie in Google Play, you do so in an airy room filled with mini-monuments to classic films: Harry Potter’s robe hangs on the wall, next to Forrest Gump’s box of chocolates and a T-rex snarling at an overturned Jeep. Once you pick a flick, you’re taken to a mossy clearing, where the film plays on a screen hung taut between trees. Eventually, To says, Google plans to give users more control over what things look like, but for now most environments feel like you’re getting private access to some eccentric billionaire’s play places. It’s wonderful.
This isn’t just Google’s approach, either. When a team at Hulu began experimenting with virtual reality, they latched on to the idea that it might be cool if you watched a TV show in the same place the show happens. Their first prototype, which involved the History Channel show Vikings, had viewers sitting onboard an ancient ship with the show playing on the sail. But that was weird, and distracting, and maybe going to make people seasick. “I want to focus on watching TV here,” says Julian Eggebrecht, Hulu’s VP of technology. So they settled on what Eggebrecht calls “hyper-real” environments for watching TV: places you might already go, only better. “If you want to be in a movie theater, it has to be the largest movie theater you’ve ever experienced. If you’re in a living room, it has to be in a really cool city. If you’re going to be on a beach, it has to be an exceptionally relaxing beach.” Before VR takes us somewhere entirely new, it’ll perfect the world we already know.
At one point during our conversation, Wiley leans back in his chair and stares pensively at the ceiling. “This is going to be heresy,” he says, “but I kind of want to bring back the heavy skeuomorphism.” That’s become a dirty word in tech, code for lazily copying the real world in the digital one. Smartphone software is becoming more natively digital and abstract—it feels silly to have computer desktops with virtual coffee cups or to line the top of your phone’s notepad with torn legal paper. But in VR, Wiley argues, “buttons are buttons! You can walk up to something and press it.” The very ergonomics of a door handle, for instance, tell you something about how you use it. Why try to reinvent it when it already works? Wiley wants to take advantage of humans’ ability to move through the world rather than force them to adapt to yet another computer interface.
That’s the plan for now, anyway, until people are comfortable enough in this new place that they begin to push at its boundaries. There’s a different team at Google working on that part.
In July of 2015, as Google began to shift virtual reality from a box-cutting hobby to something far more ambitious, Rob Jagnow and a small group of other Google engineers asked their colleagues a really broad question: “What do you want to know about VR that we haven’t explored yet?” They got 220 responses. Some people wondered about motion, others about reading. A weirdly large number were curious about VR shopping.
Jagnow’s team is called Daydream Labs, and it has one job: make stuff. They take a straightforward question or hypothesis—How do you leave a comment in VR? What should people’s legs look like? Is virtual horticulture fun?—and build the simplest prototype they can to test it out. Then, once a week or so, they invite everyone on the team over to try it out and tell the Labs crew what they think. They’ll strap a headset on someone, show them a bunch of tiny variations on an idea, and pepper them with questions. Their first project was text, testing how far away and how big copy should be to be readable. Drums were second. People liked drums.
When I meet Jagnow, a friendly and fit guy in a Daydream-branded T-shirt, it’s early November, just before Daydream launched to the public. Almost everyone who has ever tried the Daydream View has done so inside a white-walled testing room like the one he leads me into. The Labs crew, he says, is working on its 90th project. “We’re doing copresence animation systems,” he says, “with physics simulation shared between clients.” Before I can ask what any of that means, Jagnow straps a Vive headset over my eyes and starts a demo he calls Keyboard Drums. It’s a sort of typing test, but instead of touch-typing or pointing at letters I have an array of typewriter-style keys below me and two mallets in my hand. One answer they found to “how do you make typing not suck?” was to make it feel like drums. I type words like I’m playing a xylophone. It’s fabulous.
Over the course of an hour, Jagnow puts me through a series of demos. I’m a thousand feet tall, playing putt-putt across all of Central Park. Then I’m standing in an austere supervillain-quality house, building a model home to see how freely a VR system can let you move stuff in space. Next I’m standing among sculptures in a museum as a tiny device records my voice and movements. When I finish, it appears next to me like a snow globe, endlessly replaying my diatribe. I guess that’s the VR equivalent of a comment. It’s as horrifying in virtual reality as it is on YouTube.
Nothing Jagnow shows me feels fully realized or polished, which he says is by design. There’s no point in wasting time with beautiful backgrounds when they’re trying to figure out how to let you move around a space without making you nauseous. “One of our rules of prototyping,” Jagnow says, “is to get it in VR as soon as possible. It’s one thing to sketch out a paper prototype, but as soon as you get it into VR, you’re like, ‘Ohhh!’” He says they’ve never once figured out the right answer on the first try.
The “how do you do X in VR?” question will take a long time to fully answer and will change constantly as screens, processors, lenses, and controllers change. All of which seems to annoy Wiley. He has this sense of what it’s all going to look like 10 years from now, how great and natural it’s going to feel. “Right now,” he says, “we’re doing a lot of interfaces that mimic things people understand and know. But what’s next is probably something that feels like magic, and feels like superpowers, and like telekinesis and telepathy and mind-reading.” He is even convinced the necessary tech is coming. It’s just not coming fast enough.
The part that changes the most between here and there will be the controller. “I think headsets are actually much farther along than input,” Wiley says. Headsets have work to do, certainly, but projecting light onto your eyes is a well-understood challenge. Gesture detection, less so. And what about eye tracking, body language, and even the force with which you press that skeuomorphic button? “We have a huge range of expression through our hands,” he says, “and we’re nowhere near being able to leverage that for immersive experiences yet.” Wiley, a former theater major, is obsessed with these nuanced movements. He can’t wait to stop thinking about controllers and start thinking about bodies. Google is already hard at work on the matter, having acquired eye-tracking company Eyefluence in October. Google’s Daydream controller is neat and useful, but it’s nothing like the magic wand Wiley wants it to be.
This more nuanced method of input is coming quickly, from Google and everyone else. Oculus’s Touch controllers, which recognize not only when you’ve pressed a button but when you’ve raised a finger, are a start. Microsoft’s HoloLens recognizes a limited set of gestures. And companies like Leap Motion are building sensors directly into headsets that watch your hands move freely in space. “That’s a giant design opportunity, bringing more humanity to computing,” says Mark Rolston, founder of Argo Design. He’s most excited about what these inputs could mean when we’re not seeing this digital reality on a screen but projected on top of our existing world. “That’s what the promise of augmented reality is. Bringing computing into the situations we already find ourselves in naturally.”
As controllers do get better, that will force VR designers to rethink everything all over again. And as VR becomes more social, so users can occupy a virtual space together, it’ll shift even further. “The challenge, I guess,” says Hulu’s Eggebrecht, “is, how do we take advantage of all this but make it so intuitive and so simple that you don’t get scared?” VR has all the requisite capabilities to be the most intuitive and natural computing system ever. But it’s going to take some work to get it right.
It’s worth the effort, Wiley finally (and somewhat reluctantly) admits, because he knows what’ll happen next. It’s what his whole design career has been reaching toward. “Our job as practitioners of design is to bridge the gap: Computers on one side, humans on the other.” The DOS prompt, the graphical user interface, the touchscreen, voice control, even Google’s simple search box, were all fundamentally about making computers feel a touch more human. “The computer has come a little bit across the chasm to us,” he says, “and each time this happens, people just enjoy using computers more.” Once the computers really understand users, seeing and responding to their bodies and brains and words and emotions, all within worlds without the boundaries of meatspace, not even Wiley can imagine what might be possible. But it’ll be beautiful.