One of the more interesting talks during Magic Leap’s laborious three-hour keynote presentation last week came from Jared Ficklin, creative technologist and partner at Argodesign, a product design consultancy. Ficklin’s company signed on to help Magic Leap, creator of the new augmented reality glasses, design the next-generation user interface for what the company calls “spatial computing,” in which digital animations are overlaid on the real world and viewed through the AR glasses.
At the Magic Leap L.E.A.P. conference, Ficklin showed a funny video of people looking at their smartphones and walking into things because they aren’t seeing the world around them. One guy walks into a fountain. Another walks into a sign. With AR on the Magic Leap One Creator Edition, you are plugged into both the virtual world and the real world, so things like that aren’t supposed to happen. You can use your own voice to ask about something that you see, and the answer will come up before your eyes on the screen.
For Ficklin, this kind of computing represents a rare chance to remake our relationship with the world. He wants the technology to be usable, consistent, compelling, and human. Ficklin spent 14 years at Frog Design, creating products and industrial designs for HP, Microsoft, AT&T, LG, SanDisk, Motorola, and others. For many years, he directed the SXSW Interactive opening party, which served as both an outlet for interactive installations and a collective social experiment hosting more than 7,000 guests.
After his talk last week, I spoke with Ficklin at the Magic Leap conference. Here’s an edited transcript of our interview.
VentureBeat: Tell me what your message was in the talk today.
Jared Ficklin: Yesterday there was an announcement that Argodesign is now a long-term strategic design partner with Magic Leap. One of the things we were brought in to work on was creating the interaction model for this kind of mixed reality computing, for the Magic Leap One device. How is everyone going to use the control and the different interaction layers in a consistent manner that’s simple and intuitive for the user?
In a lot of ways, it’s the interaction model that plays a big role in what attracts people. We had feature phones for 10 years, and they had all kinds of killer apps on them. But the iPhone and iOS came out with a new model for handheld computing, and everyone jumped on board. We had computers for 40 years before the mouse really came along. Suddenly there was a model that users could approach, and everyone used it.
Right now, in the world of VR and AR, you don’t have a full interface model that matches the type of computing people want to do. It’s in the hands of specialists and enthusiasts. What we’re trying to do with Magic Leap is invent and perfect that model. The device has all the sensors to do it. LuminOS is a great foundation, a great start for that. It’s going to be simple, friction-free, and intuitive. We’re using a lot of social mimicry for that, looking at the way people interact with computers and the real world today, and how we communicate with each other through both verbal and non-verbal cues. We’re building a platform-level layer that everyone can use to build their applications.
VentureBeat: What’s an example of something that helps a lot now as far as navigating or doing something in Magic Leap?
Ficklin: A great example that’s coming, that’s going to be different: we announced we’re turning on six degrees of freedom in LuminOS. That means you get position as well as pitch, yaw, and roll from the control, tracked inside out. You don’t need any other peripherals, just the control. You can invoke a ray off the end of the control and use it to point at objects, grab them with the trigger, and place them. You can interact with digital objects in the same way you do in the real world. But you can also calmly swipe and select objects.
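The ray-pointing interaction Ficklin describes can be sketched in a few lines: cast a ray from the control’s position along its orientation, and select the nearest object the ray hits. This is a minimal illustrative sketch using bounding spheres; the function names, data shapes, and sphere approximation are all assumptions for illustration, not Magic Leap’s actual API.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss.
    direction is assumed to be a unit vector."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Vector from the ray origin (the control) to the sphere center.
    lx, ly, lz = cx - ox, cy - oy, cz - oz
    t_closest = lx * dx + ly * dy + lz * dz      # projection onto the ray
    if t_closest < 0:
        return None                               # object is behind the control
    d2 = (lx * lx + ly * ly + lz * lz) - t_closest * t_closest
    if d2 > radius * radius:
        return None                               # ray passes outside the sphere
    return t_closest - math.sqrt(radius * radius - d2)

def pick(origin, direction, objects):
    """Select the nearest object whose bounding sphere the ray intersects.
    objects is a list of (name, center, radius) tuples."""
    best = None
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (name, t)
    return best[0] if best else None

# Control at the user's hand, pointing straight ahead (+z).
objects = [
    ("lamp",  (0.0, 0.0, 2.0), 0.3),   # directly in front
    ("plant", (1.5, 0.0, 2.0), 0.3),   # off to the side
]
print(pick((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), objects))  # -> lamp
```

In a real system the “grab with the trigger” step would then attach the hit object to the ray until the trigger is released, which is what lets you point at something across the room and place it.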
The use cases for this range from navigating a menu interface, like picking a Netflix show, to moving objects you’ve used to decorate your space. It’s going to be really important when you start combining depth in the context of a room. Think of Spotify as a mixed reality app. You may have a menu where you put your playlist together, and then you could collapse the whole thing down into a cool little tree that you set on your coffee table. When you go to leave, you pick it up and it follows you like the Minecraft dog. You always have your music with you. We need that kind of model.
Another thing we have to handle here is a bit of an old-is-new-again situation that I think is fascinating. Entire generations have grown up never having done multi-function computing. They’ve had smartphones. It’s always one app at a time. Multi-function is a layer over that, where you copy and paste between things.
VentureBeat: Having 20 browser tabs open, that kind of thing.
Ficklin: Right. Think of the classic fear of missing out. I was talking to someone last night, someone younger than me, a very smartphone-centric person, and I said, “Imagine having Facebook, Twitter, Instagram, and WhatsApp open at the same time!” You and I laugh, because we grew up with personal computing, laptops and desktops. But there are kids like my son – he’s eight – who have never done it. The idea that you could have all four of those open at once is revolutionary.
We have handheld mobile computing right now. Another way to describe this is wearable mobile computing. It’s really convenient, really cool. We’ve seen the game market, the engaging 3D mixed reality market, but I think there will be another reason people put this on. That’s when they remember the delights of multi-function computing. That’s why this input model is so important. They’re going to be moving between apps.
On our laptops, one of the most common things we do is minimize, maximize, and refocus windows. In spatial computing, you do a lot more shifting around in the room. You have to spread stuff out and bring it together, like you do on a desktop. Any time you’re working at a desk, it’s a matter of putting this over here and bringing that close to where I’m working. We have to make those maneuvers really easy for people. Then they’ll engage in ways they’re familiar with, swiping through a menu and pulling a trigger to select.
VentureBeat: John Underkoffler, the designer who created the computer interface for Minority Report, spoke at our conference recently. He put out this call to game developers: you’ve been working in 3D worlds for so long, so can you help us invent the next generation of interaction with computers? He was struggling in some ways to come up with how to navigate this kind of computing.
Ficklin: There are a couple of reasons for that, I would offer. One is that we’re not dolphins. We see stereoscopically; we don’t really see in 3D. Take a library. We don’t arrange our libraries in 3D. A 3D library would be a cube of books, and we wouldn’t be able to see the books in the middle. We’re not bats or dolphins. We arrange our libraries in two dimensions and translate ourselves around these lined-up shelves of books.
That’s one challenge. We have to deal with our perception of what 3D is, from a human standpoint, versus what’s possible in a digital space. They don’t always line up. You have to respect the real world, real world physics. But at the same time, what’s magical about these flat screens is how low-friction they can make the data interchange. I don’t have to walk around a library to get all the books anymore. There’s a line where you want to bring that magic, that science fiction to the user, that low friction, and then a certain line where you want to be truly more 3D.
We were just talking about spatial arrangement. That’s going to be really important for shuffling where your focus is at the time. The second thing we like to say is that Dame Judi Dench will not be caught fingering the air. What that means is, there’s a certain social acceptance to what people will do with their hands. We have to be very respectful of that, so they feel comfortable having the device on and interacting with it. Those layers will be ones they feel comfortable using in front of other people or with other people. It can’t be too tiring. You need comfortable moments of convenience.
It’s a very powerful gesture system, because it’s from the perspective of the user. We’ll continue to advance it. They don’t have to be a conductor so much as they can just be human. The combination of all of this is something that I would call conversational computing. It sounds like voice computing, except we’re having a conversation now, and both you and I are using non-verbal gestures to communicate. You’re slowly nodding your head right now. You’re looking in different directions. I could say, “Hey, could you go get me that?” and you know what I’m talking about because I pointed at it.
When you take the work Underkoffler did, which has a lot of gesture manipulation, and you begin combining that with voice, the context comes together into a really human, intuitive interface. The device uses your gestures and non-verbal cues to establish half the context. You only have to establish the other half with voice. That’ll be a really interesting interface for navigating these spaces.
The other thing is that this is room-sized. I can point across the room, even though I can only reach as far as my hand goes, before I have to begin using a tool or projecting a ray. That will be done. I can say, “Hey, bring that over here.” When you combine that with the verbal command structures, it’ll be a really intuitive interface. Objects are going to participate, which was a disadvantage you saw in the Minority Report concept. It was a great system, but it was pinned to a wall. Why don’t we start playing with more objects? I have this little terrarium here. My system can recognize what it is and hang a lot of cool stuff on that. You could have all kinds of digital information about the plants in here.
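The “bring that over here” command Ficklin describes can be sketched as a simple fusion step: the pointing gesture resolves the referent, and the spoken words supply the action. The sketch below is purely illustrative; the data shapes and names are assumptions, not Magic Leap’s actual voice or gesture API.

```python
# Illustrative sketch of voice-plus-gesture "conversational computing":
# a deictic word like "that" is resolved to whatever object the user's
# pointing ray was hitting at the moment the word was spoken.

DEICTICS = {"this", "that", "it"}

def resolve_command(timed_words):
    """timed_words: list of (word, object_pointed_at_or_None) pairs,
    aligned by when each word was spoken. Returns the command string
    with deictic words replaced by their pointed-at referents."""
    resolved = []
    for word, target in timed_words:
        if word.lower() in DEICTICS and target is not None:
            resolved.append(target)   # gesture supplies this half of the context
        else:
            resolved.append(word)     # voice supplies the other half
    return " ".join(resolved)

utterance = [("bring", None), ("that", "terrarium"), ("over", None), ("here", None)]
print(resolve_command(utterance))  # -> bring terrarium over here
```

This is the sense in which gestures establish half the context and voice the other half: neither channel alone carries a complete, unambiguous command.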