By: Cuyler Gibbons, Images: Courtesy The Ayzenberg Group
“For the last 50 years we’ve been looking at media through a flat screen,” says Joey Jones, lead designer and director of the “a.new reality” team at Pasadena ad agency the Ayzenberg Group. Jones’ mission is to change that paradigm and bring media—and the stories his clients hope to tell—into a virtual 3D experience that can integrate with the real world.
It turns out that the virtual reality ecosystem is something of a continuum, defined by the degree of "virtual" reality the experience provides and by how much flexibility the user has to move between realities. At one end, virtual reality, or VR, provides a sight-occluded experience that is an artificial, computer-generated simulation or re-creation of a real-life environment or situation. With VR, users immerse themselves and, divorced from the real world, experience the simulated reality firsthand, primarily through stimulation of vision and hearing. Alternatively, with augmented and mixed reality (AR and MR, respectively), the virtual elements are integrated into the user's actual reality.
I’ve come to the Ayzenberg Group to get some insight into where this technology is headed. “Today we’re in the baby steps of seeing all that [digital content] come out of those flat screens so that we are able to interact with it in real time. It’s not only a processor being able to generate 3D objects, but AI (artificial intelligence) being able to have those objects come out and understand the weather, the space, where you are at, where you are looking, and what you’re saying,” Jones tells me.
While gaming technology, backed by a massive game market that dwarfs even the movie industry, helps push the demand for faster processors and higher fidelity, the commercial implications beyond gaming are far-reaching and perhaps more profound. "Architects use it to study scale models that then they can easily blow up to a 1-to-1 scale, and medical students are using it … to simulate operations on holographic cadavers," Jones says.
Jones himself comes to the tech world as a self-described storyteller, via architecture school. As a student he was deeply involved with the development of what was, at that time, a nascent software program called Form-Z, intended to allow architects and engineers to design in 3D. Realizing he was more enamored with software development than the practice of architecture, Jones soon found himself working in 3D animation. It’s a path he deems linear.
Jones now leads a team developing Moonbloom, a mixed-reality experience app built for the Magic Leap virtual retinal display. Using a digital light field projected into the user's eye, Magic Leap superimposes computer-generated 3D imagery over real-world objects. It's leading-edge technology that allows real-world interaction with virtual objects and spaces in a mixed-reality experience.
With Moonbloom, Jones and his team have created a world within a world containing flowing water, gleaming towers, arches, and waterfalls in a hypnotic, rainbow-hued milieu that exists in space, while keeping the real world visible "outside." The original narrative asks you, accompanied by a lone fox, to find and restore the pieces of the shattered moon, walking around and interacting with the virtual environment as you go. No controller or special sensors are required; the system recognizes natural hand gestures, allowing you to manipulate virtual objects with intuitive movements.
"The laws of this game follow how you engage with objects in your natural life," Jones says. Moonbloom and its creators are at something of an "If you build it, they will come" moment as they wait for the retinal display technology to truly take off. "When these devices get out, when they become ubiquitous, clients will become more confident … Moonbloom is a proof of concept," Jones says. "When enough people want these experiences, we want to be able to create enough content."