Publishers need to think about a 3D reader experience

In these early days, no one is quite sure how publishers will turn virtual and augmented reality to their advantage. But one thing is certain - any success in the immersive entertainment space will require a mastery of user experience in three dimensions. If they want to tap into modern storytelling, publishers must learn to think in 3D.

Having started out as a video games journalist in the mid-90s, I recall the design innovations necessitated by the shift from two-dimensional to three-dimensional gaming. 3D rendering posed new problems for game designers. They needed to develop new tools and skills, but they also had to rethink their products: designing 3D control schemes, rebuilding gameplay around greater freedom of player movement, considering the user’s point of view, and keeping the action coherent - not to mention fun.

The developers who mastered the challenge of this extra dimension - "the Z-axis" - thrived. The first games to comprehensively answer that challenge, like Super Mario 64 and Tomb Raider, became early models for the rich open-world games that have become massive franchises today.

One could argue that publishers, in looking at how best to exploit virtual and augmented reality, are facing a ‘Z-axis challenge’ of their own. To solve it, they need to marry new technical skills with a creative understanding of how to build information products from the ground up in three dimensions. 3D has already been used in book apps: HarperCollins pioneered extensive use of 3D modelling on the iPad for Brian Cox’s Wonders of the Universe. But in VR, 3D isn't just a feature - it's intrinsic to every aspect of product design.

Brian Cox’s Wonders of the Universe iPad app used a 3D engine and user interface

Technologically, the challenges are being overcome rapidly. The first generation of ‘high end’ VR headsets - the Oculus Rift, HTC Vive and PlayStation VR - has decisively addressed visceral issues like nausea and dizziness. ‘Room scale VR’, accompanied by versatile positional tracking controllers, has provided a model for rich user interaction within the virtual world.

At the other end, mobile VR is entering a second generation, with new phones improving visual performance, and both Samsung and Google introducing motion-sensing controllers that make experiences properly interactive. Apple may crash their party with its own rumoured VR offer, perhaps in the next iPhone. That should drive further penetration and adoption at the mainstream end of the market.

This means there will be a division in VR between high and low end platforms for some time to come. Publishers will have to choose between products for the casual market with more limited features and fully featured products with higher development costs and greater risk in an unproven market. Yet perhaps there is a sweet spot of products which perform well on both classes of device. Tools like Unity and Unreal already simplify simultaneous development across multiple platforms. Publishers who find that mid-range solution will have access to the broadest base of consumers.

All of this may make AR - augmented reality, currently in its second generation - more appealing for product development at this point, as consumer AR is firmly focussed on one class of consumer product, the smartphone. Here also, though, publishers need to think about the extra dimension.

First generation AR involved limited text and image processing, mostly to display overlay effects and render models. Even if those were in 3D, there was effectively just a foreground and a background. The second generation of AR is technology that enables the foreground models to interact meaningfully with the camera background, creating a true ‘mixed reality’. This is made possible by ‘smart camera’ software in newer mobile phones capable of sensing the depth of the viewed environment and better synthesizing the virtual and real elements of a scene. At their recent developer conferences, both Facebook and Apple announced tools that put that advanced image processing into the hands of mainstream AR designers.

This will make it easier to create something like a virtual board game that sits convincingly on a real table viewed via the phone camera, from any angle in the room without glitching. The challenge will be to turn that capability into a compelling user experience - or indeed a story. One answer is to go beyond individual models and interactions and think of AR at scale: entire synthetic worlds populated by characters imbued with a semblance of ‘life’. The technology can now provide the scale, but it requires the ambition. These Wonderlands will need their Alices. Where VR is a portal to other worlds, AR will be a bridge for the denizens of other worlds to come into ours.

So: how does one go about three-dimensional thinking? By turning to maps. VR and AR may remain two distinct product categories for some time, but technologically they exist on a continuum: from a wholly synthetic world viewed in virtual reality, to a partially synthetic one intersecting with our own. Put another way, both are essentially canvas technologies, where the 2D canvas of the page expands into three dimensions, requiring us to think about how we present content as narrative in 3D space - and the relationship users have with it.

VR presents an entirely blank canvas, where not only the visual representation of the world, but the physical laws it operates under, are in the gift of the designer. AR has our physical world, and its laws, as a base upon which to layer any number of additional models. There, it's the quality of that integration that presents the creative challenge.

There is a cartographic element common to the design of any complex VR or AR product, given that they involve the navigation of information through space. At scale, this requires technologies which can create large-scale information maps and position users within them. The term ‘user journey’ is a widely used metaphor, but for VR/AR it will become a literal aspect of product design.

Emerging platforms like Improbable’s SpatialOS make this commercially feasible. Built for large-scale synthetic world building, they could easily be repurposed for non-gaming media applications. Products which make use of three-dimensional freedom must embrace non-linear designs that enable users to explore stories and information structures with greater freedom, whilst maintaining their coherence.

They will also be the base for massively multi-user online virtual experiences: social VR and AR. However rich a synthetic world a single user inhabits, it’s always going to be a lonely planet. Social VR could change all that. The form is in its early stages, but Facebook’s VR Spaces shows the direction of travel. Social VR may also solve the problem of populating augmented reality with sufficient content to make it appealing to spend extended time there. Facebook’s decision years ago to encourage the geo-tagging of all content looks forward-thinking when viewed from that perspective.

Social VR will present publishers with a challenge similar to that faced by theme park owners. How will they manage behaviour in their online world? How much interactivity between users will they permit? What modifications will users be allowed to make? In this respect again, games publishers have greater experience, dealing with the challenge of managing modding communities for popular games like GTA V.

I’m confident we will see breakthrough publishing products in VR and AR in the next few years, taking advantage of technology advances and creating new classes of digital product. These will emerge from product design philosophies that are firmly rooted in three-dimensional, ‘Z-axis’ thinking. Forward-thinking publishers would do well to brush up.