4 Ways Augmented Reality Can Add Value to Your Print Right Now

The tech industry loves to speculate about the potential uses and capabilities of developing technologies. Will we have computer chips in contact lenses that pop up Yelp reviews whenever you look at a building? Will all cars have HUDs that project your GPS directions onto the road in front of you? Will Pokémon roam the grasslands? That’s all very exciting, but right now you can put augmented reality (AR) technology to work leveraging your existing print to create magical experiences for your customers.




Packaging

As you probably know, customers are more likely to purchase a product once they’ve picked it up off the shelf. The trick is actually getting them to pay attention to your product over the one next to it. Augmented reality provides a great hook: it can deliver compelling content like exclusive videos, gamified interactive experiences, and helpful product visualizations, all tied to the packaging itself.


Check out these boxes from General Mills’ seasonal Halloween Monster cereals. Count Chocula, Boo Berry, and FrankenBerry announce, “We’re Alive!”, creating an unmissable call to action across the front of the box. Viewed through the Blippar app, the characters come to life, speak, and move. The backs of the boxes feature games and extra content that kids can play with at home. It’s a one-two punch: a strong call to action to get you to pick the product up, and additional value when you bring it home.


Brochures & Catalogs 

Printed sales material can only show so much. AR lets the customer dive deeper. For the last few years, Ikea has been augmenting their catalog to allow customers to see furniture in their own homes, browse related design inspiration, choose additional options, and order online. This is a perfect example of leveraging an existing print resource they were already paying to create, and adding a further layer of information and customer engagement.

Looking ahead, there are many potential uses for this tech that take advantage of existing print media to create new value for customers: to-go menus with photos, availability, and nutritional info for every dish, or real estate flyers that connect to a virtual tour and real-time price and availability updates.

Let’s say you own a theme park or resort. You already print tons of paper collateral: maps, brochures, etc. Imagine your flagship roller coaster popping out of the map in three dimensions. Imagine a ski map that updates with current snow conditions. Imagine a photograph that transforms into an immersive first-person video of your steepest runs and most exciting rides. You’ve already got brochures, and you have exciting content on your website; AR can connect the two to give your guests a memorable experience before they’ve even set foot in your park.


In Store

To compete with e-commerce, retailers need to give customers a compelling reason to come to the store.

American Apparel has an incredibly customer-friendly app that turns their in-store signage into a portal to the full range of color and fabric options, a video of the garment in action, customer reviews and notes on fit, and the ability to share a selection with friends.

They’ve taken the problem of keeping every color and size in stock, and turned it into an opportunity to deepen their conversation with the customer.

Another great in-store experience is the LEGO Digital Box kiosk. Instead of a downloadable app on the shopper’s smartphone, LEGO installed kiosks in all of their stores. Shoppers hold a LEGO box in front of the kiosk’s camera and see, in real time, a model of the assembled kit come to life, complete with working drawbridges, dragons, and minifigs running around. This dedicated hardware solution can be even better than relying on the user’s smartphone: there’s no app to install and little room for user or device error. Shoppers just walk up to the kiosk, and it works every time.

And remember what we said about picking up the package?




Print Ads

Print advertisements in magazines are another exciting opportunity for your existing print media to do double duty with AR.

Maybelline gained valuable insight into their customers when they ran an ad that let readers virtually try on different colors of nail polish. Not only did the ad create a memorable, multi-sensory experience for customers; by tracking which colors users tried on most, Maybelline was able to predict the popularity of different shades and stock local stores accordingly, leading to a measurable boost in sales.

How about this ad from Norwegian footwear company Viking? The printed ad itself is eye-catching, yet simple. It almost begs the customer to complete the image: what does the boot look like? Viewed through a smartphone, the ad reveals a realistic 3D model of the boot that can be seen from any angle, much better than a photograph. Ads can be augmented to show 360° product views, link to order pages, or show a full range of style options. The possibilities are endless.


There’s one more great thing about AR: not only does it provide your customer with a great experience, it also reports back to you with all kinds of useful metrics. Just like visits to a website, AR views are trackable: they can report unique users, repeat visitors, average engagement time, social shares, even which part of the packaging the customer looks at longest. This kind of data can help evaluate the effectiveness of a campaign and, in some cases, as with the Maybelline ad, directly inform sales decisions.


This may seem like science fiction, but these tools are available now; your brand could be putting your print media to work delivering valuable and magical experiences to your customers.

If you’d like to talk about what’s possible, we’d love to hear from you.

Linear Color: Why it Matters

Do you want to maximize the quality and realism of your augmented reality graphics? Of course you do. Look at these two spheres, rendered out of the same 3D package, with the same materials and colors. Which one looks better?

No contest, right? The color on the left becomes oversaturated and creates ugly banding artifacts around the highlight. The version on the right, on the other hand, has smooth gradients and no ugly artifacts. Why is that?

The first image is rendered in default gamma-corrected color space, the way most 3D packages behave out of the box. The sphere on the right is rendered in linear color space. I’ll get to exactly what that means in a minute.

It may look nicer, but is it more “real”? You bet. Look at this photograph of the moon.

Below, our humble sphere has been textured and lit to match that photograph. It’s the same comparison between a default gamma-space pipeline on the left, and a proper linear pipeline on the right.

Notice how the lighting gradient on the linear version matches the behavior of the real-world object. You want to simulate reality? You need to be managing your color and working in a linear pipeline.

So, what is going on here? In a nutshell, almost every image you see on your computer has been “gamma-corrected.” The brightness curve has been modified so that the distribution of values matches the way your eye perceives brightness. Your screen expects images to be gamma-corrected for display.

That’s a good thing, usually: gamma correction means that the limited amount of brightness information in an 8-bit image (JPG, PNG, etc.) is used efficiently. Notice how, in the simplified ramp shown below, the gamma-corrected ramp shows many more “brightness steps” in the dark values.
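To make that concrete, here is a small Python sketch (my own illustration, not from the original article) that counts how many of the 256 available 8-bit codes each encoding spends on the darkest 10% of the linear brightness range. It approximates the standard with a pure 2.2 power curve rather than the exact sRGB transfer function:

```python
# Compare how many 8-bit codes a straight linear encoding versus a
# 2.2-gamma encoding devotes to the darkest linear values (0.0 to 0.1).

def codes_used(transfer, lo=0.0, hi=0.1, samples=100_000):
    """Count the distinct 8-bit codes produced for linear values in [lo, hi]."""
    codes = set()
    for i in range(samples + 1):
        v = lo + (hi - lo) * i / samples
        codes.add(round(transfer(v) * 255))
    return len(codes)

linear_codes = codes_used(lambda v: v)               # stored as-is, no encoding
gamma_codes = codes_used(lambda v: v ** (1 / 2.2))   # gamma-encoded before storage

print(f"linear encoding: {linear_codes} codes for the darkest 10%")
print(f"gamma encoding:  {gamma_codes} codes for the darkest 10%")
```

Run it and the gamma-encoded ramp turns out to spend roughly three times as many codes on the shadows, which is exactly the extra density of “brightness steps” in the dark end of the ramp.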

The reason it’s a problem, the reason all of my examples on the left look so bad, is that whenever you’re rendering 3D graphics, be it a game, film, or augmented reality app, you’re simulating the behavior of light. And light behaves linearly. Two equal spotlights shining on the same surface will create a result that is two times as bright as one of them alone.

What makes this counterintuitive is that the result does not look 2x as bright to our eyes. (What do you think? Does the circle on the right appear middle grey, or light grey?) Our eyes have a logarithmic, and not linear, response to brightness. We’re more sensitive to changes in the darker tones, and that’s what lets us see into the shadows on a sunny day.

Your 3D program is simulating the linear behavior of light BUT the inputs and outputs are gamma-corrected--they’re not linear--and that causes the incorrect color math that leads to the ugly results I showed you.
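Here is a minimal numeric illustration of that incorrect math (Python, my own example; again using a pure 2.2 power curve as a stand-in for the full sRGB transfer function): doubling a light directly on a gamma-encoded pixel value, versus decoding to linear, doubling the physical light, and re-encoding.

```python
# Doubling a light's brightness in gamma-encoded space (wrong)
# versus in linear space (right).
GAMMA = 2.2

def to_linear(encoded):
    """Undo display gamma: gamma-encoded [0, 1] -> linear light."""
    return encoded ** GAMMA

def to_display(linear):
    """Re-apply display gamma: linear light -> gamma-encoded [0, 1]."""
    return linear ** (1 / GAMMA)

mid_grey = 128 / 255  # a gamma-encoded pixel value

# Wrong: treat the encoded value as if it were light and double it directly.
naive = min(mid_grey * 2, 1.0)  # clips to pure white (255)

# Right: decode to linear, double the physical light, then re-encode.
correct = to_display(min(to_linear(mid_grey) * 2, 1.0))  # lands around 175

print(f"naive doubling:  {naive * 255:.0f}")
print(f"linear doubling: {correct * 255:.0f}")
```

The naive version blows straight out to white, while the linear version takes the gentler, physically correct step up; that difference is the banding and blown highlights in the left-hand renders.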

So, how can you fix this in your project? Good news: the solution is not difficult, it just takes a little bit of care. Every color value that goes into the renderer needs to be in linear color space. And the result at the end needs to be gamma-corrected the way your display expects it to be.

(Those of you with film compositing or similar backgrounds will protest that I am oversimplifying. Yes, there are many exceptions and other factors, but these basics will serve for a lot of CG rendering.)

Remove the gamma from every image and color as it goes into your rendering pipeline (that includes the video background for AR apps on a mobile device). Most images are encoded with a 2.2 gamma (the standard), so apply the inverse, a gamma correction of 1/2.2, or 0.4545. The same goes for colors: if you copy an RGB value from Photoshop, that value also needs to be linearized.
(Textures like bump and normal maps are not “color textures” and should be left alone.)

Let the renderer do its thing.

Just prior to display, re-apply gamma so that the image displays correctly on screen. Those are the basics of a linear workflow that will instantly improve the look of your 3D renders. Are there exceptions, alternatives, and complications? There are. But this is the core of it.
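The three steps can be sketched end to end like this (Python, my own toy example; a real renderer uses the exact sRGB curve and full RGB triples rather than this single-value, pure 2.2 power approximation):

```python
GAMMA = 2.2

def remove_gamma(encoded):
    """Step 1: linearize a gamma-encoded input texture or color swatch."""
    return encoded ** GAMMA

def apply_gamma(linear):
    """Step 3: re-encode the linear render result for display."""
    return linear ** (1 / GAMMA)

def render(albedo_linear, light_intensities):
    """Step 2: the renderer's job. Light math happens in linear space,
    where intensities simply add."""
    total_light = sum(light_intensities)
    return min(albedo_linear * total_light, 1.0)

# A diffuse color copied from Photoshop (gamma-encoded, as Photoshop values are)
swatch = 180 / 255

pixel = apply_gamma(render(remove_gamma(swatch), [0.6, 0.6]))
print(f"display value: {pixel * 255:.0f}")
```

The important part is the ordering: the renderer only ever sees linear values, and gamma is re-applied exactly once, at the very end.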

Next, I’ll briefly cover how to do this in a few specific situations.

Maya, prior to 2015 Extension 1, using mental ray

In the Render Settings Common tab is a checkbox marked “Enable Color Management.” Enable that. Set Default Input Profile to “sRGB” and Default Output Profile to “Linear sRGB”. You’ll also need to change Render Settings: Quality / Framebuffer to “RGBA (Float) 4x32bit”.

Now Maya will assume that all textures are sRGB, and will remove gamma automatically. For textures that aren’t “color”--bump maps and normal maps, for example--you’ll need to open the File node and set “Color Profile” to “Linear sRGB”. You don’t want Maya messing with the values in those maps.

Also, Maya does not automatically correct color swatches. If you’re, for example, using a solid color for diffuse color, you’ll need to assign a gamma node to the texture slot (I like “mip_gamma_gain” for its simplicity), assign the solid color to the value in that node, and set the gamma value there to 0.4545.

Finally, to view the rendered result on your screen, you’ll want to re-apply gamma. Select the render camera, scroll down to mental ray/lens shaders, and apply “mia_exposure_simple”. The default settings will correctly adapt a linear rendering result to gamma-corrected space, and it’ll even do a nice bit of tonemapping to soften highlights so they have a film-like rolloff.

Maya 2015 Extension 1

Extension 1 introduced a new color management toolset, SynColor, that (should) correctly interpret all incoming textures, all color swatches, and display both viewport and final render images with correct gamma. It’s activated in scene preferences. It also adds robust support for other color spaces and the OCIO standard, a big subject that’s a topic for another time. I have not had time to spend with this feature, but I think Autodesk is on the right track here.


Unity

Unity has made this very simple: there is one switch (in the Pro version, anyway). Bam! All input textures have gamma removed; all output images have gamma applied. (The feature is not currently supported on all platforms, though.)

Image from Unity documentation

OpenGL/Custom code

If your team is writing their own rendering code, this chapter from Graphics Gems 3 explains how to set up a linear color pipeline. 


This concept took me quite a long time to wrap my head around. For most of us, this is not intuitive, and a career spent working with gamma-corrected values in Photoshop and 3D hasn’t helped.

I am indebted to the patient and mind-meltingly thorough explanations of my friend and colleague Brad Friedman, as well as numerous others, notably John Hable and Stu Maschwitz.

If it all sounds complex, just remember: remove gamma before rendering, re-apply gamma after rendering. Your renders will thank you.

AR is the Future of Mobile

Let’s think three to five years ahead. Let’s posit that smart glasses for augmented reality, light enough to be worn all day and powerful enough to justify wearing them, exist. Let’s also posit that virtual reality headsets are likewise powerful and cheap. Where do people spend most of their time?

They’re spending their time in AR all day long. Everything they see is overlaid with useful augmented content. Occasionally, they dip into VR for work or play, but they don’t spend all day there. Why? Because you can use AR everywhere. It’s a companion, just like your smartphone. You can use it walking down the street, in a cafe, any time at all.

VR, on the other hand, is a destination, like your PC today. More so, even: when you’re using VR, you are extremely vulnerable. You have to be in a trustworthy space. If ever there was a device you would not want to use in public, in an uncontrolled space, it would be something that blocks out your senses.

Pictured above: sitting ducks.


Stratechery lays it out in this insightful article about Facebook’s acquisition of Oculus:

“A virtual reality headset is actually a regression in which your computing experience is neatly segregated into something you do deliberately… To put it another way, mobile is a big deal not because we use computers more, but because a computer is with us in more places.”

And that’s exactly what AR has the potential to be. A computer that’s with us in even more places, helping us in more ways. VR has a lower total potential claim on our time than AR, for the same reasons PCs have a lower claim than mobile. And mobile is where all the growth is right now. More people using it, more of the time. More potential apps, more potential market. AR is the future of mobile.

Magic Movie vs. Magic Mirror: Shopping with Augmented Reality


Virtual try-on is an area of intense interest and development. Showing a customer how a garment or item fits their body is one of the best business cases for augmented reality yet. But what’s the best way to do it? Is the “magic mirror” the best way to try on glasses or clothes virtually? Is holding your head steady in front of a webcam a fun way to shop for anything?

Put another way: what’s the longest you've ever maintained an AR experience before getting tired?

What’s Magic Mirror?

In the classic augmented reality application, the user looks into a webcam or points the camera on their mobile device at a marker or object and sees digital content overlaid and tracked into the real scene in real time. It really is novel, magical, and delightful to see the real and the virtual coexisting and changing in real time. Let’s call this “active augmented reality”.

The magic mirror is cool, no doubt. But in terms of customer experience, when you’re actually shopping or browsing a catalog, magic mirror is an inferior experience to the “magic movie”.

Magic Movie?

With a magic movie-type augmented reality application, the user pre-records their face, body, or other subject, capturing a digital version that they can view later and interact with at their leisure. We might also call this “Passive Augmented Reality”, versus the “Active Augmented Reality” of the magic mirror.

For example, our own Glasses.com app: The customer performs one recording session, which can take less than a minute, and instantly they’ve got a permanent, reusable, catalog of eyewear featuring them. The customer can browse the virtual catalog any time the mood strikes them, which lowers the barrier to repeat use and sales conversions.


Recording is hard, even when it's easy

Let’s be frank: capturing the user for virtual try-on is a bit of a hassle in any case. The AR tracking engine can be picky, the camera and lighting must be positioned just so, and it requires a lot of unusual, novel behavior from the user. This will get easier as the technology matures, but what won’t change is the engagement and activity required for an active AR approach.

It’s a lot of effort to “perform” for an AR camera. The user must hold their device steady while holding their head/body in just the right way. You’re supposed to make an important purchasing decision while doing all this cyborg yoga?

Perfect! Now hold that pose!

Holding a device steady leads to the infamous “gorilla arm” problem that stymied early touchscreen technology. Anyone who’s ever grown tired of holding their smartphone up while viewing an AR application (e.g., pointing the phone at a poorly recognized QR code or marker) knows what I’m talking about.

What’s worse, the quality of real-time tracking is still far from perfect. Real-time rendered clothing and eyeglasses bounce around on your image as the tracker tries to converge on an ever-changing target, never quite finding alignment.

And have you ever tried to see what you look like in profile in a mirror? Or how a pair of jeans makes your butt look? Hard, right? It’s the same problem when a webcam is your mirror.

Yes, computer vision is getting better all the time. With faster hardware, better tracking algorithms, and predictive models of human movement, the awkwardness on display in these videos will eventually be solved. I know the illusion will eventually become almost perfect. But it will always be a performance. The user is never able to relax in active AR.

Record once, browse anywhere

The passive AR approach, on the other hand, allows the user to perform once at the beginning of the experience, and then enjoy the fruits of their labor again and again as they browse potential purchases at their convenience. Spend a few minutes getting it right, and never fuss with it again.

The performance is easier, too: the user is walked through the recording process by the app. They can evaluate the result and retake if necessary. The results are processed offline (on device or in the cloud) so the tracking is higher quality and has fewer distracting errors.

Now here’s the real advantage of passive AR: any time the user feels like shopping or showing off looks to their friends, their avatar is only a click or two away. They can be anywhere: shopping during their morning commute, getting advice from friends over lunch, or picking out new glasses while relaxing in bed.

Shopping for prescription glasses is practically the perfect use case for passive AR. Anyone who has tried to pick out glasses after having their pupils dilated at the optometrist can relate.

And the problem of seeing yourself in profile I mentioned? With passive AR, it’s easy. You've already recorded your profile, maybe even your backside too, and you can contemplate it without an awkward sidelong glance out of the corner of your eye.


Oh, yes, one other use case where passive AR will be a superior shopping experience: kids. Taking the kids to the department store to shop for clothes… do your kids like this? Because mine don’t. It’s possible they’d be more engaged by a magic mirror, but it’s far more likely that parents and kids would both prefer the record-once, shop-at-leisure approach of passive AR.

Future tech

I expect active AR technology to improve in dramatic ways. New kinds of devices like smart glasses will eliminate the ‘gorilla arm’ and make it easier to use AR for longer periods. New computer vision and simulation technology will close the gap of realism. Magic mirror, once it matures, will be a fantastic sales tool. Nothing is more amazing than to see that illusion happening in real time.

But passive AR will always have a use. The shopper doesn't always want to perform, maybe isn't even able to perform when they want to shop, and there are some things you just can’t observe during an active AR experience. Developers of virtual try-on and similar shopping apps should think about whether their consumer experience would be better if it were designed as a passive AR experience.





An Inconvenient Ad

We’re not going to name names here, because this article isn’t about one particular augmented reality application, or one particular brand. It’s about a common affliction that I want to save you from: marketers chasing the ‘next big tech thing’ without taking the time to think about what makes that particular technology unique or useful, or how it can provide value for customers.

In the example that prompted me to write this article, the user downloads an AR app, searches for the brand within the app, adds it to their list of “subscribed” brands, then points their smartphone camera at the brand’s beverage label. A short film that’s basically a well-produced TV spot plays, floating in front of the label. If the camera loses sight of the label, the commercial stops, meaning the user has to hold their phone in place for a full minute to see the whole spot.

Think about what I’ve just described to you. Does that sound like something your customers are going to make it through, or show off to their friends? How is this better than a well-executed spot on YouTube? It’s like hiding YouTube inside of a tedious technical challenge.

Again, let’s cut the parties involved some slack. They are experimenting in a new medium. But I think they fell in love with the shiny new tech without thinking through what that tech is truly good for. They failed to have empathy for the customer and what the customer actually wants.

Augmented reality has a lot of potential to create useful, engaging experiences for consumers, from virtual try-on to deep product information to uses no one has imagined yet. I really believe that. There are a lot of opportunities for smart brands to differentiate themselves. But take a gut check: is your AR project basically just an inconvenient way to watch a TV commercial?