Fast Forward: Everything Brands Need To Know About Google's 2017 I/O Event

This is a special edition of our Fast Forward newsletter, bringing you a summary of the major announcements from Google’s 2017 I/O developer conference. A fast read for you and a forward for your clients and team.

The highlights:

  • Google Lens brings computer vision to Google Assistant and Photos
  • Google Assistant receives major upgrades & branches out into connected cars
  • Expansion of the Daydream VR platform propels VR development forward
  • Android O brings a more fluid user experience, with Android Go targeting the “next billion mobile users”

On Wednesday, Google kicked off its annual I/O developer conference at the Shoreline Amphitheater in Mountain View, CA. CEO Sundar Pichai took the stage to lead the main keynote address, where he laid out the key developments in several of Google’s areas of interest, including AI, voice assistants, virtual reality, and more. TechCrunch has a comprehensive round-up of everything that Google announced, but we have an exclusive take on what it means for brands.

Google Lens Adds Computer Vision To Google Services

The most significant announcement coming out of this year’s Google I/O conference is the debut of Google Lens, a set of computer vision features that allows Google services to identify what the camera captures and collect contextual data via images. Google has been using similar technology in the Google Translate app (built off its 2014 acquisition of Word Lens) to automatically translate words the camera captures in real time. Now, Google is adding this capability to Google Assistant and, later this year, to Google Photos as well.

Equipped with computer vision, Google Assistant gains the “eyes” it needs to see what users are looking at and understand their intent. Google demoed several such scenarios on stage: pointing the camera at a restaurant’s storefront to surface standard business information and reviews via Zagat and Google Maps, pointing it at an unidentified flower to ask Google Assistant to identify it, and pointing it at a concert poster to prompt the Assistant to find tickets for the event. Lens lets Google Assistant tap the smartphone camera as an input source, informing user intent and creating a more frictionless user experience.

For Google Photos, the addition of Google Lens’ computer vision capabilities makes the cloud photo storage service better at identifying the people in your photos and picking out the best shots in your library. This powers a new feature called Suggested Sharing, in which Google Photos prompts you, with a single tap, to share AI-selected photos with the people who appear in them. Users on the receiving end of a shared album will also be prompted to add pre-selected photos of their own to the mix.

One additional feature powered by Google Lens is the Visual Positioning Service (VPS), which works like an indoor GPS, allowing Android devices to map a specific indoor location and guide users to a particular store in a mall or a specific item in a grocery store with turn-by-turn navigation. VPS already works in select partner museums and Lowe’s home improvement stores if you happen to have a Tango-enabled device. This advanced AR feature will also appear on the next Tango device, the ASUS ZenFone AR, due out this summer.

The introduction of Google Lens brings the search giant up to speed in consumer-facing AR development. Two of Google’s biggest competitors, Facebook and Amazon, recently unveiled their own takes on the “camera-as-input” trend with the launches of the Camera Effects Platform and the Echo Look, respectively. For Google, the launch of Lens is all the more significant because it officially extends the company’s core function, search, into the physical world and opens the door to more offline use cases. That, in turn, massively increases the addressable market of searchable data and creates a virtuous cycle in which Google can leverage image data to fuel its AR and machine learning initiatives.

Google Assistant Grows More Capable With New Features

Beyond the major addition of computer vision capabilities, Google Assistant is getting some other new features to help it stay competitive against Amazon’s Alexa and other digital voice assistants. Among the slew of new features announced on stage, two stood out to us for their versatile use cases and accessibility for developers.

First up, Actions, Google’s version of “skills” or “apps” for Google Assistant, gained support for digital transactions. This allows Google Home users and some Android phone users to shop online by conversing with Google Assistant, which accesses payment methods and delivery addresses stored in Android Pay for a seamless checkout experience. The feature will launch first with Panera as a third-party partner.

This crucial update will allow more businesses to build mobile ordering and online shopping features into their Google Actions. Previously, Google Assistant could only place orders with partnering Google Express retailers, such as Costco, Whole Foods Market, Walgreens, PetSmart, and Bed Bath & Beyond. The Assistant also gained the ability to check inventory at local stores for product availability before users make the trip.
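For the technically curious, here is roughly what the new flow looks like from the developer’s side: a minimal sketch of a transaction-enabled Action webhook, assuming the transaction helpers in Google’s actions-on-google Node.js client library (shown in TypeScript). The intent names, order object, and payment config fields below are our own illustrative placeholders, not Google’s exact schema.

```typescript
// A minimal sketch of a transaction-enabled Action webhook.
import * as express from 'express';
import * as bodyParser from 'body-parser';
// v1 actions-on-google client library (shipped without TypeScript
// typings, hence require and the loosely typed app object below).
const { ApiAiApp } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

// Step 1: confirm the user can transact at all -- e.g. a payment
// method is stored in Android Pay and a delivery address is on file.
function checkRequirements(app: any): void {
  app.askForTransactionRequirements({
    type: 'PAYMENT_CARD',     // illustrative config fields
    displayName: 'VISA-1234',
  });
}

// Step 2: present the assembled cart for the user's approval; on a
// positive decision, charge it through your usual payment processor.
function proposeOrder(app: any): void {
  const order = {}; // cart contents collected earlier in the conversation
  app.askForTransactionDecision(order);
}

server.post('/webhook', (request, response) => {
  const app = new ApiAiApp({ request, response });
  const actionMap = new Map<string, (app: any) => void>([
    ['transaction.check', checkRequirements], // hypothetical intent names
    ['transaction.propose', proposeOrder],
  ]);
  app.handleRequest(actionMap);
});

server.listen(8080);
```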

Second, Google Assistant can now respond by sending visuals to your smartphone or TV via Chromecast. Dubbed “Visual Responses,” this important addition enables developers to surface text, images, videos, and map directions in response to user requests. Allowing for a variety of responses diversifies Google Assistant’s replies beyond voice and adds texture to the user experience, while supporting multiple displays extends the Assistant to more platforms and lets users choose the optimal screen to engage with. This new feature comes just a week after Amazon unveiled the Echo Show, which also introduced a visual component to Alexa’s voice-based conversational interface.
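As a rough illustration of what a Visual Response looks like in practice, here is a minimal webhook sketch that pairs a spoken answer with an on-screen card, again assuming the actions-on-google client library’s rich-response builders; the titles, copy, and URLs are placeholders.

```typescript
// A minimal sketch: a spoken reply plus a card that renders on
// screen-capable surfaces (a phone, or a TV via Chromecast).
import * as express from 'express';
import * as bodyParser from 'body-parser';
// v1 actions-on-google client library (no TypeScript typings, hence require).
const { ApiAiApp } = require('actions-on-google');

const server = express();
server.use(bodyParser.json());

server.post('/webhook', (request, response) => {
  const app = new ApiAiApp({ request, response });

  // ask() keeps the conversation open; the rich response pairs a short
  // spoken answer with a visual card for devices that have a screen.
  app.ask(
    app.buildRichResponse()
      .addSimpleResponse('The show starts at 8pm. Here are the details.')
      .addBasicCard(
        app.buildBasicCard('Doors open at 7pm.') // placeholder body text
          .setTitle('Concert at the Fox Theater')
          .setImage('https://example.com/poster.png', 'Concert poster')
          .addButton('Buy tickets', 'https://example.com/tickets')
      )
  );
});

server.listen(8080);
```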

Beyond these two key updates, Google Assistant is also gaining several other features that make it smarter and more useful. They include:

  • A “proactive assistance” feature that allows Google Assistant to automatically alert you about travel, weather, and calendar updates by silently showing a spinning light-up ring on Google Home. Users can hear the updates by asking, “OK Google, what’s up?” It is unclear when this notification-lite feature will roll out.
  • Hands-free phone calls to U.S. and Canadian numbers. The feature works similarly to Amazon’s recently released Alexa voice calling, but with the added ability to dial real phone numbers. Unlike Amazon’s service, it supports only outbound calls for now, because Google says it wants to be “mindful of customer privacy”.
  • New entertainment integrations, including the free tier of Spotify, SoundCloud, HBO, Hulu, CBS All Access, and other popular music and video streaming services. These allow users to ask Google Assistant to play a specific show or song, provided they have installed the corresponding apps on their devices.
  • Text input for Google Assistant, which allows users to interact with the Assistant on Android devices by typing out their requests instead of speaking them out loud.
  • Google also reminded the audience that Google Assistant will be coming to connected cars, as the company announced on Monday that Volvo and Audi are building new models that will run Android-based systems.

Beyond these new features, Google is also aggressively expanding the Assistant to more platforms: it will become accessible on the Android TV OS later this year, as well as on iPhones and iPads via Google’s iOS app. The Android TV update will be accompanied by a brand-new launcher, allowing users to use voice commands to access the over 3,000 Android TV apps available in the Play Store. According to Google, the Assistant is currently available on over 100 million devices. Notably, that’s a fraction of the 2 billion Android devices on the market, and availability doesn’t equal user adoption. (For comparison, Apple’s Siri is currently available on 1 billion devices.)

In addition, Google is following Apple’s lead in processing AI workloads locally on mobile devices as well as in the cloud. This improves app performance and security, and also enables Google Assistant to adapt to a user’s specific preferences more quickly.

Standalone Daydream VR Headsets Aim To Broaden Consumer Appeal

It’s been a full year since Google unveiled its VR platform, Daydream, and so far, only a handful of compatible handsets have been released. Facing mounting competition in the VR space, Google is taking another stab at virtual reality with new Daydream-enabled phones from partners and a new standalone headset form factor.

On the handset front, Google announced that Daydream will be supported by the new Samsung Galaxy S8 phones later this summer. As the best-selling line of Android phones, it’s a big win for Google, even if Samsung continues to support its own platform, Gear VR, which is powered by a rival, Facebook’s Oculus. The upcoming flagship phone from LG will also support Daydream, making the platform considerably more accessible to mainstream users.

Google is also teaming up with HTC Vive and Lenovo to build untethered, standalone VR headsets that deliver an immersive experience without additional phone or PC hardware. The headsets will support inside-out tracking, using “WorldSense” technology derived from Google’s Tango AR platform to map the surrounding space and make sure your view in VR matches your movements in the real world, without the need for external cameras or sensors. This move puts Google in the company of Oculus and Intel, both of which have shown off early standalone headsets with self-contained tracking systems.

Fluid UI Design For Android O & Android Go For Emerging Markets

Near the end of the opening keynote, Google turned its attention to the next Android mobile OS, Android O. The preview highlighted a more fluid UI design, including a picture-in-picture mode for multitasking while watching videos or during video calls, a more customizable notification dots system, and machine learning-powered smart text selection that makes it easier to select text to copy and paste.

In addition, Google launched a new data-conscious version of Android O named Android Go, targeting emerging global markets where mobile connectivity is still developing. Android Go is a modified version of Android for lower-end handsets, complete with apps optimized for low bandwidth and memory. Google says Android devices with less than 1GB of RAM will automatically get Android Go starting with Android O, and it is committing to releasing an Android Go variant for every future Android OS release. Google previously created a similar low-cost Android initiative to serve emerging markets, Android One, which rolled out in India, Pakistan, Bangladesh, Nepal, Indonesia, and other countries across South and Southeast Asia starting in 2014.

What Brands Need To Do

Google’s announcements at this year’s I/O event map closely to two trends emphasized in our Outlook 2017. The introduction of Google Lens marks Google’s official entry into camera-based mobile AR (the Tango AR platform is too inaccessible to count), a leading element of Advanced Interfaces. The notable updates to Google Assistant, in particular the computer vision capabilities that Google Lens brings, make the voice assistant a more helpful and intuitive Augmented Intelligence service for users. And the expansion of the Daydream VR platform shows Google’s continued investment in virtual reality, another facet of the evolution of advanced digital interfaces.

The integration of Google Lens into Google Assistant opens some exciting new opportunities for brands to explore. For example, CPG brands may consider working with Google to make sure that Android users can use Lens to correctly identify their products and receive accurate information. For retailers, the VPS feature holds great potential for in-store navigation and AR promotions once it becomes available on more mobile devices.

The new features coming to Google Assistant make it a more capable contender in the fight against Amazon’s Alexa. In particular, the support for transactions and Visual Responses should offer brands strong opportunities to drive direct sales and engage customers with a multimedia experience. For auto brands, the integration of Google Assistant into upcoming connected cars brings new use cases for engaging car owners through conversational experiences. And the addition of Visual Responses means it is now possible to deliver additional content about your products, be it videos or images, via Google Assistant, adding a visual component that is crucial for marketing fashion and beauty brands.

In terms of VR, Google’s initiatives should expand the accessibility of its VR platform and get more users watching the 360-degree and VR content available on YouTube and other Google platforms. For brands, this means increased opportunities to reach consumers with immersive content on Google-owned platforms. As more mainstream tech and media companies rush into VR to capitalize on the booming popularity of the emerging medium, brand marketers should start developing VR content that enhances their brand messaging and contributes to their campaign objectives.

How We Can Help

While mobile AR technologies and standalone VR devices are still in early stages of development, brands can greatly benefit by starting to develop strategies for these two emerging areas. If you’re not sure where to start, the Lab is here to help.

The Lab has always been fascinated by the enormous potential of AR and its ability to transform our physical world. We’re excited that Google is bringing computer vision to Android devices, which will allow AR experiences delivered via Google Assistant to reach millions of users. If you’d like to discuss how your brand can harness the power of AR to engage your customers and create extra value, please get in touch with us.

The Lab has extensive experience in building Alexa Skills and other conversational experiences to reach consumers on smart home devices. So much so that we’ve built a dedicated conversational practice called Dialogue. The Zyrtec AllergyCast Alexa skill, created in collaboration with J3, is a good example of how Dialogue helps brands build voice-based customer experiences, supercharged by our stack of best-in-class technology partners and an insights engine that extracts business intelligence from conversational data.

As for VR, our dedicated team of experts is here to guide marketers through the distribution landscape. We work closely with brands to develop sustainable VR content strategies to promote branded VR and 360 video content across various apps and platforms. With our proprietary technology stack, powered by a combination of best-in-class VR partners, we offer customized solutions for distributing and measuring branded VR content that truly enhances brand messaging and contributes to campaign objectives.

If you’d like to know how the Lab can help your brand tap into the tech trends coming out of Google I/O this year to supercharge your marketing efforts, please contact our Client Services Director Samantha Barrett ([email protected]) to schedule a visit to the Lab.