Google announced a ton of things from several areas of its business, including AI, machine learning, VR, and AR, on day one of the annual Google I/O developer conference in San Francisco.
The biggest announcement to come out of Google’s keynote came in the form of Google Lens, which received a huge response from the crowd. Lens uses ‘vision-based computing capabilities’ to work out what the user is looking at through their camera. It can be used to identify buildings and places of interest – providing information about them – and to automatically log people into Wi-Fi by scanning the label on the back of a router.
“Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information,” said Sundar Pichai, Google CEO. “The key thing is, you don’t need to learn anything new to make this work—the interface and the experience can be much more intuitive than, for example, copying and pasting across apps on a smartphone.”
Google says it will first bring its Lens capabilities to Assistant and Google Photos, rolling it out to other products in the future.
Lens uses Google’s Tensor Processing Units (TPUs), which run in the company’s cloud, enabling Google to run machine learning algorithms ‘faster and more efficiently’. To further accelerate this, Google also unveiled its second-generation Cloud TPUs.
In addition, Google has made updates to its Assistant. The AI virtual assistant can now schedule calendar appointments and create reminders, and can make hands-free calls in the US and Canada. Furthermore, Google Home devices will now be able to stream music from Spotify, SoundCloud, and Deezer, while HBO Now, CBS All Access and HGTV join Netflix for video streaming. Google also said it will add visual responses from Assistant on users’ TVs via Chromecast.
Google has also announced that it is looking to use its machine learning technology to accurately predict medical events. The tech giant has partnered with medical researchers and bioinformaticians at UC San Francisco, Stanford Medicine and University of Chicago Medicine to see how the combination of clinical expertise and machine learning could ‘avoid costly incidents and save lives’.
Moving on from AI and machine learning, Google announced that it is partnering with HTC, which developed the Vive headset, and Lenovo to create standalone Daydream VR headsets that don’t require a smartphone to work.
Google has given no word on potential release dates for the headsets, which were created with the help of Qualcomm, but says they will use a location tracking technology called WorldSense to identify objects around the user, so they can safely walk around.
“These devices build on what’s already great about smartphone VR, and they make the whole experience even easier and more comfortable,” said Clay Bavor, VP of virtual and augmented reality at Google. “Standalone headsets don't require a phone or PC. Instead, getting into VR is as simple as putting the thing on.”
Daydream will also be making its way to more devices this summer, including the Samsung Galaxy S8 and S8+, as well as LG’s next flagship device due later this year.
Finally, Google announced that there are now 2bn monthly active Android device users, across all screens, around the world.
To mark the milestone, the tech giant says it will launch Android O later this year to ‘bring more fluid experiences to your smaller screen’, as well as to improve battery life and security.