Thanks to Google Lens, AI Gets a Better Visual Cortex

Ever since Google CEO Sundar Pichai presented Google Lens in his keynote at Google I/O 2017, it has been a buzzword. A lot of people have been thinking, talking, and writing about what Google Lens can offer humans. On the flip side, Google has made a breakthrough contribution to the AI ecosystem: the internet giant has enhanced the visual cortex for yet-to-be-born AI super robots.

Google CEO Sundar Pichai highlighted the tool as an example of Google being at an “inflection point with vision”.

All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission.

— Sundar Pichai on Google Lens

Google Lens is a smart camera app that can read and understand your images. Google Lens is an image search in reverse: you take a picture, and Google figures out what’s in it. This kind of AI-powered computer vision has been around for some time, but Lens takes it much further.
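The "image search in reverse" idea can be sketched as nearest-neighbor matching: a vision model turns the photo into a feature vector, which is then compared against a database of labelled vectors. Below is a toy, self-contained illustration of that lookup step; the three-dimensional vectors, the labels, and the `best_label` helper are all made-up stand-ins, not Google's actual pipeline.

```python
import math

# Toy database mapping labels to precomputed feature vectors.
# In a real system these would come from a deep vision model;
# here they are invented for illustration.
LABELLED_FEATURES = {
    "rose":       [0.9, 0.1, 0.0],
    "sunflower":  [0.2, 0.9, 0.1],
    "storefront": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_label(query_vector):
    """Return the label whose stored feature vector is most similar to the query."""
    return max(LABELLED_FEATURES,
               key=lambda label: cosine_similarity(query_vector,
                                                   LABELLED_FEATURES[label]))

# Pretend a vision model produced this vector from a photo of a flower.
photo_features = [0.85, 0.15, 0.05]
print(best_label(photo_features))  # prints "rose"
```

Production systems of course operate on much higher-dimensional embeddings and use approximate nearest-neighbor indexes rather than a brute-force scan, but the matching principle is the same.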

A few things Lens can do:

  • Identify a flower’s species when you view it through your phone’s camera;
  • Read a Wi-Fi password through your phone’s camera and log you into the network;
  • Offer reviews and other information about a restaurant or retail store with a snap.

In addition, Pichai showed how Google’s algorithms could clean up and enhance photos. If you took a picture of your child’s baseball game through a chain-link fence, Google could remove the fence from the photo. Or, if you took a photo in low light, Google Lens could enhance it to make it less blurry.

Google Lens is being added to Google Photos and Google Assistant. The idea is to leverage Google’s computer vision and AI technology from your phone’s camera. The integration of Lens into Assistant can also help with translation. Google’s Scott Huffman demonstrated this by holding up his camera to a sign in Japanese, tapping the Lens icon, and saying, “What does this say?” Google Assistant then translated the text.

Google Lens is a giant leap in computer vision, enabling AI systems to act on and react to the visual world. I expect augmented reality (AR) to be an early adopter of Google Lens features.
