Every year at Google I/O, the Google team blows our minds with the innovative new products and software it presents. In recent years, the hype has grown tenfold with the incorporation of Google's DeepMind, the core artificial intelligence arm of the search-engine giant, and with Google's huge undertaking to implement it across its products and services.
One such innovative application is Google Lens, which was demoed at the I/O 2017 event. Google Lens is the true manifestation of augmented reality paired with AI, giving users the ability to get information about a store or a product just by pointing the camera at it. Bixby Vision from Samsung tried this a couple of months back, but we all know how that fared. Now the hyped feature is rolling out on Pixel smartphones, and it will be exciting to see whether it really lives up to its name or is just another gimmick that delivers 50-50 results.
What is Augmented Reality?
Many companies have been working on their own versions of AR, as we hear in the news and in updates from big events like CES. Augmented Reality is essentially a mixed reality of sorts: the device camera captures the real world, and sophisticated technology either places virtual objects into that view or identifies real objects in it, using the visual data to relay actual information about the object in question.
Google did try out an app along these lines in the early days of Android, called Google Goggles. Google Goggles was the first of its kind: it took a picture and then ran a reverse lookup on the image, trying to find similar images or identify text. Back then, the visual data gathered and the responses returned were not well optimized and were definitely not much help. If you point the camera at a red toy car and Goggles returns similar red toy cars instead of telling you the actual model and its price, then what use is the app anyway?
Back then we didn't have AI to help us out. Today, neural networks work at a staggering pace, understanding the world in ever greater depth and detail. They keep getting better as we interact with them and teach them what we want to see. This has helped Google reach a stage where you can walk up to a store on the street, point the camera at the store banner, and have Google relay the AR data along with location information: hours, ratings, reviews and everything else you would get on a Google search page.
Much like what Google Lens is promising, Samsung's Galaxy S8 tried the same thing with Bixby Vision, but it wasn't a huge hit, even though it was driven by Samsung's own AI. Google, too, tried to get a wearable out, one that gained a lot of hype: Google Glass, an actual pair of glasses you wear that showed you maps, location hours and more. That wasn't a success either. It was distracting to wear, it wasn't true augmented reality, and the inventory of apps it supported was very small.
Other companies, like Microsoft, also tried the wearable approach. Microsoft brought out the HoloLens, the first wearable to actually use augmented reality to spatially place objects in front of you. But production took a long time, and we still don't have the product widely available commercially. The steep price doesn't make it a favorite, either.
Promises from Google Lens
As promised at the I/O event, Google Lens is going to combine innovations like ARCore, Google DeepMind, Google Translate and Google's search algorithms to determine the exact result you're looking for, whether you're looking at an image, a 3D object, a store banner or even text in a different language. Google Lens would be powerful enough to integrate all these existing and evolving technologies to give you exactly the result you desire.
What is more exciting is that you can point Lens at a flyer, say for a concert or an event, and Google will run a reverse lookup on the flyer, identify the booking URL, and then even proceed to book tickets for you, all from the data in a single picture of a flyer. How awesome is that, and how many apps can claim to do the same?
Google Lens came out in a beta version in the Photos app, but now the feature is being added to the Google Assistant. This means that every phone using Google's voice-based assistant, be it Android or iOS, would eventually get it, and everyone could use it.
I am really excited to see this super-smart feature on my phone, and the thought that it will not end up a Pixel-only exclusive makes it even better. We hope the search-engine giant brings such innovation to all fronts, for everyone to use.
What is your opinion on this amazing feature? Has your Pixel unit received it yet? Let us know in the comments section below!
Stay tuned to the Mr. Phone app and website for all the latest updates. Also, if you wish to write for our website, check out the second edition of our Blogging Contest.