Google Lens is the feature that uses artificial intelligence to extract information about an object framed in a photo or through the camera lens: from the type of plant to a bottle of wine. Currently available in beta only on the Pixel 2, Lens will be integrated into Google Assistant, though the feature will remain exclusive to the Pixel smartphone line.
The new feature will arrive in the coming weeks and will be available in English in the United States, the United Kingdom, Canada, Australia, India and Singapore.
According to Google Assistant product manager Ibrahim Badr, Lens “will allow quick help for everything that can be seen through the camera lens.” In short, unlike the first version, which required the artificial intelligence to analyze an already-taken photo, users can now simply tap the new feature (available in the voice assistant) and point the camera.
In its final version, Lens will be able to recognize and extract information from text: in the case of a business card, it will save the contact details it contains, and if a telephone number is displayed, it will be possible to call it directly.
The same applies to monuments and other city landmarks: just point the camera at them to get information on their history, along with curiosities and anecdotes. Finally, in the case of books, films or works of art, the algorithm will not only recognize them but also provide a brief plot summary (for novels or films) or further information about the artist behind a painting or sculpture.