Take a picture and ask: where can I buy the thing in the photo? Google now understands queries about objects in images


Google has begun rolling out a feature in its self-titled app that it first previewed last autumn: Multisearch, an advanced search that takes the process a step further. It eliminates the time-consuming step of having to type out an entire query when searching.

Multisearch in Google can understand the context of a photo and respond to relevant search queries.

For example, all you have to do is point the camera at an object and then type a question about what is in the picture. Google relies on an advanced search technology called MUM (Multitask Unified Model), which can understand much more complex questions. At last year's presentation, Google showed an impressive demo: the user simply took a picture of a broken bicycle derailleur and typed a question asking how to repair it. The phone recognized the part and was able to find repair instructions.

However, the feature is also trained for other tasks, shopping in particular. You can take a picture of a piece of clothing in one color, ask whether it exists in another, and the app should be able to find it. It can also be useful for gardeners, who can photograph a plant and quickly find out how to care for it. The range of possible uses is much wider and will certainly keep growing.

What seemed like a distant future last fall is now arriving in a first beta version. The feature will soon be available in the Google app for both Android and iOS and, as usual, will reach users in the United States first. But since Google is a global company, it is only a matter of time before it becomes available elsewhere.

Google Lens Multisearch Demo:

Source: Google

