Google on Thursday began rolling out a new search feature to the US market that allows users to search for information using text and images simultaneously. The new multiple search functionality is part of the American giant’s strategy to use AI to “create information experiences that are truly conversational, multimodal and personal”, as Google CEO Sundar Pichai recently put it.

The multiple search function is integrated into Google Lens, the image recognition tool accessible via the Google application. For now, the feature is only available in beta in the US market for users searching in English. It is also geared primarily toward shopping searches.

“At Google, we want to find new ways to help you find the information you’re looking for, even if it’s tricky to express your needs. That’s why today we’re introducing a whole new way to search: using text and images simultaneously. With multiple search in Google Lens, you can go beyond the search box and ask questions about what you see,” the US giant said in a blog post.

A new search mode

For example, a user can take a screenshot of an orange dress and add “green” to their query to find the same dress in that color. The feature is also useful beyond shopping: a user could take a photo of a rosemary plant and add the query “care instructions” to find out how to look after their new plant.

In its blog post published on Thursday, Google said it is also exploring ways to improve this functionality with MUM (Multitask Unified Model), Google’s latest AI model. The tech giant recently shared how it uses MUM and other AI models to more effectively deliver crisis assistance information to people seeking help.

In February, Google management indicated that it wanted to invest heavily in AI models enabling multimodal search. “In 2022, we will remain focused on evolving our knowledge and information products, including Search, Maps and YouTube, to make them even more useful,” Sundar Pichai said at the time. “Investments in AI will be essential, and we will continue to make improvements to conversational interfaces like the Assistant. From MUM to Pathways to BERT and more, these deep investments in AI will help us be at the forefront of search quality.”

Google: this feature will change your life

Google reigns as the undisputed master of web search and does not intend to lose its place. The Mountain View firm keeps refining its search engine to make it easier for us to find answers to our questions. Already capable of handling keyword or image searches with Lens, Google now wants to go even further by combining the two.

The Internet giant has just introduced “multisearch”, a multiple search function that combines visual search and keyword search. Until now, Google could analyze an image and provide information about it or display similar images, but there was no reliable way to refine that search with keywords. You can try the experiment on a computer, but the results are usually not very relevant.

How does it work?

With its latest feature, the firm is tackling complex searches. Concretely, the tool can help you find a garment or a pair of shoes in another color. For example: you have fallen for a yellow dress and would like to find it in another color; just search by image with Google Lens and then add the keyword “blue” or “green”. The search engine will then offer the same dress, but in the color of your choice.

Google’s innovation could therefore save you time during your searches. It is directly integrated into the Google application for Android and iOS and relies on the Lens function. To use it, tap the camera icon and search by taking a photo or selecting one from your phone (saved image, screenshot, etc.). Google has then added a new button at the top of the screen called “Add to your search”; from there, the user can type text to refine the search.

Unfortunately, the feature is currently only available in beta in the United States. Google explains that its solution is especially effective for shopping and that it is based on artificial intelligence. “All of this is made possible by our latest advances in artificial intelligence, which make understanding the world around you more natural and intuitive,” says Belinda Zeng, product manager for Google Search. Zeng adds that the Californian firm is studying how it could improve multisearch with MUM, its Multitask Unified Model, which could be used to “improve the results for all the questions you could imagine asking”.

Multisearch, the new feature that will revolutionize your Google searches

Google is constantly improving its search engine to make it easy for us to find answers to our questions. Today, in addition to keyword searches, the Mountain View firm also offers a visual search engine: Google Lens.

As a reminder, Lens analyzes the photos or screenshots you submit to it and returns information on the object or objects in the image. For example, if you photograph a plant, Google Lens will give you its name, along with links to pages where you can learn more about it.

Multisearch: the Google feature that will save you a lot of time

And this week, Google is announcing a new feature called “multisearch” that combines visual search with keyword search.

“At Google, we’re always thinking up new ways to help you find the information you’re looking for, even if it’s hard to articulate what you need. That’s why today we’re introducing a whole new way to search: using text and images at the same time. With multi-search in Lens, you can go beyond the search box and ask questions about what you see,” says Belinda Zeng, Product Manager, Google Search.

Google Multisearch: how does it work?

Unfortunately, for the moment, this search mode, which combines an image and a keyword, is only available in beta in the United States. According to Google, the feature is currently offered in the Google apps for iOS and Android.

When the user opens the Google app, they can run a visual search by tapping the Google Lens icon. They can then either take a photo or search from an image saved on their smartphone.

What’s new is that, as part of this beta test in the United States, Google has added a “+ Add to your search” button, which lets you add text to the search.

For example, if the user found a dress on an e-commerce site, they can search for that dress’s image on Google Lens and then add the text “green” to their search. Taking this keyword into account, Google Lens will therefore search for the same dress, but in green.

The user can also take a photo of their dining room, search on Lens, and add the keyword “coffee table”. The search engine will then look for links to matching tables.

Another example given by Google: the user can photograph a plant, search for it on Google Lens, and add the keyword “care instructions”.

According to the firm, this functionality is made possible by its recent advances in artificial intelligence. But unfortunately, for the moment, we don’t know when multisearch will come out of beta, nor when this new way of searching online will be available in French.

Google Lens improves with the arrival of multiple search

Google is announcing a new multisearch feature on Google Lens, its image recognition app, which combines image search and text search. Google Lens can display information related to the objects it identifies through visual analysis. In concrete terms, you can search using your smartphone’s camera and get various kinds of information: identify a plant or an animal, translate a word you see in front of you, pull up details on a monument or a restaurant, scan a code, and so on. Today, Google announces in a blog post the arrival of multiple search (multisearch) on its Google Lens application.

A mix of visual and textual search

This new Google Lens feature combines both image search and text search.
