
Search text and images together

Google is releasing a new feature for its search engine that tries to mimic how we ask for things in the real world.

Instead of just typing in a search box, you can now present an image using Google Lens and then refine the results with follow-up questions. For example, you could submit a picture of a dress and then ask to see the same style in different colors or skirt lengths. Or if you spot a pattern on a shirt that you like, you can ask to see the same pattern on other items like curtains or ties. The feature, dubbed “Multisearch,” is now rolling out to Google’s iOS and Android apps, though it doesn’t yet include Google’s “MUM” algorithm, which the company demonstrated last fall as a way to transform search results.

Google Director of Search Lou Wang says multisearch reflects the way we naturally ask questions about the things we see, and that it’s an important part of how Google views the future of search. It may also help Google maintain an edge over a wave of more privacy-focused search engines, all of which remain focused on text-based queries. (It is also reminiscent of a four-year-old Pinterest feature that lets users search for clothes based on photos from their wardrobe.)

“A lot of people think search is over and all the cool, innovative stuff was done in the early days,” says Wang. “We’re beginning to realize that that couldn’t be further from the truth.”

Adding text to image searches

To use the new multisearch feature, you need to open the Google app on your phone and then tap the camera icon on the right side of the search bar to bring up Google Lens. From here you can use your camera’s viewfinder to identify an object in the physical world, or select an existing image from your camera roll.

[Image: courtesy of Google]

Once you’ve identified an item, swipe up to see visual matches, then tap the Add to Your Search button at the top of the screen. This opens a text box to narrow the results.

While Google Lens has been available since 2017, the ability to filter your search using text is new. It’s not just about matching an image to similar images, but also about understanding the characteristics of those images so users can ask further questions about what they’re seeing. As you might expect, this requires a mix of computer vision, natural language understanding, and machine learning techniques.
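For readers curious about how an image query and a text refinement can be combined at all, here is a minimal, illustrative sketch using an off-the-shelf CLIP-style model from the sentence-transformers library. It is not Google’s implementation; the model choice, the file names, and the simple embedding-averaging fusion are assumptions made purely for demonstration.

```python
# Illustrative sketch only: combine an image query with a text refinement by
# averaging their embeddings in a shared image-text space (CLIP-style).
# Model choice, file paths, and the fusion scheme are assumptions, not Google's method.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # maps images and text into one vector space

# Embed the query photo (e.g., a dress) and the text refinement.
image_emb = model.encode(Image.open("dress_photo.jpg"), convert_to_tensor=True)
text_emb = model.encode("same style in green", convert_to_tensor=True)

# Naive fusion: average the two vectors so results reflect both the picture and the words.
query_emb = (image_emb + text_emb) / 2

# Score against a small catalog of product images and rank by cosine similarity.
catalog_paths = ["item1.jpg", "item2.jpg", "item3.jpg"]
catalog_embs = model.encode([Image.open(p) for p in catalog_paths], convert_to_tensor=True)
scores = util.cos_sim(query_emb, catalog_embs)[0]
for idx in scores.argsort(descending=True).tolist():
    print(catalog_paths[idx], float(scores[idx]))
```

A production system would of course use far richer models and a dedicated vector index rather than a brute-force comparison, but the basic idea of scoring results against both the picture and the accompanying words is the same.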

“People want to show you a picture, and then they want to tell you something, or they want to make a follow-up request based on that. That’s exactly what multisearch enables,” says Wang.

Currently, Google says the feature works best with shopping-related searches, which is what many people already use Google Lens for. For example, the company demonstrated a visual search for a pair of yellow heels with a tie around the ankle, then added the word “flat” to find a similar design without heels. Another example: taking a picture of a dining table and searching for matching coffee tables.

But Belinda Zeng, product manager for Google Search, says multisearch could be useful in other areas as well. She gives the example of spotting a nail art design on Instagram and searching for tutorials, or photographing a plant whose name you can’t remember and looking up care instructions.

“We definitely see this as a powerful way to search beyond just shopping,” she says.

Beyond keyword search

Making multisearch a natural part of people’s search habits will have its challenges.

For one, Google Lens isn’t available in web browsers, although Zeng says Google is looking into browser support. Even in the Google mobile app, Lens is easy to miss, and swiping up on image results and tapping another button isn’t the most intuitive process.

Wang says Google has “a lot of different ideas and research” on how to make Lens better known, though he didn’t go into specifics. However, he noted that Google now handles more than 1 billion image searches per month.

“Even now, people are just waking up to the fact that Google can search using images or your camera,” he says.

The bigger challenge will be answering these visual queries competently enough that multisearch doesn’t just feel like a gimmick. One use case Google is interested in is helping people with repairs around the home, but identifying the myriad devices and components involved, and then finding relevant instructions on how to fix them, is still an elusive problem.

[Image: courtesy of Google]

“There are definitely some use cases that we’re excited about but are definitely a work in progress in terms of quality,” says Zeng.

Ultimately, however, Google hopes to change users’ perception of search so that it’s no longer exclusively text-based. That would give the company a greater advantage over alternative, privacy-centric search engines such as DuckDuckGo, Brave Search, Neeva, Startpage, and You.com.

Wang says Google doesn’t think about multisearch in terms of competition with other search engines. Still, the antiquated state of text-based search might explain why newer upstarts believe they even stand a chance against Google. Leveraging computer vision and machine learning to fundamentally change the way people search can help Google, whether it cares about alternative search engines or not.

“Over time, as Google gets better and better at understanding images and combining them with text to refine searches,” says Wang, “it’s going to be a natural part of how people think about search.”

https://www.fastcompany.com/90738851/google-multisearch-image-text-search

JACLYN DIAZ
