The latest Google Lens update might bring Circle to Search to many more phones

Google seemingly has plans to expand its Circle to Search feature to more Android phones via Google Lens. In a recent deep dive, news site Android Authority found clues to the update within recent Google app beta files and compiled them together.

What’s particularly interesting is that they managed to get the tool working on a smartphone, possibly hinting at an imminent release. According to the report, they even triggered a popup notification informing users about the update.

It tells people to hold down the home button to access Circle to Search, much like the experience on the Galaxy S24. Upon activation, a three-button navigation bar appears at the bottom, and an accompanying video shows the tool in action as it looks up highlighted portions of the Play Store on Google Search. The UI looks, unsurprisingly, much like it does on Galaxy phones, with search results rising up from the bottom.

Clashing with Gemini

In that video, you may notice that the rainbow filter animation is gone, replaced by a series of dots and lines. That’s the older beta, though; the newer version restores the animation and adds a Translate button, which shows up in the lower right-hand corner next to the search bar.

At a glance, it seems Circle to Search on Google Lens is close to launching, although it is still a work in progress with a few issues to iron out. For example, how will it work on a smartphone running the Gemini app, where holding down the home button launches the chatbot? Google might give Circle to Search priority in that instance, so a long press opens the tool rather than the AI assistant. At this point, however, it’s too early to tell.

New navigation option

Android Authority also found “XML files referring to pill-based gesture navigation.” If you don’t know what that is, it’s the oval at the bottom of Android displays. The shape lets you move between apps with basic gestures. Google Lens could offer this option, allowing users to ditch the three-button navigation bar, but it may not come out for a while as it doesn’t work in the betas.

Circle to Search on Google Lens will most likely stick with the three buttons, though. The original report has a theory as to why: implementing pill-based navigation would require pushing system OTA (over-the-air) updates to millions upon millions of Android smartphones, which “may not be feasible.” So, to get Circle to Search out to people sooner, the gesture navigation option will likely be pushed back a bit; the three-button solution is simply easier to implement.

There is no word on when the update will arrive, but we hope it’s soon, as it is a great feature and currently a highlight for the Galaxy and Pixel devices that have it. 

While you're here, be sure to check out TechRadar's list of the best Android phones for 2024.

Google teases new AI-powered Google Lens trick in feisty ChatGPT counter-punch

It's another big week in artificial intelligence in a year that's been full of them, and Google has teased a new AI feature coming to mobile devices just hours ahead of its Google I/O 2024 event – where we're expecting some major announcements.

A social media post from Google shows someone asking their phone about what's being shown through the camera. In this case, it's people setting up the Google I/O stage, which the phone correctly identifies.

User and phone then go on to have a real-time chat about Google I/O 2024, complete with a transcription of the conversation on screen. We don't get any more information than that, but it's clearly teasing some of the upcoming reveals.

As far as we can tell, it looks like a mix of existing Google Lens and Google Gemini technologies, but with everything running instantly. Lens and Gemini can already analyze images, but studying real-time video feeds would be something new.

The AI people

It's all very reminiscent of the multimodal features – mixing audio, text, and images – that OpenAI showed off with its own ChatGPT bot yesterday. ChatGPT now has a new AI model called GPT-4 Omni (GPT-4o), which makes all of this natural interaction even easier.

We've also seen the same kind of technology demoed on the Rabbit R1 AI device. The idea is that these AIs become less like boxes that you type text into, and more like synthetic people who can see, recognize, and talk.

Based on this teaser, it looks likely that this is the direction the Google Gemini AI model and chatbot are heading. While we can't identify the smartphone in the video, it may be that these new features come to Pixel phones (like the new Google Pixel 8a) first.

All will be revealed later today, May 14: everything gets underway at 10am PT / 1pm ET / 6pm BST, which is May 15 at 3am AEST. We've put together a guide to how to watch Google I/O 2024 online, and we'll be reporting live from the event too.

Google Lens just got a powerful AI upgrade – here’s how to use it

We've just seen the Samsung Galaxy S24 series unveiled with plenty of AI features packed inside, but Google isn't slowing down when it comes to upgrading its own AI tools – and Google Lens is the latest to get a new feature.

The new feature is actually an update to the existing multisearch feature in Google Lens, which lets you tweak searches you run using an image: as Google explains, those queries can now be more wide-ranging and detailed.

For example, Google Lens already lets you take a photo of a pair of red shoes, and append the word “blue” to the search so that the results turn up the same style of shoes, only in a blue color – that's the way that multisearch works right now.

The new and improved multisearch lets you add more complicated modifiers to an image search. So, in Google's own example, you might search with a photo of a board game (above), and ask “what is this game and how is it played?” at the same time. You'd get instructions for playing it from Google, rather than just matches to the image.

All in on AI

(Image: Two phones on an orange background showing Google Lens. Image credit: Google)

As you would expect, Google says this upgrade is “AI-powered”, in the sense that image recognition technology is being applied to the photo you're using to search with. There's also some AI magic applied when it comes to parsing your text prompt and correctly summarizing information found on the web.

Google says the multisearch improvements are rolling out to all Google Lens users in the US this week: you can find it by opening up the Google app for Android or iOS, and then tapping the camera icon to the right of the main search box (above).

If you're outside the US, you can try out the upgraded functionality, but only if you're signed up for the Search Generative Experience (SGE) trial that Google is running – that's where you get AI answers to your searches rather than the familiar blue links.

Also just announced by Samsung and Google is a new Circle to Search feature, which means you can just circle (or scribble on) anything on screen to run a search for it on Google, making it even easier to look up information visually on the web.

Google Bard just got a super-useful Google Lens boost – here’s how to use it

Google Bard is getting update after update as of late, with the newest one being the incorporation of Google Lens – which will allow users to upload images alongside prompts to give Bard additional context.

Google seems to be making quite a point of expanding Bard’s capabilities and giving the chatbot a serious push into the artificial intelligence arena, either by integrating it into other Google products and services or simply improving the standalone chatbot itself.

This latest integration brings Google Lens into the picture, allowing you to upload images to Bard so it can identify objects and scenes, provide image descriptions, and search the web for pictures of what you might be looking for.

(Image 1 of 2: Screenshot of Bard. Image credit: Future)

(Image 2 of 2: Asking Google Bard to show me a kitten. Image credit: Future)

For example, I asked Bard to show me a photo of a kitten using a scratching post, and it pulled up a photo (accurately cited!) of exactly what I asked for, with a little bit of extra information on why and how cats use scratching posts. I also showed Bard a photo from my phone gallery, and it accurately described the scene and offered some interesting tidbits of information about rainbows.

Depending on what you ask Bard to do with the image provided, Bard can provide a variety of helpful responses. Since the AI-powered chatbot is mostly a conversational tool, adding as much context as you possibly can will consistently get you the best results, and you can refine its responses with additional prompts as needed. 

If you want to give Bard's new capabilities a try, just head over to the chatbot, click the little icon on the left side of the text box where you would normally type out your prompt, and add any photo you desire to your conversation. 

Alongside the image update, you can now pin conversation threads, have Bard read responses out loud in over 40 languages, and access easier sharing methods. You can check out the Bard update page for a more detailed explanation of all the new additions.

Google Lens and Bard are an AI tag team that ChatGPT should fear

Google Lens has long been a powerful party trick for anyone who needs to identify a flower or translate their restaurant menu, but it's about to jump to the next level with some Bard integration that's rolling out “in the coming weeks”.

Google teased its tag-team pairing of Lens and Bard at Google IO 2023, but it's now given us an update on how the combo will work and when it's coming. In a new blog post, Google says that within weeks you'll be able to “include images in your Bard prompts and Lens will work behind the scenes to help Bard make sense of what’s being shown”.

The example that Google has shared is a shopping-based one. If you have a photo of a new pair of shoes that you've been eyeing up for a vacation, you can ask Bard what they're called and, unlike standard Lens, start grilling Bard for ideas on how you should style the new shoes.

Naturally, the Lens-Bard combo will be able to do more than just offer shopping advice, with huge potential for travel advice, education, and more. For example, imagine being able to ask a Lens-powered Bard to not only name a holiday landmark but build you a good day trip itinerary around it.

This isn't the end of Google Lens' new tricks, either. It's also tentatively jumping into the health space with a new feature that helps you identify any skin conditions that have been nagging you (below). To use the new feature, Google says you can “just take a picture or upload a photo through Lens, and you’ll find visual matches to inform your search”. 

It can apparently also help identify other nagging issues like “a bump on your lip, a line on your nails, or hair loss on your head”. Naturally, these won't be proper diagnoses, but they could be the start of a conversation with your doctor.

If you aren't familiar with Google Lens, it's pretty easy to find on Android – it'll either be built into your camera app or you can just download the standalone Lens app from the Google Play Store. On iPhone, you'll find Lens within the official Google app instead.

Next-gen Lens

(Image: A phone screen on an orange background showing a Google Lens search for a skin condition. Image credit: Google)

The budding Google Lens and Bard partnership could be a match made in search heaven, given that Lens is the most powerful visual search tool around and Bard is improving by the week. And that combo could be a powerful alternative to ChatGPT.

ChatGPT itself has basic image recognition powers and Microsoft did recently bring AI-powered image recognition to its Bing search engine. But the integration of the two isn't quite as powerful as the incoming Lens-Bard integration, at least from what we've seen from Google's demos.

Unfortunately, Google's extreme tentativeness around Bard (which is still labeled an 'experiment') means we might not see its full potential for a while. For example, the huge potential power of this Lens and Bard combination will be limited by the fact that there's still no Google Bard mobile app.

Google could change its stance in the future, but right now we're limited to using Bard in our web browsers – and that's far less convenient for visual search than scanning the world with a smartphone and its built-in camera.

So while the integration of powerful Google apps like Lens with Bard has massive potential for how we search the world for info, ChatGPT will rest a little safer in the knowledge that Google is taking a glacial approach to unleashing its full AI-powered potential.

Multisearch could make Google Lens your search sensei

Google searches are about to get even more precise with the introduction of multisearch, a combination of text and image searching with Google Lens. 

After making an image search via Lens, you’ll now be able to ask additional questions or add parameters to your search to narrow the results down. Google’s use cases for the feature include shopping for clothes with a particular pattern in different colors or pointing your camera at a bike wheel and then typing “how to fix” to see guides and videos on bike repairs. According to Google, the best use case for multisearch, for now, is shopping results. 

The company is rolling out the beta of this feature on Thursday to US users of the Google app on both Android and iOS platforms. Just click the camera icon next to the microphone icon or open a photo from your gallery, select what you want to search, and swipe up on your results to reveal an “add to search” button where you can type additional text.

This announcement is a public trial of a feature that the search giant has been teasing for almost a year; Google discussed it when introducing MUM at Google I/O 2021, then provided more information on it in September 2021. MUM, or Multitask Unified Model, is Google’s new AI model for search, first revealed at that same I/O event.

MUM replaced the older AI model, BERT (Bidirectional Encoder Representations from Transformers), and according to Google it is around a thousand times more powerful.

(Image: Google Lens multisearch. Image credit: Google)

Analysis: will it be any good?

It’s in beta for now, but Google sure was making a big hoopla about MUM during its announcement. From what we’ve seen, Lens is usually pretty good at identifying objects and translating text. However, the AI enhancements will add another dimension to it and could make it a more useful tool for finding the information you need about what you're looking at right now, as opposed to general information about something like it.

It does, though, raise questions about how good it’ll be at pinning down exactly what you want. For example, if you see a couch with a striking pattern on it but would rather have that pattern on a chair, will you reasonably be able to find what you want? Will the results point you to a physical store or to an online storefront like Wayfair? Google searches can often surface inaccurate physical inventories for nearby stores; is that getting better as well?

We have plenty of questions, but they’ll likely only be answered once more people start using multisearch. The nature of AI is to get better with use, after all.
