These new AI smart glasses are like getting a second pair of ChatGPT-powered eyes

The Ray-Ban Meta glasses have a new rival for the title of best smart glasses, with the new Solos AirGo Visions letting you quiz ChatGPT about the objects and people you're looking at.

Unlike previous Solos glasses, the AirGo Visions boast a built-in camera and support for OpenAI's latest GPT-4o model. These let the glasses identify what you're looking at and respond to voice prompts. For example, you could simply ask, “what am I looking at?” or give the AirGo Visions a more specific request like “give me directions to the Eiffel Tower.”
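Solos hasn't published how the glasses talk to the model, but a query like “what am I looking at?” maps naturally onto OpenAI's chat API, which lets a single user message pair a camera frame with a text question. A minimal sketch of assembling such a request (the function name and JPEG framing are illustrative, not Solos' actual implementation; the payload would then be sent with OpenAI's client library):

```python
import base64

def build_vision_query(image_bytes, question="What am I looking at?"):
    """Assemble a GPT-4o chat request that pairs a camera frame
    (JPEG bytes) with a spoken question, the way camera-equipped
    glasses might."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                # Images can be passed inline as a base64 data URL
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }
```

The same payload shape works for the more specific requests too – you'd just swap the question text.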

Another neat feature of the new Solos glasses is their modular frame design, which means you can change some parts – for example, the camera or lenses – to help them suit different situations. These additional frames start from $89 (around £70 / AU$135).

If talking to a pair of camera-equipped smart glasses is a little too creepy, you can also use the camera to simply take holiday snaps. The AirGo Visions also feature built-in speakers to answer your questions or play music.

While there's no official price or release date for the full version of the AirGo Visions, Solos will release a version without the camera for $249 (around £200 / AU$375) in July. That means we can expect a camera-equipped pair to cost at least as much as the Ray-Ban Meta glasses, which will set you back $299 / £299 / AU$449.

How good are AI-powered smart glasses?

While we haven't yet tried the Solos AirGo Visions, it's fair to say that smart glasses with AI assistants are a work in progress. 

TechRadar's Senior Staff Writer Hamish Hector recently tried the Meta AI's 'Look and Ask' feature on his Ray-Ban smart glasses and found the experience to be mixed. He stated that “the AI is – when it works – fairly handy,” but that “it wasn’t 100% perfect, struggling at times due to its camera limitations and an overload of information.”

The smart glasses failed in some tests, like identifying trees, but their ability to quickly summarize a confusing, information-packed sign about the area’s parking restrictions showed how useful they can be in some situations.

As always, with any AI-powered responses, you'll want to corroborate any answers to filter out errors and so-called hallucinations. But there's undoubtedly some potential in the concept, particularly for travelers or anyone who is visually impaired.

The Solos AirGo Visions' support for OpenAI's latest GPT-4o model should make for an interesting comparison with the Ray-Ban Meta smart glasses when the camera-equipped version lands. Until then, you can check out our guide to the best smart glasses you can buy right now.

You might also like

TechRadar – All the latest technology news

Read More

One of the most persistent Windows 11 bugs ever keeps telling users they’ve changed their location, when they haven’t – but it’s getting fixed

Windows 11 has a new bug (one that’s also in Windows 10) whereby the operating system repeatedly tells users that their time zone has changed, when it hasn’t – driving some users to the point of distraction, by all accounts.

Windows Latest flagged up multiple complaints about this bug, which has been acknowledged by Microsoft, and the company is now working on a fix.

Indeed, the tech site notes that it has experienced the glitch itself, whereby a dialog box pops up, warning that “due to a location change a new time zone has been detected.”

Then the user has the choice of clicking ‘Ignore’ to dismiss the prompt, or ‘Accept’ to be taken to the Date & Time settings where there’s actually nothing amiss (the time zone and location aren’t changed, just to clarify).

Essentially, the prompt is appearing by accident, but the real problem is that affected users don’t just see this once. It’s occurring repeatedly and in some cases multiple times per day, or even hour, which is going to get seriously tiresome.

A user hit by the problem complained in Microsoft’s Feedback Hub: “This is the 2nd system where this pop-up about me changing time zones has occurred. After I set the date and time (Central time zone), why does Windows think that I have moved 455 miles to the East? Fix your darn OS Microsoft.”


Analysis: A rare bug apparently – but a seriously annoying glitch

This is a bit of an odd one, to say the least, and while it’s a relatively benign bug – an errant pop-up that doesn’t actually throw a spanner in the works (unlike some of the showstoppers we’ve seen in the past) – if it’s happening regularly, then it’s going to be a headache.

The good news is that Microsoft says the bug is rare, and so presumably the set of Windows 11 and 10 users who see it particularly regularly is smaller still. That said, it needs to be fixed, and the problem has been around for a few weeks now.

According to Windows Latest, the fix is already in the pipeline and should (most likely) be applied as a server-side solution, meaning that it’ll happen on Microsoft’s end, and you won’t need to wait for an update to contain the cure if you’re affected by this issue. Fingers crossed that this resolution arrives swiftly, then.

Meantime, if you’re getting these head-scratching time zone notifications, there’s nothing you can do but keep dismissing them.


Runway’s new OpenAI Sora rival shows that AI video is getting frighteningly realistic

Just a week on from the arrival of Luma AI's Dream Machine, another big OpenAI Sora rival has landed – and Runway's latest AI video generator might be the most impressive one yet.

Runway was one of the original text-to-video pioneers, launching its Gen-2 model back in March 2023. But its new Gen-3 Alpha model, which will apparently be “available for everyone over the coming days”, takes things up several notches with new photo-realistic powers and promises of real-world physics.

The demo videos (which you can see below) showcase how versatile Runway's new AI model is, with the clips including realistic human faces, drone shots, simulations of handheld cameras and atmospheric dreamscapes. Runway says that all of them were generated with Gen-3 Alpha “with no modifications”.

Apparently, Gen-3 Alpha is also “the first of an upcoming series of models” that have been trained “on a new infrastructure built for large-scale multimodal training”. Interestingly, Runway added that the new AI tool “represents a significant step towards our goal of building General World Models”, which could create possibilities for gaming and more.

A 'General World Model' is one that effectively simulates an environment, including its physics – which is why one of the sample videos shows the reflections on a woman's face as she looks through a train window.

These tools won't just be for us to level-up our GIF games either – Runway says it's “been collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha”, which means tailored versions of the model for specific looks and styles. So expect to see this tech powering adverts, shorts and more very soon.

When can you try it?

A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head

(Image credit: Runway)

Last week, Luma AI's Dream Machine arrived to give us a free AI video generator to dabble with, but Runway's Gen-3 Alpha model is more targeted towards the other end of the AI video scale. 

It's been developed in collaboration with pro video creators with that audience in mind, although Runway says it'll be “available for everyone over the coming days”. You can create a free account to try Runway's AI tools, though you'll need to pay a monthly subscription (starting from $12 per month, or around £10 / AU$18 a month) to get more credits.

You can create videos using text prompts – the clip above, for example, was made using the prompt “a middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head”. Alternatively, you can use still images or videos as a starting point.

The realism on show is simultaneously impressive and slightly terrifying, but Runway states that the model will be released with a new set of safeguards against misuse, including an “in-house visual moderation system” and C2PA (Coalition for Content Provenance and Authenticity) provenance standards. Let the AI video battles commence.


Your old photos are getting a 3D makeover thanks to this huge Vision Pro update

With the unveiling of visionOS 2.0 for the Vision Pro at WWDC 24, Apple introduced many new features but left my wish to open up environments ungranted. Even so, aside from new display options for Mac Virtual Display and more control gestures, there is one feature that stands out from the rest.

When I reviewed the Vision Pro, I noted how emotional an experience it could be, especially when viewing photos on it. Looking at photos of loved ones who have since passed, or reliving moments that I frequently call up on my iPhone or iPad, there was something more affecting about life-size or larger-than-life representations of the content. When shot properly, the most compelling spatial videos and photos give off a real feeling of intimacy and engagement.

The catch is that, currently, the only photos and videos that can be viewed in this way are ones that have been shot in Apple's spatial image format, and that's something you can only do on the iPhone 15 Pro or 15 Pro Max.

However, in the case of photos, that's set to change with visionOS 2.

Make any photo more immersive

Apple Vision Pro – spatial photos visionos 2.0

(Image credit: Apple)

Photos that you view on the Vision Pro running visionOS 2 will be able to be displayed as spatial photos thanks to the power of machine learning. This adds a left and a right view to the 2D image to create the impression of depth and lets the image effectively 'pop.' I cannot wait to give this a go, and I think it’ll give folks a more impactful experience with Apple's 'spatial computer.'
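Apple hasn't detailed its method, but the basic trick behind turning one 2D photo into a stereo pair – estimating per-pixel depth, then shifting pixels horizontally by different amounts for each eye – can be sketched in a few lines. This is a toy illustration, not Apple's algorithm; in practice an ML model estimates the depth map and also fills the holes the shifting leaves behind:

```python
import numpy as np

def stereo_pair(image, depth, max_disparity=8):
    """Fake two eye viewpoints from one image by shifting each
    pixel horizontally in proportion to its estimated depth
    (0.0 = far, 1.0 = near)."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disp = (depth * max_disparity).astype(int)  # per-pixel shift in pixels
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            # Near pixels shift right in the left-eye view...
            left[y, min(w - 1, x + d)] = image[y, x]
            # ...and left in the right-eye view, creating parallax
            right[y, max(0, x - d)] = image[y, x]
    return left, right
```

Shown side by side (or one per eye on a headset), the parallax between the two views is what the brain reads as depth.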

I also really like Apple’s approach here, as it won’t automatically present every photo as a spatial image – that could lead to some strange-looking shots, and there will also be photos that you’d rather leave in their original 2D form.

According to the visionOS 2.0 portion of Apple's keynote, the process is as simple as swiping through pictures within Photos and tapping a button to watch as machine learning kicks in, analyzes your photo, and adds depth elements. The resulting images really pop, and when viewed on a screen that could be as large as you want on the Vision Pro, the effect is striking.

I’ve already enjoyed looking at standard photos of key memories of my life with friends and family who are still here, and some who have passed. Viewing them on that grand stage is emotional, makes you think, and can be powerful. I’m hopeful that the option to engage this 3D effect will make that impact even stronger.

It has the potential to greatly expand how much a Vision Pro owner actually uses the Photos app, considering that it’s a great way to view images on a large scale, be it a standard shot, ultra-wide, portrait, or even a panorama.

Mac Virtual Display expands, and improved gestures

Apple Vision Pro, Mac Virtual Display VisionOs 2.0

(Image credit: Apple)

While 'spatial photos' was the new feature that most caught my eye, it’s joined by two other new features in visionOS 2.0. For starters, Mac Virtual Display is set to get a big enhancement – you’ll be able to make the screen sizes much larger, almost like a curved display that wraps around, and one that will benefit from improved resolutions. That means more applications will run even better here.

Additionally, you can do more with hand gestures. Rather than hitting the Digital Crown to pull up the home screen, you can make a gesture similar to double-tapping to pull up that interface, while another gesture will let you easily access Control Center.

These new ways of interacting work wherever you are – with apps overlaid on your real surroundings, in one of Apple's immersive environments, or on Tatooine if you’re in Disney Plus.


Windows 11 is getting a new look according to a leak – but it might be exclusive to AI-powered PCs

It looks like Windows 11 could get a new official default wallpaper, according to leaked images that have emerged just before Microsoft Build 2024, the company’s annual conference for developers – where it’s expected that we’ll see some big debuts showing off the fruits of collaboration between Microsoft, Qualcomm, and other partners. 

Microsoft has been pretty tight-lipped about what it plans to show off, and hasn’t even officially announced the new wallpaper, although it’s already available for download in high resolution. 

This was uncovered by German tech blog WinFuture, whose sources leaked information about Samsung Galaxy Book4 Edge Pro laptops with ARM chips. The leak included multiple photographs of the laptop from a variety of angles, as well as the new wallpaper, which joins the series of Windows 11 signature Bloom wallpapers (Neowin has a collection of others you can view and download). Neowin speculates that the new AI-focused PCs will ship with the new background. 


Accompanying the leak, X user @cadenzza_ shared a high-resolution version of the brand new Bloom wallpaper variation (apparently shared in a private Windows Insider Telegram group originally) that you can download by saving the image below or from @cadenzza_’s post, and set on your device.

The full image of the new Windows 11 colorful Bloom background wallpaper

(Image credit: Microsoft/X(Twitter) user @cadenzza_)

Microsoft's lips are sealed and it's got our attention

It’s interesting how closely Microsoft’s been guarding what it’s about to share, with this static wallpaper being one of the few things we can confirm at all. Neowin has proposed that Microsoft might be crafting new desktop background effects for Windows 11, perhaps making use of the next-generation devices’ AI capabilities to create effects simulating depth, and possibly making the background reactive to how you move your cursor.

We’ll have to see if this is the case at some point in the next few days as Microsoft Build goes on. We expect the announcement of consumer versions of the Surface Pro 10 and Surface Laptop 6 laptops with Qualcomm Snapdragon X processors, and whatever AI innovations Microsoft wants to bring to our attention. Another new hardware introduction we expect is a Copilot keyboard button, which has been discussed for a while now. Other Copilot-related news could have to do with OpenAI’s recent debut of GPT-4o, and possibly a souped-up Windows Copilot AI assistant.


Google Search is getting a massive upgrade – including letting you search with video

Google I/O 2024's entire two-hour keynote was devoted to Gemini. Not a peep was uttered about the recently launched Pixel 8a or what Android 15 is bringing upon release. The only time a smartphone or Android was mentioned was in the context of how they're being improved by Gemini.

The tech giant is clearly going all-in on AI, so much so that the stream concludes by boldly displaying the words “Welcome to the Gemini era”.

Among all the updates that were presented at the event, Google Search is slated to gain some of the more impressive changes. You could even argue that the search engine will see one of the most impactful upgrades in 2024 that it’s ever received in its 25 years as a major tech platform. Gemini gives Google Search a huge performance boost, and we can’t help but feel excited about it.

Below is a quick rundown of all the new features Google Search will receive this year.

1. AI Overviews

Google IO 2024

(Image credit: Google)

The biggest upgrade coming to the search engine is AI Overviews, which appears to be the launch version of SGE (Search Generative Experience). It provides detailed, AI-generated answers to queries, complete with contextually relevant text as well as links to sources and suggestions for follow-up questions.

Starting today, AI Overviews is leaving Google Labs and rolling out to everyone in the United States as a fully-fledged feature. For anyone who used SGE, it appears to be identical.

Response layouts are the same, and they’ll have product links too. Google has presumably worked out all the kinks so it performs optimally – although, as with any generative AI, there is still the chance it could hallucinate.

There are plans to expand AI Overviews to more countries with the goal of reaching over a billion people by the end of 2024. Google noted the expansion is happening “soon,” but an exact date was not given.

2. Video Search

Google IO 2024

(Image credit: Google)

AI Overviews is bringing more to Google Search than just detailed results. One of the new features allows users to upload videos to the engine alongside a text inquiry. At I/O 2024, the presenter gave the example of purchasing a record player with faulty parts. 

You can upload a clip and ask the AI what's wrong with your player, and it’ll provide a detailed answer mentioning the exact part that needs to be replaced, plus instructions on how to fix the problem. You might need a new tone arm or a cueing lever, but you won't need to type in a question to Google to get an answer. Instead you can speak directly into the video and send it off.

Searching With Video will launch soon for “Search Labs users in English in the US,” with plans for further expansion into additional regions over time.

3. Smarter AI

Google IO 2024

(Image credit: Google)

Next, Google is introducing several performance boosts; however, none of them are available at the moment. They’ll be rolling out soon to the Search Labs program exclusively to people in the United States and in English. 

First, you'll be able to click one of two buttons at the top to simplify an AI Overview response or ask for more details. You can also choose to return to the original answer at any time.

Second, AI Overviews will be able to understand complex questions better than before. Users won’t have to ask the search engine multiple short questions. Instead, you can enter one long inquiry – for example, a user can ask it to find a specific yoga studio with introductory packages nearby.

Lastly, Google Search can create “plans” for you. This can be either a three-day meal plan that’s easy to prepare or a vacation itinerary for your next trip. It’ll provide links to the recipes plus the option to replace dishes you don't like. Later down the line, the planning tool will encompass other topics like movies, music, and hotels.

All about Gemini

That’s pretty much all of the changes coming to Google Search in a nutshell. If you’re interested in trying these out and you live in the United States, head over to the Search Labs website, sign up for the program, and give the experimental AI features a go. You’ll find them near the top of the page.

Google I/O 2024 dropped a ton of information on the tech giant’s upcoming AI endeavors. Project Astra, in particular, looked very interesting, as it can identify objects, code on a monitor, and even pinpoint the city you’re in just by looking outside a window. 

Ask Photos was pretty cool, too, if a little freaky. It’s an upcoming Google Photos tool capable of finding specific images in your account much faster than before and able to “handle more in-depth queries” with startling accuracy.

If you want a full breakdown, check out TechRadar's list of the seven biggest AI announcements from Google I/O 2024.


Google Workspace is getting a talkative tool to help you collaborate better – meet your new colleague, AI Teammate

If your workplace uses Google's Workspace productivity suite of apps, then you might soon get a new teammate – an AI Teammate, that is.

In its mission to improve our real-life collaboration, Google has created a tool to pool shared documents, conversations, comments, chats, emails, and more into a singular virtual generative AI chatbot: the AI Teammate. 

Powered by Google's own Gemini generative AI model, AI Teammate is designed to help you concentrate more on your role within your organization and leave the tracking and tackling of collective assignments and tasks to the AI tool.

This virtual colleague will have its own identity, its own Workspace account, and a specifically defined role and objective to fulfil.

When AI Teammate is set up, it can be given a custom name, as well as have other modifications, including its job role, a description of how it's expected to help your team, and specific tasks it's supposed to carry out.

In a demonstration of an example AI Teammate at I/O 2024, Google showed a virtual teammate named 'Chip' who had access to a group chat of those involved in presenting the I/O 2024 demo. The presenter, Tony Vincent, explained that Chip was privy to a multitude of chat rooms that had been set up as part of preparing for the big event. 

Vincent then asked Chip if the I/O storyboards had been approved – the type of question you'd possibly ask colleagues – and Chip was able to answer because it can analyze all of the conversations it had been keyed into.

As AI Teammate is added to more threads, files, chats, emails, and other shared items, it builds a collective memory of the work shared in your organization. 

Google Workspace

(Image credit: Google)

In a second example, Vincent shows another chatroom for an upcoming product release and asks the room if the team is on track for the product's launch. In response, AI Teammate searches through everything it has access to like Drive, chat messages, and Gmail, and synthesizes all of the relevant information it finds to form its response. 

When it's ready (which takes about a second or less), AI Teammate delivers a digestible summary of its findings. It flagged up a potential issue to make the team aware of it, and then gave a timeline summary showing the stages of the product's development.

As the demo took place in a group space, Vincent stated that anyone can follow along and jump in at any point – for example, asking a question about the summary, or asking AI Teammate to transfer its findings into a Doc file, which it promptly does.

AI Teammate is only as useful as it's customized to be, and Google promises that it can make your collaborative work seamless, as it's integrated into Google's host of existing products that many of us are already used to.


Google Maps is getting a new update that’ll help you discover hidden gems in your area thanks to AI – and I can’t wait to try it out

It looks like Google Maps is getting a cool new feature that’ll make use of generative AI to help you explore your town – grouping different locations to make it easier to find restaurants, specific shops, and cafes. In other words, no more sitting around and mulling over where you want to go today!

Android Authority did an APK teardown (which basically means decompiling an app's binary code back into a human-readable form) that hints at some new features on the horizon. The code within the Google Maps beta included mention of generative AI, which led Android Authority to Google Labs. If you’re unfamiliar with Google Labs, it’s a platform where users can experiment with Google’s current in-development tools and AI projects, like Gemini Chrome extensions and music ‘Time Travel’.
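For a sense of what a teardown involves: after decoding an APK with a tool such as apktool, much of the work is simply searching the decoded resources for tell-tale strings, which is how mentions of generative AI surface before a feature ships. A rough sketch (the directory layout and keywords are illustrative, not what Android Authority actually ran):

```python
from pathlib import Path

def find_feature_strings(decoded_dir, keywords=("generative ai", "gemini")):
    """Scan a decoded APK's XML resources for strings that hint
    at unreleased features, returning (filename, keyword) pairs."""
    hits = []
    for path in Path(decoded_dir).rglob("*.xml"):
        text = path.read_text(errors="ignore").lower()
        hits += [(path.name, kw) for kw in keywords if kw in text]
    return hits
```

Strings found this way only prove the code references a feature – not that it will ever be switched on, which is why such leaks stay speculative.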

So, what exactly is this new feature that has me so excited? Say you’re really craving a sweet treat. Instead of going back to your regular stop or simply Googling ‘sweet treats near me’, you’ll be able to ask Google Maps for exactly what you’re looking for and the app will give you suggestions for nearby places that offer it. Naturally, it will also provide you with pictures, ratings, and reviews from other users that you can use to make a decision.

Sweet treat treasure hunter 

As someone who has a habit of going to the same places over and over again – either because I don’t know any alternatives or just haven’t discovered other parts of my city – I absolutely love the idea and really hope we get to see the feature come to life. The new feature has the potential to offer a serious upgrade to Google Maps’ more specific location search abilities, beyond simply typing in the name of the shop you want or selecting a vague group like ‘Restaurants’ as you can currently.

You’ll be able to see your results sorted into categories, and if you want more in-depth recommendations you can ask follow-up questions to narrow down your search – much in the same way that AI assistants like Microsoft Copilot can ‘remember’ your previous chat history to provide more context-sensitive results. I often find myself craving a little cake or a delicious cookie, so if I want that specific treat I can tell the app what I’m craving and get a personalized list of reviewed recommendations.

We’re yet to find out when exactly to expect this new feature, and without an official announcement, we can’t be 100% certain that it will ever make a public release. However, I’m sure it would be a very popular addition to Google Maps, and I can’t wait to discover new places in my town with the help of an AI navigator.


A key Apple app is rumored to be getting a major upgrade in macOS 15

We're set to hear much more about what's coming with macOS 15 when Apple's annual Worldwide Developers Conference (WWDC) gets underway on June 10 – and one app in particular is rumored to be getting a major upgrade.

That app is the Calculator app, and while it perhaps isn't the most exciting piece of software that Apple makes, AppleInsider reckons the upcoming upgrade is “the most significant upgrade” the app has been given “in years”.

It's so substantial, it's got its own codename: GreyParrot (that's said to be a nod towards the African grey parrot, known for its cognitive abilities). Part of the upgrade will apparently include the Math Notes feature we've already heard about in relation to a Notes app upgrade due in iOS 18.

It sounds as though Math Notes is going to make it easier to ferry calculations between the Notes and the Calculator apps. A new sidebar showing the Calculator history is reported to be on the way too. This might well get its own button on the app, AppleInsider says.

Currency conversions

Calculator for macOS

Currency conversions currently require a pop-up dialog (Image credit: Future)

A visual redesign is also apparently on the way, with “rounded buttons and darker shades of black” to match the iOS Calculator. Users will also be able to resize the Calculator app window, with the buttons resizing accordingly, which isn't currently possible.

Unit conversion is going to be made more intuitive and easier to access, AppleInsider says, with no need to open up the menus to select conversion types – at the moment, it's necessary to select currencies in a pop-up dialog.

The thinking is that Apple wants to better compete with apps such as OneNote from Microsoft, and the third-party Calcbot app for macOS. It's been a long time since the Calculator app was changed in any way, and its rather basic feature set means it's lagging behind other alternatives.

According to AppleInsider, there's no guarantee that Apple will go through with this Calculator upgrade, but it seems likely. Expect to hear much more about macOS 15, iOS 18, and Apple's other software products at WWDC 2024 on June 10.


Windows 11’s Photos app is getting more sophistication with new Designer app integration – but there’s a catch

Windows 11’s Photos app has been getting some impressive upgrades recently, and it looks like another one is on the way. The app is getting integration with Designer, Microsoft’s web app that enables people to make professional-looking graphics – but there’s one little catch: it’ll prompt Designer to open in Edge (Microsoft’s web browser that comes installed with Windows 11).

The new Designer integration joins a line-up of other features that have been added in the last two years, including the background blur feature, an AI magic eraser, and more. The new feature is accessible via an 'Edit in Microsoft Designer' option within the Photos app, represented by an icon that will appear in the middle of the Preview window.

It’s not the most subtle position for it, and I think it’s fair to assume Microsoft is doing that because it wants users to click it. Doing so will take users to the Microsoft Designer website, which opens in an Edge window – and since Edge isn't the most popular of web browsers, this could irritate people who have set their default browser to a different app, such as Chrome.

This development is still in the testing stages, according to Windows Latest, making its way through the Windows Insider Program. The feature can be found in Photos app version 2024.11040.16001.0, which is part of the Windows 11 24H2 preview build in the Canary channel. It should also be available in the Windows 11 Insider Dev channel build, provided the Photos app is on that same version.

Apparently, you can also prompt the Designer web app to open by right-clicking the image while in Preview in the Photos app, and clicking ‘Edit in Designer online’ in the menu that appears.

Woman relaxing on a sofa, holding a laptop in her lap and using it

(Image credit: Shutterstock/fizkes)

The apparent state of the new feature

When it tried to activate the new feature, Windows Latest hit a wall: it was presented with a blank canvas in Designer, rather than the image that was going to be edited. Hopefully this is an anomaly or an error, and the image you’re viewing in Preview in the Photos app will presumably open up in Designer once the feature is fully rolled out in a Windows update.

Windows Latest made several attempts at making the feature function as intended, but it wasn’t to be, and I would hope that Microsoft takes this feedback on board, especially if it’s a widespread issue. You can import the image manually while having the Designer web app already open, but this will defeat the purpose of having an easily accessible option in the Photos app. 

Users can edit their image in Designer, but only if they’ve signed into their Microsoft account. Microsoft wrote about the feature in an official Windows Blogs post, explaining that it’s currently being tested in the US, UK, Australia, Ireland, India, and New Zealand.

Having various image editing tools scattered across the Photos app, the Designer web app, and the Paint app doesn’t make things easy for Windows users. People like accessing all the relevant tools from whatever app they’re currently using instead of having to memorize which app has what exclusive feature. 

The approach has been called ‘inconsistent’ by Windows Latest, and I would bet that it’s not alone in that opinion. While it’s clear that Microsoft wants to get people using its new AI-powered tools, the company would be much better served if it made them easier to access through one powerful program, rather than scattering them around Windows 11.
