Amazon Alexa’s next-gen upgrade could turn the assistant into a generative AI chatbot

Rumors started circulating earlier this year claiming Amazon was working on improving Alexa by giving it new generative AI features. We hadn't heard much since then until very recently, when CNBC spoke to people familiar with the project's development. The new reporting provides insight into what the company aims to do with the upgraded Alexa, how much it may cost, and why Amazon is doing this.

CNBC's sources were pretty tight-lipped. They didn't reveal exactly what the AI will be able to do, but they did mention the tech giant's goals. Amazon wants its developers to create something "that holds up amid the new AI competition," referring to the likes of ChatGPT. Company CEO Andy Jassy was reportedly "underwhelmed" with the modern-day Alexa, and he isn't the only one who wants the assistant to do more. The dev team is reportedly worried the model currently amounts to just being an "expensive alarm clock."

To facilitate the new direction, Amazon reorganized major portions of its business within the Alexa team, shifting focus toward achieving artificial general intelligence. 

AGI is a concept borrowed from science fiction: the idea that an AI model may one day match or surpass the intelligence of a human being. Despite these lofty goals, Amazon seems to be starting small by creating its own chatbot with generative capabilities.

The sources state, "Amazon will use its own large language model, Titan, in the Alexa upgrade." Titan is currently only available to businesses as part of Amazon Bedrock. It can generate text, create images, summarize documents, and more for enterprise users, similar to other AIs. Following this train of thought, the new Alexa could offer the same features to regular, non-enterprise users.

Potential costs

Previous reports said Amazon plans to charge people for access to the supercharged Alexa; however, the cost and plan structure were unknown. Now, according to this new report, we're learning Amazon plans to launch the Alexa upgrade as a subscription service completely separate from Prime, meaning people will have to pay extra to try out the AI.

Apparently, there's been debate over exactly how much to charge; Amazon has yet to nail down the monthly fee. One of the sources told CNBC that "a $20 price point was floated" at one point, while someone else suggested dropping costs down to "single-digit dollar [amounts]" – in other words, less than $10, which would allow the brand to undercut rivals. OpenAI, for example, charges $20 a month for its Plus plan.

There is no word on when Alexa's update will launch or even be formally announced. But if and when it does come out, it might be the first chatbot accessible through an Amazon smart speaker like the Echo Pop.

We did reach out to the company to see if it wanted to make a statement about CNBC’s report. We’ll update this story if we hear back.

Until then, check out TechRadar's roundup of the best smart speakers for 2024.

You might also like

TechRadar – All the latest technology news

Read More

OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won't be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. According to a new report, the latest in this saga, OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is toward the front of the AI race, striving to be the first to build a software tool that comes as close as possible to communicating the way humans do: able to talk to us using sound as well as text, and capable of recognizing images and objects.

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of these new capabilities. They claim that the incoming model has better logical reasoning than those currently available to the public, as well as being able to convert text to speech. None of this is new for OpenAI as such, but what is new is all this functionality being unified in the rumored multimodal model.

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it's still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google's AI endeavor, Gemini.



Google’s Gemini AI can now handle bigger prompts thanks to next-gen upgrade

Google’s Gemini AI has only been around for two months at the time of this writing, and already, the company is launching its next-generation model dubbed Gemini 1.5.

The announcement post gets into the nitty-gritty, explaining all the AI's improvements in detail. It's all rather technical, but the main takeaway is that Gemini 1.5 will deliver "dramatically enhanced performance." This was accomplished by implementing a "Mixture-of-Experts" architecture (MoE for short), which sees multiple AI models working together in unison. Implementing this structure made Gemini easier to train as well as faster at learning complicated tasks than before.

There are plans to roll out the upgrade to all three major versions of the AI, but the only one being released today for early testing is Gemini 1.5 Pro. 

What’s unique about it is the model has “a context window of up to 1 million tokens”. Tokens, as they relate to generative AI, are the smallest pieces of data LLMs (large language models) use “to process and generate text.” Bigger context windows allow the AI to handle more information at once. And a million tokens is huge, far exceeding what GPT-4 Turbo can do. OpenAI’s engine, for the sake of comparison, has a context window cap of 128,000 tokens. 
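To get a feel for what those context window numbers mean, here is a rough back-of-the-envelope sketch. The four-characters-per-token figure is a common heuristic for English text, not an exact property of any particular tokenizer, so treat the results as ballpark estimates only:

```python
# Rough illustration of context window sizes, based on the figures in
# Google's and OpenAI's announcements. CHARS_PER_TOKEN is a heuristic
# for English text, not a guarantee from either tokenizer.

CHARS_PER_TOKEN = 4  # common rule-of-thumb for English prose

def approx_capacity_chars(context_tokens: int) -> int:
    """Approximate how many characters of text fit in a context window."""
    return context_tokens * CHARS_PER_TOKEN

gpt4_turbo_window = 128_000       # tokens, per OpenAI's published limit
gemini_15_pro_window = 1_000_000  # tokens, per Google's announcement

print(approx_capacity_chars(gpt4_turbo_window))     # roughly 512,000 characters
print(approx_capacity_chars(gemini_15_pro_window))  # roughly 4,000,000 characters
print(gemini_15_pro_window / gpt4_turbo_window)     # ~7.8x larger window
```

By this estimate, a million-token window holds several novels' worth of text at once, which is why the Apollo 11 transcript demo described below is feasible in a single prompt.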

Gemini Pro in action

With all these numbers being thrown around, the question is: what does Gemini 1.5 Pro look like in action? Google made several videos showcasing the AI's abilities. Admittedly, it's pretty interesting stuff, as they reveal how the upgraded model can analyze and summarize large amounts of text according to a prompt.

In one example, Google gave Gemini 1.5 Pro the 400-plus-page transcript of the Apollo 11 moon mission, showing the AI could "understand, reason about, and identify" certain details in the document. The prompter asked the AI to locate "comedic moments" during the mission. After 30 seconds, Gemini 1.5 Pro managed to find a few jokes the astronauts cracked while in space, identifying who told each one and explaining any references made.

These analysis skills can be applied to other modalities, too. In another demo, the dev team gave the AI a 44-minute Buster Keaton movie. They uploaded a rough sketch of a gushing water tower and asked for the timestamp of a scene involving one. Sure enough, it found the exact part, ten minutes into the film. Keep in mind this was done without any explanation of the drawing or any other text besides the question – Gemini 1.5 Pro understood it was a water tower without extra help.

Experimental tech

The model is not available to the general public at the moment. Currently, it’s being offered as an early preview to “developers and enterprise customers” through Google’s AI Studio and Vertex AI platforms for free. The company is warning testers they may experience long latency times since it is still experimental. There are plans, however, to improve speeds down the line.

We reached out to Google asking for information on when people can expect the launch of Gemini 1.5 and Gemini 1.5 Ultra plus the wider release of these next-gen AI models. This story will be updated at a later time. Until then, check out TechRadar's roundup of the best AI content generators for 2024.


Qualcomm exec says next-gen Windows coming mid-2024 – but will it be Windows 12?

Microsoft’s next-gen version of Windows, whatever that might be called, is set to pitch up in the middle of 2024.

The Register reports that Qualcomm’s CEO, Cristiano Amon, made the revelation in an earnings call for the company. On the call, Amon mentioned the next incarnation of Windows when talking about the incoming Snapdragon X Elite chip, which is going to be the engine of some of the AI-powered laptops Microsoft keeps banging on about (this is the year of AI PCs, remember?).

Amon said: “We’re tracking to the launch of products with this chipset [Snapdragon X Elite] tied with the next version of Microsoft Windows that has a lot of the Windows AI capabilities … We’re still maintaining the same date, which is driven by Windows, which is mid-2024, getting ready for back-to-school.”

So, the release date of the middle of 2024 for the laptops driven by Qualcomm’s chip is pitched there because that’s when next-gen Windows will come out.

This echoes previous chatter from the grapevine that the middle of 2024 should be the release date for the next iteration of Windows, including a specific mention of June from one source (add salt, naturally, as even if this is Microsoft’s plan right now, it may not pan out).

Analysis: Navigating the nuances

There are a lot of nuances to all these rumors and official declarations about the launch of next-gen Windows (we've also heard from Intel, as well as Qualcomm). Firstly, let's clarify: will the next desktop OS from Microsoft be Windows 12, or Windows 11 24H2?

The simple answer to this is we don’t know, but all the current evidence is stacking up to indicate that the next release will be Windows 11 24H2 – although that doesn’t completely rule out the possibility of Windows 12. On balance, Windows 12 is probably more likely to arrive in 2025 though (if that’s what it ends up being called – the point is, this will be an all-new Windows, not just an update to Windows 11).

However, there will be a different kind of all-new Windows arriving in 2024, even if we get Windows 11 24H2 this year, and not Windows 12, as seems likely. Confused? Well, don’t be: what Microsoft is ushering in – for the middle of this year – is a new platform Windows is built on. This new take on the underpinnings of the desktop OS is called Germanium and it brings a whole lot of work under the hood for better performance and security. The kind of things you won’t see, but will still benefit from.

Germanium is the platform that AI PCs will be built on, and when Qualcomm’s CEO mentions Snapdragon X Elite-powered laptops arriving in the middle of 2024 with the next version of Windows, that’s what Amon is really talking about: Germanium.

In short, this doesn’t mean we’ll get next-gen Windows 12 in mid-2024, but that if it’s the Windows 11 24H2 update – which as mentioned is most likely the case, going by the rumors flying around – it’ll still be a new Windows (the underlying platform, not the actual OS you interact with).

The other twist is that Windows 11 24H2 (or indeed Windows 12, if that slim chance pans out) won’t be coming to everyone in the middle of the year. The plan is to bring out the new Germanium-powered Windows, whatever it’s called, on new laptops (AI PCs) first – perhaps in July, going by previous buzz from the grapevine – but it’ll be a while before existing Windows 11 PCs get the upgrade. That rollout to all users is rumored to be happening in September, but whatever the case, it’ll be later in the year before everyone using Windows 11 gets the upgrade.


Windows 11 could get next-gen USB4 Version 2.0 support with speeds of up to 80Gbps

Windows 11 could soon benefit from super-fast USB devices, as Microsoft is currently testing support for a new 80Gbps USB standard.

This will be the successor to USB4, known as USB4 Version 2.0, capable of delivering data transfer speeds of up to 80Gbps (doubling USB4's speed). The preview was released through the Dev Channel of the Windows Insider Program, Microsoft's community for professionals and Windows enthusiasts to try out new features and versions of Windows and provide feedback.

The testing will be constrained to a very limited number of users for now, because to use this USB speed standard, your PC will need one of Intel's most cutting-edge processors: the Intel Core 14th Gen HX-series mobile processors.

This lineup of processors was only just announced at CES 2024 on January 8, so very few users have access to them at present.

As Microsoft details in its Windows Insider Blog, this is the first substantial update to the USB4 standard, doubling USB transfer speeds from 40Gbps to 80Gbps. Here’s what Microsoft had to say, expanding on what this development will mean for future devices: 

This is the first major version update of the USB4 standard and increases performance to 80Gbps from 40Gbps. It enables the next generation of high-performance displays, storage, and connectivity. It is fully backwards compatible with peripherals built for older generations of USB and Thunderbolt and works alongside all other USB Type-C features.
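To put the doubling in concrete terms, here is a quick illustrative calculation. These figures use the raw link rates; real-world throughput will be lower due to protocol overhead, so the times below are theoretical best cases:

```python
# Back-of-the-envelope comparison of USB4 (40 Gbps) vs USB4 Version 2.0
# (80 Gbps). Link rates are theoretical peaks; actual transfers carry
# protocol overhead, so treat these as lower bounds on transfer time.

def transfer_seconds(size_gigabytes: float, link_gbps: float) -> float:
    """Time to move a file at the raw link rate (8 bits per byte)."""
    return size_gigabytes * 8 / link_gbps

file_gb = 100  # e.g. a large video project

print(transfer_seconds(file_gb, 40))  # USB4:            20.0 seconds
print(transfer_seconds(file_gb, 80))  # USB4 Version 2.0: 10.0 seconds
```

In other words, a 100GB file that takes at least 20 seconds over USB4 could, in theory, move in half that time over the new standard.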

Microsoft Teams copilot

(Image credit: Microsoft Teams)

What else is Microsoft testing out right now?

The Windows 11 Preview Build 23615 offers testers a crop of new features, including USB4 Version 2.0. One other introduction besides the USB speed upgrade is that Microsoft is looking into launching Copilot automatically when Windows 11 starts up, specifically on widescreen devices (with no word on exactly what qualifies as wide enough). Windows observers don't seem so hot on this prospect, and it seems Microsoft anticipated this, providing instructions on how to disable it: Settings > Personalization > Copilot.

In this build, Microsoft also added apps that you can share URLs to directly via the Windows share window, namely WhatsApp, Gmail, X (formerly Twitter), Facebook, and LinkedIn. If you'd like to try this in Microsoft Edge, for instance, you first have to enable the Share button, as it's disabled by default. You can do this by going to the three-dot icon in the top right of Edge, opening Settings, scrolling down to "Select which buttons to show on the toolbar:" and toggling the Share button on.

Then, you’ll need to highlight or copy the link you want to share, and then click the Share button to the right of the address bar (which will be grayed out at first but then darken when you’ve selected a link).

While the above build is still being rolled out to Windows Insiders in the Dev Channel, these new features will also be made available through a gradual rollout in Beta build 22635.3061 via the Beta Channel of the Windows Insider Program. Users who install this build will need to turn the toggle on to enable the new features if they'd like to try them. Microsoft has detailed these and more features, along with the preview build versions that have just been released, which Windows Insiders can try out now.

It’ll be a little while before we start seeing the effects of the USB4 Version 2.0 standard and you’ll have to get one of the newest Windows PCs available to see it for yourself. It sounds very promising and will likely improve users’ experiences when USB4 Version 2.0 devices and accessories start to roll out.



Microsoft’s next-gen Windows sounds pretty mind-blowing with ‘groundbreaking’ new AI features

Microsoft is cooking up a next-gen Windows – and it may not be called Windows 12 – with some huge new AI tricks, according to a new report.

This comes from Zac Bowden at Windows Central, one of the more prolific Windows leakers out there – and a good source for rumors in our experience – who has plenty to say about what Microsoft is planning to do with AI in the next version of Windows for 2024 (which is codenamed Hudson Valley).

Some heavy-duty work is planned to integrate AI throughout the Windows interface more deeply, bringing in a whole load of features, some of which are described as “groundbreaking”. According to Bowden’s sources, the cornerstone of this work will be an AI-powered Windows Shell.

This will benefit from an ‘advanced Copilot’ AI that’s constantly working away in the background, looking at your searches and what you’re doing, trying to understand the context and help appropriately.

Some examples are given, such as being able to use more natural language in a search in Windows, like “find me that file Karen sent to me on WhatsApp earlier this week.”

There will also be a timeline feature that allows you to scroll back through all your recent app and website usage which Copilot records, and you can search within that for any term.

We’re also told to expect an AI-supercharged version of Live Captions, capable of translating to multiple different languages in real-time, not just for video (or audio) playback, but also on a live video chat.

The slight catch is that some AI features will require NPU hardware – a Neural Processing Unit which is a partner chip for the CPU/GPU, one that specializes in accelerating AI tasks – and those will be more heavyweight capabilities. One example given is AI-powered upscaling of the image quality of videos or indeed games.

Robot hands emerging from laptop signifying AI

(Image credit: Shutterstock)

Bowden also contends that we could even get fancy wallpapers driven by AI, and indeed we’ve heard leaks previously about backgrounds that will use a parallax effect – one that interacts with the cursor, or gyroscope in Windows devices that have one, and looks like it’s popping out of your screen.

Away from AI – which will be the major focus of next-gen Windows, though we’d already guessed that – Microsoft is also planning changes to the interface. That includes a ‘Creator’ panel for the Start menu and File Explorer, which will be a hub for launching anything related to content creation in Windows. Bowden describes it as a kind of ‘launchpad for Microsoft 365’ bristling with shortcuts to your latest Word documents, PowerPoints, and so forth.

Furthermore, there’s even talk of shifting around bits of the core layout of the desktop, like putting the system tray (bottom-right) at the top of the screen – but more radical moves like this are likely for the future, not next year.

Finally, another major enhancement for next-gen Windows will come to energy saver (recently spotted in testing), with Microsoft seemingly looking at beefing up battery life by up to 50% in certain scenarios – which would be huge for laptop owners. And apparently, a new ‘green power’ option will be capable of detecting when the electricity it’s being fed from the socket is derived from renewable sources, and it’ll initiate charging if that’s the case. Pretty nifty.

When might we get all – or at least some – of these goodies? Bowden reckons that Microsoft is aiming to complete work on next-gen Windows in August 2024, and it’ll be rolled out around September or October, the typical time the big annual update is expected to arrive.

As we said at the outset, Bowden is also somewhat doubtful about whether this next version of Microsoft’s desktop OS will be Windows 12 – and we’ve already been discussing that in-depth elsewhere this morning (plus there’s some juicy new info on changes Microsoft is apparently planning for the release cadence of Windows).

A PC gamer looking happy

(Image credit: Shutterstock)

Analysis: Game-changing possibilities

This is just talk – chatter from the rumor mill – so we must apply the usual skepticism, and consider that some of it might be fanciful, wishful thinking. But it does sound like AI really will make a big difference to the next incarnation of Windows.

Having a search function that allows for more natural use of language – “find those images I put on Google Drive from that Microsoft press kit last month” – would be hugely powerful. Not having to remember the exact name of a file you’re hunting out will be a huge boon in itself. (And hopefully, we can avoid Copilot inquiring: “Oh, and by the way, why aren’t you using OneDrive?”).

There are several game-changing possibilities mentioned here, like real-time captions delivered for video chatting with multiple language options. And in a very literal sense, NPU-powered upscaling for games could be very useful where Nvidia’s or AMD’s upscaling tech isn’t present, or supported – and it’ll be great for videos and watching Netflix, or your preferred streaming service, on your PC too.

What isn’t mentioned here, and would seem to be an obvious avenue of potential improvement, is Voice Access. Powering up speech recognition tech with AI seems like a way to make next-gen Windows truly innovative – Voice Access has already come a long way (since incorporating Dragon’s excellent tech), but surely there’s scope for AI to make it all the more powerful. And for spoken conversations with Copilot to become the norm, with no typing needed, and no misinterpretation.

In recent times, accessibility has been an area Microsoft has been laudably keen to make improvements with, and surely that theme will continue with AI helping to push the boundaries therein.


Meta’s new VR headset design looks like a next-gen Apple Vision Pro

Meta has teased a super impressive XR headset that looks to combine the Meta Quest Pro, Apple Vision Pro and a few new exclusive features. The only downside? Anything resembling what Meta has shown off is most likely years from release.

During a talk at the University of Arizona College of Optical Sciences, Meta’s director of display systems research, Douglas Lanman, showed a render of Mirror Lake – an advanced prototype that is “practical to build now” based on the tech Meta has developed. This XR headset (XR being a catchall term for VR, AR and MR) combines design elements and features used by the Meta Quest Pro and Apple Vision Pro – such as the Quest Pro’s open side design and the Vision Pro’s EyeSight – with new tools such as HoloCake lenses and electronic varifocal, to make something better than anything on the market.

We’ve talked about electronic varifocal on TechRadar before – when Meta’s Butterscotch Varifocal prototype won an award – so we won’t go too in-depth here. Simply put, using a mixture of eye-tracking and a display system that can move closer or further away from the headset wearer’s face, electronic varifocal aims to mimic the way we focus on objects that are near or far away in the real world. It's an approach Meta calls a “more natural, realistic, and comfortable experience”.

You can see it at work in the video below.

HoloCake lenses – the name being a portmanteau of holographic and pancake – help enable this varifocal system while trimming down the size of the headset.

Pancake lenses are used by the Meta Quest 3, Quest Pro, and other modern headsets including the Pico 4 and Apple Vision Pro, and thanks to some clever optic trickery they can be a lot slimmer than lenses previously used by headsets like the Quest 2.

To further slim the optics down, HoloCake lenses use a thin, flat holographic lens instead of the curved one relied on by a pancake system – holographic as in reflective foil, not as in a 3D hologram you might see in a sci-fi flick.

The only downside is that you need to use lasers, instead of a regular LED backlight. This can add cost, size, heat and safety hurdles. That said, needing to rely on lasers could be seen as an upgrade since these can usually produce a wider and more vivid range of colors than standard LEDs.

A diagram showing the difference between pancake, holocake and regular VR lens optics

Diagrams of different lens optics including HoloCake lenses (Image credit: Meta)

When can we get one? Not for a while 

Unfortunately, Mirror Lake won’t be coming anytime soon. Lanman described the headset as something “[Meta] could build with significant time”, implying that development hasn’t started yet – and even if it has, we might be years away from seeing it in action.

On this point Mark Zuckerberg, Meta’s CEO, added that the technology Mirror Lake relies on could be seen in products “in the second half of the decade”, pointing to a release in 2026 and beyond (maybe late 2025 if we’re lucky).

This would match up with when we predict Meta's next XR headset – like a Meta Quest Pro 2 or Meta Quest 4 – will probably launch. Meta usually likes to tease its headsets a year in advance at its Meta Connect events (doing so with both the Meta Quest Pro and Quest 3), and Meta Connect 2023 passed without a sneak peek at what's to come – so if it sticks to this trend, the earliest we'll see a new device is September or October 2025.

Apple Vision Pro showing a wearer's eye through a display on the front of the headset via EyeSight

Someone wearing the Apple Vision Pro VR headset (Image credit: Apple)

Waiting a few years would also give the Meta Quest 3 time in the spotlight before the next big thing comes to overshadow it, and of course let Meta see how the Apple Vision Pro fares. Apple's XR headset is taking the exact opposite approach to Meta's Quest 2 and Quest 3, with Apple offering very high-end tech at a very unaffordable price ($3,499, or around £2,800 / AU$5,300).

If Apple’s gamble pays off, Meta might want to mix up its strategy by releasing an equally high-end and costly Meta Quest Pro 2 that offers a more significant upgrade over the Quest 3 than the first Meta Quest Pro offered compared to the Quest 2. If the Vision Pro flops, Meta won’t want to follow its lead.



Google’s AI plans hit a snag as it reportedly delays next-gen ChatGPT rival

Development on Google’s Gemini AI is apparently going through a rough patch as the LLM (large language model) has reportedly been delayed to next year.

This comes from tech news site The Information, whose sources claim the project will not see a November launch as originally planned. Now it may not arrive until sometime in the first quarter of 2024, barring another delay. The report doesn't explain exactly why the AI is being pushed back. Google CEO Sundar Pichai did lightly confirm the decision by stating the company is "focused on getting Gemini 1.0 out as soon as possible [making] sure it's competitive [and] state of the art". That said, The Information does suggest this situation is due to ChatGPT's strength as a rival.

Since its launch, ChatGPT has skyrocketed in popularity, effectively becoming a leading force in 2023’s generative AI wave. Besides being a content generator for the everyday user, corporations are using it for fast summarization of lengthy reports and even building new apps to handle internal processes and projections. It’s been so successful that OpenAI has had to pause sign-ups for ChatGPT Plus as servers have hit full capacity.

Plan of attack

So what is Google's plan moving forward? According to The Information, the Gemini team wants to ensure "the primary model is as good as or better than" GPT-4, OpenAI's latest model. That is a tall order. GPT-4 is multimodal, meaning it can accept images as well as text in a query and generate new content. What's more, it boasts overall better performance than the older GPT-3.5 model, and is capable of performing more than one task at a time.

For Gemini, Google has several use cases in mind. The tech giant plans on using the AI to power new YouTube creator tools, upgrade Bard, plus improve Google Assistant. So far, it has managed to create mini versions of Gemini “to handle different tasks”, but right now, the primary focus is getting the main model up and running. 

It also plans to court advertisers with its AI, as advertising is "Google's main moneymaker." Company executives have reportedly talked about using Gemini to generate ad campaigns, including text and images. Videos could come later, too.

Bard upgrade

Google is far from out of the game, and while the company is putting a lot of work into Gemini, it's still building out and updating Bard.

First, if you’re stuck on your math homework, Bard will now provide step-by-step instructions on how to solve the problem, similar to Google Search. All you have to do is ask the AI or upload a picture of the question. Additionally, the platform can create charts for you by using the data you enter into the text prompts. Or you can ask it to make a smiley face like we did.

Google Bard's new chart plot feature

(Image credit: Future)

If you want to know more about this technology, we recommend learning about the five ways that ChatGPT is better than Google Bard (and three ways it isn't).



Next-gen Windows is coming in 2024, Intel exec confirms (without mentioning Windows 12)

We appear to have got our clearest indication yet that a whole new version of Windows will be coming next year.

Windows Latest reports that at a recent technology conference, Intel’s chief financial officer, David Zinsner, confirmed that the next iteration of Windows is indeed due to land in 2024.

Zinsner commented: “We actually think 2024 is going to be a pretty good year for client, in particular, because of the Windows refresh.”

Clearly, then, Intel has been informed that there’s going to be a new version of Windows next year.

Although there’s no mention of the name Windows 12, or any other name for that matter – ‘Windows refresh’ is obviously not the title Microsoft will plump for when it comes to the successor to Windows 11.

Analysis: Playing the name game

Of course, there are already plenty of rumors around Microsoft bringing out a next-gen Windows in 2024. And there’s plenty of speculation that it will be called Windows 12, too, but the reality is that at this point, Microsoft may be deep into working on this next version, but probably doesn’t know what it’ll be called itself yet.

Windows 12 just seems the most likely default option, naturally. About the only other possibility that occurs to us is that Microsoft may want to jam Copilot into the name, or maybe ‘AI’ or something along those lines, given that this is the latest big thing (TM). And Copilot will certainly be considerably developed in a year’s time.

You may recall that Intel was previously the source of a leak about next-gen Windows, one that actually used the name Windows 12 when talking about support regarding upcoming processors. This info was quickly retracted when reported on, though, and we wouldn’t read anything into the use of the name, as we just mentioned.

Next-gen Windows, whether it's Windows 12, Windows AI – or insert your own guess here – is expected to arrive in the second half of 2024 (work theoretically began on the new OS at the start of 2022).

We’re expecting it to be built around big advances with Copilot which will doubtless be used to push it as a compelling upgrade. Microsoft will be looking for a sizeable carrot to dangle in front of would-be upgraders, especially considering that Windows 11 has failed pretty miserably to gain all that much traction in its two years of existence thus far.
