What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race earlier this year, launching a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web at no extra cost, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like the model behind ChatGPT, it's a 'large language model' – a type of machine learning system that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant was so cautious about releasing Bard into the wild. The meteoric rise of ChatGPT, though, seemingly forced its hand and expedited Bard's public launch.

So what exactly will Google's Bard do for you, and how does it compare with ChatGPT, which Microsoft has built into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models' – in this case, one called LaMDA, which has since been succeeded by PaLM 2.

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multimodal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard can give lengthy answers to a more general query like “is the piano or guitar easier to learn?”

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short of its competitors in terms of features and performance. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds.

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.
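As an illustration, ask Bard to visualize some figures and it can respond with a short script along these lines. This is a hypothetical sketch with invented rainfall data, not an actual Bard transcript, and it assumes the matplotlib library is installed:

# Hypothetical example of the kind of chart-plotting code Bard can generate.
# The rainfall figures below are invented for demonstration purposes.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
rainfall_mm = [78, 61, 55, 48, 52, 40]  # invented monthly averages

plt.bar(months, rainfall_mm, color="steelblue")
plt.title("Average monthly rainfall")
plt.xlabel("Month")
plt.ylabel("Rainfall (mm)")
plt.tight_layout()
plt.show()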

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. A collaborative effort between Google and Adobe, the feature will use the Content Authenticity Initiative’s open-source Content Credentials technology to bring transparency to images generated through the integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, Bard is based on similar technology to ChatGPT, and even more tools and features are on the way that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)
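To make the 'large language model' idea concrete, here's a minimal text-generation sketch. It uses the open-source Hugging Face transformers library (assumed to be installed along with PyTorch) and the small GPT-2 model – a tiny cousin of the models behind Bard and ChatGPT, but built on the same next-word-prediction principle:

# Minimal text-generation sketch with a small open-source language model.
# GPT-2 is tiny next to PaLM 2 or GPT-4, but it works on the same principle:
# predicting likely next words given the text so far.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The James Webb Space Telescope has discovered",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])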

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has a very limited knowledge of facts or events after January 2022. 

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google’s Bard initially answered text prompts only with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

It works the other way around, too: Bard integrates with Google Lens, so users can upload images into Bard and have it respond to them in text. Multimodal functionality was hinted at for both GPT-4 and Bing Chat, and Google Bard users can now actually use it. And of course, there's also Google Bard's upcoming AI image generator, powered by Adobe Firefly.


Excited for Apple’s Vision Pro? Forget that, rumors have started about how the sequel will be better

Apple is rumored to be considering changes to the next version of the Vision Pro – still some way off, given that the first-gen model is yet to launch, of course – aimed at slimming down the headset’s size and weight.

In Mark Gurman’s latest Power On newsletter (for Bloomberg), the well-known Apple leaker told us that the company is mulling some notable improvements for the next-gen Vision Pro on the comfort front.

Gurman observes that, with some testers expressing concerns about neck strain due to the weight of the headset, Apple wants to make the next-gen device both lighter and more compact.

This may be a key focus for the next iteration of the Vision Pro, as Apple fears that the weight of the incoming first device “could turn off consumers already wary of mixed-reality headsets,” Gurman asserts. The Vision Pro can feel too heavy for some folks, even during shorter periods of use, we’re told.

Reducing the weight of the next-gen Vision Pro is the priority by the sounds of it, with any size reduction likely to be much less noticeable (and harder to achieve).

As 9to5Mac, which spotted this, further points out, Apple has actually already made the incoming first-gen headset more compact – with a trade-off. Namely, the design doesn’t leave room for people who wear prescription glasses to fit those in.

So, that creates a separate issue in catering to spectacle wearers, and Apple’s solution is to implement a system of prescription lenses that magnetically attach to the headset’s 4K displays.

That’s not ideal, though, for a lot of reasons. It’s a headache for retailers in terms of stocking the huge number of lens prescriptions they’ll have to deal with – having to find the right one for a glasses wearer not just when they’re buying, but also when they simply want to try out the headset.

Another obvious downside is that the owner’s glasses prescription may well change in the future (ours certainly does, repeatedly), so again, there’s the hassle of having to get new lenses for your Vision Pro too.

It seems Apple is mulling the idea of shipping custom-built headsets directly with the correct prescription lenses preinstalled, but there could be problems with that, as well.

Gurman noted: “First, built-in prescription lenses could make Apple a health provider of sorts. The company may not want to deal with that. Also, that level of customization would make it harder for consumers to share a headset or resell it.”

Whether that whole thorny nest of glasses-related issues can be tackled with the Vision Pro 2, well, we’ll just have to see.

Apple Vision Pro

(Image credit: Apple)

Analysis: Long-term vision for success

So, it seems like the weight of the Vision Pro might be an issue, judging from early testing feedback. That said, in his try-out session, our editor-in-chief found the headset “relatively comfortable” and so wasn’t critical on that front. But 9to5Mac’s writer observed that while shorter sessions are likely to be fine, they could “absolutely see getting tired of wearing [the headset] after extended sessions.”

This may vary from person to person somewhat, it’s true, but it sounds like if Apple is indeed planning to make the next-gen headset lighter, the firm is recognizing that things in this department are less than ideal.

At any rate, while it’s good to hear this, we’ll only really know how the Vision Pro shapes up on the comfort front when it comes to full review time.

For us, though, the most uncomfortable part of the Vision Pro experience is the price. Even just looking at that price tag makes our hearts heavy, as we won’t ever be able to afford the thing.

At $3,500 in the US (around £2,900 / AU$5,500) – and remember, the prescription lenses will add to that bill, especially if you need multiple lenses for different family members – the Vision Pro is just too rich for our blood. We can’t see that price flying with consumers when Apple’s headset hits the shelves next year in the US (in theory early in 2024).

That’s especially true with mixed reality and VR headsets in general being a niche enough prospect as it is. Indeed, Meta’s Quest 3 is so, so much more affordable in comparison, and for the money represents a great buy.

It’s not like Apple doesn’t realize all this, of course, and we’ve already heard chatter on the grapevine about how a cheaper Vision Pro model might be inbound – which more than any other improvement, would be fantastic to see.


Meta AI is coming to your social media apps – and I’ve already forgotten about ChatGPT

Meta is going all out on artificial intelligence, first developing its own version of ChatGPT and rolling out AI ‘personas’ on Instagram to appeal to a younger audience. Now, the company has announced a new AI image generation and editing feature during its Connect event, which will be coming to Instagram soon.

If you’re familiar with OpenAI’s ChatGPT or Google’s Bard, Meta AI will feel very familiar to you. The general-purpose assistant can help with all sorts of planning and organizational tasks, and will now offer the ability to generate images via the prompt ‘/imagine’.

You’ll also be able to show Meta AI on Instagram a photo you wish to post and ask it to apply a watercolour effect, make the image black and white and so on. Think of the Meta assistant as a more ‘social’ version of ChatGPT, baked right into your social media apps.

Alongside the assistant, the initial roster of 28 AI characters is beginning to roll out across the company’s messaging app. Most of these characters are based on celebrities like Kendall Jenner, Mr. Beast, Paris Hilton and my personal favourite, Snoop Dogg! You can chat with these ‘personas’ directly and finally ask Paris what lipgloss she uses. As you chat with the characters their profile image will animate based on the topic of conversation, which is pretty cool considering chatting with most AI chatbots is kind of… boring, at least from a visual standpoint.

ChatGPT may have started it, but Meta could finish it

It’s clear that Meta is taking AI integration very seriously, and I love to see it! By integrating its virtual assistant and AI tools into the apps billions of people use every day, it’s guaranteed an existing user base – and, in my opinion, it shows that the company has taken the time to really understand why users would approach its product.

Instead of just unleashing an assistant that will give you recipes and do your homework, Meta has tailored Meta AI to everyday purposes, which feels like a really clever way to fit the tool into people’s lives. The assistant is right there if and when you need it, so you don’t have to leave the app to engage with it.

Meta’s huge pool of potential users gives it a good chance of being the first AI assistant many people ever use – and the one they end up relying on day to day. There’s no extra app to download, no account to make, and no swiping away from your conversation to get to what you need. I think Meta made a smart choice in taking its time, and it has now come out of the gate swinging – and I really do think ChatGPT creator OpenAI should be a little bit worried.


Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There’s a lot to the Alexa update besides the LLM, as the assistant will also be receiving a raft of new features. Below are the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can hear the huge difference in quality on the company’s SoundCloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched; the second clip is what it’ll sound like when the update arrives next year. The new voice enunciates a lot better, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail.

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day plus you won’t need a smartphone to configure them. It can all be done on the fly. 

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control

(Image credit: Amazon)
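Amazon hasn’t said how these routines are represented under the hood, so the following is purely a hypothetical Python sketch of the concept: a routine as an ordered list of timed actions, with every device name invented for illustration.

# Hypothetical sketch of a multi-step smart home routine.
# Amazon hasn't published Alexa's internal format; names and structure
# here are invented purely to illustrate the idea of ordered, timed actions.
from dataclasses import dataclass

@dataclass
class Action:
    device: str
    command: str

bedtime_routine = {
    "trigger": "21:00",  # run every day at 9 pm
    "actions": [  # executed in order, on the dot
        Action("living_room_lights", "turn_off"),
        Action("house_blinds", "lower"),
        Action("kids_room_speaker", "announce: time to get ready for bed"),
    ],
}

def run_routine(routine: dict) -> None:
    """Execute each action in sequence (stubbed out: just prints)."""
    for action in routine["actions"]:
        print(f"{action.device} -> {action.command}")

run_routine(bedtime_routine)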

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [over] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.


Microsoft quietly reveals Windows 11’s next big update could be about to arrive

If you were wondering when Windows 11’s big upgrade for this year will turn up, the answer is soon: Microsoft is now making the final preparations to deploy the 23H2 update, and a revelation is apparently imminent.

As Windows Latest tells us, Microsoft just shipped a ‘Windows Configuration Update’ which is readying the toggle to allow users to select ‘Get the latest updates as soon as they’re available’ and be first in line to receive the 23H2 update.

Note that nothing is actually happening yet, just that this is a piece of necessary groundwork (confirmed via an internal document from Microsoft, we’re told) ahead of the rollout of the Windows 11 23H2 update.

Okay, so when is the 23H2 update actually going to turn up? Well, Windows Latest has heard further chatter from sources that indicates Microsoft is going to announce the upgrade at an event later this week.

That would be the ‘special event’ Microsoft revealed a while back, taking place in New York on September 21 (Thursday). As well as the expected Surface hardware launches, we will also evidently get our first tease of the 23H2 update, at least in theory.


Analysis: Copilot on the horizon

An announcement this week makes sense to us, ahead of a broader rollout that’ll be coming soon enough.

As Windows Latest further points out, the 23H2 update will likely become available next month – at least in limited form. This means those who have ticked that toggle to get updates as soon as possible may receive it in October – at least some of those folks, in the usual phased deployment – before that wider rollout kicks off in November, and everyone gets the new features contained within the upgrade.

In theory, that means Windows Copilot, though we suspect the initial incarnation of the AI assistant is still going to be pretty limited. (And we do wonder why Microsoft isn’t going to keep on baking it until next year, but that’s a whole other argument – it seems like with AI, everything has to be done in quite the rush).

It’s also worth bearing in mind that if you’re still on the original version of Windows 11, 21H2, you’ll need to upgrade anyway – as support for that runs out on October 10, 2023. PCs on 21H2 are being force-upgraded to 22H2 right now, although you’ll pretty much be able to skip straight to 23H2 after that, should you wish.


The Meta Quest 3 feature I was most excited about might come at a price

The Meta Quest 3 launch event is less than a month away, and excitement for the new VR headset is reaching boiling point. But if this leak is correct, the feature I was most excited about might require a pricey add-on.

Ahead of the reveal of the Oculus Quest 2’s successor, the online retailer UnboundXR.eu has seemingly posted prices for several Quest 3 accessories. These include a carrying case for €79.99 (around $86 / £68 / AU$134), an Elite Strap with Battery for €149.99 ($161 / £128 / AU$252), and a Silicone Face Interface for €49.99 ($54 / £42 / AU$84). The listings were spotted by u/Fredricko on Reddit, but the store pages have since been hidden.

The one that’s most disappointing to me is seeing the Meta Quest 3 Charging Dock for €149.99 ($161 / £128 / AU$252).

Thanks to a different Meta Quest 3 leak (courtesy of a US Federal Communications Commission filing), it looked like the new gadget would be getting a charging dock – my favorite Meta Quest Pro feature. Because of this peripheral, I’ve never gone to wear my Quest Pro and found it out of charge – something I can’t say about my Quest 2. The dock also makes it easy to keep the headset and controllers powered up without needing to use multiple power outlets – an issue with headsets such as the HTC Vive XR Elite, which requires three outlets for charging instead of one.

The Meta Quest 3 and its controllers sat in a white plastic dock in an official-looking image

The leaked Quest 3 dock. (Image credit: Meta)

Most importantly, this dock was included in the price of the Meta Quest Pro – which was $1,500 / £1,500 / AU$2,450 at launch and is now $999.99 / £999.99 / AU$1,729.99. According to Meta, the cheapest Meta Quest 3 will be $499 / £499 / AU$829 / €499, so I was already a little worried that the dock wouldn’t come with the cheaper headset – forcing us to pay a little extra for its advantages. What I didn’t expect, however, was that the dock might cost roughly a third of the price of the new machine, as this leak suggests.

While these leaks come from a semi-official source – a Reddit user claims UnboundXR has said the prices come from Meta directly – it’s still worth taking the information with a pinch of salt. They could be best-guess placeholder prices while the store builds its product pages ahead of the Quest 3 launch later this year. What’s more, the peripherals UnboundXR listed might still come packaged with the headset, with these listings merely being for replacement parts. We won’t know how pricey the add-ons really are until the headset launches at Meta Connect 2023.

Out with the old

If the price of these XR peripherals has got you down, I’m sorry to say the bad news doesn’t stop there. According to separate leaks, the Quest 3 may not be compatible with your Quest 2 accessories – forcing you to pay for all-new add-ons if you want them.


This concerns the headset strap: @ZGFTECH on X (formerly Twitter) posted a picture seemingly showing the Quest 3 strap and the Quest 2 Elite Strap side by side, and the two have pretty different designs – suggesting the old strap will be incompatible with the new hardware. I’m still holding out hope, however, that my Quest Pro charging dock will be compatible with the Quest 3, though given the new dock’s wildly different design, I’m not holding my breath.

Admittedly this shouldn’t be entirely unexpected – it’s not unheard of for tech peripherals to be exclusive to specific devices. But it’s something to keep in mind if you’re looking to upgrade to Meta's latest VR hardware.


WhatsApp is about to get its first AI trick – and it could be just the start

WhatsApp is taking its first steps into the world of artificial intelligence, as a recent Android beta introduced an AI-powered sticker generation tool.

According to a new report from WABetaInfo, a Create button will show up for some beta testers whenever they open the sticker tab in the text box. Tapping Create launches a mini generative AI engine with a description bar at the top asking you to enter a prompt. Upon inputting said prompt, the tool creates a set of stickers according to your specifications, which users can then share in a conversation. As an example, WABetaInfo told WhatsApp to make a sticker featuring a laughing cat sitting on top of a skateboard, and sure enough, it did exactly as instructed.

WhatsApp sticker generator

(Image credit: WABetaInfo)

It’s unknown which LLM (large language model) is fueling WhatsApp’s sticker generator. WABetaInfo claims it uses a “secure technology offered by Meta.” Android Police, on the other hand, states “given its simplicity” it could be “using Dall-E or something similar.”

Availability

You can try out the AI tool yourself by joining the Google Play Beta Program and then installing WhatsApp beta version 2.23.17.14, although it’s also possible to get it through the 2.23.17.13 update. Be aware that the sticker generator is only available to a very small group of people, so there’s a chance you won’t get it. However, WABetaInfo claims the update will be “rolling out to more users over the coming weeks,” so keep an eye out for it when it arrives. There’s no word on an iOS version.

Obviously, this is still a work in progress. WABetaInfo says that if the AI outputs something “inappropriate or harmful, you can report it to Meta.” The report goes on to state that “AI stickers are easily recognizable”, explaining that recipients “may understand when [a drawing] has been generated”. The wording here is rather confusing; we believe WABetaInfo is saying AI content may have noticeable glitches or anomalies. Unfortunately, since we didn’t get access to the new feature, we can’t say for sure whether generated content has any flaws.

Start of an AI future

We do believe this is just the start of Meta implementing AI across its platforms. The company is already working on sticker generators for Instagram and Messenger, but they’re seemingly still under development. So what will the future bring? It’s hard to say, but it would be cool to see Meta finally add its Make-A-Scene tool to WhatsApp.

It’s essentially the company’s own take on an image generator, “but with a bigger emphasis on creating artistic pieces.” We could see this being added to WhatsApp as a fun game for friends or family to play. There’s also MusicGen for crafting musical compositions, although that may be better suited for Instagram.

Either way, this WhatsApp beta feels like Meta has pushed the first domino of what could be a string of new AI-powered features coming to its apps.


Microsoft hasn’t forgotten about Windows 10, as a vital fix for game crashes finally arrives

Windows 10 gamers have got a reason to celebrate with the latest preview update for the OS, which comes with an important fix for a nasty gaming-related crash, and other cures besides.

The problem with PC games is related to Timeout Detection and Recovery (TDR) errors popping up, either causing a crash, or even locking up the system in some more extreme cases.

As you may have seen, the fix for this was applied to Windows 11 in the Moment 3 update – it was first spotted in the preview of that patch which emerged late in June.

The good news for Windows 10 users is that the fix is in KB5028244 (build 19045.3271 for Windows 10 22H2), which again is a preview patch (an optional download). This means the full (polished) fix will be available in August’s cumulative update for Windows 10, and that’s only a couple of weeks away now.

In the release notes for the patch, Microsoft observes: “This update addresses an issue that might affect your computer when you are playing a game. Timeout Detection and Recovery (TDR) errors might occur.”

On top of this, there are fixes for a bug that prevents some VPN apps from making a successful connection, and a glitch that means when a PC comes back from sleep, certain display or audio devices go missing in action.

Furthermore, there’s the resolution of a problem with Windows 10 where a full-screen search can’t be closed (and prevents any further action from being taken with the Start menu), and a raft of other tweaks and fixes.


Analysis: A welcome fix, albeit slightly late

There are some important cures here, then, as those mentioned bugs are quite a pain for those affected.

PC gamers on Windows 10 – the vast majority still – were particularly miffed when Windows 11 got a solution for the TDR crashes in June, with Microsoft leaving them in the lurch. And with no mention of Windows 10 back at the time, some gamers were even talking about this being a reason to upgrade to Windows 11 – that’s how annoyed some folks are by this one.

As one Reddit user put it: “Windows 10 TDR errors have been the bane on [sic] my life.”

At any rate, the fix is now here, and hopefully it’ll prove effective on Windows 10. Of course, right now it’s still testing as an optional update, so you’ll have to manually grab the patch via Windows Update, and there may still be problems with it. That said, those affected by TDR crashes might be so keen to get rid of them that any risk of side effects elsewhere may seem a small price to pay.

Whatever the case, as mentioned, the full fix should be coming in the cumulative update for Windows 10 next month (assuming no problems are encountered in this final testing phase).

Clearly, Windows 11 has priority as Microsoft develops and tinkers with its desktop operating systems, but it feels like an odd situation where two-thirds of gamers are still on Windows 10, and are getting the short end of the stick with fixes like this.


Bing AI chatbot is about to get two much-wanted features

Bing AI is getting a couple of the most-wanted features folks have been badgering Microsoft for, with image search rolling out to everyone imminently, and dark mode not too far behind.

These nuggets of info come from Mikhail Parakhin, Microsoft’s head of Advertising and Web Services, who shared the details in a couple of tweets.


We’re told that multimodal/image understanding is rolling out to everyone, meaning the ability to drop an image into the chatbot and have it identify the photo (whatever it may be – a famous building, for example).

The feature is also known as Visual Search (or Bing Vision), and Microsoft just penned a blog post noting that it’s “rolling out now for consumers on Bing desktop and the mobile app”.

As you can see from Parakhin’s tweet, Visual Search should be fully rolled out as of today, so you should see the feature soon, if you don’t have it already.

In the replies to the above highlighted Twitter conversation, Parakhin further tweeted about a second piece of functionality for Bing AI that folks have been clamoring for with even greater eagerness than image searching, in some cases.

That would be dark mode, and we’re told that this capability should arrive for Bing AI in a “couple of weeks”, so hopefully pretty soon indeed.


Analysis: Dark times are coming – or maybe already here?

There has been a lot of prodding and poking of Microsoft about providing a dark mode for Bing AI, so it’s great to see this arrive. Interestingly, some users are already reporting that they have dark mode – so perhaps we can expect this very soon for the chatbot, hot on the heels of the full rollout of Visual Search.

Microsoft is making fast progress with its Bing AI, with various nifty bits of functionality coming in at a good pace. Another much-requested feature that’s due to arrive in the near future is a ‘no search’ option that’ll come in handy in certain situations. (This forces an answer direct from the AI, without it scraping data from the web as part of its reply to a query).

Bing AI needs Microsoft to continue driving forward, mind you, as Bard, the rival AI from Google, might have got off to a poor start, but it’s rapidly making up ground with new features now. With Bard set to get extensions brought into the mix soon, there may be some defectors to Google’s AI – something Microsoft will clearly be desperate to avoid.

However, what Microsoft needs to be careful about, of course, is annoying folks by doing its usual badgering tricks in Windows 11 to try and get people to use the AI (and other services for that matter).


ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future


Is ChatGPT old news already? It seems impossible, with the explosion of AI popularity seeping into every aspect of our lives – whether it’s digital masterpieces forged with the best AI art generators or helping us with our online shopping.

But despite being the leader in the AI arms race – and powering Microsoft’s Bing AI – it looks like ChatGPT might be losing momentum. According to SimilarWeb, traffic to OpenAI’s ChatGPT site dropped by almost 10% compared to last month, while metrics from Sensor Tower also demonstrated that downloads of the iOS app are in decline too.

As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining on social media and OpenAI’s own forums about a dip in output quality from the chatbot.

The general consensus was that GPT-4 was able to generate outputs faster, but at a lower level of quality. Peter Yang, a product lead for Roblox, took to Twitter to decry the bot’s recent work, claiming that “the quality seems worse”. One forum user said the recent GPT-4 experience felt “like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup”.


Why is GPT-4 suddenly struggling?

Some users were even harsher, calling the bot “dumber” and “lazier” than before, with a lengthy thread on OpenAI’s forums filled with all manner of complaints. One user, ‘bitbytebit’, described it as “totally horrible now” and “braindead vs. before”.

According to users, there was a point a few weeks ago where GPT-4 became massively faster – but at a cost of performance. The AI community has speculated that this could be due to a shift in OpenAI’s design ethos behind the more powerful machine learning model – namely, breaking it up into multiple smaller models trained in specific areas, which can act in tandem to provide the same end result while being cheaper for OpenAI to run.

OpenAI has yet to officially confirm this is the case, as there has been no mention of such a major change to the way GPT-4 works. It’s a credible explanation according to industry experts like Sharon Zhou, CEO of AI-building company Lamini, who described the multi-model idea as the “natural next step” in developing GPT-4.
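To picture the speculated design – and to be clear, OpenAI hasn’t confirmed any of it – here’s a toy Python sketch of the routing idea: several smaller specialist models sit behind a single front door, and a router decides which one answers each query. A real system would learn the routing rather than match keywords, but the shape of the approach is the same.

# Toy sketch of the speculated "multiple smaller models" design.
# OpenAI hasn't confirmed this architecture; everything here is illustrative.

def code_expert(prompt: str) -> str:
    return f"[code model] answering: {prompt}"

def math_expert(prompt: str) -> str:
    return f"[math model] answering: {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general model] answering: {prompt}"

def route(prompt: str) -> str:
    """Crude keyword router; production systems learn this mapping instead."""
    lowered = prompt.lower()
    if "code" in lowered or "python" in lowered:
        return code_expert(prompt)
    if any(token in lowered for token in ("solve", "equation", "integral")):
        return math_expert(prompt)
    return general_expert(prompt)

print(route("Write Python code to reverse a list"))
print(route("Solve the equation 2x + 3 = 11"))
print(route("Tell me about the James Webb Space Telescope"))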

AIs eating AIs

However, there’s another pressing problem with ChatGPT that some users suspect could be the cause of the recent drop in performance – an issue that the AI industry seems largely unprepared to tackle.

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online – including an unwanted torrent of AI-authored novels on Kindle Unlimited – means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.

An iPhone screen showing the OpenAI ChatGPT download page on the App Store

ChatGPT app downloads have slowed, indicating a decrease in overall public interest. (Image credit: Future)

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent – especially since there’s yet to be any meaningful demonstration of how AI models might accurately differentiate between ‘real’ information and AI-generated content.
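A toy simulation hints at why this feedback loop is so dangerous. In the deliberately simplified sketch below (a statistical stand-in for LLM training, not a real language model), each ‘generation’ of a model is fitted only to a sample drawn from the previous generation’s fit, so sampling errors compound and the result drifts away from the original ‘human’ data:

# Toy simulation of model-on-model training ("AI cannibalism").
# Each generation fits a normal distribution to a small sample drawn from
# the previous generation's fit, so estimation errors accumulate over time.
import random
import statistics

mean, stdev = 0.0, 1.0  # generation 0: stand-in for human-written data

for generation in range(1, 13):
    sample = [random.gauss(mean, stdev) for _ in range(20)]
    mean = statistics.fmean(sample)   # the new "model" learns only from
    stdev = statistics.stdev(sample)  # the previous model's output
    print(f"gen {generation:2d}: mean={mean:+.3f}, stdev={stdev:.3f}")

Run it a few times: the fitted distribution wanders further from the original values with each generation, which is the statistical heart of the coherence and quality loss described above.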

Discussions around AI have largely focused on the risks it poses to society – for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed ‘too dangerous’ to be released. But content cannibalization is more of a risk to the future of AI itself; something that threatens to ruin the functionality of tools such as ChatGPT, which depend upon original human-made materials in order to learn and generate content.

Do you use ChatGPT or GPT-4? If you do, have you felt that there’s been a drop in quality recently, or have you simply lost interest in the chatbot? I’d love to hear from you on Twitter. With so many competitors now springing up, is it possible that OpenAI’s dominance might be coming to an end? 
