Logitech has built an AI sidekick tool that it hopes will help you work smarter, not harder, with ChatGPT

In a move that shows how mainstream artificial intelligence (AI) is these days, Logitech has launched its free Logi AI Prompt Builder software tool. It isn’t yet another AI chatbot; instead, it’s designed to help Logitech users get the most out of an existing chatbot, ChatGPT.

Logitech is also working on the hardware side of making AI-specific peripherals, launching a wireless mouse that’s equipped with an AI prompt button: the Logitech Signature AI Edition Mouse.

Who can access the Logi AI Prompt Builder, and where

The Logi AI Prompt Builder can be accessed via the existing Logi Options+ app. This is freely available to anyone using a Logitech keyboard or mouse that’s supported by the English version of the Logi Options+ app, which includes Logitech MX, Ergo, Signature, and Studio Series devices.

Logitech has set up a site detailing the new AI tool, and you can click ‘Download Now’ to get the Logi Options+ app. Once you download and install this, you can designate a keyboard shortcut to quickly open the Logi AI Prompt Builder. You can then launch it through the Logi Options+ app or with that shortcut, and it will offer prompt recommendations for the text you’ve selected so you can send it on to ChatGPT.

A close up shot of Logitech's new AI-specific mouse, set on a table and the various parts labelled

(Image credit: Logitech)

Logi AI Prompt Builder will then offer you suggestions for commonly used ChatGPT prompts, such as ‘Rephrase’ and ‘Summarise.’ You can also customize your queries within the tool, asking it to take into account the tone, style, complexity, and length of answer you’d like; these customization options are also offered by other generative AI tools, such as Microsoft’s own digital AI assistant, Windows Copilot. According to Logitech, the app makes for a smoother and less disruptive workflow, especially for those who make use of AI tools, because it requires fewer clicks and lets you work faster. You can check out how the tool works for yourself before downloading and installing it by watching a demo that Logitech has put on the Logi AI Prompt Builder site.
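To picture what the tool is actually doing, think of it as filling in a prompt template for you: a preset action such as ‘Rephrase’ or ‘Summarise’, plus your chosen tone, complexity, and length, wrapped around the text you’ve highlighted. Below is a minimal, purely illustrative Python sketch of that idea; the recipe names and wording are my own stand-ins rather than Logitech’s actual templates, and a real tool would hand the finished prompt off to ChatGPT.

```python
# Illustrative only: assemble a chatbot prompt from a preset "recipe" plus
# user-chosen style options. These templates are invented for this sketch,
# not taken from Logitech's software.

RECIPES = {
    "Rephrase": "Rephrase the following text",
    "Summarise": "Summarise the following text",
}

def build_prompt(recipe: str, selected_text: str,
                 tone: str = "neutral", complexity: str = "simple",
                 length: str = "short") -> str:
    """Combine a preset action with style options and the selected text."""
    instruction = RECIPES[recipe]
    return (f"{instruction} in a {tone} tone, at a {complexity} reading level, "
            f"keeping the answer {length}:\n\n{selected_text}")

if __name__ == "__main__":
    # The assembled prompt is what would be sent on to ChatGPT.
    print(build_prompt("Summarise",
                       "Logitech has launched a free prompt-building tool...",
                       tone="friendly", length="very short"))
```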

I could see this having use beyond helping people work with ChatGPT, as other generative AI chatbots like Google’s Gemini and Anthropic’s Claude Sonnet might also offer better responses thanks to Logitech’s suggestions. 

Logi AI Prompt Builder is now live and accessible for free for any user with a suitable Logitech device, and is available for both Windows and Mac users via the Logi Options+ app. The dedicated Logitech Signature AI Edition Mouse is currently available exclusively on the Logitech.com website for $49.99 in the US and £54.99 in the UK.

A man sitting at a table with a computer and the AI tool on his screen, in a room filled with modern decor

(Image credit: Logitech)

A vote of confidence for generative AI

This launch has greatly piqued my interest because it’s a pretty substantial move from a company that mostly specializes in PC peripherals, and it suggests that it’s not just computer manufacturers making products that embrace our AI future. It’s also a pretty clear sign that companies like Logitech are convinced of generative AI’s staying power.

It’s one step closer to AI being a normal part of our work and everyday lives, and reminds me of Microsoft’s plans to add a Copilot button in the keyboards of new laptop models. I’m keen to try a tool like this for myself and see if my workflow becomes smoother, because if that’s the case, Logitech, Microsoft, and others could be on to something.


ChatGPT’s newest GPT-4 upgrade makes it smarter and more conversational

AI just keeps getting smarter: another significant upgrade has been pushed out for ChatGPT, its developer OpenAI has announced, specifically to the GPT-4 Turbo model available to those paying for ChatGPT Plus, Team, or Enterprise.

OpenAI says ChatGPT will now be better at writing, math, logical reasoning, and coding – and it has the charts to prove it. The release is labeled with the date April 9, and it replaces the GPT-4 Turbo model that was pushed out on January 25.

Judging by the graphs provided, the biggest jumps in capabilities are in mathematics and GPQA, or Graduate-Level Google-Proof Q&A – a benchmark based on multiple-choice questions in various scientific fields.

According to OpenAI, the new and improved ChatGPT is “more direct” and “less verbose” too, and will use “more conversational language”. All in all, a bit more human-like, then. Eventually, the improvements should trickle down to non-paying users too.

More up to date


In an example given by OpenAI, AI-generated text for an SMS intended to RSVP to a dinner invite is half the length and much more to the point – with some of the less essential words and sentences chopped out for simplicity.

Another important upgrade is that the training data ChatGPT is based on now goes all the way up to December 2023, rather than April 2023 as with the previous model, which should help with topical questions and answers.

It's difficult to test AI chatbots from version to version, but in our own experiments with ChatGPT and GPT-4 Turbo we found it does now know about more recent events – like the iPhone 15 launch. As ChatGPT has never held or used an iPhone though, it's nowhere near being able to offer the information you'd get from our iPhone 15 review.

The momentum behind AI shows no signs of slowing down just yet: in the last week alone Meta has promised human-like cognition from upcoming models, while Google has made its impressive AI photo-editing tools available to more users.


Your Microsoft OneDrive storage is about to get smarter thanks to this time-saving Copilot feature

Microsoft’s been on fire recently with the addition of some super-useful features thanks to its artificial intelligence assistant Copilot, and it looks like OneDrive is finally getting a much-needed AI boost. Soon, you’ll be able to search through your files without having to open them to find the relevant info, simply by asking Copilot the question you want answered.

Say you’re looking for a specific figure or quote but you have too many files to start searching, or you’re like me and don’t organize anything into folders at all (oops). Instead of opening every document and scanning through to find the specific bit of info you’re looking for, you’ll be able to pull up Copilot and tell it what you want to find. You could ask it to find a specific bit of info from a lecture presentation or group project, and Copilot will go through the files and provide the relevant answers.

According to MSPoweruser, this feature will work across multiple file types including DOC, DOCX, PDF, TXT, and more, so you won’t be restricted to just Word documents.
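Conceptually, this is a search across the contents of your files rather than their names, so you don’t have to open anything yourself. As a toy illustration only, the short Python sketch below scans a folder of plain .txt files for a phrase; the ‘notes’ folder name is made up, and the real Copilot feature runs against your OneDrive files in the cloud and understands formats like DOCX and PDF, which would need extra parsing libraries.

```python
# Toy illustration: report every line in every .txt file under a folder that
# contains a given phrase. Copilot's OneDrive feature is far more capable
# (cloud-side, multiple formats, natural-language questions); this only shows
# the basic "search inside files so you don't have to open them" idea.
from pathlib import Path

def find_in_files(root: str, phrase: str):
    """Yield (file, line number, line) for every line containing the phrase."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if phrase.lower() in line.lower():
                yield path, lineno, line.strip()

if __name__ == "__main__":
    # "notes" is a hypothetical folder used for this example.
    for path, lineno, line in find_in_files("notes", "quarterly revenue"):
        print(f"{path}:{lineno}: {line}")
```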

The feature is included in Microsoft’s 365 roadmap, due to be released to users sometime in May 2024. Hopefully, we’ll see this trickle down to Microsoft’s free Office for Web suite (formerly known as Office Online) which includes an in-browser version of Microsoft Word and 5GB of OneDrive cloud storage. 

A win for the unorganized girlies

This feature is enough to entice me away from Google Drive for the convenience alone. There’s nothing worse than having to crawl through your folders and files to find something you’re looking for.

I would have appreciated this feature when I was at university, especially with how many notes and textbooks I had scattered around my school OneDrive account. By bringing Copilot into the mix, I could have found whatever I was looking for so much faster and saved myself from a fair amount of panic.

If you work in an industry where you’re constantly dealing with new documents containing critical information every day, or you’re a student consistently downloading research papers or textbooks, this new addition to Copilot's nifty AI-powered skill set is well worth keeping an eye out for.

While I am disappointed this feature will be locked behind the Microsoft 365 subscription, it’s not surprising – Microsoft is investing a lot of time and money into Copilot, so it makes sense that it would use its more advanced features to encourage people to pay to subscribe to Microsoft 365. However, there’s a danger that if it paywalls all the most exciting features, Copilot could struggle to be as popular as it deserves to be. Microsoft won’t want another Clippy or Cortana on its hands.


Google Maps could become smarter than ever thanks to generative AI

Google Maps is getting a dose of generative AI to let users search and find places in a more conversational manner, and serve up useful and interesting suggestions. 

This smart AI tech comes in the form of an “Ask about” user interface where people can ask Google Maps questions like where to find “places with a vintage vibe” in San Francisco. That will prompt AI to analyze information, like photos, ratings and reviews, about nearby businesses and places to serve up suggestions related to the question being asked.  

From this example, Google said the AI tech served up vinyl record stores, clothing stores, and flea markets in its suggestions. These included the location along with its rating, reviews, number of times rated, and distance by car. The AI then provides review summaries that highlight why a place might be of interest. 

You can then ask follow-up questions that remember your previous query, using that for context on your next search. For example, when asked, “How about lunch?” the AI will take into account the “vintage vibe” comment from the previous prompt and use that to offer an old-school diner nearby.
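The trick behind follow-ups like this is simply that the earlier exchange is kept and sent along with each new question, so the model still has the ‘vintage vibe’ request in view when you ask about lunch. The sketch below is my own illustration of that pattern, not Google’s implementation, and ask_model() is a hypothetical stand-in for whatever model actually answers the query.

```python
# Illustration of conversational context: keep the whole chat history and pass
# it in with every new question, so follow-ups inherit earlier preferences.
# ask_model() is a hypothetical placeholder, not a real Google Maps API.

history = []

def ask_model(messages):
    # A real implementation would call a large language model here.
    return f"(answer generated with {len(messages)} prior messages as context)"

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    answer = ask_model(history)
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    print(ask("Where can I find places with a vintage vibe in San Francisco?"))
    # The second question is answered with the first still in the history,
    # which is how "How about lunch?" can lean old-school diner.
    print(ask("How about lunch?"))
```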

Screengrabs of the new generative AI features on Google Maps showing searches and suggestions

(Image credit: Google)

You can save the suggestions or share them, helping you coordinate with friends who might all have different preferences like being vegan, checking if a venue is dog friendly, making sure it is indoors, and so on.

By tapping into the search giant’s large language models, Google Maps can analyze detailed information using data from more than 250 million locations, and photos, ratings and reviews from its community of over 300 million contributors, to provide “trustworthy” suggestions.

The experimental feature is launching this week but is only coming to “select Local Guides” in the US. It will use these members' insights and feedback to develop and test the feature before what’s likely to be its eventual full rollout, which Google has not provided a date for.

Does anyone want this?

Users on the Android subreddit were very critical of the feature, with some referring to AI as a buzzword that big companies are chasing for clout. User lohet stated: “Generative AI doesn't have any place in a basic database search. There's nothing to generate. It's either there or it's not.”

Many said they would rather see Google improve offline Maps and its location-sharing features. User chronocapybara summarized the feelings of others in the forum by saying: “If it helps find me things I'm searching for, I'm all for it. If it offloads work to the cloud, making search slower, just to give me more promoted places that are basically ads, then no.”

However, AI integration in our everyday apps is here to stay, and its inclusion in Google Maps could help users discover brand-new places easily and help smaller businesses gain attention and find an audience.

Until the features roll out, you can make the most of Google Maps with our 10 things you didn't know Google Maps could do.


Microsoft has finally updated Sticky Notes in Windows 11 – and I’m excited about my favourite feature getting a lot smarter

Microsoft could finally be updating the Sticky Notes app for Windows 11 (and Windows 10), after years of seeming neglect.

Sticky Notes is a pre-installed app from Microsoft that allows users to put virtual sticky notes on the desktop to help remember tasks or make to-do lists across their devices linked to their Microsoft account. While it’s an app that can be easily overlooked (even by Microsoft), for those of us who use it – such as myself – it can be an incredibly useful tool for staying organized and productive.

As Windows Central reports, the Sticky Notes social media account has just put out an intriguing update, hinting at some big changes coming to the app in the near future. With the last official post from the account dating back to 2020, this sudden burst of activity suggests that whatever changes are in store, they’re going to be big.


Microsoft has been pumping out a steady stream of updates for Windows 11 and Windows 10, and rumors of a big 24H2 update slated for later this year suggest huge changes are coming to Microsoft’s latest operating system. Even ancient pre-installed apps like Microsoft Paint and Notepad have received some positive updates in the last two years, so it’s refreshing to see the Sticky Notes app finally get some love and attention.

The recent post from the Sticky Notes account doesn't give out too much information, teasing only that we should expect one of the “biggest announcements yet” for the feature. The account also responded to some initial speculation from excited users, clarifying that the news is not a web app – for now. Instead, Windows Central is speculating that it could have something to do with artificial intelligence (AI), and I agree. Here’s why.

Working smarter, not harder 

Sticky Notes seems like quite a basic feature at the moment, which means it's ripe for getting new AI features – something Microsoft has been incredibly keen on lately. Its close partnership with OpenAI (the company behind the popular ChatGPT AI bot), and continuing mission to integrate its own AI bot, Copilot, into almost every facet of Windows 11, means Microsoft already has the tools and knowledge to give its older apps some AI brains.

This is exactly what happened with the iconic Notepad app, which recently got ChatGPT-powered AI features, turning the once basic word processing app into a rather cool and useful tool that can help you with your writing – and all for free, due to it coming pre-installed with Windows since the 1980s.

There are a lot of positives that could come from combining the simplest tool on your desktop with the ‘smarts’ of ChatGPT or Microsoft Copilot, especially as the Sticky Notes app works across your devices.

However, things could also go sideways, and Microsoft might end up bloating and overcomplicating an app users enjoy for its simplicity and reliability. Sticky Notes is one of those apps on Windows that just works; you know what you’re going to find when you unbox a new PC, and you always know exactly what your virtual sticky notes will look like. A big change like the one the social media account suggests could turn a lot of loyal users into disgruntled ones if Microsoft ends up making Sticky Notes far too complicated. Plus, not every user will be thrilled to have artificial intelligence bleed into such a basic app (along with the security and ethical issues that surround AI).

I love Sticky Notes and while I’m on the fence about how these ‘big changes’ will affect one of my favourite Windows features, I do ultimately think it will be a good thing. We could see exciting updates that could allow people to create collaborative Sticky Notes on their desktop, have the AI draft shopping lists out of desired recipes, and comb through emails and calendar apps to create a daily to-do list or schedule. 

While I do think the Sticky Notes app doesn’t need the upgrade, there is the concern that it could be left behind if it’s not brought up to speed. AI-powered features, if done well, will not only retain existing fans like me but also encourage new users to discover the app – and maybe even fall in love with it.


Google Assistant gets AI boost – but will it make it smarter?

The AI chatbot race is far from over, despite ChatGPT’s current dominance, and Google is not showing any signs of letting up. In fact, reports suggest Google is preparing to “supercharge” Assistant, its virtual personal assistant, by integrating generative AI features similar to the ones found in OpenAI’s ChatGPT and Google’s own generative AI chatbot, Bard.

Google has begun development on a new version of Google Assistant for mobile, as stated in an internal email circulated to employees and reported by Axios. This is allegedly going to take place through a reorganization of its present Assistant team, which will see a reduction of “a small number of roles”.

The exact number of employees that are expected to be let go has not been specified, though Axios has claimed that Google has already laid off “dozens” of employees. We have contacted Google to find out more.

Google Assistant

(Image credit: Google)

The newer, shinier, and AI-connected Google Assistant

As reported by The Verge, Google is looking to capitalize on the momentum of the rapid development of large language models (LLMs) like ChatGPT to “supercharge Assistant and make it even better,” according to Google spokesperson Jennifer Rodstrom.

Google is placing a big bet on this Google Assistant gambit, being “deeply committed to Assistant” and its role in the future, according to Peeyush Ranjan, Google Assistant’s vice president, and Duke Dukellis, Google product director, in the email obtained by Axios.

This step in Google’s AI efforts follows Bard’s recent big update, which enabled it to respond to user queries by “talking” (presumably meaning that it replies using a generated voice, much like Google Assistant does) and to handle visual prompts, while also opening Bard up to more countries and introducing support for over 40 languages.

Google has not yet revealed what particular features it’s focusing on for Assistant, but there are plenty of ways it could improve its virtual assistant such as being able to respond in a more human-like manner using chatbot-like tech.

Making sure customer data remains safe and protected

Google Assistant is already in many people’s homes thanks to it being included in many devices such as Android smartphones and Google Nest smart speakers (find out how the Google Nest currently compares here), so Google has an extensive number of users to test with. “We’re committed to giving them high quality experiences,” Rodstrom told The Verge.

Of course, this does raise concerns about the privacy and security of its customers, as Google is likely to try and implement changes of this type to its smart home products, and some people may not be comfortable with giving the search giant even more access to their private lives. 

There is also a major concern (which, to be fair, also applies to other chatbots such as ChatGPT): accuracy of information.

Google Home

(Image credit: Google)

Tackling the issue of bad information and final thoughts

Google could tackle accuracy and misinformation concerns by linking the generative AI being developed for Google Assistant devices to Google Search, as Bard is not intended to serve as an information source.

In a recent interview, Google UK executive Debbie Weinstein emphasized that users should double-check the information provided by Bard using Google Search (as reported by The Indian Express).

If we’re talking hands-free Assistant devices, I assume that there is development underway to add mechanisms of this sort. Otherwise, users would have to carry out a whole interrogation routine with their Assistant devices, which could interrupt the flow of using the device quickly and intuitively.

It’s an enticing idea – the home assistant that can fold your laundry and tell you bedtime stories – and steps like these feel like pushes in that direction. It all comes at a cost, though, and the more tech saturates our lives, the more we expose to those who wish to use it for ill-intentioned purposes.

This is going to be a huge issue for many people, and it should be, and Google should make just as much of an effort to secure its users’ data as it does doing magic tricks with it. That said, many Google device users and Android users will be looking forward to a more intelligent Google Assistant, as many report that they don’t get much sense from it at the moment. We’ll see if Google can deliver on its proposed steps (hopefully) forward.

Hopefully, these upgrades to both Bard and Google Assistant will make them, well, more intelligent. Putting security and privacy aside (only for a brief moment), this has real potential to make users' home devices, like Nest devices, more advanced in their ability to react to your questions and requests with relevant information and tailor responses using your personal information (responsibly, we hope).


Google Docs is getting a whole lot smarter – and collapsible?

In its eternal struggle to replicate features that Microsoft 365 and even Office have had for aeons, Google Docs is getting – make sure you’re sitting down – collapsible headings.

This is good – it’ll keep documents from feeling cluttered or unruly; we’re just at a loss as to why it’s taken until 2023 for this to happen.

The announcement, posted on the Google Workspace updates blog, revealed that the change will arrive shortly for Google Workspace and Personal users but, as tends to happen to us, we found that the feature isn’t yet available for us specifically.

 Making a word processor fit for purpose

We’re not being contrarian for clicks when we assert that Google software has always been behind the times – whether it’s deciding to chase the AI zeitgeist after Microsoft finds success in that space or still lacking reorderable headings in the document outline, meaning I much prefer to first draft long-form work in Microsoft Word.

And it is a shame that Google is treating artificial intelligence (which, as we’re being urged to understand at the moment, is simply a form of machine learning) as the be-all and end-all.

The best way to enhance a productivity tool isn’t to throw in features that trade on buzzwords and promises of a personal assistant to do your work for you. It’s to ship less ambitious features, unconcerned with grabbing headlines but altogether more important for making work bearable.

Google is obviously going for a little from column A and a little from column B with this approach. Its happy medium is something like the “smart chips” across Google Workspace, allowing documents to contain links to other documents, files, people, or events, making them better at centralising information.

The “smart chips” are good, in that “smart” here means “convenient” rather than “literally sentient”. I feel like I’m on a theme here, having written about the ills of AI in office software relatively recently, but I might like to revise what I wrote there.

It’s not so much that I need to be dazzled by innovation to keep me conscious, I just need to be able to get through the day without wanting to throw my cloud-driven office software out of the window.

So, Google, take note: make it easy to export images from Docs without making me download a zipped .html file of the whole document, do the reorderable outline thing, and just generally step back in time to 2003. That all sounds reasonable.


Microsoft’s ChatGPT-powered Bing AI gets smarter with more local knowledge

Microsoft’s Bing AI just got some more improvements, including one that should make the chatbot considerably more helpful when it comes to providing tailored recommendations based on your local area.

In a blog post introducing the latest changes, Microsoft acknowledged that it had received feedback telling the company that the ChatGPT-powered Bing needed to do better with local-related queries.

In other words, specific requests, such as asking for the whereabouts of a store in your neighborhood.

Microsoft informs us that it has bolstered Bing’s chops in this regard, so it’ll deliver “better answers if you’re trying to find a park, a store, or a doctor’s office near you.”

Other tweaks Microsoft recently applied to its Bing chatbot include increasing the maximum number of turns (queries) you can take in a single conversation from 15 to 20. Combined with the allowance of 10 daily sessions, that gives you a limit of 200 turns per day in total.

Image and video search capabilities are also integrated into the chatbot now. These will pop up as answer cards, allowing the user to click ‘see more’ to dive into further detail with a Bing image search.


Analysis: Pushing forward and besting Bard

Obviously, beefing up the performance of the Bing AI to do better with local queries is an important move to make. It’s no good having an all-singing, all-dancing AI (you have asked the chatbot to sing to you already, right?) if it falls down embarrassingly when it comes to making basic recommendations about locations and services near you.

Mind you, the enhanced performance for these kinds of queries sounds like it’s in the early stages of getting a good coat of polish. As Microsoft puts it: “Expect us to make further improvements in local grounding based on your feedback.”

Like everything with Microsoft’s ChatGPT-powered AI, then, it’s very much a work in progress. Still, the amount of progress being made is impressively sure and steady, which has got to be a worry for Google.

Google’s rival AI, Bard, has been notably slow off the starting blocks. Indeed, it feels like Google forced Bard onto the starting blocks before it had even laced its trainers, because the firm felt like the new Bing couldn’t be left unanswered, seeing as the ChatGPT-powered AI is already boosting traffic to Microsoft’s search engine.

We’re told that Bard will become more capable, and will receive improvements to its reasoning skills later this week, and it’s clear enough that Google recognizes it needs to move faster with its rival AI. At the same time, it can’t afford any missteps as seen with Bard’s launch (and to be fair, with the Bing AI’s launch too, although Microsoft seems to have recovered pretty well from the mishaps Bing encountered early on).

Our main worry about Microsoft is that the success of the Bing chatbot – so far – could go to the company’s head. There’s already worrying talk of jamming adverts into Bing AI, which we very much hope won’t happen. That’s probably a forlorn hope, and if it turns out that way, this could be an area that Bard could turn to its advantage. That said, it’s not like Google won’t be surveying every avenue of monetization down the line, too – it’d be pretty naïve to think otherwise.

Both companies would do well to remember that these AIs must be perceived as helpful friends, though, and not ones with a hidden agenda. Or, more to the point we suppose, a poorly hidden agenda which becomes painfully transparent…


Microsoft’s CEO calls Alexa and Siri ‘dumb’ – but ChatGPT isn’t much smarter

In an interview with the Financial Times a few weeks ago, Microsoft’s CEO Satya Nadella dismissed voice assistants, such as Alexa and Siri, as “dumb as a rock”.

This might seem a little rich coming from the CEO of a company that launched (and then abandoned) the unloved Cortana voice assistant, but I actually agree. However, unlike Nadella, I'm not so sure that the new wave of AI chatbots are where the future really lies – or at least not yet. 

Sure, they appear to be smarter than the first bunch of voice assistants, including Amazon's Alexa, Apple's Siri and Google's (less charmingly named) Assistant, but that's not saying a lot. I was initially really impressed with these assistants, particularly Alexa, to the extent that I put aside my misgivings about how much information Amazon already collected about me, and filled my home with Echo devices of all shapes and sizes.

That all seems a long time ago now, though. Despite all the promise those voice-activated digital assistants had when they launched, I can't help but feel they’ve turned into little more than hands-free light switches and timers for when I’m cooking. They even made me temporarily forget how to use a real light switch. Seriously.

That’s it. I don’t even use Alexa to play music any more. Partly because none of the Echo devices I have come with remotely decent speakers, and also because Alexa seems to have developed a strange habit where when I ask for a song to be played, it more often than not chooses a random alternative take or live version, rather than the studio version I was after. All very frustrating, especially if you're a Bob Dylan fan.

Even as a light switch, I’ve found it increasingly unreliable. I now often have to repeat myself several times before Alexa understands my request and complies. That’s if I’m lucky. Sometimes it listens, then just does nothing.

It’s become more of an inconvenience and annoyance – the exact opposite of what these virtual assistants were supposed to be. To be fair to Nadella, he told the Financial Times that “Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work. We had a product that was supposed to be the new front-end to a lot of [information] that didn’t work.”

We’re not alone in getting disillusioned with voice assistants. As the Financial Times reports, Adam Cheyer, co-creator of Apple's Siri, says that “the previous capabilities have just been too awkward… No one knows what they can do or can’t do. They don’t know what they can say or can’t say.”

It also seems like the companies behind the voice assistants are losing interest. Not only did Microsoft unceremoniously dump Cortana after years of trying to get Windows 10 and Windows 11 users to embrace (or at least tolerate) it, but Amazon has also cut a large number of jobs recently, and there are reports that the teams involved with Alexa and Echo devices have been particularly hard hit.

Two wrongs don’t make a right

It may be easy to suggest that Nadella’s dismissal of voice assistants is down to sour grapes, as Microsoft’s Cortana was the least popular out of the ‘big four’ – which also includes Alexa, Google Assistant, and Siri (sorry, Samsung, but no one likes Bixby either) – but I certainly agree with him. The shine has worn off.

However, it’s increasingly looking like Microsoft thinks that artificial intelligence chatbots, most notably ChatGPT, could solve these problems – and it’s here where I’m going to have to disagree, at least for now.

Microsoft is a big investor in ChatGPT and OpenAI, the company behind it, and when it announced it was bringing the power of ChatGPT to its Bing search engine, it managed something rare: it got people excited about Bing.

Suddenly, people were keen to try out a search engine that had for so long been neglected in favor of Google. This surge in interest, plus widespread coverage in the press, has deepened Microsoft’s love affair with ChatGPT.

Having an AI bot that can converse with humans in a life-like way, and use huge amounts of stored data in its own libraries and on the internet to answer questions, seems like the natural evolution of voice assistants.

And, one day it might be. However, the technology has so far not lived up to expectations. People using ChatGPT or the version included in Bing have found the chatbot can give incorrect information, and it can also behave strangely, especially if you challenge it when it replies with the wrong answer. A similar issue emerged with Google’s rival Bard AI, which returned an incorrect answer to a question during the launch event. This quickly became quite embarrassing for Microsoft and Google, and it proved to a lot of us that AI bots are not quite ready for the limelight.

Can’t live up to the hype, sometimes unreliable and even a bit frustrating? That certainly sounds familiar, so if Microsoft and other companies don’t want history repeating, they’d do well to think twice before rushing to implement AI bots in voice assistants.


Bing chatbot just got smarter – and it’s about to get different AI personalities

Microsoft has deployed a new version of its Bing chatbot (v96) featuring improvements to make the AI smarter in a couple of key areas – and a big change has been flagged up as imminent, too.

Mikhail Parakhin, who heads up the Advertising and Web Services division at Microsoft, shared this info on Twitter (via MS Power User).


So, what’s new with v96? Parakhin explains that users of the ChatGPT-powered Bing will now experience a ‘significant’ reduction in the number of times that the AI simply refuses to reply to a query.

There will also be “reduced instances of hallucination in answers” apparently, which is industry lingo meaning that the chatbot will produce fewer mistakes and inaccuracies when responding to users. In short, we should see less misinformation being imparted by the chatbot, and there have been some worrying instances of that occurring recently.

The other major news Parakhin delivers is that the so-called tri-toggle, known more formally as the Bing Chat Mode selector – featuring three settings to switch between different personalities of the AI Bing – is set to go live in the “next couple of days” we’re told.


Analysis: Long and winding road ahead

The ability to switch between a trio of personalities is the big change for the Bing chatbot, and to hear that it’s imminent is exciting stuff for those who have been engaging with the AI thus far.

As detailed previously, the trio of personalities available are labeled as Precise, Balanced, and Creative. The latter is set to provide a chattier experience, and Precise will offer a shorter, more typical ‘search result’ delivery, with Balanced being a middle road between the two. So, if you don’t like how the AI is responding to you, at least there will be choices to alter its behavior.
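Microsoft hasn’t explained how the three modes are implemented under the hood, but one common way to make a language model terser or chattier is to adjust its sampling temperature, so here’s a purely speculative sketch of what a mode switch could look like. The temperature values and the generate() function are hypothetical stand-ins for illustration, not anything Microsoft has confirmed.

```python
# Speculative illustration only: map chat modes to sampling temperatures.
# Lower temperature = more deterministic, terser output; higher = more varied.
# generate() is a hypothetical placeholder for the actual model backend.

MODE_TEMPERATURE = {
    "Precise": 0.2,   # short, search-result-style answers
    "Balanced": 0.7,  # middle ground
    "Creative": 1.0,  # chattier, more varied phrasing
}

def generate(prompt: str, temperature: float) -> str:
    # A real implementation would sample from a language model here.
    return f"[reply to {prompt!r} sampled at temperature {temperature}]"

def chat(prompt: str, mode: str = "Balanced") -> str:
    return generate(prompt, MODE_TEMPERATURE[mode])

if __name__ == "__main__":
    print(chat("Plan me a weekend in Lisbon", mode="Creative"))
```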

Various versions of the Chat Mode selector have been tested, as you would imagine, and the final model has just been picked. This is now being honed before release, which should happen later this week as noted, but we’re guessing there’ll be plenty of further fine-tuning to be done post-release.

That’s certainly the case if the overall Bing AI experience has been anything to go by: the whole project is, of course, still in its early stages, and Microsoft is chopping and changing things – sometimes in huge ways – seemingly without much caution.

The current tuning for v96 to ensure Bing doesn’t get confused and simply not reply will help make the AI a more pleasant virtual entity to interact with, and the same will hopefully be true for the ability to switch personalities.

At the very least, the Creative personality should inject some much-needed character back into the chatbot, which is what many folks want – because if the AI behaves pretty much like a search engine, then the project seems a bit dry and frankly in danger of being judged as pointless. After all, the entire drive of this initiative is to make Bing something different rather than just a traditional search experience.

It’s going to be a long road of tweaking for the Bing AI no doubt, and the next step after the personalities go live will likely be to lift that chat limit (which was imposed shortly after launch) to something a bit higher to allow for more prolonged conversations. If not the full-on rambles initially witnessed, the ones that got the chatbot into hot water for the oddities it produced…
