Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There's more to the Alexa update than the LLM, as the assistant will also be receiving a host of new features. Below is a list of the five things you absolutely need to know about Alexa's future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the difference in quality on the company's SoundCloud page. The first sample showcases the voice Alexa has had for the past decade, since it first launched. The second clip is what it'll sound like when the update launches next year: the voice enunciates far more clearly, with noticeably more emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you're talking about even if you don't provide every minute detail.

Users can issue vague commands – like saying "Alexa, I'm cold" – to have the assistant turn up the heat in your house. Or you can tell the AI it's too bright in the room and it will automatically dim the lights only in that specific room.
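To make the idea concrete, here's a minimal, purely illustrative sketch of what resolving a vague complaint into a device action might look like. Everything here – the cue phrases, device names, and `resolve` function – is hypothetical, not Amazon's actual implementation, which presumably relies on the LLM rather than keyword matching.

```python
# Hypothetical mapping of vague utterances to smart-home actions,
# mimicking the kind of context resolution described above.
CUE_ACTIONS = {
    "cold": ("thermostat", "raise_temperature"),
    "bright": ("lights", "dim"),
    "dark": ("lights", "brighten"),
}

def resolve(utterance: str, room: str = "living room"):
    """Return a (device, action, room) triple for a vague complaint,
    or None if no cue phrase matches."""
    text = utterance.lower()
    for cue, (device, action) in CUE_ACTIONS.items():
        if cue in text:
            # Scope the action to the room the request came from,
            # so only that room's lights or thermostat are affected.
            return (device, action, room)
    return None

print(resolve("Alexa, I'm cold"))  # ('thermostat', 'raise_temperature', 'living room')
print(resolve("it's too bright in here", room="bedroom"))
```

The point of the sketch is the last parameter: knowing which room the request came from is what lets the assistant dim only the lights in that specific room.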

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day plus you won’t need a smartphone to configure them. It can all be done on the fly. 

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.
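A toy sketch of the first step in handling a compound request like the one above – splitting a single utterance into an ordered list of sub-commands – might look like this. This is an assumption-laden stand-in (simple punctuation splitting) for whatever multi-intent parsing Alexa's LLM actually performs:

```python
import re

def split_commands(utterance: str):
    """Split one compound utterance into an ordered list of sub-commands,
    so each can be dispatched in turn. A crude stand-in for real
    multi-intent natural-language understanding."""
    # Split on commas and the word "and", preserving the original order.
    parts = re.split(r",\s*(?:and\s+)?|\s+and\s+", utterance.strip())
    return [p.strip() for p in parts if p.strip()]

steps = split_commands(
    "turn off the lights, lower the blinds in the house, "
    "and tell the kids to get ready for bed"
)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step}")
```

Preserving the order of the sub-commands is what lets the assistant perform the steps "in that order, on the dot," as described above.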

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [over] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.

TechRadar – All the latest technology news

Windows 11 no longer has Cortana as Microsoft pulls the plug on digital assistant

It’s official – Microsoft has made the move to scrap Cortana in Windows 11, as promised a while back.

If you recall, back in June, Microsoft let us know that Cortana was going to be killed off later in 2023. We then heard a firm date for that to happen, namely August, and the first sightings of some folks seeing Cortana dumped were reported just over a week ago.

And now it seems Microsoft is fully pulling support for the digital assistant.

Windows Central reports that the deprecation of Cortana is fully underway, and you’ll be notified the assistant is no longer available if you try to access it in Windows 11. Along with that notification, a link is provided to a support page where you can learn more about what’s going on here.

The change seems to be rolling out for everyone on Windows 11 now, though some folks may still have access to Cortana – but not for much longer.

Cortana will also be getting the elbow from Microsoft Teams later this year, we’re told, and will only remain in Outlook mobile by the time the end of 2023 rolls around.


Analysis: Curtains for Cortana across the board

Waving goodbye to Cortana won’t be a difficult task for most users. After all, certainly for the general computing population using Windows 11, Cortana wasn’t used much anyway. Microsoft had already angled the digital assistant more towards business use because of this – and Cortana will also be dumped from Microsoft Teams soon enough.

The reason for getting rid of Cortana pretty much everywhere (except Outlook mobile, for some reason) is obvious, and that’s the incoming Windows Copilot AI, a much more ambitious desktop assistant.

This will basically be the Bing AI integrated into a side panel in Windows 11, but with a lot of extra abilities to customize Windows settings in various ways, to save you the trouble of having to hunt for these (options that might be buried deep in submenus somewhere).

Microsoft’s Copilot is already in test builds of Windows 11, but right now, it’s still a barebones incarnation of what the software giant has promised. Meaning it’s pretty much just a built-in Bing chatbot with a few very limited powers to manipulate the Windows 11 environment, though Microsoft is going to build out the latter facets considerably going forward.

Rumor has it Copilot could debut in Windows 11 23H2, and clearing out Cortana before then would make sense in that light. We still have our doubts that Copilot will be impressive enough to launch in just a few short months though (mind you, Bing AI itself not being ready didn’t stop Microsoft launching the chatbot, either).


Google Assistant gets AI boost – but will it make it smarter?

The AI chatbot race is far from over, despite ChatGPT’s current dominance, and Google is not showing any signs of letting up. In fact, reports suggest Google is preparing to “supercharge” Assistant, its virtual personal assistant, by integrating generative AI features similar to the ones found in OpenAI’s ChatGPT and Google’s own generative AI chatbot Bard.

Google has begun development on a new version of Google Assistant for mobile, as stated in an internal email circulated to employees and reported by Axios. This is allegedly going to take place through a reorganization of its present Assistant team, which will see a reduction of “a small number of roles”.

The exact number of employees that are expected to be let go has not been specified, though Axios has claimed that Google has already laid off “dozens” of employees. We have contacted Google to find out more.

Google Assistant

(Image credit: Google)

The newer, shinier, and AI-connected Google Assistant

As reported by The Verge, Google is looking to capitalize on the momentum of the rapid development of large language models (LLMs) like ChatGPT to “supercharge Assistant and make it even better,” according to Google spokesperson Jennifer Rodstrom.

Google is placing a big bet on this Google Assistant gambit, being “deeply committed to Assistant” and its role in the future, according to Peeyush Ranjan, Google Assistant’s vice president, and Duke Dukellis, Google product director, in the email obtained by Axios.

This step in Google’s AI efforts follows Bard’s recent big update, which enabled it to respond to user queries by “talking” (presumably meaning that it will reply using a generated voice, much like Google Assistant does), added support for visual prompts, opened up Bard to more countries, and introduced support for over 40 languages.

Google has not yet revealed what particular features it’s focusing on for Assistant, but there are plenty of ways it could improve its virtual assistant such as being able to respond in a more human-like manner using chatbot-like tech.

Making sure customer data remains safe and protected

Google Assistant is already in many people’s homes thanks to it being included in many devices such as Android smartphones and Google Nest smart speakers (find out how the Google Nest currently compares here), so Google has an extensive number of users to test with. “We’re committed to giving them high quality experiences,” Rodstrom told the Verge.

Of course, this does raise concerns about the privacy and security of its customers, as Google is likely to try and implement changes of this type to its smart home products, and some people may not be comfortable with giving the search giant even more access to their private lives. 

There is also a major concern (which, to be fair, also applies to other chatbots such as ChatGPT): the accuracy of information.

google home

(Image credit: Google)

Tackling the issue of bad information and final thoughts

Google could tackle accuracy and misinformation concerns by linking the generative AI being developed for Google Assistant devices to Google Search, as Bard is not intended to serve as an information source.

In a recent interview, the Google UK executive Debbie Weinstein emphasized that users should double-check the information provided by Bard using Google Search (as reported on by The Indian Express). 

For hands-free Assistant devices specifically, I assume mechanisms of this sort are in development. Otherwise, users would have to carry out a whole interrogation routine with their Assistant devices, which could interrupt the flow of using the device quickly and intuitively.

It’s an enticing idea – the home assistant that can fold your laundry and tell you bedtime stories – and steps like these feel like pushes in that direction. It all comes at a cost, though: the more tech saturates our lives, the more we expose ourselves to those who wish to use it for ill-intentioned purposes.

This is going to be a huge issue for many people, and it should be; Google ought to make just as much of an effort to secure its users’ data as it does performing magic tricks with it. That said, many Google device users and Android users will be looking forward to a more intelligent Google Assistant, as many report that they don’t get much sense from it at the moment. We’ll see if Google can deliver on its proposed steps (hopefully) forward.

Hopefully, these upgrades to both Bard and Google Assistant will make them, well, more intelligent. Putting security and privacy aside (only for a brief moment), this has real potential to make users' home devices, like Nest devices, more advanced in their ability to react to your questions and requests with relevant information and tailor responses using your personal information (responsibly, we hope).


Windows 11 Copilot leak gives us a glimpse of the AI assistant in action

We know Windows 11 is set to get Microsoft’s Copilot built in, and we’ve just caught a glimpse of the AI assistant feature (well, actually, a couple of sightings, and we’ll come back to the other one later).

In case you missed it (unlikely, admittedly), Copilot is the Bing Chat-powered integrated AI that pops up in a side panel to help in Windows 11, and Windows Latest managed to get a peek at an early version (add your own seasoning, and plenty of it, as with any leak).

There’s a big caveat here, namely that the pre-release version of Copilot shown (in a very brief clip) isn’t fully functional by any means.

Still, it gives you a flavor of how the Windows 11 helper – an assistant with a much, much grander vision than Cortana – will perform, and what it can do.

Windows 11 Copilot Pre-release Version

(Image credit: Windows Latest)

We see the user instructing Copilot to turn on Dark Mode (which, ahem, it fails to do – as noted, this isn’t a proper working version), and a response to a food-based question (the queries work in much the same way as with the Bing chatbot already, and the three core personalities for replies are in here, too).

We don’t see much here, and nothing of the really cool tricks that Copilot will eventually be able to do (such as turning on multiple features in one fell swoop to help with a certain aim like ‘being more productive’, or summarizing content to go in an email, right there in the app, in-line).

However, Windows Latest does observe that Microsoft will use in-house plug-ins to customize the Bing Chat experience in Windows 11, and that Copilot will utilize a system of “action cards” to detect how you are using the OS, and offer up intelligent suggestions based on that.


Analysis: Where art thou, Copilot?

Okay, so while this glimpse of Microsoft’s AI is still very much early work, and not very exciting, it’s a useful hint that Copilot is ticking along progress-wise. Because we’ve not heard anything from Microsoft since the initial announcement of the AI, when we were told that it’d be in testing in June.

Now, June is almost over, and it seems unlikely that a preview build is going to show up later this week with a functional Copilot doing its query answering and settings manipulating stuff.

That said, we’ve caught not only this sighting of Copilot from Windows Latest, but there was another one at the weekend. That was provided by regular Twitter-based leaker Albacore, who pointed out that recent Windows 11 preview builds in the Dev channel have a Windows Copilot button (hidden – and when enabled, it doesn’t do anything, mind).

That’s another hint that things are coming into place for Copilot’s release to be tested in preview. However, we’ve got a feeling this will need a lot of internal testing before it gets to Windows Insiders. As the blurb in the Copilot side panel observes, it’s AI-powered, and “surprises and mistakes are possible”.

When it comes to a Bing chatbot query, a mistake is embarrassing enough, but with an AI embedded right into the heart of Windows 11, Microsoft is going to need to take a lot more care to avoid any potential blunders – even in testing.


Google Assistant ends Notes support for third parties, paving the way for other AIs

Google giveth and taketh away as the tech giant will roll back Google Assistant’s Note & List integration for third-party apps on June 20. 

This means that moving forward it will no longer be possible to create or edit notes or lists via Assistant voice commands outside of first-party software like Google Keep. People first caught wind of this update from affected third parties like AnyList which sent a notice to its users informing them of the changes. The developers understand “the loss of this feature [will be] frustrating” for many and they’re currently communicating with Google to get everything straightened out. They hope one day their app will regain Google Assistant support. However, at the time of this writing, AnyList had nothing more to share.

Amazon Alexa and Siri will still support third-party integration, so you do have some options available (be sure to check out our smart speaker guide if you plan on making the jump).

As you can probably easily imagine, people aren’t happy. A thread on the Google Home Subreddit, for example, is filled to the brim with people frustrated at the sudden shutdown. Online reports pointed out it is still possible to export old notes from third-party apps via Google Takeout. We highly recommend exporting your files now before it’s too late. 

Analysis: the diminishing Google Assistant?

As jarring as the news may be, it isn’t coming out of nowhere as the company has been making a lot of disruptive changes to Google Assistant. Back in June 2022, the company announced it would be ending Assistant’s Conversational Actions for third-party developers on June 13, 2023. And more recently, Google “ended software support for third-party smart displays from Lenovo, JBL, [as well as] LG”, effectively killing off that part of its business. But the question is why?

It could be because Google is shifting its focus away from Assistant towards generative AI. The company did recently open the doors of its Search Generative Experience to testers, but it’s hard to say with total certainty. 2023 has certainly been a weird year for Assistant. 2022 saw the AI get some sizable updates like Quick Phrases, but this year, Google saw fit to shut down crucial support in key areas.

We reached out to Google to see if it would like to make a statement about the sudden shutdown of the Notes & List support. Hopefully, the company can shed some light on the matter. This story will be updated at a later time. If the current course continues, we may see a new Assistant headstone in the Google Graveyard before long.  
