WhatsApp is testing a new self-destructing voice messages feature

WhatsApp is currently testing a View Once mode for voice messages as a “new layer of privacy” on the mobile app.

The feature functions similarly to the disappearing images and videos already present on the platform; Meta is simply expanding the idea elsewhere. According to WABetaInfo, a new icon sporting the number one will appear in the chat bar while you record a voice note with the lock on. Tapping that icon enables View Once mode (in practice it's more like Listen Once), preventing recipients from exporting, forwarding, saving, or recording the message. Once it's sent, you, the sender, can no longer listen to it, and the recipient can't play it again after the first time. It's gone forever.
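To picture the mechanics, here's a minimal, purely hypothetical sketch of a "listen once" voice note modeled as a data structure; the class, field names, and behavior below are our own illustration, not WhatsApp's actual implementation.

```python
# Hypothetical sketch of a "listen once" voice note - not WhatsApp's code.
from dataclasses import dataclass, field


@dataclass
class VoiceNote:
    audio_url: str
    view_once: bool = False
    played: bool = field(default=False, init=False)

    def can_forward(self) -> bool:
        # View Once notes can't be exported, forwarded, or saved.
        return not self.view_once

    def play(self) -> str | None:
        # A View Once note plays a single time, then becomes unavailable.
        if self.view_once and self.played:
            return None
        self.played = True
        return self.audio_url


note = VoiceNote("https://example.com/note.opus", view_once=True)
print(note.play())  # first listen returns the audio
print(note.play())  # second attempt returns None - it's gone
```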

WhatsApp Listen Once voice messages

(Image credit: Future)

As WABetaInfo points out, this tool has the potential to effectively eliminate “the risk of your personal or sensitive information falling into the wrong hands.” Messages can’t be shared with people outside the initial chat room, greatly reducing the odds “of unauthorized access.”

This update is available for both Android and iOS. If you want to try it out yourself, on Android you can join the Google Play Beta Program and install version 2.23.78 of the WhatsApp beta. On iPhone, you can try joining the TestFlight program for WhatsApp; however, at the time of writing it isn't accepting new entrants, although a slot could open up soon.

Going quiet

As for the future of WhatsApp, things will be getting a little quiet. None of the other beta features are as impactful or noteworthy as the self-destructing voice messages. Looking through WABetaInfo’s other posts, we saw that Meta is working on implementing avatar reactions plus a redesigned audio and video menu for iOS. Nothing really ground-breaking.

It’s not surprising the platform is going silent at the moment, as 2023 has been quite the year for WhatsApp. It’s seen multiple major updates over the past 10 months or so, from several quality-of-life changes to eight-person video calls on the Windows desktop app. And recently, the company began testing an AI-powered sticker generator for chats. Perhaps Meta is keeping its projects under wraps so it can kick off 2024 in a big way.

While we have you, be sure to follow TechRadar’s official WhatsApp channel. We post our latest reviews and news stories there daily.


Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed that Alexa will be getting a major upgrade, as the company plans to implement a new large language model (LLM) in the voice assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There is more to the Alexa update than the LLM, as the assistant will also be receiving plenty of new features. Below are the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s SoundCloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like next year when the update launches. You can hear the voice enunciate much more clearly, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail.

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein as understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day, and you won’t need a smartphone to configure them; it can all be done on the fly.

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.
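As a rough illustration of what an ordered, time-triggered routine like that might look like under the hood, here's a hypothetical sketch; the schema and action names are invented for this example and are not Amazon's actual routine format.

```python
# Hypothetical sketch of a time-triggered routine with ordered steps.
# The schema and action names are invented; Amazon's real format differs.
bedtime_routine = {
    "name": "Kids' bedtime",
    "trigger": {"type": "time", "at": "21:00"},  # fires at 9 pm on the dot
    "steps": [                                   # executed in this exact order
        {"action": "lights.off", "target": "whole_house"},
        {"action": "blinds.lower", "target": "whole_house"},
        {"action": "announce", "message": "Time to get ready for bed!"},
    ],
}


def run(routine: dict) -> None:
    # Walk the steps in order; a real assistant would dispatch each action
    # to the relevant smart home device or speaker.
    for step in routine["steps"]:
        print(f"Running {step['action']}: {step}")


run(bedtime_routine)
```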

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more than] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.


New Final Cut Pro update brings Mac Studio support, Voice Isolation, and more

Apple has released the next version of Final Cut Pro, version 10.6.2, which introduces a host of improvements and new features, including optimizations for Mac Studio workstations as well as new Duplicate Detection and Voice Isolation tools.

For the Mac Studio, the update is somewhat vague, indicating only that playback and graphics performance have been optimized on both the M1 Max and M1 Ultra versions of the machine, but users should see an overall improvement in performance.

Duplicate detection, as the name suggests, searches for duplicate ranges within the timeline and marks the clips for easier editing, something that would be especially helpful for long-form content. 
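To give a sense of the general idea, here's a simplified sketch that flags clips drawn from overlapping ranges of the same source file; it illustrates the principle only and is not Apple's algorithm.

```python
# Simplified sketch of duplicate-range detection - not Apple's implementation.
from itertools import combinations

# Each timeline clip references a source file and the range it uses (seconds).
clips = [
    {"id": "A", "source": "interview.mov", "start": 10.0, "end": 45.0},
    {"id": "B", "source": "broll.mov", "start": 0.0, "end": 20.0},
    {"id": "C", "source": "interview.mov", "start": 30.0, "end": 60.0},
]


def overlaps(a: dict, b: dict) -> bool:
    # Two clips count as duplicates if they come from the same source file
    # and their time ranges intersect.
    return a["source"] == b["source"] and a["start"] < b["end"] and b["start"] < a["end"]


duplicates = [(a["id"], b["id"]) for a, b in combinations(clips, 2) if overlaps(a, b)]
print(duplicates)  # [('A', 'C')] - the two interview clips share a range
```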

Voice Isolation, meanwhile, uses machine learning to isolate voice frequencies from other sounds in the background.

Other improvements to Final Cut Pro include updates to its companion apps Motion and Compressor, various updates to Tracker Options/Object Tracker, and other performance and reliability enhancements. 

iMovie has also been updated to version 3.0, which adds the Storyboards and Magic Movie features.

The full release notes for the update are:

Via 9to5mac


WhatsApp will soon help avoid the embarrassment of sending the wrong voice message

WhatsApp’s voice messaging feature will soon get some nifty updates to help compose, send, and listen to those convenient audio messages.

The Meta-owned app is the main mode of communication for over two billion people every month. It’s free, highly accessible, and end-to-end encrypted, making it an important app for users around the globe to connect with family and friends. These updates could enhance the already pretty good voice messaging feature of the app by helping avoid miscommunications in audio messages and helping listeners speed through long-winded conversations. 

Included features are:

WhatsApp didn’t provide a release date in the announcement or information about which platforms it will arrive on and in what order, but you can expect the features to roll out over the next few weeks. 

WhatsApp Voice Message Update

(Image credit: WhatsApp)

 Analysis: WhatsApp stays on top for a reason 

 The updates that Meta steadily brings to WhatsApp aren’t anything groundbreaking, and that’s by design. There’s a reason that the app continues to be the most popular global messaging app out there. 

Small features brought about in incremental updates maintain the app’s ease of use by not getting in the way of how the app’s two billion monthly active users already interact with it.

With these updates, it looks like the majority of the interface remains the same, and the new draft previews will help users avoid sending messages that weren’t ready yet. It’s the little things that count the most.


Meta Builder Bot concept happily builds virtual worlds based on voice description

The Metaverse, that immersive virtual world where Meta (née Facebook) imagines we'll work, play, and interact with friends and family, is also where we may someday build entire worlds with nothing but our voice.

During an online AI development update delivered, in part, by Meta/Facebook Founder and CEO Mark Zuckerberg on Wednesday (February 23), the company offered a glimpse of Builder Bot, an AI concept that allows the user to build entire virtual experiences using their voice.

Standing in what looked like a stripped-down version of Facebook's Horizon Worlds' Metaverse, Zuckerberg's and a co-worker's avatars asked a virtual bot to add an island, some furniture, clouds, a catamaran, and even a boombox that could play real music in the environment. In the demonstration, the command phrasing was natural and the 3D virtual imagery appeared instantly, though it did look a bit like the graphics you'd find in Nintendo's Animal Crossing: New Horizons.

The development of Builder Bot is part of a larger AI initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants.

Meta's Builder Bot concept

Mark Zuckerberg’s legless avatar and Builder Bot. (Image credit: Future)

Zuckerberg explained that current technology is not yet equipped to help us explore an immersive version of the internet that will ultimately live in the Metaverse. While that will require updates across a whole range of hardware and software, Meta believes AI is the key to unlocking advancements that will lead to, as Zuckerberg put it, “a new generation of assistants that will help us explore new worlds”.

“When we’re wearing [smart] Glasses, it will be the first time an AI system will be able to see the world from our perspective,” he added. A key goal here is for the AI they're developing to see as we do and, more importantly, learn about the world as we do, as well.

It's unclear if Builder Bot will ever become a true part of the burgeoning Metaverse, but its skill with real-time language processing and understanding how parts of the environment should go together is clearly informed by the work Meta is doing.

Mark Zuckerberg talks AI translation

Mark Zuckerberg talks AI translation (Image credit: Future)

Zuckerberg outlined a handful of other related AI projects, all of which will eventually feed into a Metaverse that can be accessed and used by anyone in the world.

These include “No Language Left Behind,” which, unlike traditional translation that often uses English as a mid-translation point, can translate languages directly from the source to the target language. There's also the very Star Trek-like “Universal Speech Translator”, which would provide instantaneous speech-to-speech translation across all languages, including those that are primarily spoken rather than written.

“AI is going to deliver that in our lifetimes,” said Zuckerberg.

Mark Zuckerberg talks image abstraction

Mark Zuckerberg talks image abstraction (Image credit: Future)

Meta is also investing heavily in self-supervised learning (SSL) to build human-like cognition into AI systems. Instead of training with tons of images to help the AI identify patterns, the system is fed raw data and then asked to predict the missing parts. Eventually, the AI learns how to build abstract representations.
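To make that idea concrete, below is a toy masked-prediction training loop; random vectors stand in for raw data and a tiny network stands in for the large models Meta actually trains, so treat it as an illustration of the principle rather than anything Meta ships.

```python
# Toy sketch of masked-prediction self-supervised learning (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(32, 64)             # a batch of raw, unlabeled inputs
    mask = torch.rand_like(x) < 0.25   # hide roughly a quarter of each input
    corrupted = x.masked_fill(mask, 0.0)

    pred = model(corrupted)
    # The training signal comes from the data itself: reconstruct the hidden parts.
    loss = ((pred - x)[mask] ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```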

An AI that can understand abstraction could complete an image just from a few pieces of visual information, or generate the next frame of a video it's never seen. It could also build a visually pleasing virtual world with only your words to guide it.

For those full-on freaked out by Meta's Metaverse ambitions, Zuckerberg said that the company is building the Metaverse for everyone and they are “committed to build openly and responsibly” while protecting privacy and preventing harm.

It's unlikely anyone will take his word for it, but we look forward to watching the Metaverse's development.


Your WhatsApp voice calls are getting a needed overhaul for iOS and Android

WhatsApp is testing a new in-call look on both iOS and Android, which uses waveforms to show who's speaking during a group call, alongside a more modern design.

The company has been working on improvements across the app for the last year, with multi-device support, a desktop app for Windows 11, and more to better rival other messaging apps.

But until now, calling in WhatsApp has been relegated to the standard interface that iOS and Android offer to third-party apps with calling features.

However, in version 22.5.0.70, currently available to beta testers, the new look for calling in the app is going to benefit group calls more than those that are one-to-one.


Analysis: Making your voice calls look much better

WhatsApp audio wave form call

(Image credit: WABetaInfo)

For years, the in-call interface on iOS and Android has barely seen any improvement over its first versions. While iOS 14 brought a compact view for when you're in a call, the full-screen view has remained relatively unchanged.

More users are choosing to make calls through apps such as WhatsApp and Skype, especially group calls, which is why an update to WhatsApp's interface is welcome.

Here, you've got an elegant design that shows who's speaking thanks to audio waveforms, alongside three options that are available to you at all times: mute, end the call, or switch to loudspeaker.

It's a modern design that only goes to show how much of an update the call screen in iOS and Android needs, especially for group calls.

Via WABetaInfo


Windows 11 now lets you type with your voice

Windows 11 has a new preview build which extends voice control capabilities to allow typing on the virtual keyboard.

Voice access is a feature which was introduced in testing for Windows 11 in December 2021, allowing for a range of different voice controls including the ability to operate mouse clicks with your voice – so adding the same functionality for the touch keyboard in this new build 22538 makes sense.

The way it works is simple: open the virtual keyboard with a command, and each key has a number on it. To press a key, you simply say “click 27” if you want number 27 (which is the letter ‘s’), for example. You can also easily access numbers, punctuation or emoji.
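Conceptually, the overlay is just a numbering of the on-screen keys plus a parser for the spoken command; the sketch below is a made-up illustration of that mapping, and the layout (and which number lands on which key) is invented rather than Microsoft's actual assignment.

```python
# Made-up illustration of mapping "click N" commands to numbered keys.
# The key layout and numbering here are invented, not Windows' own.
import re

KEYS = list("qwertyuiopasdfghjklzxcvbnm") + ["space", "backspace", "enter"]
NUMBERED_KEYS = {i + 1: key for i, key in enumerate(KEYS)}


def handle_command(utterance: str) -> str | None:
    # Parse commands of the form "click 27" and return the matching key.
    match = re.fullmatch(r"click (\d+)", utterance.strip().lower())
    if not match:
        return None
    return NUMBERED_KEYS.get(int(match.group(1)))


print(handle_command("click 27"))  # -> "space" in this invented layout
```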

Microsoft further notes that it's starting to roll out the ability to download Speech Packs (from the Microsoft Store) for “device-based speech recognition that provides a better performance of transcription.”

Also present in this fresh preview release for the Dev Channel is some useful work with Alt-Tabbing and the Task View (the focus rectangle highlighting what’s selected now uses your chosen accent color), and a bunch of other minor changes and bug fixes as ever, all of which are listed in Microsoft’s blog post.


Analysis: Making Dragon a more fiery beast

As we’ve pointed out before, the Windows 11 voice access features are pretty much all drafted across from Nuance’s Dragon speech recognition app (Microsoft bought Nuance last year).

What’s interesting with the on-screen touch keyboard controls brought in with this preview build is that this is a new endeavor not seen in Dragon (at least, not in the version we use – namely Dragon Professional 15, which is the latest release). There are some ways of using your voice to control keys in Dragon 15, but they’re limited (to the likes of function keys, Tab and backspace).

We can expect Microsoft to further build on voice access as Windows 11 matures, and we’re keen to know what comes next. Further honing dictation accuracy – which is already admittedly good – would be great to see.


WhatsApp is making voice messages look more exciting

Voice messages are, by their very nature, an audio experience – but this is something that WhatsApp is looking to change. When you receive a voice message from a contact (or, indeed, if you send one yourself), you will be used to seeing a progress bar during playback.

This is a handy visual aid that helps let you know how long a message is and how much more there is to listen to. But now the feature is getting a bit of an upgrade to make it more visually appealing thanks to voice waveforms.

We've seen WhatsApp playing around with waveforms previously, with Android users who are signed up for the beta program having been given a sneaky glimpse at the feature. But it was only a brief look, as voice waveforms were swiftly disabled without a word of explanation.

However, they appear to be back. The fact that the new visual accompaniment to voice messages is now available for iOS and Android beta testers (but still only beta testers) could be indicative of the feature being almost complete and ready for an even wider rollout. But what's all the fuss about?

Sound and vision

On one hand, these are just pretty animations to watch while you listen to a message you have received. On the other, they are helpful visual tools that can be reassuring when there is a period of silence in a message; if there is no activity in the waveform, you can safely assume that there is no sound to hear, rather than there being a problem with your speakers… or ears.

As is often the case with WhatsApp, although this new feature is being made available to beta testers, it is not necessarily going to be available to all of them immediately. It's something that's controlled server-side, so while ensuring that you have the latest version of the app installed is undoubtedly a good idea, it's sadly no guarantee of getting access to voice waveforms right now.
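In practice, "controlled server-side" usually means the app asks a server which features are enabled for a given account regardless of the installed version; the sketch below illustrates that general pattern and has nothing to do with WhatsApp's actual infrastructure.

```python
# Generic illustration of a server-side feature flag check - not WhatsApp's code.
# The flag store stands in for whatever the server would return per account.
SERVER_FLAGS = {
    "user_123": {"voice_waveforms": True},
    "user_456": {"voice_waveforms": False},
}


def is_enabled(user_id: str, feature: str) -> bool:
    # Even on the latest app version, the feature only shows up if the
    # server has switched it on for this particular account.
    return SERVER_FLAGS.get(user_id, {}).get(feature, False)


print(is_enabled("user_123", "voice_waveforms"))  # True
print(is_enabled("user_456", "voice_waveforms"))  # False
```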

WABetaInfo reports that WhatsApp beta for Android 2.21.25.11 and WhatsApp beta for iOS 2.21.240.18 are compatible with the feature, so make sure that you have one of these installed for the best possible chance of getting to try out the new feature.
