Truecaller’s new feature can turn your voice into a personal secretary

Caller ID service Truecaller is giving users the ability to create a digital assistant that has their voice and can respond on their behalf. If you’re unfamiliar with the app, Truecaller launched its AI Assistant feature in 2022 to screen phone calls and take messages, among other things. Up to this point, it utilized pre-made voices, but thanks to the power of Microsoft’s Azure AI Speech, you can now use your own.

Setting up your voice within Truecaller is quite easy; you just need a subscription to Truecaller Premium, which is $9.99 a month per account. Once that's in place, the software will immediately ask you to select an AI assistant – but instead of picking one of the pre-made personalities, select "Add your Voice."

You’ll then be asked to read a consent sentence and a brief training script out loud into your smartphone’s microphone. Doing so ensures the AI has a voice that mimics your “speaking style.” When done, Truecaller states that Microsoft’s Azure Custom Voice begins to process the recording to create a “high-quality digital replica.” The app will give you a demo sound bite to help you imagine what it’ll sound like when someone calls you. 

Truecaller's training script

(Image credit: Truecaller)

Robo-voice

Keep in mind the technology isn't perfect. While the digital assistant may sound like you, it does come across as rather robotic. The company published a YouTube video on its official channel showing what the AI sounds like. Admittedly, the software does a decent job of mimicking a person's vocal inflections; however, responses still sound stiff. That said, it's an interesting and interactive way to screen incoming calls, especially for stopping spam.

Keep an eye out for the update when it arrives; we tried to create our own digital secretary on an Android phone but couldn't, as the feature hasn't reached our device yet. It's unknown exactly when and where the update will be available. TechCrunch claims the tool will roll out "over the next few weeks" as a public beta across a small selection of countries, including the US, Canada, Australia, and Sweden. Soon after, it'll become widely available "to all users in the eligible markets."

We also reached out to Truecaller with a couple of questions, including how recordings are stored, whether they are saved on the device or uploaded to company servers, and more. If we hear back, this story will be updated.

While we have you, check out TechRadar's roundup of the best encrypted messaging apps on Android for 2024.

TechRadar – All the latest technology news

Good news for Mac users wanting to run Windows apps: VMware Fusion Pro 13 is now free for personal use

The Windows 11 virtualization tool for Mac, VMware Fusion Pro 13, is now free for personal use, as the software developer has waived the previous $199 fee.

Announced in a blog post, VMware Fusion Pro 13 creates a virtual Windows machine for macOS devices, allowing you to run Windows apps on the likes of MacBooks and iMacs powered by Apple's M-class silicon.

Without question, it's among the best virtual machine software available, but its price tag previously alienated casual consumers. Professional usage will still require a license, but if you just want to boot it up and play around with the software, you can now do so without spending a cent.

Keep in mind that running VMware Fusion Pro 13 on Apple's own silicon, such as the M2 and M3 chips, means you'll be restricted to the performance of the SoC. While the current slew of Apple laptops and desktops are powerful, with respectable integrated graphics, they can't quite hold a candle to what the best graphics cards can do.

To use VMware Fusion Pro 13, you'll need an account, which can be created through the Broadcom support website; you'll then be able to download the software. The news is bittersweet considering that the company's Fusion Player is being discontinued, but you're getting a big upgrade.

Unlike the Fusion Player, you'll be able to run multiple virtual machines with Fusion Pro 13, meaning you can essentially have your own virtual network localized on one device. That's exciting news for building and launching servers, or for cloud computing, among other uses. 

VMware Fusion Pro 13

(Image credit: VMware)

An excellent pro-consumer move

VMware's decision to make its Fusion Pro 13 software free is an excellent move on the company's part to gain visibility for the application. While there's no faulting the performance capabilities, asking $200 at the gate seriously limited the overall install base. Now people who were using Fusion Player can get the full-fat user experience at no charge.

We've had excellent things to say about VMware Workstation Player over the years and consider it to be the best virtual machine software on the market. Now that Fusion Pro 13 is free, it gives the likes of VirtualBox (also free) and Parallels Desktop a run for their money – especially as you aren't spending anything either.


ChatGPT might get its own dedicated personal device – with Jony Ive’s help

Sam Altman, the CEO of ChatGPT developer OpenAI, is reportedly seeking funding for an AI-powered, personal device – perhaps not unlike the Humane AI Pin – and ex-Apple design guru Jony Ive is apparently getting involved as well.

This is as per The Information (via MacRumors), and the rumor is that Altman and Ive have started a “mysterious company” together to make the device a reality. The report doesn't mention much about the hardware, except to say it won't look like a smartphone.

As we've seen with the Humane AI Pin and the Rabbit R1, having an AI assistant running on a device means you don't necessarily need a display and traditional apps – the artificial intelligence engine can do everything for you, no tapping or scrolling required.

Altman and Ive are said to be seeking around $1 billion in funding, so this is clearly a major undertaking we're talking about. It's not clear how much involvement OpenAI would have, but its ChatGPT bot would most likely be used on the new device.

Previous rumors

A close up of ChatGPT on a phone, with the OpenAI logo in the background of the photo

ChatGPT could find itself in a new device (Image credit: Shutterstock/Daniel Chetroni)

This hasn't come completely out of the blue: back in September The Financial Times reported that Altman and Ive were “in talks” to get funding for a new project from SoftBank, a Japanese investment company.

SoftBank has a stake in CPU company Arm, which might be tapped to provide components for the hardware – which can't run entirely on AI cloud magic of course. All this is speculation for the time being, however.

In January, Sam Altman was spotted touring around a Samsung chip factory, so all the indications are that he's planning something in terms of physical hardware. It remains to be seen just how advanced this hardware is though.

During his time with Apple, Jony Ive led the design teams responsible for the iPod, iPhone, iPad and MacBook, so whatever is in the pipeline, we can expect it to look stylish. We can also expect to hear more about this intriguing device in the years ahead.


Google Gemini’s new Calendar capabilities take it one step closer to being your ultimate personal assistant

Google’s new family of artificial intelligence (AI) generative models, Gemini, will soon be able to access events scheduled in Google Calendar on Android phones.

According to 9to5Google, Calendar events were on the "things to fix ASAP" list of Jack Krawczyk, Google's Senior Director of Product Management for Gemini Experiences – items Google would be working on to make Gemini a better-equipped digital assistant.

Users who have the Gemini app on an Android device can now expect Gemini to respond to voice or text prompts like "Show me my calendar" and "Do I have any upcoming calendar events?" When 9to5Google tried this a week earlier, Gemini responded that it couldn't fulfill those types of requests – a noticeable gap, as such requests are commonplace for rival (non-AI) digital assistants such as Siri or Google Assistant. When the same prompts were attempted this week, however, Gemini opened the Google Calendar app and fulfilled the requests. To enter a new event using Gemini, it seems you need to tell it something like "Add an event to my calendar," after which it prompts you to fill out the details using voice commands.

Google Calendar

(Image credit: Shutterstock)

Going all in on Gemini

Google is clearly making progress in setting up Gemini as its all-in-one AI offering (including as a digital assistant, eventually replacing Google Assistant). It still has quite a few steps to go before it manages that, with users asking for features like the ability to play music or edit their shopping lists via Gemini. Another significant hurdle for Gemini to clear if it wants to become popular is that it's only available in the United States for now.

The race to become the best AI assistant has gotten a little more intense recently between Microsoft with Copilot, Google with Gemini, and Amazon with Alexa. Google did recently make some pretty big strides in compressing its larger Gemini models so they can run on mobile devices, and those more capable models could give Gemini a major boost. Google Assistant is already widely recognized, and this is another feather in Google's cap. I'm hesitant to place a bet on any single one of these digital AI assistants, but if Google continues at this pace with Gemini, I think its chances are pretty good.


Google Maps is getting an AI-boosted upgrade to be an even better navigation assistant and your personal tour guide

It looks like Google is going all-in on artificial intelligence (AI), and following the rebranding of its generative AI chatbot from Bard to Gemini, it’s now bringing generative AI recommendations to Google Maps.

The AI-aided recommendations will help Google Maps perform even better searches for a variety of destinations, and the feature is also supposedly able to function as an advisor that can offer insights and tips about things like location, budgets, and the weather. Once the feature is enabled, it can be accessed through the search function, much like existing Google Maps features. Currently, it’s only available for US users, but hopefully, it will roll out worldwide very soon. 

This upgrade of Google Maps is the latest move in Google’s ramped-up AI push, which has seen developments like AI functionality integrated into Google Workspace apps. We’ve also had hints before that AI features and functions were coming to Google Maps – such as an improved Local Guides feature. Local Guides is intended to synthesize local knowledge and experiences and share them with users to help them discover new places.

What we know about how this feature works

Android Police got a first look at how users are introduced to the new AI-powered recommendations feature. A reader got in touch with the website and explained how they were given the option to "Search with generative AI" in their Google Maps search bar. When selected, it opened a page that detailed, in a short onboarding exercise, how the new feature uses generative AI to provide recommendations. Tapping Continue opens the next page, which provides a list of suggested queries, such as nearby attractions for killing time or good local restaurants.

Similarly to ChatGPT, Google Maps apparently also includes tips toward the bottom of that page to help you improve your search results. Users can add more details to fine-tune their search, like their budget, a place or area they might have in mind, and what the weather looks like for when they're planning to go. If you select one of these suggested queries, Google Maps will then explain how it would go about selecting specific businesses and locations to recommend.

When the user doesn’t specify an area or region, Google Maps resorts to using the user’s current location. However, if you’d like to localize your results to an area (whether you’re there or not), you’ll have to mention that in your search.

After users try the feature for the first time and go through the short onboarding in Maps, they can access it instantly through the search menu. According to Android Police, Search with generative AI will appear below the horizontal menu that lists your saved locations such as Home, Work, and so on.

A promising feature with plenty of potential

Again, this feature is currently restricted to people in the US, but we hope it’ll open up to users in other regions very soon. Along with AI recommendations, Google Maps is also getting a user interface redesign aimed at upgrading the user experience.

While I get that some users might be getting annoyed or overwhelmed with generative AI being injected into every part of our digital lives, this is one app I'd like to try when equipped with AI. Also, Google is very savvy when it comes to improving the user experience of its apps, and I’m keen to see how this feature’s introduction plays out.


ChatGPT could become a smart personal assistant helping with everything from work to vacation planning

Now that ChatGPT has had a go at composing poetry, writing emails, and coding apps, it's turning its attention to more complex tasks and real-world applications, according to a new report – essentially, being able to do a lot of your computing for you.

This comes from The Information (via Android Authority), which says that ChatGPT developer OpenAI is working on “agent software” that will act almost like a personal assistant. It would be able to carry out clicks and key presses as it works inside applications from web browsers to spreadsheets.

We've seen something similar with the Rabbit R1, although that device hasn't yet shipped. You teach an AI how to calculate a figure in a spreadsheet, or format a document, or edit an image, and then it can do the job for you in the future.

Another type of agent in development will take on online tasks, according to the sources speaking to The Information: These agents are going to be able to research topics for you on the web, or take care of hotel and flight bookings, for example. The idea is to create a “supersmart personal assistant” that anyone can use.

Our AI agent future?

The Google Gemini logo on a laptop screen that's on an orange background

Google is continuing work on its own AI (Image credit: Google)

As the report acknowledges, this will certainly raise one or two concerns about letting automated bots loose on people's personal computers: OpenAI is going to have to do a lot of work to reassure users that its AI agents are safe and secure.

While many of us will be used to deploying macros to automate tasks, or asking Google Assistant or Siri to do something for us, this is another level up. Your boss isn't likely to be too impressed if you blame a miscalculation in the next quarter's financial forecast on the AI agent you hired to do the job.

It also remains to be seen just how much automation people want when it comes to these tasks: Booking vacations involves a lot of decisions, from the position of your seats on an airplane to having breakfast included, which AI would have to make on your behalf.

There's no timescale on any of this, but it sounds like OpenAI is working hard to get its agents ready as soon as possible. Google just announced a major upgrade to its own AI tools, while Apple is planning to reveal its own take on generative AI at some point later this year, quite possibly with iOS 18.


Meta Quest 3 may have the ability to turn any table into your personal VR keyboard

Meta CEO Mark Zuckerberg recently took to Instagram to preview a potential virtual keyboard feature for Quest headsets.

Posted on his official account, the short clip shows Zuckerberg and Meta CTO Andrew Bosworth typing away on a VR keyboard while wearing Quest 2 headsets. The device was able to accurately track their finger movements and display what they were writing on screen without requiring any extra peripherals. According to Zuckerberg, he was able to achieve 100 wpm (words per minute) while Bosworth hit 120 wpm. To put that into perspective, the average typing speed for an adult is 40 wpm, so the system performs impressively.

If development bears fruit, it could solve a longstanding problem with virtual reality. 

Typing in VR is a slow process. You’re forced to enter inputs one at a time since floating VR keyboards can't match the speed of a physical device. Sure, you can purchase one of the best physical keyboards out there to get the speed that you want. But then you’re forcing yourself to carry around an extra peripheral alongside the VR headset just to get the user experience you want. Things can get cumbersome.

A work in progress

There is still work to be done over at Meta’s Reality Labs research unit where this tech was developed. 

News site UploadVR points out in its report that the headset requires "fiducial markers" to work properly. Fiducial markers are the black and white squares you see in the Instagram video; they help the hardware calibrate itself so it knows where to place the virtual keyboard. The end goal is to one day not need those squares, so the headset can project the keeb onto any flat-enough surface.

We do worry about typing feel, though. Similar technology already exists in laser keyboards that project keys onto a flat surface, and typing on those feels terrible because you're just mashing your fingers into a table – we fear Meta's feature will essentially be the same thing. That may be fine for the occasional email, but we can't imagine using a VR keyboard for an entire day's work.

VR peripherals

It's important to mention Meta is holding a two-day Connect virtual event from September 27 to 28. It's been confirmed the Quest 3 headset will make its debut at Connect, and perhaps a beta test for the VR keyboard will be announced then. An official launch date seems unlikely. As stated earlier, there's still work to be done.

We’re also curious to know if the company will finally show off its wristband device at the event.

If you’re not aware, Meta has been working on a wristband gadget that can read the electrical signals in a person’s arm to register inputs. The latest trailer for this gadget shows it can be used for simple gestures like twitching your finger to control a video game avatar. However, back in 2021, an earlier prototype displayed the ability to function as a virtual keyboard by using the same electrical signals. It’s unknown at this time if Meta scrapped the wristband feature in favor of the headset keyboard or if it’s still in the works.

Be sure to check out TechRadar’s list of the best wireless keyboards if you’re looking for a keeb to pair up with your Quest headset. 
