Mercedes-Benz is bringing ChatGPT into cars for the first time

Luxury car brand Mercedes-Benz is outfitting its MBUX Voice Assistant with ChatGPT as part of a new US-only beta program. Joining the beta will allow drivers of over 900,000 vehicles equipped with MBUX to hold “more dynamic” conversations with the onboard AI.

In the official announcement post, the company states it's seeking to improve its voice assistant beyond “predefined tasks and responses”. ChatGPT’s own large language model would “greatly improve [MBUX’s] natural language understanding [to] expand the topics to which it can respond.” So not only will customers be able to give voice commands, but they can also ask the AI for detailed information about their destination or suggestions for a new dinner recipe. 

ChatGPT in a Mercedes-Benz car

(Image credit: Mercedes-Benz)

Security

To make the program possible, Mercedes is incorporating Microsoft’s Azure OpenAI Service in the rollout, ensuring, according to the auto manufacturer, “enterprise-grade security, privacy, and reliability”. Conversation data will be collected and then stored in the Mercedes-Benz Intelligent Cloud where it will be “anonymized and analyzed.” All IT processes will be controlled by the company as it promises to protect “all customer data from… misuse.” Microsoft won’t have any access.

If you want to see it in action before installation, tech news site Electrek recently published a couple of videos showing off the upgraded MBUX. It uses both the dashboard screen and its onboard voice to deliver answers. When asked for suggestions for the best local beaches, the AI displayed a text list of nearby locations before recommending activities like surfing. It can even tell jokes, although they’re pretty terrible.

Availability

The beta program starts June 16 in the United States only, as stated earlier. To get started, eligible customers must first say “Hey Mercedes, I want to join the beta program” as a command to MBUX. From there, it’ll teach you how to install the ChatGPT patch. It appears part of the onboarding process includes connecting a mobile device to the AI. A full list of vehicles supporting the beta is available on the company’s website. In total, there are over 25 models ranging from sedans to SUVs.

ChatGPT on the Mercedes-Benz app

(Image credit: Mercedes-Benz)

The beta program should last three months. After that, it’ll go offline for an indeterminate amount of time, and Mercedes will use the data it collects to improve the AI for an eventual full launch. It’s unknown whether either the beta or the final version will be available in other regions or in languages besides English.

We reached out to Mercedes-Benz for more information on the launch. This story will be updated at a later time.

Having a generative AI at your beck and call giving you travel suggestions sounds pretty useful and could lead to a lot more fruitful sightseeing. To that end, we recommend checking TechRadar’s list of the best travel cameras for 2023 before planning your next trip.

TechRadar – All the latest technology news


Researchers tricked a Tesla Model S into speeding with a piece of tape – how could hackers cheat our cars in the future?

As the advent of autonomous driving inches forward year by year, there’s an incredible opportunity to cede control over to the machines. AI can help look for dangers on the road and adjust our speed long before problems occur. It’s an exciting time because machine learning in cars is almost magical.

The first time a car like the Subaru Legacy Outback tells you not to look down at your phone, or a Ford Explorer brakes suddenly when you fail to notice the semi-truck that just pulled out in front of you, is when you realize how far we’ve come.

Curiously, these new advancements could also present an opportunity for hackers. While the AI tech in cars never needs to sleep and is always vigilant, it is not that hard to trick the machine learning routines, even with a piece of tape.

Over the limit

Recently, researchers at McAfee announced an 18-month project in which they attempted to fool the cruise control system in two 2016 Tesla Model S cars. They applied tape to a speed limit sign and then drove the Model S, watching as the vehicle accelerated toward 85 miles per hour. It took only one extension of the number three on a 35-mile-per-hour speed limit sign, changing it to read 85 instead.
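The trick above can be sketched with a toy example. This is not McAfee's actual method (they targeted a real camera-based vision system); it is a minimal, invented illustration of the underlying idea: a naive digit reader that matches pixels against stored templates, where two "tape" pixels are enough to turn a 3 into an 8.

```python
# Toy sketch (hypothetical, not the real attack): a nearest-template digit
# reader fooled by two extra pixels, like tape extending the stroke of a "3".

# 5x3 pixel templates for the digits 3 and 8, flattened row by row.
TEMPLATES = {
    "3": "111" "001" "111" "001" "111",
    "8": "111" "101" "111" "101" "111",
}

def classify(pixels: str) -> str:
    """Return the digit whose template has the smallest Hamming distance."""
    return min(
        TEMPLATES,
        key=lambda d: sum(a != b for a, b in zip(pixels, TEMPLATES[d])),
    )

def add_tape(pixels: str, positions: list[int]) -> str:
    """Simulate sticking tape on the sign: force the given pixels on."""
    out = list(pixels)
    for p in positions:
        out[p] = "1"
    return "".join(out)

sign = TEMPLATES["3"]
print(classify(sign))                # "3" -- the unmodified sign reads correctly
tampered = add_tape(sign, [3, 9])    # two strips of tape on the left edge
print(classify(tampered))            # "8" -- the reader is fooled
```

The point of the sketch is how small the change is: the classifier is never "broken", it just faithfully reports that the altered pattern now looks more like an 8 than a 3.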

The companies that developed some of the autonomous driving tech in the Tesla Model S refuted the claims by saying a human driver could also misread the speed limit sign, and that’s exactly when I started wondering what this all means.

Tesla Model S

I agree that human drivers are likely not that perceptive. On a highway recently, I noticed that an exit lane I took off the main highway was posted at only 35 miles per hour (coincidentally enough).

I slowed down to 35, but I wondered why the city lowered the speed so quickly from 75 miles per hour. It was accurate, but it didn’t make sense to me. The road was nowhere near a residential area.

However, the fact that I was wondering is the important factor.


Autonomous tech in cars might not do this. Experts who responded to McAfee did say the Model S also uses crowd-sourced data and likely GPS data as well, which is much harder to spoof. That said, it made me wonder.

Autonomous cars will need to do more than read speed limit signs. They will also need to interpret the conditions and the setting: it would not make sense to suddenly go from 35 mph to 85 mph. If the system simply swaps one number it reads for another, it won’t work.

New tricks

Looking ahead, I wondered how hackers might trick cars in other ways. We’re on the verge of cars connecting to the roadway and to other cars. Recently, an artist demonstrated how hauling a wagon full of smartphones could trick Google Maps into thinking there was traffic congestion. What else could they do?

I can envision someone creating a stir by sending out fake signals about other cars on the road, sending notices about road closures, or even worse — tapping into car systems from the side of the road and telling them to brake suddenly.


At the same time, it is a lot of fuss over something minor. Fewer and fewer cars read roadway signs at all, determining speed from GPS data instead. Little research has shown that hackers could cause cars to brake suddenly, and the examples that do exist come from controlled environments.

I think it is mostly a curiosity. We like to be able to fool the machines, and that’s a good thing. As long as they don’t ever start fooling with us.

On The Road is TechRadar's regular look at the futuristic tech in today's hottest cars. John Brandon, a journalist who's been writing about cars for 12 years, puts a new car and its cutting-edge tech through its paces every week. One goal: To find out which new technologies will lead us to fully self-driving cars.
