The Apple Vision Pro’s first 3D movies have just shown up in the Apple TV app

Multiple 3D movies have reportedly appeared on the Apple TV app, seemingly in preparation for the launch of the Vision Pro headset next year.

The updated support was discovered by tech news site FlatpanelsHD, which dug through the recently released tvOS 17.2 beta and found there was more to the patch than a redesigned UI for the Apple TV app. According to the report, the 3D movies now on the platform include, but are not limited to, Jurassic World Dominion, Pacific Rim Uprising, and Shrek. The full list, which consists primarily of action films, can be found on FlatpanelsHD. Each title carries a 3D icon on its details page to let you know of its support.

It’s important to mention that every single title had a 3D cinema release in the past; there aren’t any original 3D movies or series at the time of writing. This leads us to believe that Apple may have created a new file format for the Vision Pro, and that the studios converted the films into that format so they can be played on the headset. However, we don’t know for sure; this is just speculation on our end.

Immersive and comfortable

Obviously, there isn’t a way to actually view these movies as intended, since the Vision Pro isn’t out yet, nor do we know “what resolution and frame rate these 3D movies [will] play in.” Each eye on the headset can output a 4K image, so that’s one possibility. Older titles, like Shrek, will most likely have to be remastered at a higher quality.

Although the resolution remains unknown, we have some idea as to what the experience will be like. Apple has a video on its website teaching developers how to prepare content for visionOS. The 16-minute lesson is pretty complex, but the main takeaway is that Apple is taking care to ensure watching content on the Vision Pro results in an immersive and comfortable experience.

The headset uses stereoscopic 3D, a technique that produces the illusion of depth from flat images. One eye sees one image while the other sees a “slightly different perspective”; the brain fuses the two, and you get a 3D view.

It’s similar to how our own eyes perceive the world around us, as each one sees objects from a slightly different angle. This difference is called parallax, and it’s something the tech giant is striving to nail: rendered elements in a 3D video without correct parallax can “cause discomfort when viewing.”
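
To make the parallax idea concrete, here’s a minimal Python sketch of the standard stereo geometry: the horizontal offset (or disparity) between the left-eye and right-eye views of a point shrinks as that point gets farther away. The interpupillary distance and focal length below are illustrative assumptions, not Vision Pro specifications.

```python
# Illustrative sketch of binocular disparity (parallax), the effect that lets
# two flat images read as 3D. The numbers here are assumptions, not Apple's.

def pixel_disparity(depth_m: float, ipd_m: float = 0.063,
                    focal_px: float = 1400.0) -> float:
    """Horizontal offset, in pixels, between the left- and right-eye views of a
    point at depth_m metres, given an interpupillary distance (ipd_m) and a
    camera focal length expressed in pixels."""
    return focal_px * ipd_m / depth_m

def eye_positions(x_px: float, depth_m: float) -> tuple[float, float]:
    """Split one screen-space x coordinate into left/right eye coordinates.
    Nearer points get a larger offset, which the brain fuses into depth."""
    d = pixel_disparity(depth_m)
    return x_px - d / 2, x_px + d / 2

if __name__ == "__main__":
    for depth in (0.5, 2.0, 10.0):
        left, right = eye_positions(960.0, depth)
        print(f"depth {depth:4.1f} m -> disparity {pixel_disparity(depth):6.1f} px "
              f"(left x={left:7.1f}, right x={right:7.1f})")
```

Running it shows that a point half a metre away is offset by well over a hundred pixels between the eyes, while a point ten metres away barely moves at all – exactly the depth cue the brain reads as 3D, and the thing that causes discomfort if it’s rendered inconsistently.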

Bringing back an old idea

It’ll be interesting to see what else comes from this support. As FlatpanelsHD points out, Apple could inadvertently resurrect 3D movies as the new hardware enables the format. Maybe 3D TVs will even make a comeback.

3D TVs have seemingly gone the way of the dodo, though a few companies out there, like Magnetic3D, are eager to revive the old idea. Now we just need the content, which could be led by the upcoming Godzilla series Monarch: Legacy of Monsters, if the latest rumors are to be believed.

While we have you, be sure to check out TechRadar's latest round-up of the best VR headset deals for November 2023.

Elon Musk says xAI is launching its first model and it could be a ChatGPT rival

Elon Musk’s artificial intelligence startup, xAI, will debut its long-awaited first AI model on Saturday, November 4.

The billionaire made the announcement on X (the platform formerly known as Twitter), stating the tech will be released to a “select group” of people. He even boasted that “in some important respects, it is the best that currently exists.”

It’s been a while since we last heard anything from xAI. The startup hit the scene back in July, revealing it’s run by a team of former engineers from Microsoft, Google, and even OpenAI. Shortly after the debut on July 14, Musk held a 90-minute Twitter Spaces chat where he talked about his vision for the company. During the chat, Musk stated his startup will seek to create “a good AGI with the overarching purpose of just trying to understand the universe”. He wants it to run contrary to what he believes is problematic tech from the likes of Microsoft and Google.

Yet another chatbot

AGI stands for artificial general intelligence, and it’s the concept of an AI having “intelligence” comparable to or beyond that of a normal human being. The problem is that it's more of an idea of what AI could be than a literal piece of technology. Even Wired, in its coverage of AGI, states there’s “no concrete definition of the term”.

So does this mean xAI will reveal some kind of super-smart model that will help humanity as well as hold conversations like something out of a sci-fi movie? No, but that could be the lofty end goal for Elon Musk and his team. We believe all we’ll see on November 4 is a simple chatbot like ChatGPT. Let’s call it “ChatX”, since the billionaire has an obsession with the letter “X”.

Does “ChatX” even stand a chance against the likes of Google Bard or ChatGPT? The latter has been around for almost a year now and has seen multiple updates, becoming more refined each time. Maybe xAI has solved the hallucination problem; that would be great to see. Unfortunately, it's possible ChatX could just be another vehicle for Musk to spread his ideas and beliefs.

Analysis: A personal truth spinner

Musk has talked about wanting an alternative to ChatGPT that focuses on providing the “truth”, whatever that means. He has been a vocal critic of how fast companies have been developing their own generative AI models with seemingly reckless abandon, and even called for a six-month pause on AI training in March. Obviously, that didn’t happen, as the technology has advanced by leaps and bounds since then.

It's worth mentioning that Twitter, under Musk's management, has been known to comply with censorship requests by governments from around the world, so Musk's definition of truth seems dubious at best. Either way, we’ll know soon enough what the team's intentions are. Just don’t get your hopes up.

While we have you, be sure to check out TechRadar's list of the best AI writers for 2023.

Galaxy S24, S23, and Pixel phones could be first in line for Assistant with Bard

At the same time as launching the Pixel 8, Pixel 8 Pro and Pixel Watch 2 last week, Google also unveiled its new AI-powered Assistant with Bard tool – and now we've got a better idea of which phones might be getting the app first.

The team at 9to5Google has dug into the latest Google app for Android to look for references to Assistant with Bard, and based on hidden code that's been uncovered, it looks as though the Pixel 8 and Samsung Galaxy S24 phones will be first in line.

With the Pixel 8 and Pixel 8 Pro shipping tomorrow, it seems likely that users of these phones will be able to try Assistant with Bard before anyone else – and Google intimated as much when it announced the AI bot. The Samsung Galaxy S24 isn't due to launch until January or February next year.

However, Google has also gone on record as saying Assistant with Bard will be available to “select testers” to begin with, before more people get it over the “next few months”. In other words, even if you've got a Pixel 8, you might be waiting a while.

Coming soon

After Pixel 8 and Galaxy S24 owners have had a good play around with everything that Assistant with Bard has to offer, 9to5Google suggests that the Pixel 6, Pixel 7, and Galaxy S23 handsets will be the next to receive the upgrade.

Some example queries have also been found in the Google app code, including “help explain in a kid-friendly way why rainbows appear” and “give me some ideas to surprise my concert-loving friend on their birthday”.

Those lines will be familiar to anyone who's already played around with the generative AI in Google Bard: like ChatGPT, it can write poetry, reports, emails, and much more, as well as coming up with ideas and explaining difficult topics.

Assistant with Bard adds all that to what we already have in Google Assistant: answering questions, controlling smart lights, finding out what the weather's doing, and so on. It could soon be the most powerful Google app on your phone.

WhatsApp launches its first native macOS app with group calling support

After what feels like forever, Meta-owned instant messaging platform WhatsApp has finally launched a native macOS app. The new version gives Mac users similar functionality to what Windows PC users have enjoyed since March.

The new Mac app is already available for download for free via WhatsApp's servers, but those who prefer to use the Mac App Store might have to wait a little while — WhatsApp says that it's coming soon.

No matter where you download the upgraded WhatsApp app, you'll benefit from new features, including some that have been brought about by the move to a native macOS app. For the first time, familiar Mac features such as drag and drop are available to WhatsApp users. Files can easily be dragged and then dropped into a chat for convenience, while those with longer chat histories will now see more of them, we're told.

More Mac in your app

WhatsApp announced its new app via a blog post, noting that there are improved calling features for those who want to take advantage of them.

“With the new WhatsApp app for Mac, you can now make group calls from your Mac for the first time, connecting with up to 8 people on video calls and up to 32 people on audio calls,” the blog post explains. “Now you can join a group call after it’s started, see your call history and choose to receive incoming call notifications even when the app is closed.”

Alongside the new drag-and-drop support, WhatsApp users can also expect all of the usual features that they're used to. That means that their chats will continue to be end-to-end encrypted, and cross-platform support will continue to make WhatsApp one of the best instant messaging platforms around. In a world where Apple continues to refuse to support RCS on its devices, third-party apps remain a requirement for communicating with people across the Android-iPhone divide.

The new native WhatsApp Mac app can be downloaded from the company's website right now. You'll need macOS 11 Big Sur or later, and you'll need a Mac running on Apple silicon — so that's M1 or later, folks. Don't have a Mac that meets those requirements? WhatsApp web is still available.

WhatsApp is about to get its first AI trick – and it could be just the start

WhatsApp is taking its first steps into the world of artificial intelligence, as a recent Android beta introduced an AI-powered sticker generation tool.

According to a new report from WABetaInfo, a Create button will show up for some app testers whenever they open the sticker tab in the text box. Tapping Create launches a mini generative AI engine with a description bar at the top asking you to enter a prompt. The tool then creates a set of stickers matching your specifications, which you can share in a conversation. As an example, WABetaInfo told WhatsApp to make a sticker featuring a laughing cat sitting on top of a skateboard, and sure enough, it did exactly as instructed.

WhatsApp sticker generator (Image credit: WABetaInfo)

It’s unknown exactly which generative model is fueling WhatsApp’s sticker generator. WABetaInfo claims it uses a “secure technology offered by Meta.” Android Police, on the other hand, states that “given its simplicity” it could be “using Dall-E or something similar.”
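
We can only speculate about what’s under the hood, and the following is emphatically not WhatsApp’s implementation, but as an illustration of the prompt-to-stickers workflow the report describes, here’s what a generic text-to-image request looks like with the OpenAI Python SDK. The model choice, prompt wording, and sticker count are placeholder assumptions.

```python
# Hypothetical illustration of a prompt-to-stickers request. This is NOT
# WhatsApp's actual back end; the model, prompt, and count are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_sticker_set(description: str, count: int = 4) -> list[str]:
    """Turn a user's text description into a small set of sticker images and
    return their URLs."""
    result = client.images.generate(
        model="dall-e-2",
        prompt=f"a chat sticker of {description}, flat style, white background",
        n=count,
        size="512x512",
    )
    return [image.url for image in result.data]

if __name__ == "__main__":
    for i, url in enumerate(generate_sticker_set("a laughing cat on a skateboard"), start=1):
        print(f"sticker {i}: {url}")
```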

Availability

You can try out the AI tool yourself by joining the Google Play Beta Program and then installing WhatsApp beta version 2.23.17.14, although it’s also possible to get it through the 2.23.17.13 update. Be aware the sticker generator is only available to a very small group of people, so there’s a chance you won’t get it. However, WABetaInfo claims the update will be “rolling out to more users over the coming weeks,” so keep an eye out for the patch when it arrives. There’s no word on an iOS version.

Obviously, this is still a work in progress. WABetaInfo says that if the AI outputs something “inappropriate or harmful, you can report it to Meta.” The report goes on to state that “AI stickers are easily recognizable”, explaining that recipients “may understand when [a drawing] has been generated”. The wording here is rather confusing; we believe WABetaInfo is saying AI content may have noticeable glitches or anomalies. Unfortunately, since we didn’t get access to the new feature, we can’t say for sure whether generated content has any flaws.

Start of an AI future

We do believe this is just the start of Meta implementing AI across its platforms. The company is already working on sticker generators for Instagram and Messenger, though they’re seemingly still under development. So what will the future bring? It’s hard to say. It would, however, be cool to see Meta finally add its Make-A-Scene tool to WhatsApp.

It’s essentially the company’s own take on an image generator, “but with a bigger emphasis on creating artistic pieces.” We could see this being added to WhatsApp as a fun game for friends or family to play. There’s also MusicGen for crafting musical compositions, although that may be better suited for Instagram.

Either way, this WhatsApp beta feels like Meta has pushed the first domino of what could be a string of new AI-powered features coming to its apps.

Mercedes-Benz is bringing ChatGPT into cars for the first time

Luxury car brand Mercedes-Benz is outfitting its MBUX Voice Assistant with ChatGPT as part of a new US-only beta program. Joining the beta will allow drivers of over 900,000 vehicles equipped with MBUX to hold “more dynamic” conversations with the onboard AI.

In the official announcement post, the company states it's seeking to improve its voice assistant beyond “predefined tasks and responses”. ChatGPT’s own large language model would “greatly improve [MBUX’s] natural language understanding [to] expand the topics to which it can respond.” So not only will customers be able to give voice commands, but they can also ask the AI for detailed information about their destination or suggestions for a new dinner recipe. 

ChatGPT in a Mercedes-Benz car (Image credit: Mercedes-Benz)

Security

To make the program possible, Mercedes is incorporating Microsoft’s Azure OpenAI Service in the rollout, ensuring, according to the auto manufacturer, “enterprise-grade security, privacy, and reliability”. Conversation data will be collected and then stored in the Mercedes-Benz Intelligent Cloud where it will be “anonymized and analyzed.” All IT processes will be controlled by the company as it promises to protect “all customer data from… misuse.” Microsoft won’t have any access.
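
Mercedes hasn’t published how MBUX is wired into the service, but at a basic level an Azure OpenAI integration looks something like the sketch below, written with the official openai Python SDK. The endpoint, deployment name, and system prompt are placeholders we’ve made up, not Mercedes’ configuration.

```python
# Hypothetical sketch of a back-end call to Azure OpenAI Service. The endpoint,
# deployment name, and prompts are placeholders, not Mercedes' actual setup.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def assistant_reply(driver_request: str) -> str:
    """Send a driver's request (already transcribed from speech to text) to a
    chat model deployment and return the assistant's text answer."""
    response = client.chat.completions.create(
        model="in-car-assistant",  # name of the Azure model deployment (placeholder)
        messages=[
            {"role": "system", "content": "You are an in-car voice assistant. Keep answers brief."},
            {"role": "user", "content": driver_request},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assistant_reply("Suggest a dinner recipe I can shop for on the way home."))
```

Routing requests through an Azure deployment like this, rather than calling OpenAI directly, is the sort of setup that lets a company keep traffic under its own cloud controls, which is presumably what Mercedes means by handling all IT processes itself.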

If you want to see it in action before installing it, tech news site Electrek recently published a couple of videos showing off the upgraded MBUX. It uses both the dashboard screen and its onboard voice to deliver answers. When asked for suggestions for the best local beaches, the AI displayed a text list of nearby locations before recommending activities like surfing. It can even tell jokes, although they’re pretty terrible.

Availability

The beta program starts June 16 in the United States only, as stated earlier. To get started, eligible customers must first say “Hey Mercedes, I want to join the beta program” as a command to MBUX. From there, it’ll teach you how to install the ChatGPT patch. It appears part of the onboarding process includes connecting a mobile device to the AI. A full list of vehicles supporting the beta is available on the company’s website. In total, there are over 25 models ranging from sedans to SUVs.

ChatGPT on the Mercedes-Benz app (Image credit: Mercedes-Benz)

The beta program should last three months. After that, it’ll go offline for an indeterminate amount of time while Mercedes uses the data it collects to improve the AI for an eventual launch. It’s unknown whether either the program or the final version will be available in other regions or in languages besides English.

We reached out to Mercedes-Benz for more information on the launch. This story will be updated at a later time.

Having a generative AI at your beck and call giving you travel suggestions sounds pretty useful and could lead to a lot more fruitful sightseeing. To that end, we recommend checking out TechRadar’s list of the best travel cameras for 2023 before planning your next trip.

Adobe Illustrator gets its first Firefly AI tool

Adobe Illustrator is the latest app to get Firefly capabilities, with the update aimed at letting designers rapidly experiment with colors using simple text prompts. 

Generative Recolor is the first example of an Adobe Firefly-powered tool inside the popular graphic design software. Designers can use text prompts to create and save custom themes for recoloring vector artwork, so there’s no need to spend time altering individual elements of a commercial design. 

The move comes days after the rollout of Adobe Express and Firefly for Enterprise, as the company ramps up integration of its AI art generator.

Setting Illustrator alight 

If there’s one thing we learned at Adobe Summit 2023, it’s that the firm is keen to push its AI as a co-pilot for creators of all experience levels, at every level of an organization. The latest Firefly-powered tool is no exception, with the company highlighting diverse uses from marketing graphics to mood-boarding.  

Still in beta and built directly into Illustrator, Generative Recolor lets designers capture the mood of a piece based on text prompts – the examples used by Adobe include “noon in the desert” and “midnight in the jungle”. Users can then quickly experiment by swapping out colors, palettes, and themes, and produce multiple color variants for a wide range of uses, like seasonally appropriate advertising.  

Adobe Illustrator infused with Firefly's AI capabilities (Image credit: Adobe)

“Adobe Illustrator is the tool behind many of the world’s most iconic designs, from brand logos to product packaging. Firefly will help customers accelerate their creative process and save countless hours, while facilitating rapid ideation, experimentation and asset creation,” said Ashley Still, senior vice president, digital media at Adobe.

But it’s not the only new update to the digital art software, which also added the font tool Retype, new Layers functionalities, and improvements to Image Trace.

As we reported last week, Adobe reconfirmed future plans to let businesses train Firefly with custom assets to create brand-aligned content. Enterprise users will soon be able to get an IP indemnity from Adobe to guard against copyright claims and help make the AI-generated content “commercially safe” for businesses.

Google’s AI-boosted search engine enters first public trial – here’s how to try it

Google has opened up access to its Search Labs testing program, allowing users to try out the upcoming search engine update; the most notable change is the Search Generative Experience, or SGE.

To be clear, Search Labs isn’t technically open to the public, as you’ll have to join a waitlist first. If you’ve already signed up, be sure to check your email account for an invitation from Google, as invites are currently rolling out. Don’t worry if you haven’t signed up yet, as there’s still room on the waitlist on both desktop and mobile.

To join on desktop, you need to first install Google Chrome on your computer. From there, head on over to the Search Labs website, select Join Waitlist, and wait for the invitation to arrive. On mobile devices, launch the Google app. You should see a science beaker-esque icon in the top left corner of the screen. Just like before, select Join Waitlist then wait for the invite. Search Labs is available on both iOS and Android so no one’s being left out. Install the latest app update if you don't see the icon.

Limited-time only

Unless you’re a Google One Premium subscriber, it may take a while until you get an invite. A recent report from 9to5Google states Premium subscribers are getting “priority access” to Search Labs, although “it won’t be immediate.” “Access spots are limited” at the moment, but more will open up over “the coming weeks.”

But once you get the invite, act fast. SGE and the rest of the Search Labs experiments will be available for a limited time only. It’s unknown for how long, so we asked Google for more information. This story will be updated if we hear back.

There’s been a fair amount of hype surrounding SGE ever since it was first revealed during I/O 2023. The technology essentially enhances Google Search to provide long, detailed responses to queries by taking context into consideration. It could very well completely change how people use the search engine.

Word of advice

For the lucky few who get early access to SGE, Google recommends starting off with simple terms so you can get used to how the AI works. Once you get a feel for it, try entering more specific queries. One of the highlighted use cases of SGE is to help people with their shopping. The AI can generate a detailed list of features, reviews, price points, and even link to the product itself.

In addition to Google’s advice, we have some of our own, since we’ve used multiple generative AI tools, from Bing to Brave Summarizer. One thing we’ve learned is that generative AIs can hallucinate, meaning they come up with totally false information that bears no resemblance to reality, so don’t always believe what you read. Also, be mindful of what you enter, as generative AIs keep the information you type in. In fact, some major tech corporations, like Samsung, have banned their employees from using ChatGPT after sensitive information was leaked.

Google I/O 2023 revealed a lot more than just the tech giant’s AI tools. Be sure to check out TechRadar’s coverage of the event as it happened.

The first iOS 16.6 beta has made iMessage even more secure

Apple has only just dropped iOS 16.5, but already there’s a public beta for iOS 16.6, the finished version of which will probably land in the next month or so, based on past form. This doesn’t look to be one of the biggest iOS updates ever, but there’s one potentially very useful new feature.

That feature is iMessage Contact Key Verification, which Apple actually announced last year, but is only now activating. If you and the person or people you’re messaging both enable this feature, then you’ll be alerted if Apple detects a potential intrusion – for example, if the cloud servers your messages are carried on appear to have been breached.

Contact Verification Codes can also be compared and verified in person or over a FaceTime call. So, all this is essentially a way of verifying that you’re talking to the person you believe you’re talking to, and that no one is eavesdropping on the conversation.
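
To illustrate the general idea of out-of-band key verification – this is not Apple’s actual protocol, just a minimal sketch of the technique it draws on – both parties can derive a short code from the same public key material and compare it over a trusted channel. If the codes match, nobody has silently swapped in their own key.

```python
# Minimal sketch of out-of-band key verification in general. This is NOT
# Apple's iMessage Contact Key Verification implementation; the key material
# and eight-digit code length are illustrative assumptions.
import hashlib

def verification_code(my_public_key: bytes, their_public_key: bytes) -> str:
    """Derive a short, human-comparable code from both parties' public keys.
    The keys are sorted so both sides compute the same value, then hashed and
    truncated to eight digits for easy comparison in person or over a call."""
    material = b"".join(sorted([my_public_key, their_public_key]))
    digest = hashlib.sha256(material).digest()
    return f"{int.from_bytes(digest[:8], 'big') % 10**8:08d}"

if __name__ == "__main__":
    alice_pub = b"alice-public-key-bytes"  # placeholder key material
    bob_pub = b"bob-public-key-bytes"
    # Each device computes the code independently; matching codes mean no
    # intermediary has substituted its own key into the conversation.
    print("Alice sees:", verification_code(alice_pub, bob_pub))
    print("Bob sees:  ", verification_code(bob_pub, alice_pub))
```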

An image showing the iMessage Contact Key Verification feature (Image credit: Apple)

This is probably a level of security beyond what most people really need, especially as iMessage is already end-to-end encrypted. Indeed, when Apple announced the feature, it positioned this as something aimed at people facing “extraordinary digital threats,” such as journalists and government officials.

It’s a feature that’s designed to stop “an exceptionally advanced adversary, such as a state-sponsored attacker,” so it isn’t something you should – in theory – need in order to fend off garden-variety hackers. That said, it’s something anyone can enable, so if you want that extra peace of mind, the option is now there.

Or it will be, anyway – while the feature is now visible, it doesn’t appear to be functional yet, according to BGR.

Few features to find

Presumably, then, Apple is still getting it set up, but with it visible in this iOS 16.6 beta, it seems very likely that the iMessage Contact Key Verification feature will fully launch in the finished version of iOS 16.6.

This seems to be the only feature that has been found in this iOS 16.6 beta, and unhelpfully Apple hasn’t provided any release notes for it. So, there may be more features lurking in there, and there may be additional features added in subsequent betas or the finished iOS 16.6 release.

But as we’re not aware of any functional changes in this current build, there’s probably no need to download it. And while it will definitely be worth grabbing the finished version, we might not see many new features until iOS 17.

Matter’s first update won’t light up your smart home, but the next one might

The Connectivity Standards Alliance (CSA) is rolling out the first major update to the Matter standard. Well, it's supposed to be a major update, but it's more like a minor patch.

Matter 1.1, as it’s called, won't add any new device types to the support list, nor will there be any major changes to individual platforms like Apple HomeKit. Instead, the CSA is making three relatively small changes affecting companies and users alike, though they mostly benefit the former.

Change number one: it’s now easier for smart home manufacturers to get started with Matter. The standard’s specifications have reportedly been made clearer, allowing for “better guidance” in growing “support for new device types.” The CSA has also made it easier for developers to certify their products so they can get into the hands of customers faster.

Finally, Matter 1.1 fixes a bug affecting Intermittently Connected Devices, or ICDs, which are “typically battery-powered” gadgets like motion sensors and door locks. Moving forward, it will be less likely that an ICD “will be reported as offline” whenever a user or platform interacts with it. But, as far as we understand the CSA, the bug isn’t completely gone; the error can still happen.

And that’s pretty much it for Matter 1.1: two developer-centric changes and one for users that doesn’t solve the problem outright; it just lowers the chance of the error occurring.

Analysis: the hype is gone

This update is disappointing, to say the least. It’s been seven months since Matter officially launched, and it’s pretty safe to say the hype surrounding the standard has effectively died. Rollout, too, has been slow: Google, for example, only recently added Matter support to its Google Home app on iOS, and it took Amazon nearly four months to finally roll out Matter to its Echo lineup.

Plus, multi-admin control is still a problem, according to The Verge. It isn’t easy to switch your gadget from one platform to another if it’s already connected to one, for instance. And that's something that doesn’t make any sense because the whole point of Matter – its reason for existence – is to have better interoperability between smart home platforms.

It appears the CSA is allowing smart home brands to update their individual platforms at their own discretion. The question is: when are they going to be updating? In all honesty, who knows?

The CSA states it’s going to remain committed to its goal of a “twice-yearly release cycle” for future updates. It’s currently working on “the next version of Matter”, bringing in “new features and device type support.” We contacted the CSA for more information on Matter 1.2, as we’ll call it. This story will be updated at a later time.

If you want to know what works with the standard, check out TechRadar’s list of the smart home devices that play nice with Matter.
