Apple starts work on macOS 16 – and it sounds like a bigger deal than a MacBook Pro redesign

While we’re eagerly awaiting the public release of new operating systems like iOS 18 and macOS 15 (Sequoia) later this year, it seems like Apple has already begun work on macOS 16 (and iOS 19, for that matter). This fresh rumor, coupled with whispers of a MacBook Pro refresh for later in 2024, has us buzzing about the future of Apple’s best tech.

Reputable leaker and industry commentator Mark Gurman noted in his most recent ‘Power On’ newsletter (for Bloomberg) that Apple has started development of all its major operating systems for 2025, meaning macOS 16, iOS 19, watchOS 12, and visionOS 3.

Mind you, we’d expect Apple to be kicking off work on next year’s big software refreshes at this point, though it’s still exciting to hear that development of macOS 16 is underway. It’s too early to speculate about what next year’s version of macOS could look like, and Gurman doesn’t drop any hints as to possible features, but if Sequoia has shown us anything, it’s that we’re in for another big, AI-driven refresh.

Indeed, by the time we get to 2025, we wonder whether Apple might be planning to incorporate AI in a much bigger way with macOS 16, maybe bringing in features that will change the way we use our Mac devices entirely! Given the pace of development in the world of artificial intelligence, this can’t be ruled out.

New MacBooks on the horizon?

Software aside, as for the future of Mac hardware, we’re already hearing rumors about the M4 refresh due to happen with Apple’s Mac lineup, with some reports speculating that the MacBook Pro could be the first Mac in line for the new chip (which is only in the iPad Pro right now).

According to Gurman’s earlier reports, we may only see the MacBook Pro 14-inch base model get an M4 refresh this year (with the vanilla M4), with the other models (with M4 Pro and Max) only debuting early in the following year. Furthermore, we’re not likely to get any major hardware changes to Apple’s MacBook ranges for the next couple of years, and it sounds like the big move with the MacBook Pro – when it gets OLED, which is likely to be a good time for a full redesign – may not happen until 2026.

So, Apple might feel the need to make up for only ushering in more minor improvements on the hardware front, by taking a big leap on the software front – meaning a much-improved macOS 16 (with lots of fresh AI powers as mentioned, most likely). Take all this as the speculation it very much is, mind you.                

Amazon is reportedly working on its own AI chatbot that might be smarter than ChatGPT

Amazon is reportedly working on its own AI chatbot, codenamed “Metis”, that’ll operate in a similar vein to ChatGPT. 

According to Business Insider, which spoke “to people familiar with the project,” the new platform will be accessible through a web browser. The publication also viewed an internal document revealing the chatbot's potential capabilities: it’ll provide text answers to inquiries in a “conversational manner,” give links to sources, suggest follow-up questions, and even generate images. 

So far, Metis sounds much like any other generative AI, but this is where things begin to deviate. The company apparently wants to use a technique called “retrieval-augmented generation,” or RAG for short, which lets Metis pull in information from outside its original training data – potentially a big advantage over its rivals.
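
To make the idea concrete, here's a minimal, self-contained Python sketch of the RAG pattern: retrieve relevant documents first, then hand them to the model alongside the question, so the answer isn't limited to what the model memorized during training. The toy knowledge base, keyword-overlap retriever, and generate() stub below are purely illustrative placeholders – nothing here reflects Amazon's actual implementation.

    # Illustrative RAG sketch – the retriever and model call are stand-ins.
    KNOWLEDGE_BASE = [
        "2024-06-26: Amazon is reportedly planning a chatbot codenamed Metis.",
        "Retrieval-augmented generation pairs a document retriever with an LLM.",
    ]

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Rank stored documents by naive keyword overlap with the query.
        def score(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(KNOWLEDGE_BASE, key=score, reverse=True)[:k]

    def generate(prompt: str) -> str:
        # Placeholder for a call to the underlying language model.
        return f"[model answer grounded in]\n{prompt}"

    def answer(question: str) -> str:
        # Freshly retrieved context is injected into the prompt at query time.
        context = "\n".join(retrieve(question))
        return generate(f"Context:\n{context}\n\nQuestion: {question}")

    print(answer("What is Amazon reportedly planning?"))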

ChatGPT, by comparison, works by accessing a data reservoir whenever a user inputs a prompt, but that reservoir has a cut-off date that differs between the service’s models. For example, GPT-4 Turbo has a cut-off date of December 2023. It’s not privy to anything that has happened so far in 2024.

Powering the AI chatbot

It’s unclear whether Amazon had implemented RAG at the time of Business Insider’s report. Metis is also slated to function as an “AI agent.” Judging from the description given, that would allow the service to act as a smart home assistant of sorts, “automating and performing complex tasks” – anything from turning on lights to building vacation itineraries and booking flights.

The report goes on to reveal some of the tech powering Metis. The chatbot runs on a new internal AI model called Olympus, which is supposed to be a better version of Amazon’s “publicly available Titan.” The company has even brought in people from the Alexa team to help with development; in fact, Metis “uses some of the [same] resources” as the long-rumored Alexa upgrade.

Differing attitudes

Attitudes towards the AI chatbot vary across the company. Amazon CEO Andy Jassy seems very interested in the project, as he is directly involved with development and often reviews the team’s progress. Others, however, are less enthusiastic. One of the sources told Business Insider that they felt the company was far too late to the party; rivals are so far ahead of the curve that playing catch-up may not be worthwhile.

The report mentions that Amazon’s ventures into AI have mostly been duds. The Titan model is considered weaker than rival models, its Amazon Q corporate chatbot isn’t great, and there is low demand for its Trainium and Inferentia AI chips. Amazon needs a big win to stay in the AI space.

Sources claim Metis is scheduled to launch in September around the same time Amazon is planning to hold its next big event. However, the date could change at any time. Nothing is set in stone at the moment.

While we have you, be sure to check out TechRadar's list of the best AI chatbots for 2024.

AI-generated movies will be here sooner than you think – and this new Google DeepMind tool proves it

AI video generators like OpenAI's Sora, Luma AI's Dream Machine, and Runway Gen-3 Alpha have been stealing the headlines lately, but a new Google DeepMind tool could fix the one weakness they all share – a lack of accompanying audio.

A Google DeepMind post has revealed a new video-to-audio (or 'V2A') tool that uses a combination of pixels and text prompts to automatically generate soundtracks and soundscapes for AI-generated videos. In short, it's another big step toward the creation of fully-automated movie scenes.

As you can see in the videos below, this V2A tech can combine with AI video generators (including Google's Veo) to create an atmospheric score, timely sound effects, or even dialogue that Google DeepMind says “matches the characters and tone of a video”.

Creators aren't stuck with one audio option either – DeepMind's new V2A tool can apparently generate an "unlimited number of soundtracks for any video input", which means you can nudge it towards your desired outcome with a few simple text prompts.

Google says its tool stands out from rival tech thanks to its ability to generate audio purely from pixels – a guiding text prompt is apparently optional. But DeepMind is also very aware of the major potential for misuse and deepfakes, which is why this V2A tool is being ringfenced as a research project – for now.

DeepMind says that “before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing”. It will certainly need to be rigorous, because the ten short video examples show that the tech has explosive potential, for both good and bad.

The potential for amateur filmmaking and animation is huge, as shown by the 'horror' clip below and one for a cartoon baby dinosaur. A Blade Runner-esque scene (below) showing cars skidding through a city with an electronic music soundtrack also shows how it could drastically reduce budgets for sci-fi movies. 

Concerned creators will at least take some comfort from the obvious dialogue limitations shown in the 'Claymation family' video. But if the last year has taught us anything, it's that DeepMind's V2A tech will only improve drastically from here.

Where we're going, we won't need voice actors

The combination of AI-generated videos with AI-created soundtracks and sound effects is a game-changer on many levels – and adds another dimension to an arms race that was already white hot.

OpenAI has already said that it plans to add audio to its Sora video generator, which is due to launch later this year. But DeepMind's new V2A tool shows that the tech is already at an advanced stage and can create audio from video alone, rather than needing endless prompting.

DeepMind's tool uses a diffusion model that combines information from the video's pixels with the user's text prompts, then produces compressed audio that's decoded into an audio waveform. It was apparently trained on a combination of video, audio, and AI-generated annotations.
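
For readers who want a mental model of that pipeline, here's a rough, hypothetical Python sketch of the general idea: encode the video frames and the optional text prompt, iteratively denoise a compressed audio representation conditioned on both, then decode it into a waveform. Every function here is a toy stand-in (plain numpy arithmetic, not neural networks), and none of it reflects DeepMind's actual models.

    # Toy stand-in for a video-to-audio diffusion pipeline – not DeepMind's code.
    import numpy as np

    rng = np.random.default_rng(0)

    def encode_video(frames):
        # Stand-in visual encoder: one feature per frame (mean pixel value).
        return frames.reshape(frames.shape[0], -1).mean(axis=1)

    def encode_text(prompt):
        # Stand-in text encoder; the prompt is optional, as DeepMind describes.
        return np.zeros(8) if prompt is None else rng.standard_normal(8)

    def denoise_step(latent, video_feat, text_feat, step):
        # Placeholder for the learned denoiser: nudge the noisy latent using
        # the video (and text) conditioning features.
        return 0.9 * latent + 0.1 * (video_feat.mean() + text_feat.mean()) / (step + 1)

    def decode_audio(latent):
        # Placeholder decoder from the compressed audio latent to a waveform.
        return np.tanh(latent)

    def video_to_audio(frames, prompt=None):
        video_feat, text_feat = encode_video(frames), encode_text(prompt)
        latent = rng.standard_normal(16_000)   # start from pure noise
        for step in range(10):                 # iterative refinement
            latent = denoise_step(latent, video_feat, text_feat, step)
        return decode_audio(latent)

    waveform = video_to_audio(rng.random((24, 64, 64, 3)), prompt="tense horror score")
    print(waveform.shape)  # (16000,) – i.e. one second of audio at a 16kHz sample rate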

Exactly what content this V2A tool was trained on isn't clear, but Google clearly has a potentially huge advantage in owning the world's biggest video-sharing platform, YouTube. Neither YouTube nor its terms of service are completely clear on how its videos might be used to train AI, but YouTube's CEO Neal Mohan recently told Bloomberg that some creators have contracts that allow their content to be used for training AI models.

Clearly, the tech still has some limitations with dialogue and it's still a long way from producing a Hollywood-ready finished article. But it's already a potentially powerful tool for storyboarding and amateur filmmakers, and hot competition with the likes of OpenAI means it's only going to improve rapidly from here.

macOS Sequoia’s wildest update – iPhone Mirroring – might be more useful than you think

When Apple introduced macOS Sequoia and its new iPhone Mirroring capability, I didn't get it. Now, though, after seeing it in action and considering some non-obvious use cases, I may be ready to reconsider.

Apple unveiled the latest AI-infused version of macOS during its WWDC 2024 keynote, which also saw major updates to iOS, iPadOS, visionOS, tvOS, and watchOS. It also served as the launch platform for Apple Intelligence, an Apple-built and branded version of artificial intelligence. I get that Apple has been building AI PCs for a while (every Mac since the M1 chip has included an on-board neural engine), and there are many features to look forward to, including a better Siri, powerful photo editing, and smart writing help, but I found myself fixating elsewhere.

Apple was putting the iPhone on your Mac, or, rather, an iPhone screen floating in the middle of the lovely macOS Sequoia desktop. In a way, this is the most significant redesign of the new platform. It puts an entirely different OS – a mobile one, no less – on top of a laptop or desktop. 

Wow. And also, why?

I admit that I had a hard time conceiving what utility you could gain from having a second, live interface on an already busy desktop. Apple has said in the past that it builds features, in some cases, based on user requests. Who had ever asked for this?

After the keynote, I had the chance to take a deeper dive, which helped me better understand this seemingly unholy marriage and why, in some cases, it might make perfect sense.

Making it so

(Image credit: Future / Lance Ulanoff)

Apple built a new app to connect your iOS 18-running iPhone to your macOS Sequoia Mac. In a demo I saw, it took one click to make it happen. Behind the scenes, the two systems establish a secure connection over Bluetooth and Wi-Fi. On the iPhone, there's a message noting that mirroring is live. On the Mac, well, there's the iPhone screen, complete with the Dynamic Island cutout (a strange choice if you ask me – why virtualize dead space?).

I was honestly shocked at the level of iPhone functionality Apple could bring to the Mac desktop.

You can use the Mac trackpad to swipe through iPhone apps.

You can click to launch apps and run them inside the iPhone screen on your Mac desktop.

Pinch and zoom on the Mac trackpad works as expected with the iPhone apps.

There's even full drag-and-drop capability between the two interfaces, so you could take a video from the GoPro app on your mirrored iPhone screen and drag and drop it into another app, like Final Cut Pro on the Mac.

Essentially, you are reaching through one big screen to get to another, smaller one – on a different platform – that is sitting locked beside your Mac. It's strange and cool, but is it necessary?

(Image credit: Future / Lance Ulanoff)

Not everything makes sense. You can search through your mirrored phone screen, but why not just search on your desktop?

You can use the mirrored iPhone screen in landscape mode and play games. However, there's no obvious way to warn someone that playing a game that relies on the iPhone's gyroscope is a bad idea here.

I like that there's enough awareness here that, while the mirrored window can look exactly like the screen on your phone, you can click to reveal a slightly larger frame with controls for the mirrored screen.

It's not the kind of mirroring that locks you in, either. To end the session, you just pick up and unlock the phone.

Even seeing all this, though, I wondered how people might use iPhone Mirroring. There's the opportunity to play some games that aren't available on Mac. Multi-player word game fans might like that if they get a notification, they can open the mirrored phone screen, make a move, and then return to work.

When macOS Sequoia ships later this fall, you'll even be able to resize the mirrored iPhone window, which I guess could be useful for landscape games.

Notifications from your phone sound redundant, especially for those of us in the iCloud ecosystem where all our Apple products get the same iMessages. But the system is smart enough to know it shouldn't repeat notifications on both screens, and you'll have the option to decide which iPhone notifications appear on your Mac.

Some notifications only appear on your iPhone, and others appear in both places, but you can't always act on them on the Mac. This new feature might bridge that gap. A fellow journalist mentioned that iPhone Mirroring would finally give him a way to jump from a notification he saw on his Mac for his baby cam app – which has no Mac version – straight to the live feed on the iPhone. This finally struck me as truly useful.

Is that enough of a reason to have your iPhone screen pasted on your Mac desktop? I don't know.  It might take up too much real estate on my MacBook Air 13-inch, but it would be kind of cool on a 27-inch iMac, if I had one.

Windows 11’s File Explorer could hook up directly with your smartphone to make file transfers from Android easier than ever

Microsoft has been hard at work further integrating Android devices into Windows 11, recently allowing users to draft in their phones as makeshift webcams. Riding the same wave of inter-device connectivity, a new feature is apparently in the works that will let you see and use your smartphone directly in Windows 11’s File Explorer – just as if it were an external drive.

According to reputable leaker @PhantomOfEarth on X, the groundwork is present in Windows 11 for the ‘Cross Device Experience Host’ to link File Explorer on the desktop to your smartphone. This would give File Explorer direct access to the files on your smartphone, along with the ability to shift files the other way, from your PC to your phone.

If you cast your mind back to the beginning of the year, you may remember that the Cross Device Experience Host is replacing the Phone Link feature – so if you’re wondering why this sounds more like a Phone Link feature, there’s your answer.

Once you turn on the feature – note that it’s still hidden in test versions of Windows 11 – @PhantomOfEarth observes that you’ll be asked to grant file access permissions, after which you’ll be good to go.

Exciting times

Sadly, nothing else has been revealed about the feature, and we don’t even know the basics of how it’ll actually work. We’re assuming it’ll use Wi-Fi to connect your phone and PC, so that your smartphone is always there in File Explorer whenever both devices are on the same network. That’s pure speculation, mind.

We expect to see this functionality make an appearance in the Windows Insider Program, where devs and enthusiasts test out potential new features in preview builds of Windows 11. Until we have official word from Microsoft to confirm the feature is happening, though, we won’t know for sure – so don’t get your hopes up too high. 

That being said, it’s still a pretty cool ability to look forward to. Not only could you move documents, photos, or other files between your PC and phone a lot more quickly and conveniently, but as noted, it seems like once you’ve granted permissions your phone should automatically show up in File Explorer.

This is definitely a feature I would have enjoyed when I was a student and had to search and scramble between my phone and my laptop to make sure I had all the relevant research in one organized place. While I won’t allow myself to get too excited yet, I will wait patiently and hope to see the feature on my PC before too long. 

OpenAI’s GPT-4o ChatGPT assistant is more life-like than ever, complete with witty quips

So no, OpenAI didn’t roll out a search engine competitor to take on Google at its May 13, 2024 Spring Update event. Instead, OpenAI unveiled GPT-4 Omni (or GPT-4o for short) with human-like conversational capabilities, and it's seriously impressive. 

Beyond making this version of ChatGPT faster and free to more folks, GPT-4o expands how you can interact with it, including having natural conversations via the mobile or desktop app. Considering it's arriving on iPhone, Android, and desktop apps, it might pave the way for the assistant we've always wanted (or feared). 

OpenAI's ChatGPT-4o is more emotional and human-like

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

GPT-4o has taken a significant step towards understanding human communication in that you can converse in something approaching a natural manner. It comes complete with all the messiness of real-world tendencies like interrupting, understanding tone, and even realizing it's made a mistake.

During the first live demo, the presenter asked for feedback on his breathing technique. He breathed heavily into his phone, and ChatGPT responded with the witty quip, “You’re not a vacuum cleaner.” It advised on a slower technique, demonstrating its ability to understand and respond to human nuances.

So yes, ChatGPT has a sense of humor but also changes the tone of responses, complete with different inflections while conveying a “thought”. Like human conversations, you can cut the assistant off and correct it, making it react or stop speaking. You can even ask it to speak in a certain tone, style, or robotic voice. Furthermore, it can even provide translations.

In a live demonstration suggested by a user on X (formerly Twitter), two presenters on stage, one speaking English and one speaking Italian, had a conversation with GPT-4o handling translation. It quickly delivered the translation from Italian to English and then seamlessly translated the English response back into Italian.

It’s not just voice understanding with GPT-4o, though; it can also understand visuals like a written-out linear equation and then guide you through how to solve it, as well as look at a live selfie and provide a description. That could be what you're wearing or your emotions. 

In this demo, GPT said the presenter looked happy and cheerful. It’s not without quirks, though. At one point ChatGPT said it saw the image of the equation before it was even written out, referring back to a previous visual of just a wooden tabletop.

Throughout the demo, ChatGPT worked quickly and didn't really struggle to understand the problem or ask about it. GPT-4o is also more natural than typing in a query, as you can speak naturally to your phone and get a desired response – not one that tells you to Google it.  

A little like “Samantha” in “Her”

If you’re thinking about Her or another futuristic-dystopian film with an AI, you’re not the only one. Speaking with ChatGPT in such a natural way is essentially the Her moment for OpenAI. Considering it will be rolling out to the mobile app and as a desktop app for free, many people may soon have their own Her moments.

The impressive demos across speech and visuals may only be scratching the surface of what's possible. Overall performance, and how well GPT-4o holds up day-to-day in various environments, remains to be seen, and once it's available, TechRadar will be putting it to the test. Still, after this peek, it's clear that GPT-4o is preparing to take on the best Google and Apple have to offer in their eagerly anticipated AI reveals.

The outlook on GPT-4o

By announcing this the day before Google I/O kicks off, and just a few weeks after new AI gadgets like the Rabbit R1 hit the scene, OpenAI is giving us a taste of the truly useful AI experiences we actually want. If the rumored partnership with Apple comes to fruition, Siri could be supercharged, and Google will almost certainly show off its latest AI tricks at I/O on May 14, 2024. But will they be enough?

We wish OpenAI had shown off a few more live demos of the latest GPT-4o in what turned out to be a jam-packed, less-than-30-minute keynote. Luckily, it will be rolling out to users in the coming weeks, and you won’t have to pay to try it out.

Meta is on the brink of releasing AI models it claims have “human-level cognition” – hinting at new models capable of more than simple conversations

We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

At an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta’s president of global affairs Nick Clegg saying: “Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.”

Meta’s large language models are publicly available, allowing developers and researchers free and open access to the tech to create their bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises much more impressive capabilities than the current model. 

No official date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume the models will make an appearance in the coming weeks. 

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.” OpenAI’s chief operating officer Brad Lightcap told the Financial Times in an interview that the next version of GPT would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. Lightcap also said: “We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” adding, “We’re just starting to scratch the surface on the ability that these models have to reason.”

As tech companies like OpenAI and Meta continue working on more sophisticated and ‘lifelike’ human interfaces, it’s both exciting and somewhat unnerving to think about a chatbot that can ‘think’ with reason and memory. Tools like Midjourney and Sora have demonstrated just how good AI can be in terms of quality output, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.

With so many ethical and moral concerns still unaddressed in the tools available right now, I dread to think what kind of nefarious things could be done with more human-like AI models. Plus, you have to admit it’s all starting to feel a little like the start of a sci-fi horror story.

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given Sora access to “visual artists, designers, creative directors, and filmmakers” and revealed their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist in Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.

ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane at some point in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to the degree of focus and creative control the AI exerts over the text it produces. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
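
If you want to see what that dial looks like in practice, here's a minimal sketch using OpenAI's public chat completions API – the model name and prompt are just examples, and this is the developer-facing API rather than a peek at ChatGPT's internal settings.

    # Minimal sketch: the same prompt at a low vs high temperature.
    # Assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    for temperature in (0.2, 1.5):
        response = client.chat.completions.create(
            model="gpt-4",  # example model name
            messages=[{"role": "user", "content": "Describe a thunderstorm in one sentence."}],
            temperature=temperature,  # low = focused and factual, high = looser and more creative
        )
        print(temperature, "->", response.choices[0].message.content)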

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?

Mark Zuckerberg thinks the Meta Quest 3 is better than Vision Pro – and he’s got a point

Mark Zuckerberg has tried the Apple Vision Pro, and he wants you to know that the Meta Quest 3 is “the better product, period”. This is unsurprising given that his company makes the Quest 3, but having gone through all of his arguments he does have a point – in many respects, the Quest 3 is better than Apple’s high-end model.

In his video posted to Instagram, Zuckerberg starts by highlighting the fact that the Quest 3 offers a more impressive interactive software library than the Vision Pro, and right now that is definitely the case. Yes, the Vision Pro has Fruit Ninja, some other spatial apps (as Apple calls them), and plenty of ported-over iPad apps, but nothing on the Vision Pro comes close to matching the quality or immersion levels of Asgard’s Wrath 2, Walkabout Mini Golf, Resident Evil 4 VR, The Light Brigade, or any of the many other amazing Quest 3 VR games.

It also lacks fitness apps. I’m currently testing some for a VR fitness experiment (look out for the results in March) and I’ve fallen in love with working out with my Quest 3 in apps like Supernatural. The Vision Pro not only doesn't offer these kinds of experiences, but its design isn’t suited to them either – the hanging cable could get in the way, and the fabric facial interface would get drenched in sweat; a silicone facial interface is a must-have based on my experience.

The only software area where the Vision Pro takes the lead is video. The Quest platform is badly lacking when it comes to offering the best streaming services in VR – only having YouTube and Xbox Cloud Gaming – and it’s unclear if or when this will change. I asked Meta if it has plans to bring more streaming services to Quest, and I was told by a representative that it has “no additional information to share at this time.” 

Zuckerberg also highlights some design issues. The Vision Pro is heavier than the Quest 3, and if you use the cool-looking Solo Knit Band you won’t get the best comfort or support – instead, most Vision Pro testers recommend the Dual-Loop band, which more closely matches the design of the Quest 3’s default band because it offers over-the-head support.

You also can’t wear glasses with the Vision Pro; instead, you need to buy expensive lens inserts. On the Quest 3, you can just extend the headset away from your face using a slider on the facial interface and make room for your specs with no problem.

The Vision Pro being worn with the Dual-Loop band (Image credit: Future)

Then there’s the lack of controllers. On the Vision Pro, unless you’re playing a game that supports a gamepad, you have to rely solely on hand tracking. I haven’t used the Vision Pro, but every account I’ve read or heard – including Zuckerberg’s – has made it clear that hand tracking isn’t any more reliable on the Vision Pro than it is on Quest, with the general sentiment being that it works seamlessly 95% of the time – which is exactly my experience on the Quest 3.

Controllers are less immersive, but they do help to improve precision – making activities like VR typing a lot more reliable without needing a real keyboard. What’s more, considering most VR and MR software out there right now is designed for controllers, developers have told us it would be a lot easier to port their creations to the Vision Pro if it came with them.

Lastly, there’s the value. Every Meta Quest 3 and Apple Vision Pro comparison will bring up price, so we won’t labor the point, but there’s a lot to be said for the fact that the Meta headset costs just $499.99 / £479.99 / AU$799.99 rather than $3,499 (the Vision Pro isn’t yet available outside the US). Without a doubt, the Quest 3 gives you way better bang for your buck.

The Vision Pro could be improved if it came with controllers (Image credit: Future)

Vision Pro: not down or out 

That said, while Zuckerberg makes some solid arguments, he does gloss over the areas where the Vision Pro takes the lead, and even exaggerates how much better the Quest 3 is in some respects – and these aren’t small details either.

The first is mixed reality. Compared to the Meta Quest Pro, the Vision Pro is leaps and bounds ahead, though reports from people who have also tried the Quest 3 suggest the Vision Pro isn’t as big an improvement over Meta’s newer headset – and in some ways it’s worse, as Zuckerberg mentions.

To illustrate the Quest 3’s passthrough quality, Zuckerberg reveals that the video of him comparing the two headsets was recorded using a Quest 3, and it looks pretty good – though having used the headset, I can tell you this isn’t representative of what passthrough actually looks like. Probably due to how the footage is processed, recordings of mixed reality on Quest always look more vibrant and less grainy than the live experience.

Based on less biased accounts from people who have used both the Quest 3 and Vision Pro it sounds like the live passthrough feed on Apple’s headset is generally a bit less grainy – though still not perfect – but it does have way worse motion blur when you move your head.

Mixed reality has its pros and cons on both headsets (Image credit: Apple)

Zuckerberg additionally takes aim at the Vision Pro’s displays, pointing out that they seem less bright than the Quest 3’s LCDs and that they offer a narrower field of view. Both of these points are right, but I feel he hasn’t given enough credit to two important details.

While he does admit the Vision Pro offers a higher resolution, he does so very briefly. The Vision Pro’s dual 3,680 x 3,140-pixel displays will offer a much crisper experience than the Quest 3’s dual 2,064 x 2,208-pixel screens. Considering you use this screen for everything, the advantage of better visuals can’t be overstated – and a higher pixel density should also make the Vision Pro more immersive, as you’ll experience less of a screen door effect (where you see the lines between pixels because the display is so close to your eyes).
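
To put those resolution figures in perspective, here's a quick back-of-the-envelope calculation using the per-eye numbers quoted above:

    # Per-eye pixel counts from the figures quoted in this article.
    vision_pro = 3_680 * 3_140   # 11,555,200 pixels per eye
    quest_3 = 2_064 * 2_208      # 4,557,312 pixels per eye
    print(round(vision_pro / quest_3, 2))  # ~2.54 – roughly two and a half times as many pixels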

Zuckerberg also ignores the fact that the Vision Pro’s screens are OLEDs. Yes, this means they’re less vibrant, but the upshot is that they offer much better contrast for blacks and dark colors. Better contrast has been shown to improve a user’s immersion in VR, based on experiments by Meta and others, so I wouldn’t be surprised if the next Quest headset also incorporated OLEDs – rumors suggest it will, and I seriously hope it does.

Lastly, there’s eye tracking, which is something the Quest 3 lacks completely. I don’t think the absence of eye tracking is actually a problem, but that deserves its own article.

This prototype headset showed me how important great contrast is (Image credit: Future)

Regardless of whether you agree with Mark Zuckerberg’s arguments or not, one thing that’s clear from the video is that the Vision Pro has got the Meta CEO fired up.

He ends his video by stating his desire for the Quest 3 and Meta’s open model (as opposed to Apple’s closed-off walled garden, where you can only use the headset the way Apple intends) to “win out again” like Windows did in the computing space.

But we’ll have to wait and see how it pans out. As Zuckerberg himself admits, “the future is not yet written,” and only time will tell whether Apple, Meta, or some new player in the game (like Samsung with its upcoming XR headset) will come out on top in the long run.
