Sam Altman hints at the future of AI and GPT-5 – and big things are coming

OpenAI CEO Sam Altman has revealed what the future might hold for ChatGPT, the artificial intelligence (AI) chatbot that's taken the world by storm, in a wide-ranging interview. Speaking to Lex Fridman, an MIT artificial intelligence researcher and podcaster, Altman talked about plans for GPT-4 and GPT-5, as well as his very temporary ousting as CEO and Elon Musk’s ongoing lawsuit.

Now, I say GPT-5, but that’s only an unofficial name: the model is still in development, and even Altman himself alludes to not knowing conclusively what it’ll end up being called. He does give this somewhat cryptic quote about the nature of OpenAI’s upcoming release: 

“… what’s the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It’s all of these things together.”

He then follows that by stating that he and his colleagues think that what OpenAI does really well is “multiply 200 medium-sized things together into one giant thing.” He specifically confirms to Fridman that this applies “Especially on the technical side.” When Altman and Fridman talk about the leap from GPT-4 to GPT-5, Altman does say he’s excited to see the next GPT iteration “be smarter.” 

What's on the horizon for OpenAI

Man holding a phone displaying ChatGPT, the prototype artificial intelligence chatbot developed by OpenAI

(Image credit: Shutterstock/R Photography Background)

Fridman asks Altman directly to “blink twice” if we can expect GPT-5 this year, which Altman refuses to do. Instead, he explains that OpenAI will be releasing other important things first, specifically the new model (currently unnamed) that Altman spoke about so poetically. This piqued my interest, and I wonder if it’s related to anything we’ve seen (and tried) so far, or something new altogether. I would recommend watching the entire interview, as it’s an interesting glimpse into the mind of one of the people leading the charge and shaping what the next generation of technology, specifically ChatGPT, will look like. 

Overall, we can’t conclude much, and this interview suggests that what OpenAI is working on is pretty important and kept tightly under wraps – and that Altman likes speaking in riddles. That’s somewhat amusing, but I think people would like to know how large the advancement in AI we’re about to see is. I think Altman does have some awareness of people’s anxieties about the fact that we are very much in an era of a widespread AI revolution, and he does at least recognise that society needs time to adapt and process the introduction of a technological force like AI. 

He seems like he’s aware on some level of the potential that AI and the very concept of artificial general intelligence (AGI) will probably overhaul almost every aspect of our lives and the world, and that gives me some reassurance. Altman and OpenAI want our attention and right now, they’ve got it – and it sounds like they’re cooking up something very special to keep it. 

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the free Gemini app on Android, the Google app on iOS, and the Gemini website.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.
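As a quick sanity check on that math, here's a minimal Python sketch (the prices are the figures quoted above; the dictionary and variable names are my own):

```python
# Monthly prices quoted in the article, per currency.
ai_premium_plan = {"USD": 19.99, "GBP": 18.99, "AUD": 32.99}  # Google One AI Premium (2TB + Gemini Advanced)
two_tb_plan = {"USD": 9.99, "GBP": 7.99, "AUD": 12.49}        # Google One 2TB storage only

# Effective extra cost of the AI features for an existing 2TB subscriber.
effective_ai_cost = {
    currency: round(ai_premium_plan[currency] - two_tb_plan[currency], 2)
    for currency in ai_premium_plan
}
print(effective_ai_cost)  # {'USD': 10.0, 'GBP': 11.0, 'AUD': 20.5}
```

Which lines up with the roughly $10 / £11 / AU$20 figure above (the Australian difference is actually AU$20.50).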

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


New Windows 11 update fixes a whole lot of things – but breaks some as well

Windows 11 users who are installing the latest update are having some serious issues, going by many accounts.

Windows Latest reported that there are bugs in the preview update – so yes, this is an optional update, not something you have to install – that are causing major problems with Windows 11’s interface in one way or another.

For starters, with patch KB5034204, some users are apparently experiencing a glitch where File Explorer – the folders and files on the desktop – is becoming unresponsive. This can lead to the whole desktop going blank (all folders and icons disappearing) for a while, before returning to normal, we’re told. Others are reporting File Explorer crashing while shutting down their PC.

Windows Latest further details reports of icons like the Recycle Bin vanishing, taskbar icons not working, and even the Windows 11 taskbar itself going missing, as complained about on Reddit (plus this is a problem the tech site encountered itself).

The other issue folks seem to be experiencing with KB5034204 is that the update fails to install. There are complaints on Microsoft’s Feedback Hub that the installation process reaches 100%, so looks like it has finished, but then crashes out with a message mentioning missing files. Stop code errors (like ‘0x8007000d’) are also in evidence with these installation mishaps.


Analysis: Out of the frying pan…

Clearly, we need to take into account that this is a preview update, meaning that it’s still officially in testing, and optional patches like this aren’t installed unless you specifically ask for them. As with any pre-release software, you can expect problems, in other words.

Even so, you might want an optional update because it provides a fix for a bug you’re suffering with, and in the case of KB5034204, it resolves a couple of notable issues disrupting video chats and streaming audio (and a misbehaving Start menu, too, plus more besides).

However, in this case, you might swap one problem for another when installing this optional update, and possibly a worse glitch (the wonkiness caused with the Windows 11 interface outlined above seems pretty nasty).

That said, there is a solution (kind of) for the missing taskbar at least, which is to press the Windows key + X – apparently, that sees the bar come back, but its behavior may still be odd going by the various reports around this particular bug.

It’s disappointing to see installation failures popping up again with this preview update, mainly because this was a flaw in evidence with the January cumulative update. It seems that Microsoft hasn’t resolved this yet, then, and the fear is that it might still be present in the February update for Windows 11 (which this preview is an advance version of, as you may realize).


Windows 11’s Snipping Tool could get new powers for taking screenshots – but is Microsoft in danger of overcomplicating things?

Windows 11’s Snipping Tool is set to get a handy feature to embellish screenshots, or at least it seems that way.

Leaker PhantomOfEarth discovered the new abilities in the app by tinkering with bits and pieces in version 11.2312.33.0 of Snipping Tool, showing in a post on X that the functionality allows the user to draw shapes (and fill them with color) and lines.


That means you can highlight parts of screenshots by pointing with arrows – for an instructional step-by-step tutorial you’ve made with screen grabs, for example – or add different shapes as needed.

Note that this is not in testing yet, because as noted, the leaker needed to play with the app’s configuration to get it going. However, the hidden functionality does seem to be working fine, more or less, so it’s likely that a rollout to Windows 11 testers isn’t far off.


Analysis: A feature drive with core apps

While you could furnish your screenshots from Snipping Tool with these kinds of extras simply by opening the image in Paint, it’s handy to have this feature on tap to directly work on a grab without needing to go to a second app.

Building out some of the basic Windows 11 apps is very much becoming a theme for Microsoft of late. For example, recently Snipping Tool has been testing a ‘combined capture bar’ (for easily switching between capturing screenshots or video clips), and the ability to lift text straight from screenshots, which is really nifty in some scenarios.

Elsewhere, core apps like Paint and Notepad are getting an infusion of AI (with Cocreator and a rumored Cowriter addition), and there’s been a lot of work in other respects with Notepad such as adding tabs.

We think these initiatives are a good line of attack for Microsoft, although there are always folks who believe that simple apps like Snipping Tool or Notepad should be kept basic, and advanced functionality is in danger of cluttering up these streamlined utilities. We get where that sentiment comes from, but we don’t think Microsoft is pushing those boundaries yet.

Via Windows Central


6 new things we’ve learned about the Apple Vision Pro as its first video ad lands

We've had quite the wait for the Apple Vision Pro, considering it was unveiled back in June at Apple's annual WWDC event. Yesterday we finally got the news that the Vision Pro will be going on sale on Friday, February 2, with preorders open on Friday, January 19 – and some other new bits of information have now emerged, alongside its first video ad (below).

As Apple goes into full sales mode for this pricey mixed reality headset, it's answering some of the remaining questions we had about the device, and giving us a better idea of what it's capable of. Considering one of these will cost you $3,499 (about £2,750 / AU$5,225) and up, you're no doubt going to want all of the details you can get.

Here at TechRadar we've already had some hands-on time with the Vision Pro, and checked out how 3D spatial videos will look on it (which got a firm thumbs up). Here's what else we've found out about the Vision Pro over the last 24 hours.

1. Apple thinks it deserves to be in a sci-fi movie

Take a look at this brand new advert for the Apple Vision Pro and see how many famous movies you can name. There's a definite sci-fi angle here, with films like Back to the Future and Star Wars included, and Apple clearly wants to emphasize the futuristic nature of the device (and make strapping something to your face seem cool rather than nerdy).

If you've got a good memory then you might remember that one of the first adverts for the iPhone also made use of short clips cut from a multitude of films, featuring stars such as Marilyn Monroe, Michael Douglas, and Steve McQueen. Some 16 years on, Apple is once again using the power of the movies to push a next-gen piece of hardware.

2. The battery won't last for the whole of Oppenheimer

Apple Vision Pro

(Image credit: Apple)

Speaking of movies, you're going to need a recharge if you want to watch all of Oppenheimer on the Apple Vision Pro. Christopher Nolan's epic film runs for three hours and one minute, whereas the Vision Pro product page (via MacRumors) puts battery life at 2.5 hours for watching 2D videos.

That's when you're watching a video in the Apple TV app, and in one of the virtual environments that the Vision Pro is able to conjure up. Interestingly, the product page text saying that the device could run indefinitely as long as it was plugged into a power source has now been quietly removed.

3. The software is still a work in progress

Apple Vision Pro on a person's head

Preorders for the Vision Pro open this month (Image credit: Apple)

Considering the high price of the Apple Vision Pro, and talk of limited availability, this doesn't really feel like a mainstream device that Apple is expecting everyone to go out and buy. It's certainly no iPhone or Apple Watch – though a cheaper Vision Pro, rumored to be in the pipeline, could certainly change that dynamic somewhat.

With that in mind, the software still seems to be a work in progress. As 9to5Mac spotted in the official Vision Pro press release, the Persona feature is going to have a beta label attached for the time being – that's where you're represented in video calls by a 3D digital avatar that doesn't have a bulky mixed reality headset strapped on.

4. Here's what you'll be getting in the box

Apple Vision Pro

(Image credit: Apple)

As per the official press release from Apple, if you put down the money for a Vision Pro then you'll get two different bands to choose from and wrap around your head: they are the Solo Knit Band and the Dual Loop Band, though it's not immediately clear what the differences are between them.

Also in the box we've got a light seal, two light seal cushions, what's described as an “Apple Vision Pro Cover” for the front of the headset, an external battery pack, a USB-C charging cable, a USB-C power adapter, and the accessory that we've all been wanting to see included – an official Apple polishing cloth.

5. Apple could release an app to help you fit the headset

Two hands holding the Apple Vision Pro headset

(Image credit: Apple)

When it comes to fitting the Apple Vision Pro snugly to your head, we think that Apple might encourage buyers to head to a physical store so that they can be helped out by an expert. However, it would seem that Apple also has plans for making sure you get the best possible fit at home.

As spotted by Patently Apple, a new patent filed by Apple mentions a “fit guidance” system inside an iPhone app. It will apparently work with “head-mountable devices” – very much like the Vision Pro – and looks designed to ensure that the user experience isn't spoiled by having the headset badly fitted.

6. There'll be plenty of content to watch

A person views an image on a virtual screen while wearing an Apple Vision Pro headset.

(Image credit: Apple)

Another little nugget from the Apple Vision Pro press release is that users will be able to access “more than 150 3D titles with incredible depth”, all through the Apple TV app. Apple is also introducing a new Immersive Video format, which promises 180-degree, three-dimensional videos in 8K quality.

This 3D video could end up being one of the most compelling reasons to buy an Apple Vision Pro – we were certainly impressed when we got to try it out for ourselves, and you can even record your own spatial video for playing back on the headset if you've got an iPhone 15 Pro or an iPhone 15 Pro Max.


9 things announced at the Meta Connect 2023 event

Meta’s Connect developer conferences have been fairly humble these past couple of years as the company shifted to online events due to the pandemic. But for 2023, the tech giant returned to an in-person event and took some big swings.

During the keynote, we received a ton of new information regarding the Meta Quest 3 VR headset, Meta's generative AI projects, and the next generation of Ray-Ban smart glasses.

The star of the show was undoubtedly the Quest 3. It features improved hardware running on the Snapdragon XR2 Gen 2 SoC (system on a chip), an in-depth mapping upgrade, and greater support for video games. The reveal was certainly impressive. However, as the conference went on, it felt like the spotlight shifted to all the AI announcements.

We’ve known some of the AI models Meta has been developing for a while now, like its revamped chatbot to take on GPT-4. But as it turns out there was a lot more going on behind the scenes as the company showed off a slew of AI features coming to its messaging apps. 

There is a lot to cover, so if you want to know about a specific topic, skip ahead to the relevant section below. Or you can read the whole thing as it happened.

Virtual Reality

1. Meta Quest 3

Meta Connect 2023

(Image credit: Meta)
  • $499.99
  • Available for pre-order
  • Launches October 10

We finally get a look at the Meta Quest 3 VR headset after months of leaks. Compared to the Quest 2, this new model is 40 percent thinner thanks to the pancake lenses allowing for a slimmer design, according to company CEO Mark Zuckerberg. The display outputs 2,064 x 2,208 pixels per eye, a resolution Meta bills as 4K-class across both eyes. The speakers are getting an upgrade too. They now have a “40 percent louder volume range than Meta Quest 2”. 

All this will be powered by the Qualcomm Snapdragon XR2 Gen 2 chipset mentioned earlier, which is said to be capable of twice “the graphical performance.”

Also, the headset is paired up with two Touch Plus Controllers now boasting better haptic feedback for more immersive gaming. The Quest 3 is currently available for pre-order on Meta’s official website. Prices start at $499.99 for the 128GB model while the 512GB headset is $649.99. It ships out on October 10.

2. Better gaming

Xbox Game Pass on Quest 3

(Image credit: Meta)
  • Xbox Cloud Gaming coming in December
  • No longer need a PC
  • Some titles will be in mixed reality

A large portion of Zuckerberg’s presentation was dedicated to gaming as Meta wants gamers to adopt its headset for a fresh, new experience. To enable this, Xbox Cloud Gaming will be accessible on the Quest 3 this December. This means you can play Halo Infinite or Minecraft on an immersive virtual screen. And the best part is you no longer need to connect to a gaming PC to run your favorite titles. Thanks to the Snapdragon chip, the headset is now powerful enough to run the latest games.

For greater interactivity, some titles like BAM! can be played on a table in your house through a mixed reality environment. The Quest 3 will display the board game in front of you while you still see the room around you. 

3. Immersive environments

A person playing with VR Lego while wearing the Meta Quest 3

(Image credit: Meta)
  • Will automatically map your room
  • Virtual objects appear
  • Can switch between immersive and blended spaces

Mixed reality is made possible due to the Quest 3’s “full-color passthrough capability and a depth sensor”. The device will scan a room, taking note of the objects in it in order to set up a mixed-reality space. This is all done automatically, by the way. Through this, virtual objects will appear in your house. 

Besides video games, the mixed reality spaces can be used to establish your own immersive workout or meditation area. For basketball or MMA fans, you can get ring-side seats where you can watch your favorite teams or fighters duke it out as if you’re there. Double-tapping the headset on the side changes the view from an immersive perspective to a wide-angle shot where you can see everything.

Generative AI

4. Meta AI assistant

Ray-Ban Meta Smart Glasses

(Image credit: Meta)
  • Powered by Bing Chat
  • Will be available on WhatsApp, Instagram, and more
  • Can access the internet

Mark Zuckerberg revealed Meta has entered a partnership with Microsoft allowing the former to use Bing Chat as the basis for their new in-app assistant called Meta AI. It works in much the same way. You can ask quick questions or engage with it in some light conversation.

What’s interesting is it’ll be available on Facebook, Instagram, Messenger, and WhatsApp. It will have access to the internet for displaying real-time information. Enabling this can backfire as it may cause the AI to hallucinate or come up with false information. To combat this, Meta states it carefully trained its AI to stay accurate.

It’s unknown when the assistant will launch officially, although we did ask. We should mention it will be available in beta on the upcoming second-generation Ray-Ban smart glasses, which launch in October.

5. Multiple personalities

Snoop Dogg Dungeon Master

(Image credit: Meta)
  • AI Assistant can have a persona
  • These personas can offer specific advice
  • Or be a source of entertainment

It seems Meta AI will have split personalities as it'll be possible to have it emulate a certain persona. Each one is based on a famous public figure. For example, Victor the fitness coach is based on basketball star Dwyane Wade. Seemingly, each persona will appear with a video of the celebrity in the corner. The video is connected to the AI and will emote according to the text. 

The personas do get a little wacky. Rapper Snoop Dogg gave his likeness to be the Dungeon Master model guiding people through a choose-your-own-adventure text game. Others have a more practical use like the chef AI giving cooking advice.

6. Generating images

Meta generative AI

(Image credit: Meta)
  • Emu can generate high quality images
  • Can be accessed through Instagram and WhatsApp
  • Can generate stickers in three seconds

Emu, or Expressive Media Universe, is Meta’s new image generation engine. Like others of its kind, Emu is capable of pumping out high-quality images matching a specific text prompt. However, it will do so in five seconds flat – or so Mark Zuckerberg claims. What’s unique about this engine is it will power content generation on Meta’s other apps like Instagram and WhatsApp.

On the two platforms, Emu will allow users to create their own stickers for group chats in about three seconds. Generating images will require you to enter a forward slash and then a prompt such as “/image a sailboat with infinite sails.” This technology is being used on Instagram to generate unique backgrounds and new filters.

7. AI Studio

Meta Connect 2023

(Image credit: Meta)
  • Users will be able to make their own AI
  • Sandbox kit will make it easy to create models
  • Sandbox launches next year

Meta is opening the door for people to come in and make their own AI via the AI Studio platform. Within the coming weeks, developers can get their hands on a new API that they can use to build their very own artificial personality. Non-programmers will get the opportunity to do the same through a company-provided sandbox. However, it’ll be a while until it sees the light of day as it won’t roll out until early 2024. 

The tech giant explains that with this tech you can create your own NPCs (non-player characters) for Horizon Worlds.

Smart glasses

8. Next-gen Ray-Bans

RayBan Meta Smart Glasses jumping out of their case

(Image credit: Meta)
  • $299
  • Available in 15 countries
  • Launches October 17

Near the end of his presentation, Mark Zuckerberg announced the next generation of Ray-Ban smart glasses, now sporting better visual quality, better audio, and a more lightweight body. On the corners of the frames will be two 12MP ultra wide camera lenses capable of recording 1080p video. It has 32GB of storage allowing you to store over 100 videos or 500 photos, according to Meta. 

What’s more is it comes with a snazzy-looking leather charging case similar to the kind you get with a normal pair of Ray-Bans. With the case, the Ray-Ban smart glasses can last up to 36 hours on a single charge.

It’s currently available for pre-order for $299 in either Wayfarer brown or Headliner black. It launches October 17 in 15 countries, which Meta says “include the US, Canada, Australia, and throughout Europe.” 

9. Livestreaming

Ray-Ban Meta Smart Glasses

(Image credit: Meta)
  • Can connect to Instagram for livestreaming
  • Touch controls activate certain features

Meta is giving its next-gen smart glasses the ability to livestream directly on Instagram and Facebook. In the demonstration, a new glasses icon appears in the app’s video recording section. Turning on the icon and double-tapping the side of the glasses connects the device to the app so viewers can see what you’re seeing. 

Additionally, tapping and holding the side of the frame lets you hear the latest comments read out loud through the glasses’ internal speakers. That way, streamers can stay in touch with their community.

This feature will be available when the updated Ray-Bans launch next month.

And that’s pretty much the entire event. As you can see, it was stacked. If you want to know more, be sure to check out TechRadar’s hands-on review of the Ray-Ban smart glasses.  

You might also like

TechRadar – All the latest technology news

Read More

Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There is a lot to the Alexa update besides the LLM, as the assistant will also be receiving many new features. Below are the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s Soundcloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like when the update launches next year. You can hear the new voice enunciate much more clearly, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail. 

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day plus you won’t need a smartphone to configure them. It can all be done on the fly. 

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, which allows people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more] than 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein. 

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.


Meta Connect 2023: 4 things we expect to see at the Meta Quest 3 launch event

Meta has named the date for Meta Connect 2023 – September 27 – and we know that the highlight of the company's annual hardware and software showcase will be the full unveiling of the Meta Quest 3 headset.

Meta Connect is something of a mixed bag, in which we’ll not only find out about the new products that Meta is releasing in the near future, such as the Quest 3, but also its plans for stuff we might not get our hands on for the best part of a decade, if not longer.

Here are four announcements and updates that we expect to see at Meta Connect 2023 – as well as one announcement that we think Meta won’t be making.

More Meta Quest 3 details 

This isn’t much of a prediction, as Meta has confirmed that it'll be officially unveiling the Quest 3 during Meta Connect 2023. 

Meta Quest 3 floating next to its two controllers, they're all facing towards us, and are clad in white plastic

(Image credit: Meta )

We already know a fair amount about the Quest 3 – it’s a standalone VR headset that will succeed the Quest 2, it’s Meta’s “most powerful headset” yet, and it will start at $499 / £499 / AU$829. But we don’t have many specific details about its specs, release date, and what different Quest 3 models will offer – Meta’s “starts at” pricing suggests that more expensive upgraded versions of the Quest 3 will be available, and while we expect they’ll just be different storage options (as was the case with the Quest 2), we’ll have to see what Meta unveils.

We also don’t know if Meta will show off any new software to take advantage of the Quest 3’s specs. It may announce new VR games and apps that will take advantage of the Quest 3's improved performance over its predecessors, as well as mixed reality software that will be able to use the Quest 3’s improved color passthrough (the Quest Pro’s color-passthrough was okay but very grainy, and according to people in the know the Quest 3’s passthrough is significantly better).

Microsoft Office on Quest 

Speaking of Quest 3 software announcements, we hope that Meta and Microsoft will finally announce when native Office apps will be available on the Quest platform. During last year’s Meta Connect 2022 event, the companies announced that Office programs (like Word and Excel) were coming to the Quest headsets, but a year later they've yet to materialize – the only way to access them is via a virtual desktop app that’s synced with your real-world PC.

Microsoft 365 app logos including Teams, Word and Outlook surrounding the CoPilot hexagon

Maybe the AI Copilot will come to Quest as well (Image credit: Microsoft/GTS)

The companies also said that Xbox game streaming would be coming to VR, but details have been scarce since the announcement. Meta Connect 2023 would be the perfect time to finally give us a release date – and hopefully one that’s in 2023, as we’re tired of waiting.

Given that Apple is set to launch its Apple Vision Pro headset in early 2024, Meta only has a few months left to make its platform look as strong as possible before the new rival enters the space. This Apple headset will likely include VR versions of Apple’s catalog of productivity apps such as Pages and Keynote – so Meta would be smart to get Microsoft’s software onto its systems ASAP.

The metaverse and AI 

Horizon Worlds, Meta’s metaverse social media platform, usually gets a shout-out during Connect events, though the announcements are often a little lackluster – last year it was the news that avatars would be getting legs. And to make matters worse, we just hopped into Horizon Worlds, and as of August 14, 2023 our avatars are still legless…

Horizon Worlds doesn’t need legs; it needs reasons for people to use it. Meta has been steadily building up a catalog of VR experiences in the app and making various graphical improvements to it, but there’s still little reason to use Horizon Worlds over other VR software. Hopefully, at Connect 2023 Meta will finally give us a reason to try its metaverse out and not walk… sorry, bounce, away immediately.

smartphone screen with large shadow giving the feeling of floating on top of the background.

ChatGPT has stolen the metaverse’s thunder (Image credit: Diego Thomazini via Shutterstock)

One way in which Meta could try to reignite interest is to combine AI with the metaverse, bringing together two of tech’s biggest current subjects. Meta has been hard at work developing AI following the success of ChatGPT and other platforms, and it may want to harness those efforts to try and make its platform more appealing – perhaps by using AI to create bigger and better Horizon Worlds experiences, or to add NPC bots to make the service seem a bit more popular than it is.

With a smartphone version of Horizon Worlds also reportedly on the way for those who don't have a VR headset, some kind of AI integration could give more people a reason to try out the app.

AR hardware plans 

At Connect 2021 and 2022 Meta mentioned augmented reality (AR) tech and showed off things it was working on, but it was all in-development tech and prototypes rather than something that we regular folks will ever get our hands on. We expect this might change at Meta Connect 2023.

Both the 2021 and 2022 events came with teases of hardware that would come in the following year; in 2021 this was Project Cambria (aka the Quest Pro) and in 2022 Meta teased a new consumer-friendly VR headset (the Quest 3). In 2023 we expect it’ll do the same but for AR hardware rather than VR.

model wearing facebook  Ray-Ban Stories smart glasses outdoors

Could Meta announce the Ray-Ban Stories 2 at Meta Connect? (Image credit: Facebook / Ray-Ban)

One big reason is that we don’t think Meta is ready to tease a new VR headset yet (more on that below), but even if it did we think an AR announcement makes more sense.

As we mentioned, Meta has been publicly talking about its AR plans for some time – it can only kick the can down the road so many times before we get tired of it teasing AR tech we can’t use. Additionally, with Apple preferring to talk about augmented reality rather than virtual reality when discussing its Vision Pro headset, Meta may want to release its own AR tech to try and capture some of the renewed interest in the space that the Vision Pro has created.

If Meta does tease some kind of AR glasses, it’ll be interesting to see if they’re created in partnership with Ray-Ban – like its Ray-Ban Stories glasses – or if they’re a completely Meta product. We’ll have to wait and see what happens on September 27.

No Meta Quest Pro 2 teaser 

If you’re hoping Meta will also tease a ‘Meta Quest Pro 2’ during the event, we wouldn’t recommend holding your breath. While this isn’t impossible, we feel Meta will keep its VR focus on the Quest 3 during the event (outside of showing us any prototypes for non-consumer products, as it’s done in the past).

Our reasoning here is twofold. Firstly, teasing another VR headset just after the Quest 3 releases could put an instant dampener on any excitement people have for the new headset – it won’t get its time in the spotlight, which could hurt sales. 

Secondly, we don’t think Meta is ready to commit to launching a new VR headset in 2024 – which is when a Meta Quest Pro 2 teased at Meta Connect 2023 would arrive based on Meta’s usual tease-release cadence.

The Meta Quest Pro on its charging pad on a desk, in front of a window with the curtain closed

The Meta Quest Pro likely won’t get a sequel for a while (Image credit: Meta)

That’s because Meta reportedly canceled an in-development Quest headset prototype recently that leakers have said was the Quest Pro 2. While Meta has argued that the headset wasn’t the Quest Pro 2, and was instead an undesignated prototype, based on the leaks it sounds like this headset would have become the Pro 2 had development continued and had it been given a proper name. Whatever the project was or wasn’t called, Meta hasn’t disputed that it was canceled, and it will take time for the company to develop a replacement prototype – suggesting a Quest Pro 2 is too far from ready for Meta to tease anything.


6 things we’ve learned about the Apple Vision Pro from the visionOS beta

Apple has launched its first-ever beta for visionOS – the operating system the upcoming Apple Vision Pro mixed-reality headset will use – giving us a glimpse at what its new gadget should be capable of at launch.

As explained in the Apple Developer blog post making the announcement, the launch of the visionOS SDK will give developers the chance to start working on spatial computing apps for the Vision Pro. It will also help developers understand the Vision Pro's capabilities. Even better, the SDK provides a visionOS simulator so that developers can test out their 3D interface in a number of room layouts with various lighting conditions. And those tests have already revealed a number of details about what the Vision Pro will and won’t be able to do at launch.

This is only the first beta, and users are accessing the simulator via a PC rather than a headset – so expect some changes to be made to visionOS before it officially launches. With that said, here’s what we’ve learned so far about the Apple Vision Pro from the visionOS beta.

1. Visual Search is coming 

Visual Search is basically the Vision Pro’s version of Google Lens or the Visual Lookup feature found on the best iPhones and best iPads (via MacRumors).

A man wearing the Apple Vision Pro headset and pressing its shutter button to take a photo

You can use the Vision Pro to scan real-world objects and text (Image credit: Apple)

According to info found in the visionOS beta, Vision Pro headset wearers will be able to use the headset’s cameras to find information about an item they scan and to interact with real-world text. This includes copying and pasting the text into Vision Pro apps, translating it between 17 supported languages, and converting units (like grams to ounces, or meters to feet). This sounds pretty neat, but unless you’re wearing your Vision Pro headset all the time while traveling abroad or baking with a recipe we aren’t too sure how often you’ll rely on these features.

2. The OS is intuitive 

While not the flashiest feature, intuitive OS design and window management in 3D space will be crucial for the Vision Pro. The idea of having loads of software windows floating around us seems neat – it'd be like we’re a real-world Tony Stark – but if it's a pain to position them how we want, it’ll be easier to stick with a traditional PC and monitor.

Thankfully, it looks like it’s super easy to move, resize, and hide app windows in Vision Pro, as shown off by @Lascorbe on Twitter.


The video also shows that you aren’t moving the apps on a fixed cylinder around you; you can take full advantage of the 3D space around you by bringing some windows closer while moving others further away – and even stacking them in front of each other if you want. While dragging a window it’ll turn translucent so you can see what’s behind it as you decide where to position it.

3. Porting iOS to visionOS is easy 

According to developers (like @lydakisg on Twitter) who have started working with visionOS, it’s incredibly easy to port iOS apps over to the new system – so many of the best iPhone apps could be available on the Vision Pro at launch. 


This is great news for people who were worried that the Vision Pro might not have an app library comparable to the Quest Store found on Meta’s VR headsets like the Meta Quest Pro.

The only downside is that ported iOS apps appear in a floating window, as they would on a Mac, rather than being fully fledged immersive experiences. So while your favorite apps can easily appear on the Vision Pro, they might not take advantage of its new tech – at least not without developers spending more time on a dedicated visionOS version.

4. Battery percentages return 

Battery percentages are a sore spot for many iPhone users. When the iPhone X was released over five years ago it changed the battery status symbol – the percentage disappeared and only a steadily emptying symbol of a battery remained. While this symbol does give a visual indication of how much charge your phone has left, it’s not always as clear as a number; as such, it's been a constant request from iPhone users for Apple to bring back battery charge percentages – which it did with iOS 16 when the iPhone 14 launched.

A woman wears the Vision pro in front of a menu showing a battery icon that has no number inside of it

The Vision Pro trailer shows a battery icon with no percentage (Image credit: Apple)

Unfortunately, a brief section of Apple’s Vision Pro intro video showed us that the Vision Pro might make the iPhone X’s mistake by using a battery status symbol without a number.


Thankfully for fans of Apple’s more accurate battery symbol, users like @aaronp613 on Twitter have found that battery percentages do show up on Vision Pro. It’s not a massive win, but an important one for a lot of people. 

5. Apps can use unique control schemes 

The visionOS beta doesn’t just give developers tools to create their own Vision Pro apps and port their existing iOS software to the system; it also provides details, sample code, and videos showing off the kinds of projects they could create for the upcoming Apple hardware.

One such game is Happy Beam, a video of which has been shared on Twitter by @SwiftlyAlex.


Happy Beam doesn’t look super interesting in and of itself – one Twitter commenter noted it looks like the sort of AR game you could play on the Nintendo 3DS – but it shows that the Vision Pro is able to recognize different hand gestures (like forming a heart) and translate them to different in-game controls. 

We’ll have to wait and see how developers use these capabilities in their creations, but we can already imagine a few possible implementations. For example, rather than using button prompts you could make a scissors gesture with your hand to cut images and text from one document, then clap your hands to paste it in a new spot.

It also appears that Apple is conscious that its headset should remain accessible. As shown in the Happy Beam demo, there are alternative controls that allow Vision Pro users to rely on simpler gestures or controllers to play the game – with it serving as a reminder to other developers to consider similar alternative control schemes in their software.

This gameplay video shared by @wilburwongdev on YouTube shows how the game changes when not using your hands.

6. Fitness apps are discouraged

One last tidbit was spotted not in the visionOS beta itself but in the developer guidelines for the operating system. In those guidelines, Apple says app makers should “avoid encouraging people to move too much” while immersed in the headset. The wording is a little vague, but it seems as if Apple is against the development of fitness apps for the Vision Pro at this time.

One notable omission from the Vision Pro reveal trailer was that there were no fitness apps featured. Many people (some of our writers included) use VR headsets for working out, or even just getting a bit active. There’s Beat Saber and Pistol Whip for more gamified workouts, or FitXR and Litesport for more traditional fitness options. These developer notes make the omission seem more intentional, suggesting fitness and activities involving a lot of movement are not in Apple’s current plan for the Vision Pro. We’ll have to wait and see if this changes when the device launches.


Want to learn more about the Vision Pro? Check this round-up of 5 features Apple may have removed from the Vision Pro before it was even out.
