What is OpenAI’s Sora? The text-to-video tool explained and when you might be able to use it

ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.

It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.

Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.

OpenAI Sora release date and price

In February 2024, OpenAI Sora was made available to “red teamers” – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.

“We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” says OpenAI.

In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it. 

Two dogs on a mountain podcasting

(Image credit: OpenAI)

We can make some rough guesses about the timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded by InstructGPT earlier that year. Also, OpenAI's DevDay typically takes place annually in November.

It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.

As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.

But Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process takes longer too. So it still isn't clear exactly how well Sora, which for now is effectively a research project, might convert into an affordable consumer product.

What is OpenAI Sora?

You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.

OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like “woman walking down a city street at night” or “car driving through a forest” and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.

To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like “a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand”.

How does OpenAI Sora work?

On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.

Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on “internet-scale data” to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.

So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.
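If you're curious what "visual patches" mean in practice, here's a toy Python sketch (our own illustration, not OpenAI's code) that chops a tiny dummy video into non-overlapping spacetime patches – the patch sizes are arbitrary choices for the example:

```python
import numpy as np

# A dummy "video": 16 frames of 64x64 RGB pixels (random values).
video = np.random.rand(16, 64, 64, 3)

def to_patches(video, pt=4, ph=8, pw=8):
    """Cut the video into non-overlapping blocks of pt frames x ph x pw pixels,
    flattening each block into one vector - a crude stand-in for a 'patch'."""
    t, h, w, c = video.shape
    blocks = video.reshape(t // pt, pt, h // ph, ph, w // pw, pw, c)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5, 6)  # group patch dims together
    return blocks.reshape(-1, pt * ph * pw * c)     # one flat vector per patch

patches = to_patches(video)
print(patches.shape)  # (256, 768): 256 patches, each holding 4x8x8x3 values
```

A model like Sora would learn from sequences of patches like these rather than from whole frames at once.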

OpenAI Sora

Sora starts messier, then gets tidier (Image credit: OpenAI)

Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
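To make the denoising idea concrete, here's a toy Python loop of our own (in no way Sora's actual model): we start from pure noise and nudge it toward a 'clean' target a little at each step, which is roughly what a diffusion model's learned denoiser does over its series of steps:

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.linspace(0.0, 1.0, 8)   # stand-in for a clean video frame
frame = rng.normal(size=8)          # start from pure noise

for step in range(50):
    predicted_clean = target        # a real model has to *predict* this
    frame += 0.1 * (predicted_clean - frame)  # take a small denoising step

residual = np.abs(frame - target).max()
print(residual)  # close to zero: the noise has been almost entirely removed
```

The hard part, of course, is the `predicted_clean` line: Sora's training is all about learning to make that prediction from noisy video, guided by the text prompt.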

And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
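If you want to see the core transformer operation in miniature, here's a short NumPy sketch of scaled dot-product attention (a textbook illustration, not OpenAI's code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score every query against every key,
    turn the scores into weights with a softmax, then mix the values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # context-aware output

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one context-aware vector per input chunk
```

This is how a transformer decides which parts of the input matter most to each other – scaled up enormously, and applied to patch sequences in Sora's case.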

What we don't fully know is where OpenAI sourced its training data – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.

What can you do with OpenAI Sora?

At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.

It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.

OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props; it's making an incredible number of calculations about where pixels should go from frame to frame.

In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.

Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.

How can you use OpenAI Sora?

At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generating AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.

Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.

Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.

You might also like

TechRadar – All the latest technology news

Read More

Should you upgrade to Google One AI Premium? Its AI features and pricing explained

Google has been busy revamping its AI offerings, renaming Bard to Gemini, pushing out a dedicated Android app, and lots more besides. There's also now a paid tier for Google's generative AI engine for the first time, which means another digital subscription for you to weigh up.

You can read our Google Gemini explainer for a broad overview of Google's AI tools. But here we'll be breaking down the Google Gemini Advanced features that come as part of the new Google One AI Premium tier. 

We'll be exploring how much this new cloud tier costs, plus all the AI features and benefits it brings, so you can decide whether or not you'd like to sign up. It's been added as one of the Google One plans, so you get some digital storage in the cloud included, too. Here's how Google One AI Premium is shaping up so far…

Google One AI Premium: price and availability

The Google One AI Premium plan is available to buy now and will cost you $19.99 / £18.99 / AU$32.99 a month. Unlike some other Google One plans, you can't pay annually to get a discount on the overall price, but you can cancel whenever you like.

At the time of writing, Google is offering free two-month trials of Google One AI Premium, so you won't have to pay anything for the first two months. You can sign up and compare plans on the Google One site.

Google One AI Premium: features and benefits

First of all, you get 2TB of storage to use across your Google services: Gmail, Google Drive, and Google Photos. If you've been hitting the limits of the free storage plan – a measly 15GB – then that's another reason to upgrade.

You'll notice a variety of other Google One plans are available, offering storage from 2TB to 30TB, but it's only the Google One AI Premium plan that comes with all of the Gemini Advanced features.

Besides the actual storage space, all Google One plans include priority support, 10% back in the Google Store, extra Google Photos editing features (including Magic Eraser), a dark web monitoring service that'll look for any leaks of your personal information, and use of the Google One VPN.

Google Gemini Advanced on the web

Google Gemini Advanced on the web (Image credit: Google)

It's the AI features that you're here for though, and the key part of Google One AI Premium is that you get access to Gemini Advanced: that means the “most capable” version of Google's Gemini model, known as Ultra 1.0. You can think of it a bit like paying for ChatGPT Plus compared to sticking on the free ChatGPT plan.

Google describes Gemini Ultra 1.0 as offering “state-of-the-art performance” that's capable of handling “highly complex tasks” – tasks that can involve text, images, and code. Longer conversations are possible with Gemini Advanced, and it understands context better too. If you want the most powerful AI that Google has to offer, this is it.

Google Gemini app

A premium subscription will supercharge the Gemini app (Image credit: Google)

“The largest model Ultra 1.0 is the first to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities,” writes Google CEO Sundar Pichai.

The dedicated Google Gemini app for Android, and the Gemini features built into the Google app for iOS, are available to everyone, whether they pay for a subscription or not – and it's the same with the web interface. However, if you're on the premium plan, you'll get the superior Ultra 1.0 model in all these places.

By the way, a standard 2TB Google One plan – with everything from the photo editing tricks to the VPN, but without the AI – will cost you $9.99 / £7.99 / AU$19.99 a month, so you're effectively paying $10 / £11 / AU$13 for Gemini Advanced.

A laptop on an orange background showing Gmail with Google Gemini

An example of Google Gemini in Gmail (Image credit: Google)

Gemini integration with Google's productivity apps – including Gmail, Google Docs, Google Meet, and Google Slides – is going to be “available soon”, Google says, and when it does become available, you'll get it as part of a Google One AI Premium plan. It'll give you help in composing your emails, designing your slideshows, and so on.

This is a rebranding of the Duet AI features that Google has previously rolled out for users of its apps, and it's now known as Gemini for Workspace. Whether you're an individual or a business user though, you'll be able to get these integrated AI tools if you sign up for the Google One AI Premium plan.

So there you have it: beyond the standard 2TB Google One plan, the main takeaway is that you get access to the latest and greatest Gemini AI features from Google, and the company is promising that there will be plenty more on the way in the future, too.

Google One AI Premium early verdict

On one hand, Google's free two-month trial of the One AI Premium Plan (which contains Gemini Advanced) feels like a no-brainer for those who want to tinker with some of the most powerful AI tools available right now. As long as you're fairly disciplined about canceling unwanted free trials, of course.

But it's also still very early days for Gemini Advanced. We haven't yet been able to put it through its paces or compare it to the likes of ChatGPT Plus. Its integration with Google's productivity apps is also only “available soon”, so it's not yet clear when that will happen.

The Google Gemini logo on a laptop screen that's on an orange background

(Image credit: Google)

If you want to deep dive into the performance of Google's latest AI models – including Gemini Advanced – you can read the company's Gemini benchmarking report. Some lucky testers like AI professor Ethan Mollick have also been tinkering with Gemini Advanced for some time after getting advance access.

The early impressions seem to be that Gemini Advanced is shaping up to be a GPT-4 class AI contender that's capable of competing with ChatGPT Plus for demanding tasks like coding and advanced problem-solving. It also promises to integrate nicely with Google's apps. How well it does that in reality is something we'll have to wait a little while to find out, but that free trial is there for early adopters who want to dive straight in.


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to CoPilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations to follow “soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative and less accurate, can't handle multi-step questions, can't really code, and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, that means the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


What exactly is the Rabbit R1? CES 2024’s AI breakout hit explained

We were first introduced to the Rabbit R1 in January 2024, at CES 2024, but what exactly is it? The charming sidekick (designed by Teenage Engineering) is promising to take pocket gadgets to the next level – offering something like a smartphone, but with an intuitive, unified, AI-driven interface that means you (theoretically, at least) don't need to interact with individual apps and websites.

If you're curious about the Rabbit R1 and the ways in which it might change the course of personal computing – or at least show us how next-gen smartphone voice assistants might work – we've gathered together everything you need to know about it here. From what it costs and how it works, to the AI engine driving the R1 experience, all the details of this potentially revolutionary device are here.

The first batches of the Rabbit R1 are due to start shipping to users later in 2024, although it seems availability is going to be rather limited to begin with – so you might have to wait a while to get your very own Rabbit R1.

Rabbit R1: one-minute overview

Rabbit r1 device

The r1 (Image credit: Rabbit)

The Rabbit R1 is a lot like a phone in terms of its looks, and in some of its features: it has a camera and a SIM card slot, and it supports Wi-Fi and Bluetooth. What's different, and what makes the Rabbit R1 special, is the interface: instead of a grid of apps, you get an AI assistant that talks to your favorite apps and does everything for you.

For example, you could get the R1 to research a holiday destination and book flights to it, or queue up a playlist of your favorite music, or book you a cab. In theory, you can do anything you can already do on your phone, just by asking. That said, there remain a lot of questions over exactly how it works and protects your privacy in the way it describes.

We've seen next-gen personal assistants depicted in movies like Her, and the R1 is trying to make that a reality – leveraging the latest AI capabilities to replace the traditional smartphone interface with something a lot more intuitive and slick.

Another way to think about the Rabbit R1 is as an evolution of the Amazon Echo, Google Nest, and Apple HomePod smart speakers. The voice-controlled digital assistants on these devices can do some rudimentary tasks – like check the weather or play music – but the R1 wants to go way beyond what they're capable of.

Rabbit says the R1 is “the future of human-machine interfaces”, and you can check out its pitch for the device in its very Apple-flavored CES 2024 keynote below.

Rabbit R1: release date and price

The first batch of 10,000 units of the Rabbit R1 were made available to preorder at the same time as the device was announced at CES, on January 9, 2024. Those units quickly sold out, as did a second batch of 10,000 units made available shortly after.

Rabbit says that the people who got their preorders in should start having their devices shipped to them in March and April 2024. At the time of writing, there's no indication yet of when another batch of units will be made available to preorder, or when we might see the R1 go on sale more widely.

What we do know is that the price of the Rabbit R1 starts at $199, which works out at around £155 / AU$300. To begin with, the Rabbit R1 is available to order in the US, Canada, the UK, Denmark, France, Germany, Ireland, Italy, the Netherlands, Spain, Sweden, South Korea, and Japan, from the Rabbit website.

What's more, unlike rival AI devices such as the Humane AI Pin, there's no ongoing subscription fee that you have to pay out.

Rabbit R1: hardware

Rabbit r1 device

The r1 comes in a distinctive color (Image credit: Rabbit)

The Rabbit R1 is square, and bright orange, and comes with a 2.88-inch color touchscreen on the front. It's quite a compact device, almost small enough to fit in the palm of a hand, and it weighs in at 115 grams (about 4 oz). There's only one design, for now – you can't pick this up in multiple colors.

We know there's a far-field mic embedded in the R1, as well as built-in speakers. There's an integrated 360-degree camera here too, which is apparently called the Rabbit Eye. You can interact with elements by touching the screen, and there's an analog scroll wheel at the side of the device as well, if you need it.

Rabbit r1 device

The r1 camera (Image credit: Rabbit)

On the right of the Rabbit R1 is a push-to-talk button, which you make use of whenever you want to talk to the AI assistant on the device. There's no need for any “hey Google” or “hey Siri” wake command, and it also means the assistant doesn't have to be constantly listening out for your voice. Double-tapping the button activates the on-board camera.

Under the hood we've got a 2.3GHz MediaTek Helio processor, and Rabbit says the device offers “all day” battery life. That battery can be charged with a USB-C charge cable and power adapter, but it's worth bearing in mind that these aren't included in the box, so you'll have to use ones you've already got.

Rabbit R1: software

With its bright orange casing, the Rabbit R1 looks kind of cute, but it's the software that really makes it stand out. If you've used something like ChatGPT or Google Bard already, then this is something similar: Rabbit OS is fronted by an AI chatbot, capable of both answering questions and performing tasks.

In the CES keynote demo, Rabbit founder and CEO Jesse Lyu showed the R1 answering philosophical questions, checking stock prices, looking up information about movies, playing music on Spotify, booking an Uber, ordering a pizza, and planning a vacation (complete with flights and hotel reservations).

Rabbit r1 software

The r1 runs Rabbit OS (Image credit: Rabbit)

To get some of this working, you need to connect the Rabbit OS with your various apps and services, which can be done through a web portal. From the demo we've seen, it looks as though Spotify, Apple Music, YouTube Music, Expedia, Uber, eBay, and Amazon will be some of the services you can connect to.

Rabbit is keen to emphasize that it doesn't store any of your login details or track you in any way – it simply connects you to the apps you need – though the specifics of how it does this via the cloud are still unclear. 

Rabbit's privacy page gives us a few more details, stating that “when you interact with rabbit OS, you will be assigned a dedicated and isolated environment on our cloud for your own LAM [large action model]. When our rabbits perform tasks for you, they will use your own accounts that you have securely granted us control over through our rabbit hole web portal”.

It also adds that “we do not store your passwords for these services. Rabbits will ask for permission and clarification during the execution of any tasks, especially those involving sensitive actions such as payments.” Exactly how Rabbit provides each user with a “dedicated and isolated environment” in its cloud isn't yet clear, but we should find out more once it lands with its first early adopters.

We've also been told the R1 can handle communication and real-time translation, and analyze images taken with the camera – show the R1 what's in your fridge, for example, and it could come up with a dish you can cook.

The Rabbit R1 promises speedy responses too – quicker than you'd get with other generative AI bots. You can converse with the R1 as you would with Siri or Google Assistant, or you can bring up an on-screen keyboard by shaking the device. Rabbit calls its on-board AI a Large Action Model, or LAM – similar to a Large Language Model, or LLM (familiar from bots like ChatGPT), but with a lot more agency.

Rabbit r1 keynote

The r1 wants to take over multiple phone tasks (Image credit: Rabbit)

On top of all this, Rabbit says you can teach the R1 new skills. So, if you showed it how to go online and order your groceries for you, the next time it would be able to do that all by itself. In the CES demo, we saw the Rabbit R1 learning how to create AI images through Midjourney, and then replicating the process on its own.

Interestingly, Rabbit says it doesn't want the R1 to replace your phone – it wants to work alongside it. The R1 can't, for example, browse YouTube, check social media or let you organize your email (at least not yet), so it would seem that the humble smartphone will be sticking around for a while yet.

While some of the specifics about how the Rabbit R1 works and interacts with your favorite apps and services remain unclear, it's undoubtedly one of the most exciting pieces of AI hardware so far – as shown by the rapid sell-out of its early stock. We'll bring you more first impressions as soon as we've got our hands on one of 2024's early tech stars. 

TechRadar – All the latest technology news

All the changes coming to macOS Sonoma in the latest 14.1 update explained

We’ve just got the first big update for macOS Sonoma (Apple’s latest operating system for Macs and MacBooks, which was released in September).

The Sonoma 14.1 update is available for all Mac users running macOS Sonoma, and can be downloaded and installed through the Software Update section found in System Settings.

If you’re not running macOS Sonoma, you’re not being left out, as Apple also released updates for older devices and operating systems, macOS Ventura 13.6.1 and macOS Monterey 12.7.1, which include many of the security fixes that macOS Sonoma 14.1 has. 

The macOS Sonoma 14.1 update brings some new features to a range of apps, including a new warranty section that details your AppleCare+ plan (if you have one) and the status of your coverage – including for connected devices like AirPods and Beats headphones – along with new sections in the Apple Music app that let you add your favorite songs, albums, and playlists.

MacRumors lists the full rundown of changes and fixes that Apple has made in the update, and you can see an even more detailed breakdown of the security-related changes on Apple’s support website.

This isn’t a massive update, and seems almost like routine maintenance with some new additions, so there’s still plenty of room for improvement for macOS Sonoma, which is a decent operating system – but still not perfect. Some users are reporting buggy performance while using macOS Sonoma, although not all performance issues are Apple’s fault. That said, it seems like this update at least shows that Apple is aware of user feedback, and is working to improve the OS. 

An Apple MacBook Pro on a desk with an iPhone being used as a webcam. The webcam is using Continuity Camera in macOS Ventura to show items on a desk using the Desk View feature.

(Image credit: Apple)

What's coming next down the Apple pipeline

Hopefully, we won’t have long to wait for more improvements, as AppleInsider reports that macOS Sonoma 14.2’s developer beta has already been released to testers. If you would like to try this even newer version of macOS Sonoma, you’ll be able to grab it once the public beta version is released via the Apple Beta Software Program. This is only recommended for those willing to experiment with their devices, so we don’t recommend installing the beta on devices used for critical activities. 

We recently learned that Apple has been tripling down on its AI efforts, and I think users are eager to see what this means for the company’s devices, such as the best MacBooks and Macs. Considering that Apple has been thought of as behind the curve in the recent round of the AI game, with competitors like Microsoft partnering with OpenAI and Amazon partnering with Anthropic (a rival of OpenAI working on innovative generative AI like its own AI chatbot, Claude), many people feel Apple needs to start showing off its AI products soon – maybe even in a future update for macOS Sonoma. 

What is xrOS? The Apple VR headset’s rumored software explained

The Apple VR headset is getting close to its rumored arrival at WWDC 2023 on June 5 – and the mixed-reality wearable is expected to be launched alongside an exciting new operating system, likely called xrOS.

What is xrOS? We may now be approaching iOS 17, iPadOS 16 and macOS 13 Ventura on Apple's other tech, but the Apple VR headset – rumored to be called the Apple Reality One – is expected to debut the first version of a new operating system that'll likely get regular updates just like its equivalents on iPhone, iPad and Mac.

The latest leaks suggest that Apple has settled on the xrOS name for its AR/VR headset, but a lot of questions remain. For example, what new things might xrOS allow developers (and us) to do in mixed reality compared to the likes of iOS? And will xrOS run ports of existing Apple apps like Freeform?

Here's everything we know so far about xrOS and the kinds of things it could allow Apple's mixed-reality headset to do in both augmented and virtual reality.

xrOS release date

It looks likely that Apple will launch its new xrOS operating system, alongside its new AR/VR headset, at WWDC 2023 on June 5. If you're looking to tune in, the event's keynote is scheduled to kick off at 10am PT / 1pm ET / 6pm BST (or 3am ACT on June 6).

This doesn't necessarily mean that a final version of xrOS will be released on that day. A likely scenario is that Apple will launch an xrOS developer kit to allow software makers to develop apps and experiences for the new headset. 

While not a typical Apple approach, this is something it has done previously for the Apple TV and other products. A full version of xrOS 1.0 could then follow when the headset hits shelves in late 2023.

The software's name now at least looks set in stone. As spotted by Parker Ortolani on Twitter on May 16, Apple trademarked the 'xrOS' name in its traditional 'SF Pro' typeface in New Zealand, via a shell company. 

We'd previously seen reports from Bloomberg that 'xrOS' would be the name for Apple's mixed-reality operating system, but the timing of this discovery (and the font used) bolsters the rumors that it'll be revealed at WWDC 2023.

Apple Glasses

(Image credit: Future)

A report from Apple leaker Mark Gurman on December 1, 2022, suggested that Apple had “recently changed the name of the operating system to 'xrOS' from 'realityOS'”, and that the name stands for 'extended reality'. This term covers both augmented reality (which overlays information on the real world) and virtual reality, the more sealed experience that we're familiar with on the likes of the Meta Quest 2.

While xrOS is expected to have an iOS-like familiarity – with apps, widgets and a homescreen – the fact that the Apple AR/VR headset will apparently run both AR and VR experiences, and also use gesture inputs, explains why a new operating system has been created and will likely be previewed for developers at WWDC.

What is xrOS?

Apple's xrOS platform could take advantage of the AR/VR headset's unique hardware, which includes an array of chips, cameras and sensors. It's different from ARKit, the software that lets your iPhone or iPad run AR apps. Apple's xrOS is also expected to lean heavily on the design language seen on the iPhone, in order to help fans feel at home.

According to Bloomberg's Gurman, xrOS “will have many of the same features as an iPhone and iPad but in a 3D environment”. This means we can expect an iOS-like interface, complete with re-arrangeable apps, customizable widgets and a homescreen. Apple is apparently also creating an App Store for the headset.

Stock apps on the AR/VR headset will apparently include Apple's Safari, Photos, Mail, Messages and Calendar apps, plus Apple TV Plus, Apple Music and Podcasts. App developers will also be able to take advantage of its health-tracking potential.

Gurman says that the headset experience will feel familiar to Apple fans – when you put it on, he claims that “the main interface will be nearly identical to that of the iPhone and iPad, featuring a home screen with a grid of icons that can be reorganized”. 

But how will you type when wearing the Apple Reality Pro (as it's rumored to be called)? After all, there probably won't be any controllers.

Spacetop computer used in public

The Sightful Spacetop (above) gives us a glimpse of how the Apple AR/VR headset could work as a virtual Mac display. (Image credit: Sightful)

Instead, you'll apparently be able to type using a keyboard on an iPhone, Mac or iPad. There's also the slightly less appealing prospect of using the Siri voice assistant. Apple is rumored to be creating a system that lets you type in mid-air, but Gurman claims that this feature “is unlikely to be ready for the initial launch”.

It's possible that you'll be able to connect the headset to a Mac, with the headset serving as the Mac's display. We've recently seen a glimpse of how this might work with the Spacetop (above), a laptop that connects to some NReal AR glasses to give you a massive 100-inch virtual display.

What apps will run on xrOS?

We've already mentioned that Apple's AR/VR headset will likely run some optimized versions of existing stock apps, including Safari, Photos, Mail, Messages, Contacts, Reminders, Maps and Calendar. 

But given that those apps aren't exactly crying out for a reinvention in AR or VR, they're likely to be sideshows to some of the more exciting offerings from both Apple and third-party developers. 

So what might those be? Here are some of the most interesting possibilities, based on the latest rumors and what we've seen on the likes of the Meta Quest Pro.

1. Apple Fitness Plus

An AR fitness experience on the Litesport app

Apps like Litesport (above) give us a glimpse of AR fitness experiences that could arrive on Apple’s headset. (Image credit: Litesport)

Assuming the Apple AR/VR headset is light and practical enough for workouts – which is something we can't say for the Apple AirPods Max headphones – then it definitely has some AR fitness potential.

According to a report from Bloomberg's Mark Gurman on April 18, Apple is planning to tap that potential with “a version of its Fitness+ service for the headset, which will let users exercise while watching an instructor in VR”.

Of course, VR fitness experiences are nothing new, and we've certainly enjoyed some of the best Oculus Quest fitness games. An added AR component could make them even more powerful and motivating, with targets added to your real-world view.

2. Apple Freeform

The Freeform app on an iPad on an orange background

(Image credit: Apple)

We called Apple's Freeform, which gives you a blank canvas to brainstorm ideas with others, “one of its best software releases in years”. And it could be taken to the next level with an AR or VR version.

Sure enough, Bloomberg's aforementioned report claims that “Apple is developing a version of its Freeform collaboration app for the headset”, which it apparently “sees as a major selling point for the product”.

Okay, AR/VR work experiences might not sound thrilling, and we certainly had misgivings after working for a whole week in VR with the Meta Quest Pro. But mixed-reality whiteboards also sound potentially fun, particularly if we get to play around with them in work time.

3. Apple TV Plus

A basketball team scoring in a NextVR stream

(Image credit: NextVR)

Because Apple's headset will have a VR flipside to its AR mode, it has huge potential for letting us watch TV and video on giant virtual screens, or in entirely new ways. This means that Apple TV Plus will also likely be pre-installed in xrOS.  

Another claim from that Bloomberg report on April 18 was that “one selling point for the headset will be viewing sports in an immersive way”. This makes sense, given that Apple already has deals for Major League Baseball and Major League Soccer on Apple TV Plus.

And while they're only rumors, Apple has also considered bidding for Premier League soccer rights in the UK. Well, it'd be cheaper than a season ticket for Manchester United.

4. FaceTime

Joining a call through FaceTime links in macOS 12 Monterey

(Image credit: Apple)

While we haven't been blown away by our experiences with VR meetings in Horizon Workrooms on the Meta Quest, the Apple mixed-reality headset will apparently deliver a next-gen version of FaceTime – and the Reality Pro's hardware could take the whole experience up a notch.

With an earlier report from The Information suggesting that Apple's headset will have at least 12 cameras (possibly 14) to track your eyes, face, hands and body, it should do a decent job of creating a 3D version of you in virtual meeting rooms.

We still haven't really seen a major real-world benefit to VR video meets, even if you can do them from a virtual beach. But we're looking forward to trying it out, while crossing our virtual fingers that it works more consistently than today's non-VR FaceTime.

5. Adobe Substance 3D Modeler 

Adobe has already released some compelling demos, plus some beta software called Substance 3D Modeler, showing the potential of its creative apps in VR headsets. Will that software's list of compatible headsets soon include the Apple Reality Pro? It certainly seems possible.

The software effectively lets you design 3D objects using virtual clay in a VR playground. Quite how this would work with Apple's headset on xrOS isn't clear, given it's rumored to lack any kind of physical controllers. 

These kinds of design tools feel like a shoo-in for Apple's headset, given many of its users are already happy to shell out thousands on high-end Macs and MacBooks to use that kind of software in a 2D environment.

ChatGPT explained: everything you need to know about the AI chatbot

ChatGPT has quickly become one of the most significant tech launches since the original Apple iPhone in 2007. The chatbot is now the fastest-growing consumer app in history, hitting 100 million users in only two months – but it's also a rapidly-changing AI shapeshifter, which can make it confusing and overwhelming.

That's why we've put together this regularly-updated explainer to answer all your burning ChatGPT questions. What exactly can you use it for? What does ChatGPT stand for? And when will it move to the next-gen GPT-4 model? We've answered all of these questions and more below. And no, ChatGPT wasn't willing to comment on all of them either.

In this guide, we'll mainly be covering OpenAI's own ChatGPT model, launched in November 2022. Since then, ChatGPT has sparked an AI arms race, with Microsoft using a form of the chatbot in its new Bing search engine and Microsoft Edge browser. Google has also quickly responded by announcing a chatbot, tentatively described as an “experimental conversational AI service”, called Google Bard.

These will be just the start of the ChatGPT rivals and offshoots, as OpenAI is offering an API (or application programming interface) for developers to build its skills into other programs. In fact, Snapchat has recently announced a chatbot called 'My AI' that runs on the latest version of OpenAI's tech.

For now, though, here are all of the ChatGPT basics explained – along with our thoughts on where the AI chatbot is heading in the near future.

What is ChatGPT?

ChatGPT is an AI chatbot that's built on a family of large language models (LLMs) that are collectively called GPT-3. These models can understand and generate human-like answers to text prompts, because they've been trained on huge amounts of data.

For example, ChatGPT's most recent GPT-3.5 model was trained on 570GB of text data from the internet, which OpenAI says included books, articles, websites, and even social media. Because it's been trained on hundreds of billions of words, ChatGPT can create responses that make it seem like, in its own words, “a friendly and intelligent robot”.

A laptop on a green background showing ChatGPT

(Image credit: ChatGPT)

This ability to produce human-like, and frequently accurate, responses to a vast range of questions is why ChatGPT became the fastest-growing app of all time, reaching 100 million users in only two months. The fact that it can also generate essays, articles, and poetry has only added to its appeal (and controversy, in areas like education).

But early users have also revealed some of ChatGPT's limitations. OpenAI says that its responses “may be inaccurate, untruthful, and otherwise misleading at times”. OpenAI CEO Sam Altman also admitted in December 2022 that the AI chatbot is “incredibly limited” and that “it's a mistake to be relying on it for anything important right now”. But the world is currently having a ball exploring ChatGPT and, despite the arrival of a paid ChatGPT Plus version, you can still use it for free. 

What does ChatGPT stand for?

ChatGPT stands for “Chat Generative Pre-trained Transformer”. Let's take a look at each of those words in turn. 

The 'chat' naturally refers to the chatbot front-end that OpenAI has built for its GPT language model. The second and third words show that this model was created using 'generative pre-training', which means it's been trained on huge amounts of text data to predict the next word in a given sequence.

A laptop screen showing a word illustration from Google's Transformer research paper

An illustration from Google’s 2017 research paper for the Transformer architecture, which ChatGPT is based on. (Image credit: Google)

Lastly, there's the 'transformer' architecture, the type of neural network ChatGPT is based on. Interestingly, this transformer architecture was actually developed by Google researchers in 2017 and is particularly well-suited to natural language processing tasks, like answering questions or generating text. 

Google was only too keen to point out its role in developing the technology during its announcement of Google Bard. But ChatGPT was the AI chatbot that took the concept mainstream, earning it another multi-billion dollar investment from Microsoft, which said that it was as important as the invention of the PC and the internet.

When was ChatGPT released?

ChatGPT was released as a “research preview” on November 30, 2022. A blog post casually introduced the AI chatbot to the world, with OpenAI stating that “we’ve trained a model called ChatGPT which interacts in a conversational way”.

The interface was, as it is now, a simple text box that allowed users to type prompts and ask follow-up questions. OpenAI said that the dialogue format, which you can now see in the new Bing search engine, allows ChatGPT to “admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.

A laptop screen showing the ChatGPT Plus welcome screen

(Image credit: OpenAI)

ChatGPT is based on a language model from the GPT-3.5 series, which OpenAI says finished its training in early 2022. But OpenAI did also previously release earlier GPT models in limited form – its GPT-2 language model, for example, was announced in February 2019, but the company said it wouldn't release the fully-trained model “due to our concerns about malicious applications of the technology”.

OpenAI also released a larger and more capable model, called GPT-3, in June 2020. But it was the full arrival of ChatGPT in November 2022 that saw the technology burst into the mainstream.

How much does ChatGPT cost?

ChatGPT is still available to use for free, but now also has a paid tier. After growing rumors of a ChatGPT Professional tier, OpenAI said in February that it was introducing a “pilot subscription plan” called ChatGPT Plus in the US. A week later, it made the subscription tier available to the rest of the world.

ChatGPT Plus costs $20 per month (around £17 / AU$30) and brings a few benefits over the free tier. It promises to give you full access to ChatGPT even during peak times, when you'll otherwise frequently see “ChatGPT is at capacity right now” messages.

A laptop screen on a green background showing the pricing for ChatGPT Plus

(Image credit: OpenAI)

OpenAI says that ChatGPT Plus subscribers also get “faster response times”, which means you should get answers around three times quicker than with the free version (although the free version is no slouch). And the final benefit is “priority access to new features and improvements”, like the experimental 'Turbo' mode that boosts response times even further.

It isn't clear how long OpenAI will keep its free ChatGPT tier, but the current signs are promising. The company says “we love our free users and will continue to offer free access to ChatGPT”. Right now, the subscription is apparently helping to support free access to ChatGPT. Whether that's something that continues long-term is another matter.

How does ChatGPT work?

ChatGPT has been created with one main objective – to predict the next word in a sentence, based on what's typically happened in the gigabytes of text data that it's been trained on.

Once you give ChatGPT a question or prompt, it passes through the AI model and the chatbot produces a response based on the information you've given and how that fits into its vast amount of training data. It's during this training that ChatGPT has learned what word, or sequence of words, typically follows the last one in a given context.

For a long deep dive into this process, we recommend setting aside a few hours to read this blog post from Stephen Wolfram (creator of the Wolfram Alpha search engine), which goes under the bonnet of 'large language models' like ChatGPT to take a peek at their inner workings.

But the short answer? ChatGPT works thanks to a combination of deep learning algorithms, a dash of natural language processing, and a generous dollop of generative pre-training, which all combine to help it produce disarmingly human-like responses to text questions. Even if all it's ultimately been trained to do is fill in the next word, based on its experience of being the world's most voracious reader.
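That "fill in the next word" objective can be illustrated with a toy model. The sketch below is a hypothetical, minimal bigram frequency model – nothing like the transformer neural network ChatGPT actually uses – but it shows the core idea of predicting the next word from what typically followed it in the training text:

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny
# corpus, then pick the most frequent follower. Real LLMs like ChatGPT
# train a neural network over billions of words, but the training
# objective is this same idea.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # 'on' is the only word seen after 'sat'
```

ChatGPT does the same kind of thing at vastly greater scale, weighing many words of context at once rather than a single preceding word.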

What can you use ChatGPT for?

ChatGPT has been trained on a vast amount of text covering a huge range of subjects, so its possibilities are nearly endless. But in its early days, users have discovered several particularly useful ways to use the AI helper.

Broadly speaking, these can be divided into natural language tasks and coding assistance. In our guide to six exciting ways to use ChatGPT, we showed how you can use it for drafting letters, writing poetry, and creating (or adapting) fiction. That said, it does still have its limitations, as we found when ChatGPT showed us just how far it is from writing a blockbuster movie.

That hasn't stopped self-publishing authors from embracing the tech, though. With YouTube and Reddit forums packed with tutorials on how to write a novel using the AI tech, the Amazon Kindle store is already on the cusp of being overrun with ChatGPT-authored books.

A laptop screen showing the MagicSlides Chrome extension for Google Slides

(Image credit: MagicSlides)

Other language-based tasks that ChatGPT enjoys are translations, helping you learn new languages (watch out, Duolingo), generating job descriptions, and creating meal plans. Just tell it the ingredients you have and the number of people you need to serve, and it'll rustle up some impressive ideas. 

But ChatGPT is equally talented at coding and productivity tasks. For the former, its ability to create code from natural speech makes it a powerful ally for both new and experienced coders who either aren't familiar with a particular language or want to troubleshoot existing code. Unfortunately, there is also the potential for it to be misused to create malicious emails and malware.

We're also particularly looking forward to seeing it integrated with some of our favorite cloud software and the best productivity tools. There are several ways that ChatGPT could transform Microsoft Office, and someone has already made a nifty ChatGPT plug-in for Google Slides. Microsoft has also announced that the AI tech will be baked into Skype, where it'll be able to produce meeting summaries or make suggestions based on questions that pop up in your group chat.

Does ChatGPT have an app?

ChatGPT doesn't currently have an official app, but that doesn't mean that you can't use the AI tech on your smartphone. Microsoft released new Bing and Edge apps for Android and iOS that give you access to their new ChatGPT-powered modes – and they even support voice search.

The AI helper has landed on social media, too. Snapchat announced a new ChatGPT sidekick called 'My AI', which is designed to help you with everything from designing dinner recipes to writing haikus. It's based on OpenAI's latest GPT-3.5 model and is an “experimental feature” that's currently restricted to Snapchat Plus subscribers (a subscription that costs $3.99 / £3.99 / AU$5.99 a month).

A phone screen showing Snapchat's My AI chatbot

(Image credit: Snap)

The arrival of a new ChatGPT API for businesses means we'll soon see an explosion of apps that are built around the AI chatbot. In the pipeline are ChatGPT-powered app features from the likes of Shopify (and its Shop app) and Instacart. The dating app OKCupid has also started dabbling with in-app questions that have been created by OpenAI's chatbot.

What is ChatGPT 4?

OpenAI CEO Sam Altman has confirmed that the company is working on a successor to the GPT-3.5 language model used to create ChatGPT, and according to the New York Times this is GPT-4.

Despite the huge number of rumors swirling around GPT-4, there is very little confirmed information describing its potential powers or release date. Some early rumors suggested GPT-4 might even arrive in the first few months of 2023, but more recent quotes from Sam Altman suggest that could be optimistic.

For example, in an interview with StrictlyVC in February the OpenAI CEO said in response to a question about GPT-4 that “in general we are going to release technology much more slowly than people would like”.

He also added that “people are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.” That said, rumors from the likes of the New York Times have suggested that Microsoft's new Bing search engine is actually based on a version of GPT-4.

While GPT-4 is unlikely to bring anything as drastic as graphics or visuals to the text-only chatbot, it is expected to improve on the current ChatGPT's already impressive skills in areas like coding. We'll update this article as soon as we hear any more official news on the next-gen ChatGPT technology.

What is VoIP jitter in VoIP phone systems? | Network jitter explained

What is VoIP jitter?

VoIP jitter, usually referred to as network jitter, is the variation in the time delay experienced by VoIP phone users between signal transmission and signal reception over a data network. During VoIP voice and video calls, the audible and visual effects of network jitter usually appear as a loss of connection, glitches or 'lag'.

VoIP jitter time delay

High-quality data stream vs. the same stream with jitter. (Image credit: Wikimedia Commons)

Do you recognize this situation?

“You’re currently number 23 in the queue.”

That’s my starting position as I begin the familiar, on-hold phone game with my gas supplier. Four and a half minutes in and waiting patiently. 

It’s not my position in the queue (down to position 17 now) that irritates me so much as the increasingly poor quality of the not-so-delightful hold music wailing in my ear. It’s choppy too, cutting in and out incessantly. 

Do I really need to speak to them today? Last time I was on hold for almost 48 whole minutes. I'd groan if I wasn't concentrating so hard on waiting to hear this sorrowful hold music stop. 

Oh, I could always call back. I won't though. I'm definitely not going through all of this again. Almost eight minutes now and my brain is starting to feel like a cheese grater. 

Woman is frustrated with customer service connection over VoIP phone

For some, the effects of VoIP network jitter are enraging.  (Image credit: Getty Images)

I can barely make out the audio, but it sounds like I’m now number 9 in the queue, progress! But of course, the longer I wait to speak to someone, the more dire the call quality becomes. Typical. Wait. Why does the hold music now resemble a toddler let loose in a Yamaha Music shop? 

“You’re currently number 7 in the queue. We're sorry, all of our customer service agents are busy at this time. Please continue to hold. ”

Honestly, if they’re going to make me sit and wait for this long, they could at least check the hold music and queue updates actually work… I can feel the blood pressure rising before I’ve even spoken with the poor customer service agent!

Why is network jitter important?

The scenario described above is precisely why combating VoIP network jitter is important. Being placed in virtual customer service queues with distorted hold music is enough to make anyone grouchy and grumpy. 

In this article, we look at how you can minimise customer irritation caused by network jitter on VoIP phones. Understanding VoIP jitter can help you increase customer satisfaction, minimise customer wait times and ultimately offer better customer service for increased sales. 

What we’re dealing with here is quite simple – Voice Over Internet Protocol (VoIP) jitter, often called network jitter.

VoIP or network jitter stems from congestion: millions of internet connections, all active simultaneously, effectively clog up the 'routes' data packets take to reach their individual destinations, so packets arrive at irregular intervals.

The technical definition of VoIP jitter is the variability over time of network latency.

Latency, in turn, is defined as the time it takes for one packet of data to pass along its route. Learn more about VoIP Quality of Service, how data packets work and what packet loss is.  
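To make that definition concrete, jitter is commonly measured as a smoothed average of how much consecutive packets' transit times differ. The sketch below is a minimal illustration based on the running estimator described in RFC 3550 (the RTP specification that underpins most VoIP audio), using made-up example delays:

```python
def rfc3550_jitter(transit_times_ms):
    """Running jitter estimate per RFC 3550: a smoothed average of the
    difference in transit time (send-to-receive delay) between
    consecutive packets. Each new sample moves the estimate by 1/16th,
    which damps out momentary spikes."""
    jitter = 0.0
    for prev, curr in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(curr - prev)           # delay variation between packets
        jitter += (d - jitter) / 16.0  # exponential smoothing
    return jitter

# A perfectly steady stream (constant 40ms transit) has zero jitter...
print(rfc3550_jitter([40, 40, 40, 40]))  # 0.0
# ...while fluctuating delays push the estimate up.
print(rfc3550_jitter([40, 55, 38, 62, 41]))
```

The key point for call quality is the variation, not the absolute delay: a call with a constant 100ms latency sounds fine, while one that swings between 40ms and 140ms drops and garbles packets.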

It’s important to know about and understand VoIP jitter because it ultimately has an impact on how your business operates. It can be the difference between retaining a customer and losing them. 

VoIP phone user stands on car for network signal

When you’re trying to close a sale the last thing you need is network interference.  (Image credit: Getty Images)

Jitter is a common occurrence that affects online activities that depend on two-way, real-time communication. Examples include customer service lines, conference calls, IP security cameras, and more. Jitter problems can affect any network connection, but end users experience it most often on wi-fi.

❕ Example of jitter interfering with business:

Complain keyboard button

(Image credit: Future)

If your VoIP system connectivity is poor and your customer is being put on hold with crackly music and unintelligible muffled message updates, they’ll quickly hang up and start looking for alternatives, and alternatives often mean going with a competitor!

Thinking back to our data packets that transfer information along these communication lines, when packets arrive at different intervals, fluctuations result and voice packets end up being dropped. 

As VoIP converts sound into data packets, every packet matters. So packet delays can result in gaps in conversation or drops in sound quality. 

From an end user perspective, VoIP is particularly prone to jitter problems as people can perceive delays above 500 milliseconds (more on that below). 

Depending on the level of jitter, the sound can therefore be choppy or even incomprehensible – that cheese grater effect!
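To make that definition concrete, here's a minimal sketch (in Python, with made-up latency figures) of one common way to compute jitter: the average variation between consecutive latency samples.

```python
# Hypothetical latency samples in milliseconds, one per ping.
latencies = [42.0, 45.5, 41.8, 60.2, 43.1]

def mean_jitter(samples):
    """Average absolute change between consecutive latency samples (ms)."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

print(mean_jitter(latencies))  # roughly 10.7 ms for this sample set
```

Real measurements would of course come from repeated pings rather than a hard-coded list, but the principle is the same: it's the swings between samples, not the latency itself, that make up jitter.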

What is VoIP latency and what does it have to do with network latency?

We mentioned earlier that the technical definition of VoIP jitter is the variability over time of network latency and that latency is defined as the time it takes for one packet of data to pass along its route. If you’ve ever tried watching a video over the internet that kept getting interrupted, then you’ll be familiar with this type of latency.

Example of network jitter, video buffering

Don’t worry, it’s not your internet – this is an image to show the effects of VoIP jitter on data transmission. (Image credit: Getty Images)

When it comes to VoIP specifically, latency generally occurs in two ways:

1) The delay between a person speaking, and then the recipient on the other end of the phone hearing those words.

2) The time it takes for the VoIP solution to actually process and convert the voice information into data packets.

A bundle of optical fibres: Even fibre optic broadband isn't safe from VoIP network jitter.

Even fibre optic broadband isn’t safe from VoIP network jitter. (Image credit: Denny Müller on Unsplash)

It’s easy to see how this directly impacts the quality of the call, leading to those long pauses we all know and love and, of course, speakers interrupting or talking over each other. Latency is usually impacted by a number of different factors. These include:

Network hardware – some routers can only transmit data at limited rates.

Wireless interference – this comes down to the distance between devices and the absence of the stability that a wired connection provides.

Network software and set-up – firewalls that are incorrectly set up, or quality of service settings that aren’t configured correctly, can delay the transmission of data.

Location – this is the most common cause of latency. The further away the destination, the longer it takes to transmit that data.

Congestion – think of your network as a road and latency as the congestion caused by extra traffic. The more data that’s being transmitted, the slower it goes.

Luckily, measuring latency is pretty easy to do – it’s calculated using what’s called a ping test. A ping test is really simple: you carry out a basic data transfer test (a ‘ping’) and measure the time it takes for your network to send and receive this data packet. You’ll then be able to work out your round-trip latency using the below equation:

Round-trip latency = ping send time + ping receipt time, in milliseconds (ms).
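If you'd rather script this than run the ping command by hand, here's a rough sketch that times a TCP handshake as a stand-in for an ICMP ping (raw ICMP sockets usually require admin rights). The host and port here are placeholders.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=2.0):
    """Approximate round-trip latency (ms) via TCP connection setup time."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000.0

# e.g. tcp_rtt_ms("example.com") gives the round-trip time to that host in ms
```

Run it a few times in a row and feed the results into a jitter calculation: a stable connection produces similar numbers each time, while a jittery one swings around.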

What are the different types of VoIP jitter?

VoIP jitter definition: (n.) The variability over time of network latency.

Synonyms include: Network stuttering, bandwidth issues, network connectivity problems, ping delays and pings.  (Image credit: Future)

The ultimate goal is to eliminate any form of VoIP jitter – there’s no such thing as good or bad, high or low, as it all contributes to poor communication quality and negative business outcomes. However, there are acceptable levels of jitter depending on the situation. For interactive video streaming, Skype calls and the like, jitter tolerance is low.

According to Cisco, jitter tolerance, packet loss and network latency should be as follows: 

  • Jitter should be below 30 ms.
  • Packet loss should be no more than 1%. (Learn more here on how to measure packet loss).
  • Network latency should not go over 300 ms (for the full ping send and receipt time).
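Those three ceilings are easy to turn into a quick health check. A minimal sketch, using Cisco's figures from above:

```python
# Cisco's recommended ceilings for acceptable VoIP call quality.
MAX_JITTER_MS = 30.0
MAX_PACKET_LOSS_PCT = 1.0
MAX_RTT_MS = 300.0  # full ping send-and-receipt (round-trip) time

def voip_quality_ok(jitter_ms, packet_loss_pct, rtt_ms):
    """True when all three metrics sit within Cisco's guidance."""
    return (jitter_ms < MAX_JITTER_MS
            and packet_loss_pct <= MAX_PACKET_LOSS_PCT
            and rtt_ms <= MAX_RTT_MS)

print(voip_quality_ok(12.0, 0.5, 180.0))  # True  - healthy link
print(voip_quality_ok(45.0, 0.5, 180.0))  # False - jitter over 30 ms
```

A monitoring tool can run a check like this at intervals and alert you before customers start hearing the crackle.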

However, if you’re streaming a Netflix video – i.e., the communication is unidirectional (one-way) – then a higher level of jitter can be tolerated. As a business relying on VoIP for business-critical customer service activities, the lower jitter tolerance level is a good best practice to follow.

VoIP jitter, ping delays and network stuttering: understanding VoIP terminology 

There are a plethora of different words and phrases used to describe VoIP jitter. Very often, however, they all describe the same thing. This goes for ‘network stuttering’, ‘bandwidth issues’, ‘network connectivity problems’, ‘ping delays’ and even simply ‘pings’. 

On the face of it, this might seem irrelevant, especially if you’re used to dealing internally with other network and IT-savvy people. 

Diagnostic tip 💡

Understanding the different terms used to describe network jitter as a professional means you can identify and diagnose network problems in a flash, and troubleshoot them faster.

However, if you’re dealing on a daily basis with other company departments and even customers, then you’ll want to familiarize yourself with all the different variations and even come up with a common ‘dictionary’ that your company uses. 

It may appear trivial, but a huge amount of time can be saved if you’re all talking the same language and can therefore identify, diagnose and fix problems more quickly.

How to fix VoIP jitter

Use a jitter buffer

A jitter buffer is a device or software component added to a VoIP system to counter delay and latency.

It works by delaying incoming voice packets and storing them for a short period of time. It can be configured to buffer traffic for 30 to 200 milliseconds before the traffic is sent on to the end user. This process ensures the data packets arrive in order and with minimal delay.
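A toy sketch of the idea in Python – hold a few packets, then release them in sequence order. Real jitter buffers are adaptive and also have to conceal lost packets, so treat this as an illustration only:

```python
import heapq

class JitterBuffer:
    """Toy jitter buffer: holds packets briefly and releases them in
    sequence order once buffer_ms worth of audio has accumulated."""

    def __init__(self, buffer_ms=60, packet_ms=20):
        self.depth = max(1, buffer_ms // packet_ms)  # packets to hold
        self.heap = []  # min-heap keyed on sequence number

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release the oldest packet once the buffer is full enough."""
        if len(self.heap) >= self.depth:
            return heapq.heappop(self.heap)
        return None

buf = JitterBuffer(buffer_ms=60, packet_ms=20)  # holds roughly 3 packets
for seq, audio in [(2, "b"), (1, "a"), (3, "c")]:  # out-of-order arrival
    buf.push(seq, audio)
print(buf.pop_ready())  # packet 1 comes out first, despite arriving second
```

The cost of that re-ordering is visible in the `depth` parameter: every packet waits in the buffer, which is exactly the added latency described below.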

It’s worth noting that using a jitter buffer won’t fix everything. While jitter buffering improves VoIP call quality, it also increases the overall network delay. This is because the jitter buffer holds traffic for up to 200 milliseconds, adding latency to the service. 

In effect, they don’t address the root cause of the issue, only the symptoms. For more on how to prevent jitter in the first place, scroll down to the next section on how to prevent VoIP jitter.

Prioritize packets

Packet prioritization refers to a VoIP Quality of Service (QoS) setting that gives certain traffic types priority over others. 

The traffic you decide to prioritize will depend on which service you want to maintain or enhance the quality of. Typically, packet prioritization is only used when the service you’re trying to uphold demands constant high performance and is of critical importance to your organization.

If you choose to support VoIP calls, then you’ll need to make sure any packets containing VoIP data are given priority over other traffic types.
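At the application level, VoIP software typically requests this priority by marking its packets with a DSCP value – EF (Expedited Forwarding, value 46) is the standard marking for voice traffic. A minimal sketch, assuming a Linux host:

```python
import socket

DSCP_EF = 46  # "Expedited Forwarding": the standard DSCP class for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP value occupies the upper six bits of the IP TOS byte,
# so it is shifted left by two before being written.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
# Packets sent from this socket now carry the EF marking, which
# QoS-enabled routers along the path can use to queue them first.
```

Whether the marking is actually honoured is up to each router on the path – which is why the QoS settings discussed in this article matter.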

How to prevent VoIP jitter

Of course, the best way to stay on top of issues with VoIP jitter is to avoid it in the first place. Thankfully, there are a number of preventative measures that can be taken to do this, so you can avoid headaches later on down the line with irate employees and unhappy customers. 

Test your connection’s quality

This may sound simple but poor internet connection may be the biggest cause of your jitter issues. Some VoIP providers already offer speed tests. 

They are designed to show you the level of quality you’d expect to see when making calls through their platform. You can get in touch with your provider to see if they offer these tests and how they can help improve connection quality.

Use an Ethernet cable 

Ethernet cables are an uncommon sight these days but they’re actually a great resource if you’re not reliant on constant mobile working and are at a fixed desk for periods of time. 

Ethernet cables generally provide a much more stable connection than wi-fi, so you’re less likely to experience jitter.

Check your hardware

Even the most basic of networks now consists of a good number of hardware components. Think of your company set-up – it’s probably made up of physical firewalls, digital converters, physical network cables, modems, switches, wi-fi components… and that’s just for starters!

If any of this equipment is outdated, or worse, damaged, then it’s probably going to give you jitter problems. So it’s really important you ensure hardware is in top shape and as modern as possible.

Configure Quality of Service (QoS) and other settings

QoS settings are typically included in routers – they’re what you use to prioritize data packets. 

But beware with data packet prioritization: on the one hand you are improving your VoIP services, on the other hand other traffic may suffer, so settings should be configured based on the specific needs of your business. 

You should explore the other QoS settings available through your router to optimize your VoIP service.  

Don’t scrimp on a good router

Routers are so important that they deserve their own mention. A router is effectively the brain of your internal network, connecting together the other components to create a complete circuit. 

They provide both wired and wireless connections, and can create a massive bottleneck if they aren’t up to the job. Good routers also have those QoS settings we’ve talked about, which you’ll want to take advantage of depending on your business.

Use a VoIP monitoring tool

Finally, there are VoIP monitoring tools designed to give you in-depth insights into critical call and QoS metrics. 

This one from SolarWinds has a range of highly advanced features, including VoIP call quality troubleshooting, real-time WAN monitoring and visual VoIP call path tracing.

In other words, they go beyond simple ping tests to offer a fully comprehensive solution. They can even let you generate simulated VoIP traffic so you can monitor network quality during quiet periods when fewer calls are active.

Final thoughts

VoIP is fast becoming a business-critical system for organizations of all sizes. Events of the past year or so have accelerated the move to VoIP for a number of reasons and have meant that even the smallest of businesses now rely on it to maintain their day-to-day operations.

Read next 💡

ringcentral logo

(Image credit: RingCentral)

We've listed the best VoIP services and best VoIP headsets available for businesses to help give you a head start in your search. 

Why not also take a look at our popular RingCentral VoIP services review or Nextiva vs RingCentral VoIP comparison? Or, if you're just starting out with VoIP learn the difference between VoIP and PBX.

But as with any high-performance system, it does need a degree of maintenance and upkeep to ensure it supports business teams in the right way and to guarantee that customers receive an optimal customer experience.

We’ve all been there ourselves on the other end of the phone when jitter is occurring. Thankfully, jitter is easily fixed and even prevented. 

Cutting corners when it comes to your foundational VoIP and internet infrastructure is not recommended; taking the proper measures to prevent jitter and minimize latency to ensure your VoIP runs smoothly is a much better route.

TechRadar – All the latest technology news

Read More

What is the Cameo app? Prices, how to use and more explained

A world in which Kevin from The Office, Jay from the Inbetweeners, and Karl Kennedy from Neighbours rule the roost may sound like the hazy figment of a beautiful imagination, but that's exactly what the Cameo app has made a reality.

Cameo saw its popularity explode during lockdown, as thousands of bored stars of film, TV, music and sport rushed to Cameo to showcase their talents, make a buck or two, and brighten fans' days in the process.

Dunder Mifflin's slickest accountant has lined his accounts to the tune of more than $1 million purely from his Cameo appearances and, having seen him in action, it's little wonder why fans of The Office can't get enough.

But what is Cameo, how is it any different to Instagram, Twitter and any other social platform, and what can you use it for? Keep reading for all you need to know about Cameo.

What is the Cameo app?

Have you ever dreamed of getting a pep talk from your team's star player? Fantasized about your favorite movie star reenacting *that* iconic line, just for you? Or wanted to be serenaded by your 90s crush?

Cameo is a website and app that is specifically dedicated to making your most outlandish, niche and even your dullest celebrity-oriented daydreams come true – within reason of course. 

The video-first platform allows you to pay famous stars (current and former) to either record a personalized video message, or chat one-to-one on a video call. 

It's the latest evolution of social media, and what many of us rather over-ambitiously envisaged when Twitter and Instagram first emerged all those years ago; only now camera phones are good enough and data speeds fast and reliable enough to make that vision a reality.

As well as making you feel like you've got a direct and intimate line to your idols, it gives bored and out-of-work celebrities something to do.

How does the Cameo app work?

Performers as they may be, your favorite celebs won't be able to churn out a personalized video on the spot. You instead need to submit a request, which you can do on behalf of yourself, as a gift to somebody else, or for an entire company, by visiting their Cameo page.

They'll then have up to seven days to fulfil it, although many guarantee a 24-hour turnaround. They also have the option to decline requests. 

The request process itself includes plenty of guidelines, but the trick is to be clear and concise. For instance, you won't need to set out that your request is for a birthday message, as there are dedicated special occasion toggles, which also include holiday, pep talk, and roast.

However, you may want to explain any potentially tricky name pronunciations, specify the tone of the message, and include precise details to make the video message or call as personal as possible. Or if you'd prefer them to freestyle, or to say or do something specific, say so in your request.

When your request has been fulfilled, you'll receive a link to your personalized video message, which you can download and share online.

A selection of the available celebrities on Cameo

(Image credit: Cameo)

What celebrities are on the Cameo app?

Hollywood royalty, sporting idols and global megastar musicians have flocked to the platform in their thousands.

Think you've got what it takes to withstand both barrels from Logan Roy himself? Brian Cox from Succession is a relative newcomer to Cameo, and will happily sling a few f-words your way, whatever the occasion.

Tony Hawk, Magic Johnson and Brett Favre are just some of the legendary sportspeople on the app, plus you could just get Bruce Buffer to shout “Iiiiiiit's TIIIIIIIIIIME!” over and over again, if you wanted to. Unless you'd rather “Get rrrrrrrready to RRRRRUMBLE!” instead, of course, in which case Michael's also available.

Gloria Estefan's rhythm really could get you, while you no longer need wonder how you might live without LeAnn Rimes. And let's not forget about the smooth, sultry tones of Kenny G, who'll pull out all the stops if the price is right.

And who else but Caitlyn Jenner heads up Cameo's superabundance of personalities from reality TV? RuPaul's Drag Race's Divina de Campo and NYC's realest housewife Sonja Morgan are also immensely popular on the app.

But Cameo's biggest draws aren't necessarily who you might expect them to be. Cult TV actors James Buckley (Jay from The Inbetweeners), Alan Fletcher (Karl Kennedy from Neighbours), and the undisputed king of Cameo, Brian Baumgartner (Kevin from the US version of The Office), are amongst the app's star attractions and despite getting perfect reviews pretty much across the board, they're also far cheaper than many others.

In fact, when USA Today published a list of the biggest TV stars on Cameo in 2020, there were certainly some surprising names on there: 

  1. Ed Brown (90 Day franchise)
  2. Josh Sussman (Glee, Wizards of Waverly Place)
  3. Brian Baumgartner (The Office)
  4. Larry Thomas (Seinfeld)
  5. William Hung (American Idol)
  6. Fred Stoller (Handy Manny, Everybody Loves Raymond)
  7. Colin Mochrie (Whose Line Is It Anyway?)
  8. Ray Abruzzo (The Sopranos, The Practice)
  9. Sandra Diaz Twine (Survivor)
  10. Lee Rosbach aka Captain Lee (Below Deck)

What does the Cameo app cost?

The Cameo app is free to download, but the rich and famous don't just give away their time and energy for nothing. They set their own fees, which vary wildly.

Video messages, either for yourself or another individual, are by far the most affordable service available on Cameo, with prices starting at $15 per message at the time of publication. The, ahem, celebrities around this price point tend to be niche comedians and lookalikes, although in most cases they have excellent reviews.

The $50 mark is where you'll start recognizing household names. As a general rule of thumb, the more you're willing to spend, the more famous the face – though you'll find a few exceptions along the way.

Some celebrities have a higher opinion of themselves than others (more on this below), and charge several hundred dollars, or even thousands per message.

Live video calls tend to be a little pricier than pre-recorded messages, but not outrageously so. However, they do have the potential to be much more awkward.

Commercial bookings are by far the most expensive. Several hundred dollars would be at the cheaper end of the scale, with these types of bookings regularly climbing into the thousands.

Who is the most expensive person on Cameo?

He long ago made the pursuit of money, money and more money the central pillar of his persona, so it's little surprise that Floyd “Money” Mayweather is one of the most expensive people on Cameo, a fact that he's very proud of.

The former champ's prices are set at $15,000, though he only accepts commercial bookings.

However, fellow boxing Hall of Famer Mike Tyson has thrown his hat into the ring by charging $20,000, also strictly for business requests. 

The one-time “Baddest Man on the Planet” has rather mellowed out in his later years, and for that extra $5,000, is likely to deliver a few more pearls of wisdom than the abrasive Mayweather, a suspicion backed up by their ratings.

Cameo app on Android and iOS

(Image credit: Cameo)

How do I get the Cameo app?

Signing up to Cameo is straightforward, and takes just a couple of minutes.

You can do so via the app, which is available to download for free on both iOS and Android, or via the Cameo website.

We'd recommend signing up on the website if you can, since some celebrities' Cameo profiles aren't accessible on the app at all due to Apple taking a significant chunk of the money made from bookings through the iOS app.

You just need a few key details to sign up, including your name, email address and a password.

How do 1:1 live video calls work on Cameo?

For some, the chance to speak to their idols one-to-one on a video call is the most exciting thing about Cameo, although a significant number of the celebrities on the platform have opted out of live video calls (and those that do offer them charge a premium for the privilege).

There are no phone numbers involved. Rather, you can join your favorite celebrity's Cameo Fan Club on their profile. You'll then be notified when they plan to go live.

When you join, prepare to wait a short while as you likely won't be first in the queue, and bear in mind that many of the celebrities who do offer live calls on Cameo prefer to keep conversations sharp.

While you can't record live video calls, you will get a photo opportunity at the end.

Get started by heading to the Cameo website


Disney Plus UK: how to sign up, movies, app links, Sky Q and more explained

Disney Plus is out now in the UK. For just £5.99 a month, you can stream a whole host of fantastic old movies, with some classic TV shows thrown in for good measure. 

With Disney Plus, you can watch all the Star Wars, Disney, Marvel and The Simpsons you can handle. Think of it as like Netflix, but focused specifically on Disney-related and Disney-owned content, like Marvel, Star Wars, Pixar and more. Here's the full list of Disney Plus UK movies and TV shows that Disney released at launch, which will help you figure out if you want the service. 

The launch line-up was pretty good, even compared to the existing libraries in the US and Australia. Now that we've spent plenty of time with the app, we can see it retains the Disney US feature of explaining when content is coming to the service in the future. 

That's how we know Frozen 2 will be on Disney Plus UK on July 17 2020, for example. We're less sure about Onward, which doesn't have a UK release date. 

If you were hoping you'd get to watch every episode of Star Wars show The Mandalorian at launch in the UK, episodes have only started to 'roll out'. Still, you've now got three episodes of this excellent show to enjoy. And let's not underestimate how awesome it is to have 30 seasons of The Simpsons available to stream.


Below, we'll talk you through everything we know about Disney Plus post-launch, including the price, compatible devices, free trial, shows, movies and more. You can also click here for our first impressions of Disney Plus UK.

Disney Plus UK release date: it's live!

Disney Plus is now live in the UK! You can start watching it now.

How to sign up to Disney Plus

All you have to do is head to the Disney Plus website, create an account and enter your billing details to get started. With your login details to hand, you'll then want to download Disney Plus onto the device of your choice, say a smartphone, smart TV, games console or tablet. Scroll down for a list of compatible devices. 

Not sure you want it yet? Head here to grab a 7-day free trial of Disney Plus. It's easy to cancel if you don't want to commit (here's how you cancel Disney Plus). 

Disney Plus UK in April 2020: new movies and TV shows

As well as new episodes of all its originals, including The Mandalorian and The Clone Wars, April 2020 brings other new content to Disney Plus in the UK. That includes the Simpsons short film Playdate with Destiny (10 April), Edward Scissorhands (10 April), Descendants 3 (11 April) and Night at the Museum (10 April). A Celebration of the Music from Coco (10 April), Dolphin Reef (3 April) and Elephant (3 April) also join it this month.

Disney Plus app links: how to download Disney Plus

Below, we've added app links we've found so far for the UK launch, and we'll add more as they appear.

Disney Plus: UK price and subscription tiers explained

Disney Plus costs £59.99 for an annual subscription, or £5.99 per month. These are the two available tiers, and you can cancel at any time. Unlike in the US, where it's bundled in with ESPN and Hulu, in the UK Disney Plus is a standalone service. 

Either tier gets you four concurrent streams, unlimited downloads with a maximum of 10 devices and the option to create seven profiles. 

In the US, you can gift a year of Disney Plus either digitally or in the form of physical cards, but no such option has been announced for the UK yet.

Disney Plus supports 4K and HDR streams

Disney Plus indeed supports 4K and HDR. When you're in the app, head over to the 'details' tab of a given movie or show and you'll see a section that says 'available in the following formats', which will explain if the content in question features 4K Ultra HD and HDR. 

You now have every Star Wars movie to watch in 4K with HDR. Enjoy!

Disney Plus UK: compatible devices and apps

Disney Plus has launched on pretty much any device you can name in the UK, including mobile devices, games consoles, streaming media devices and smart TVs. You can take Disney Plus shows on the go, too, downloading as many movies and shows as you can fit on your device, as long as you have an active subscription and connect to the internet every 30 days.  

Disney Plus UK has launched on LG TVs, Sky Q, Apple TV, Roku streaming devices, Android (5.0 and later), iOS (11.0 and later), PS4, Xbox One, LG WebOS smart TVs, Samsung Tizen smart TVs, Google Chromecast and Amazon's Fire range of streaming devices. 

One notable exception is the Nintendo Switch, which is still pretty poor at supporting streaming services. 

Philips' Android-based smart TVs support Disney Plus too. Your Samsung TV may be able to get Disney Plus, as well. Read our guide and discover if your TV can support it.

Disney Plus UK: shows and movies, including The Simpsons 

Click to see the full list of Disney Plus UK movies and shows at launch, and see what you can stream right now. Every Star Wars movie minus The Rise of Skywalker is on there, as well as a near-complete list of Pixar movies and Marvel movies. You've also got 2019's Aladdin and The Lion King movies on day one. Frozen 2, which just launched in the US, doesn't arrive until 17 July in the UK according to the app. 

Looking for recommendations? Check out our list of the best Disney Plus TV shows and best Disney Plus movies. Star Wars series The Mandalorian is the clear highlight of Disney Plus originals. Episodes are rolling out weekly, and the first two are available now. 

In the UK, all new episodes of original shows on Disney Plus will be released at 8am each Friday. Expect one new episode for each Disney Plus original show per week, except The Clone Wars, which will get two episodes per week until the show catches up with the US. 

Other originals include the live-action Lady and the Tramp, High School Musical: The Series, Encore!, The World According to Jeff Goldblum, Togo, Diary of a Future President, Forky Asks a Question and The Imagineering Story at launch, too. Expect one episode for each original at launch. 

Disney Plus: future shows and movies

In the future, Disney Plus is getting plenty of big exclusive shows. From the Marvel Cinematic Universe side of things, new shows include The Falcon and the Winter Soldier (August), WandaVision (November), Loki (2021), Hawkeye (2021) and animated show What If?. Further off, expect TV shows based on Moon Knight, Ms Marvel and She-Hulk. Unlike Marvel's Netflix shows, too, these will canonically be part of the MCU, and feature actors crossing over between the movies and these TV series.

Lucasfilm has a second season of The Mandalorian coming in October 2020, then further off it's making shows featuring Ewan McGregor's Obi-Wan Kenobi and Diego Luna's Cassian Andor from Rogue One. 

It's likely you can expect recent Disney-associated movies like Pixar's Onward, Star Wars: The Rise of Skywalker and Maleficent: Mistress of Evil on there before the end of 2020. In the US, Onward arrives early on April 3. Hopefully we'll see it in the UK before long. 

Disney Plus has launched on Sky Q, with Now TV support coming at a later date

Disney has made a deal with Sky to host Disney Plus on its Sky Q platform at launch. That means you can watch Disney Plus alongside your other Sky content – it'll just be added to your Sky bill. According to Pocket-lint, full integration into the Sky Q platform won't come until April, but you can watch Disney Plus through an app on your Sky Q box.

Disney Plus will be available on Now TV in the coming months, too. 
