ChatGPT takes the mic as OpenAI unveils the Read Aloud feature for your listening pleasure

OpenAI has been hard at work, continuing to improve the GPT Store and recently sharing demonstrations of one of the other highly sophisticated models in its pipeline, the video-generation tool Sora. Still, it isn’t resting on ChatGPT’s previous success: it’s now giving the impressive AI chatbot the capability to read its responses out loud. The feature is being rolled out on both the web and mobile versions of the chatbot.

The new feature is called 'Read Aloud', as per an official X (formerly Twitter) post from the generative artificial intelligence (AI) company. It will come in useful for many users, including those with accessibility needs and people using the chatbot on the go.

Users can try it for themselves now, according to The Verge, either on the web version of ChatGPT or the mobile apps (iOS and Android), and they can select from five different voices for ChatGPT to use. The feature is available whether you use the free version available to all users, GPT-3.5, or the premium paid version, GPT-4. When it comes to languages, users can expect the Read Aloud feature to work in 37 languages (for now), and ChatGPT will be able to autodetect the language the conversation is happening in.

If you want to try it on the desktop version of ChatGPT, there should be a speaker icon that shows up below the generated text that activates the feature. On the mobile apps, users can tap and hold the text to open the Read Aloud player. In the player, users can play, pause, and rewind the reading of ChatGPT’s response. Bear in mind that the feature is still being rolled out, so not every user in every region will have access just yet.

A step in the right direction for ChatGPT

This isn’t the first voice-related feature that ChatGPT has received, with OpenAI introducing a voice chat feature in September 2023, which allowed users to make inquiries using voice input instead of typing. Users can keep this setting on, prompting ChatGPT to always respond out loud to their inputs.

The debut of this feature comes at an interesting time, as Anthropic recently introduced similar features to its own generative AI models, including Claude. Anthropic is an OpenAI competitor that’s recently seen major amounts of investment from Amazon. 

Overall, this new feature is great news in my eyes (or ears), primarily for expanding accessibility to ChatGPT, but also because I've had a Read-Aloud plugin for ChatGPT in my browser for a while now. I find it interesting to listen to and analyze ChatGPT’s responses out loud, especially as I’m researching and writing. After all, its responses are designed to be as human-like as possible, and a big part of how we process actual real-life human communication is by speaking and listening to each other. 

Giving ChatGPT a capability like this can help users think about how well it is responding, as it makes use of another of our primary ways of receiving information. Beyond the obvious accessibility benefits for blind or partially sighted users, I think this is a solid move by OpenAI in cementing ChatGPT as the go-to generative AI tool, opening up another avenue for humans to connect with it.

TechRadar – All the latest technology news


Bye-bye, Bard – Google Gemini AI takes on Microsoft Copilot with new Android app you can try now

Google Bard has been officially renamed as Gemini – and as was recently rumored, there’s going to be a paid subscription to the AI in the same vein that Microsoft introduced with Copilot Pro not so long ago.

Gemini will, of course, sound familiar, as it’s actually the name of Google’s relatively recently introduced AI model which powered Bard – so basically, the latter name is being scrapped, simplifying matters so everything is called Gemini.

There’s another twist here, though, in that Google has a new sprawling AI model called Ultra 1.0, and this freshly built engine – which is the “first to outperform human experts on MMLU (massive multitask language understanding)” according to the company – will drive a new product called Gemini Advanced.

No prizes for guessing that Gemini Advanced is the paid subscription mentioned at the outset. Those who want Gemini Advanced will have to sign up to the Google One AI Premium plan (which is part of the wider Google One offering). That costs $19.99 / £18.99 per month and includes 2TB of cloud storage.

Google is really hammering home how much more advanced the paid Gemini AI will be, and how much more capable it’ll be in terms of reasoning skills and taking on difficult tasks like coding.

We’re told Gemini Advanced will offer longer, more in-depth conversations and will understand context to a higher level based on your previous input. Examples provided by Google include Gemini Advanced acting as a personal tutor capable of creating step-by-step tutorials based on the learning style it has determined is best for you.

Or for creative types, Gemini Advanced will help with content creation, taking into account considerations such as recent trends, and ways in which it might be best for creators to drive audience numbers upwards.

Google is also introducing a dedicated Gemini app for its Android OS (available in the US starting today, and rolling out to more locations “starting next week”). Gemini will be accessible via the Google app on iOS, too.

Owners of the best Android phones will get the ability to use Gemini via that standalone app, or can opt in via Google Assistant, and it’ll basically become your new generative AI-powered helper in place of Assistant.

Long-press the power button and you’ll summon Gemini (or use “Hey Google”), and you can ask for help in a context-sensitive fashion. Just taken a photo? Prod Gemini and the AI will pop up to suggest some captions, for example, or you can get it to compose a text, clarify something about an article currently on-screen, and so on.

Google Assistant voice features will also be catered for by Gemini on Android, such as controlling smart home gadgets.

Naturally, the iOS implementation won’t be anything like this, but within the Google app you’ll have a Gemini button that can be used to create images, write texts, and deliver other more basic functions than you’ll see on Android.

The rollout of the Gemini app on Android and iOS handsets starts today in the US, so some folks may be able to get it right now. It’ll be made available to others in the coming weeks.


Analysis: As Bard exits stage left, will Gemini shine in the spotlight?

Google is pretty stoked about the capabilities of Gemini Advanced, and notes that it employs a diverse set of 57 subjects – from math and physics, through to law and medicine – to power its knowledge base and problem-solving chops.

We’re told by Google that in “blind evaluations with our third-party raters” the Ultra 1.0-powered Gemini Advanced came out as the preferred chatbot to leading rivals (read: Copilot Pro).

Okay, that’s all well and good, but big talk is all part of a big launch – and make no mistake, this is a huge development for Google’s AI ambitions. How the supercharged Ultra 1.0 model pans out in reality, well, that’s the real question. (And we’re playing around with it already, rest assured – stay tuned for a hands-on experience soon).

The other question you’ll likely be mulling is how much this AI subscription will cost. In the US and UK it’ll run to $20 / £18.99 per month (about AU$30 per month), though you do get a free two-month trial to test the waters, which seems to suggest Google is fairly confident Gemini Advanced will impress.

If $20 monthly sounds familiar, well, guess what – that’s exactly what Microsoft charges for Copilot Pro. How’s that for a coincidence? That said, there’s an additional value spin for Google here – the Google One AI Premium plan doesn’t just include the AI, but other benefits too, most notably 2TB of cloud storage. Copilot Pro doesn’t come with any extras as such (unless you count unlocking the AI in certain Microsoft apps, such as Word, Excel and so on, for Microsoft 365 subscribers).

So now, not only do we have the race between Google and Microsoft’s respective AIs, but we have the battle between the paid versions – and perhaps the most interesting part of the latter conflict will be how much in the way of functionality is gated from free users.

Thus far, Copilot Pro is about making things faster and better for paying users, and adding some exclusive features, whereas Gemini Advanced seems to be built more around the idea of adding a lot more depth in terms of features and the overall experience. Furthermore, Google is chucking in bonuses like cloud storage, and looking to really compete on the value front.

However, as mentioned, we’ll need to spend some time with Google’s new paid AI offering before we can draw any real conclusions about how much smarter and more context-aware it is.


The latest Meta Quest 3 update brings 4 useful upgrades, and takes away a feature

‘Tis the season for a Meta Quest update, with new features, and even a performance boost, coming to your Oculus Quest 2, Meta Quest 3, and Meta Quest Pro VR headsets via update v60. Unfortunately, the update also means the removal of a feature – so long, phone notifications.

Per the announcement on Meta's blog, which change is the most impactful is a toss-up depending on which headset you own. For Meta Quest Pro users it’s likely going to be the mixed-reality performance boost that’s coming exclusively to your headset. Meta is enabling higher clock speeds for the Pro’s CPU and GPU that it says will result in a 34% and 19% increase in performance for these components respectively.

This boost won’t improve the passthrough video quality, just the rendering and responsiveness of the virtual objects in your MR space though – so it might not be enough to convince you to try more MR apps if you haven't already. 

If you don’t own a Quest Pro, the best upgrade coming in v60 is to the number of rooms your Quest device can remember. If you opt in to share your point cloud data, your VR headset will gain the ability to store information for more than one play space at a time – meaning you should be able to move your play space between rooms more easily, without having to redraw the boundaries every time.


You can now enjoy your Quest 3 in multiple rooms more easily (Image credit: Meta)

As we mentioned above, however, users are losing access to one feature – phone notifications will no longer show on your headset.

It’s not clear exactly why this tool is being taken away – our guess is that it has something to do with the feature not being popular enough – but those who do rely on it will notice a downgrade. You’ll now need to remove your headset every time you want to check why your phone has pinged, unless you have a Meta Quest 3; as we noted in our Meta Quest 3 review, this headset’s mixed-reality passthrough is a major leap forward, and it’s good enough for you to be able to make out what’s on a real-world screen. 

A new Horizon (Home)

A few other changes coming in v60 include new Meta Horizon Home environments – the Blue Hill Gold Mine, Storybook, and Lakeside Peak (which you can see in the GIF below). These visually distinct environments will not only give you a nice space to load into when you boot up your headset, but also a more personalized place where you can invite your VR friends to hang out and watch Meta Quest TV content together before jumping into a multiplayer experience.

The scene shifts between a pristine storybook world, a wild west saloon at night time, and a stunning mountain view

(Image credit: Meta)

Your profile is also getting a power-up. Now, unless you keep the info private by changing your account preferences, people who look at your profile can see more details about your shared VR interests, including the apps you both use and your mutual friends.

Neither is super-impactful right now, but as the metaverse becomes more social these sorts of minor tweaks will help to make the experience a lot more seamless, so they're certainly appreciated.

As with previous updates, v60 is gradually rolling out now, so if you don’t see the new features yet don’t panic – you shouldn’t have long to wait until the update installs and they unlock.


Google Search adds Notes tab and takes a small step toward social search

Google is updating its search engine with new tools to help you personalize the experience by making it tailored specifically to you.

Among the three reveals, the most impressive, in our opinion, has to be the upcoming Notes feature. This tool will pull together different “tips and advice” on a topic from across the internet if a search result isn't particularly helpful. In a demonstration shown to us, a Google rep played the role of someone looking up instructions on how to make frosting for a birthday cake. Let's say the first search result wasn't what you were looking for. 

Selecting the Notes button below that result connects you to content uploaded by other people offering unique insight into your query. By giving users the opportunity to learn from other people's experiences, Google explained, you might find information better suited to your needs that an official source may fail to address. On the surface, it feels like the tech giant is launching a mini social media platform on Search, since the feature allows for a free-flowing exchange of information.

Google Search Notes

(Image credit: Google)

Be aware that Notes is a type of short-form content. There doesn’t appear to be enough room for long pieces of text. You won't be able to write, for instance, a 500-word recipe for frosting. These Notes need to be short and sweet. It's all about having people provide bite-sized tips on how to make something better such as suggesting adding a bit of lemon zest to a batch of frosting.

Guardrails and limitations

Google is aware that implementing such an update could expose Google Search to bad actors uploading inappropriate content. To combat this, it’s adding several guardrails. First, Google will be “using a combination of algorithmic protections and human review” to double-check what is uploaded. Second, the content in “each Note is ranked” according to a search result. The more relevant it is, the higher it’ll place. Finally, “anyone can report a note… for human review” if they run into any inappropriate content.

There are several limitations you need to be aware of. The search feature launches today, however, it will only be made available to users living in the US or India, plus they must be a part of the Search Labs program. Additionally, it will be exclusive to the mobile web version of Google Search as well as the official Google app. 

If you want to try this out, we have a guide teaching people how to try out Google software betas. The guide describes gaining access to the Search Generative Experience, but the process is the same.

Notes will start as an “experiment”. The company wants to see how well this feature will work on a grand scale. It’s unknown how long the trial will last or if it'll see an official release.

Follow your favs

The rest of the update isn’t as dynamic, but it’s still interesting. Over the coming weeks, Google will introduce a Follow tool to American users on mobile. It’ll help you stay up to date on topics you frequently look up by providing “new-to-you” information. Follow can deliver news articles on the latest events of your favorite sports team or specific fashion trends. 

In the image below, you'll see how Follow changes. The left screenshot displays a fairly generic feed with a few pictures, but over time, Google will deliver videos from content creators specializing in your interest once it learns what you like.

Google's new Follow tool

(Image credit: Google)

Finally, the Perspectives tab will roll out to Google Search on desktop to, as you can probably guess, people living in the United States. This tool lets you find content from various online communities like forums or social media platforms. Prior to this, it was exclusive to mobile devices.

As you can see, the US is getting the lion's share of this update. We asked a Google rep if there are plans for an international launch. All we were told is that they're working on bringing the “features to locations”, but there's nothing more to share at the moment.

We can see these tools becoming really useful for helping you track great tech deals during the holiday season. If you want our advice, check out TechRadar's list of the nine Black Friday deals we recommend buying now.


Say your goodbyes to Cortana: the unloved Windows 11 assistant is going the way of Clippy as Copilot takes over

The preview of the newest Windows 11 build is missing a big thing: the Cortana app. This change was detailed in an official Windows Insider blog post (aimed at people who help test out early versions of upcoming Windows 11 updates), providing a link to an extra page going into more detail about ending support for the Cortana standalone app. 

This isn’t the first time we’ve seen talk of effectively killing off the Cortana app. BleepingComputer reports about another Canary channel preview release that had the Cortana app and support for it removed earlier this year. The future of Cortana was originally announced back in June, when Microsoft first set out its plans to end support for the standalone app.

At the time, Microsoft wrote that support for Cortana would also eventually end for a range of Microsoft products including Teams mobile, the Teams display, and Teams Rooms, as well as ending voice assistance for Outlook mobile and Microsoft 365 mobile, in the latter half of this year.

This is a big step for Microsoft which committed a lot of time and resources to Cortana, integrating it deeply into the Windows operating system and tailoring it to work with a number of Microsoft apps and products. It was, however, long expected that it may end up getting culled after Microsoft put out an announcement on its official support blog two years ago that support for the Cortana mobile app would end.

Screenshot of Windows Copilot in use

(Image credit: Microsoft)

The new kid on the block, Copilot

Cortana’s exit is happening to make way for Microsoft’s new central focus, its AI-equipped assistant named Copilot, which was announced at this year’s Build conference. Users were able to try Copilot after the Windows 11 22H2 update was released on September 26. Microsoft’s CVP, Yusuf Mehdi, stated that “Copilot will uniquely incorporate the context and intelligence of the web, your work data, and what you are doing in the moment,” and emphasized that Microsoft was prioritizing privacy and security.

After an optional update (or eventually I assume a mandatory update), Copilot will be turned on by default, with users being able to configure settings with Microsoft’s Intune policy or Group Policy (for groups and organizations). This was clarified by Harjit Dhaliwal, a Product Marketing Manager at Microsoft, in a Microsoft enterprise blog post.

As well as Copilot, Microsoft has told users how they can utilize its AI-powered search engine Bing Search and enable voice assistance capabilities through Voice access in Windows 11. 

Cortana’s demise isn’t too surprising, as the voice assistant got a very mixed reception and saw a lot of criticism. Microsoft appears to want to have another try, and is clearly hoping that the AI-powered Copilot will fare better. Although Copilot has taken somewhat wobbly first steps, it’s innovative and has plenty of potential.


Meta takes aim at GPT-4 for its next AI model

Meta is planning to match, if not surpass, OpenAI's powerful GPT-4 model with its own sophisticated artificial intelligence bot. The company plans to begin training the large language model (LLM) early next year, and likely hopes it will take the number one spot in the AI game.

According to the Wall Street Journal, Meta has been buying up Nvidia H100 AI training chips and strengthening internal infrastructure to ensure that this time around, Meta won’t have to rely on Microsoft’s Azure cloud platform to train its new chatbot. 

The Verge notes that there’s already a group within the company that was put together earlier in the year to begin work building the model, with the apparent goal being to quickly create a tool that can closely emulate human expressions. 

Is this what we want? And do companies care?

Back in June, a leak suggested that a new Instagram feature would have chatbots integrated into the platform that could answer questions, give advice, and help users write messages. Interestingly, users would also be able to choose from “30 AI personalities and find which one [they] like best”. 

It seems like this leak might actually come to fruition if Meta is putting in this much time and effort to replicate human expressiveness. Of course, the company will probably study Snapchat AI for a comprehensive look at what not to do when it comes to squeezing AI chatbots into its apps, hopefully skipping the part where Snapchat's AI bot got bullied and gave users some pretty disturbing advice.

Overall, the AI scramble carries on as big companies continue to climb to the summit of a mysterious, unexplored mountain. Meta has made a point of ensuring the potential new LLM will remain free for other companies to base their own AI tools on – a net positive in my book. We’ll just have to wait for next year to see what exactly is in store.


This video maker’s new AI editing tool picks your best takes for you

Artificial intelligence may already be a staple in the best video editing software, but now Veed is launching what it calls an “industry-first editing tool” for its video maker platform. 

Every second counts when making online video, especially on platforms like TikTok and Instagram, where brands only have a few seconds to capture the audience. Presumably, Veed thinks our “umms” and “aahs” are wasting valuable time – with Magic Cut set to clean up content. 

The AI tool streamlines one of the most time-consuming (read: soul-destroying) parts of video editing – removing all the filler words and pauses. At the touch of a button, users can chop out all hesitation, deviation, or repetition. It’s joined by several other video editing tools aimed at polishing up post-production.

Critical content creation 

With its video maker service, Veed is no stranger to simplifying content editing. Unlike even the best free video editing software and video editing software for beginners, these services let businesses create a lot of content fast. It’s not Emmy award-winning material. But the videos are professional enough for social media channels. 

The arrival of AI tools like Magic Cut hardly comes as a surprise as developers streamline production processes in the drive for total accessibility. 

According to Veed's own research, over a third of consumers struggle with editing videos. It’s those users without the time or experience that tools like Magic Cut are really pitched at – an easy way to automatically clip the best takes for TikTok, Shorts, and Reels. 

“Magic Cut means people don’t have to worry about getting the perfect take or spend hours trying to cut out the bits they don’t want. This allows people to spend more time on the creative, fun parts of content creation,” said Veed CEO and co-founder Sabba Keynejad. 

The AI editor isn’t the only tool to find its way onto the platform. Generating subtitles, scripts, and images, removing background noise, and converting text to audio are all now featured. 

Veed’s toolset was one of the few areas where we thought the platform really shone during our review. Green screen keying and a free screen recorder were two highlights. So, we’ll be interested to see how well Magic Cut performs in the line-up, especially once the fuller-featured Clean Edit drops. Users can try it out for themselves by signing up for early access.


Brave Summarizer takes on Bing and ChatGPT at the AI search results game

Hopping on the AI train, Brave is incorporating its own AI-powered function, called Summarizer, into its search engine – similar to what Microsoft recently did with Bing.

The new feature “provides concise and to-the-point answers at the top of Brave Search results”. For example, if you want to learn about the chemical spill in East Palestine, Ohio, Summarizer will create a one-paragraph summary of the event alongside some sources for you to read. Unlike Microsoft, which uses ChatGPT for Bing's chatbot, Summarizer uses three in-house large language models (LLMs for short), based on retrained versions of the BART and DeBERTa AI models, to create the search-results snippets.

Retraining AI

To simplify the technology behind them, BART and DeBERTa are generative writing AIs like ChatGPT that have been specially trained to take into account word positioning as well as context so the text output reads well. What Brave did is take those models and retrain them using its own search result data to develop Summarizer.

Summarizer’s training regimen is a three-step process, according to the announcement. First, Brave taught the LLMs to prioritize answering the question being asked. Then, the company utilized “zero-shot classifiers” to categorize results so the given information is relevant. The final step helps the models rewrite the snippet so it’s more coherent. The result is an accurate answer written succinctly with multiple sources attached.

Be aware the feature is still in its early stages. Brave states Summarizer is currently applied to only about 17 percent of search queries, but there are plans to scale that number even higher for better paragraphs. Its accuracy needs some work, too. The company admits Summarizer may produce what it calls “hallucinations” – unrelated snippets mixed in with results. Plus, there's the possibility of the feature throwing some “false or offensive text” into an answer.

Availability

Summarizer is currently available to all Brave Search users on desktop and mobile, with the exception of Brave Search Goggles, where it's disabled. You can turn it off anytime you want in the browser’s settings menu. The company is also asking users for feedback on how it can improve the tool.

We tried out Summarizer ourselves, and as cool as it is, it does need some work. Not every search will give you a snippet, as it depends on what you ask, as well as which news topics are making the rounds. The East Palestine, Ohio chemical spill, for example, is currently a hot-button issue, so Summarizer works just fine there. However, when we asked about the recent cold snap in Los Angeles and what’s going on with certain video game developers, we either got no summary or outdated information. The latter did at least come with sources, so we could verify it. Still, that's better than having ChatGPT throw a temper tantrum or lie to your face.

Be sure to check out TechRadar’s list of the best AI writers for 2023 if you’re interested in learning what AI creativity can do for you.


TikTok takes on YouTube with 10-minute videos – but will people watch?

TikTok has enabled the ability to create videos that can last for up to 10 minutes, an increase from the three- and five-minute limits previously available to different creators.

Over the last 18 months, the company has been testing videos of different lengths that creators could publish, with a limit of five minutes that's been in place since 2019.

However, some creators wanted TikTok to extend the length, to better compete with YouTube and Instagram Reels. Now that it's here, though, one wonders if TikTok users want 10-minute videos to scroll through in their 'For You' feed.

Analysis: 10-minute videos may have niche appeal

TikTok is a social platform where you scroll vertically to watch videos. You can watch a feed of videos from users you follow, or another called 'For You', where TikTok's algorithm curates new videos from creators you don't follow – but the app's appeal is watching short videos to pass the time.

10-minute videos may be a stretch. We're getting perilously close to the range of a web movie or TV show. The 2003 series Star Wars: The Clone Wars is a good example here, where episodes could range between three and twelve minutes. To be fair, we rather enjoyed that series. With the new 10-minute range, TikTok could start bringing more episodic series to the platform.

In the near term, though, TikTok's new competitor is clearly YouTube, a platform that's already attracting some TikTok creators anxious for more time on the digital stage.

Longer videos on TikTok may help some creators with the topics they cover, such as making pancakes, throwbacks to old TV shows, or documentaries.

But 10-minute videos will require users to sit down and focus on what they're watching, instead of mindlessly scrolling through. On the other hand, these longer videos are entirely optional. It's possible that you won't see 10-minute TikToks in your feed. You might also choose to help the algorithm filter them out for you by not pausing to watch any of them. After all, who has an hour to spare for TikTok?

As for TikTok, these extended videos are a sign that it wants some of its creators to cover topics that can only be explained in relatively long-form videos. Its success in that effort will depend on how users respond to the change.

And as TikTok comes for YouTube, YouTube is coming for TikTok, too. YouTube has its own take on TikTok called Shorts, where creators can release shorter content, but it's a feature still in its early stages.

While TikTok takes on the video giant, it's also tackling its own monetary issues, making sure creators feel compensated so they don't jump to the potentially more lucrative YouTube.

The monetization efforts compared to YouTube are reportedly very small, which has meant that creators such as hankschannel are moving away from TikTok for more income on Google's video platform.

Essentially, TikTok's faced with a multi-pronged effort to excite and keep active creators: longer videos for more creative freedom and new monetization efforts to match the creators' extra effort with better revenue streams.

It's only then that the company has a chance to go head to head with YouTube, but it also depends on whether more creators and users will jump ship to TikTok and its new 10-minute video opportunity.
