Google may be making AI versions of celebrities for you to chat up in YouTube

Google is working on creating artificial intelligence-powered chatbots mimicking famous people and fictional characters, according to a report from The Information. These AI celebrities, YouTube influencers, and imaginary people will also serve as a template for users to build their own generative AI chatbots with customized personalities and appearances.

At first glance, these chatbots sound similar to the recently released Gems, customized versions of Google’s Gemini models. But Gems are designed to handle a specific task, such as writing code or designing a fitness regimen. The chatbots described in the report instead focus on mimicking the personalities and responses of whichever character or celebrity they are based on. 

Google appears to be imitating, and attempting to surpass, the early proponents of custom chatbots based on famous and fictional people. That’s also the path Meta has pursued with its Celebrity AI chatbots, with official partnerships producing AI recreations of people like Paris Hilton and Snoop Dogg.

Where will they be?

Google may look to incorporate its generative AI chatbots into YouTube rather than launching them as standalone products. The obvious benefit is that it would let popular YouTube creators promote the service with their own AI personas. That’s what major YouTube star Mr. Beast already does on Meta. Presumably, Google would figure out a monetization method linked to engagement and other YouTube metrics. 

The report doesn’t mention which celebrities Google might use, but connecting it to YouTube personalities and their popular pages may help the chatbots avoid the disinterest Meta’s celeb chatbots face. The Snoop Dogg dungeon master has only 14,600 followers on Instagram, for instance, compared with 87.5 million followers on the actual Snoop Dogg account. The same goes for Paris Hilton, who has 26.5 million followers compared to her AI detective character’s Instagram page, with just 13,300 followers.

Though there’s no confirmation from Google or an official rollout timeline yet, you can probably expect to see Google’s customizable chatbot platform on the Google Labs page if you want to be an early adopter of chatting with an AI celebrity clone or making an AI version of yourself to talk to.

You might also like

TechRadar – All the latest technology news

Read More

Windows 11 loses keyboard shortcut for Copilot, making us wonder if this is a cynical move by Microsoft to stoke Copilot+ PC sales

What’s going to drive Copilot+ PC sales, do you think? Superb AI acceleration chops? Windows on Arm getting emulation nailed for fast app and gaming performance (on Snapdragon X models)? No – it’s the Copilot key on the keyboard, dummy.

Surprised? Well, we certainly are, but apparently one of Microsoft’s selling points for Copilot+ PCs is the dedicated key to summon the AI on the keyboard.

We can draw that unexpected conclusion from a move Microsoft just made which seems pretty mystifying otherwise: namely the removal of the keyboard shortcut for Copilot from Windows 11.

As flagged up by Tom’s Hardware, the new Windows 11 preview (build 22635) in the Beta channel has dumped the keyboard shortcut (Windows key + C) that brings up the Copilot panel. This is an update that just happened (on June 19), after the preview build initially emerged on June 14.

Microsoft explains very vaguely that: “As part of the Copilot experience’s evolution on Windows to become an app that is pinned to the taskbar, we are retiring the WIN + C keyboard shortcut.”

An Acer Swift Go 14 on a desk

(Image credit: Future / James Holland)

Analysis: A cynical move by Microsoft?

What now? How is removing a useful keyboard shortcut part of the ‘evolution’ of Copilot? Surely it’s a step backwards to drop one of the ways to summon the AI assistant on the desktop?

Now, if Microsoft had big plans for the Windows + C shortcut elsewhere, say another piece of functionality that had come in which required this particular combo, the reasoning might at least be a little clearer. But by all accounts, there’s no replacement function here – Windows + C now does nothing.

As for the reason somehow being tied to Copilot shifting to become an app window, rather than a locked side panel in Windows 11, we don’t see how that has any relevance at all to whether you can open the AI with a keyboard shortcut or not.

As Tom’s Guide points out, seemingly the driver for this change is to make the Copilot key on the keyboard a more pivotal function, replacing the shortcut, but guess what – you only get that key on new Copilot+ PCs (right now anyway). So, the logical conclusion for the skeptical is that this is simply a fresh angle on helping to stoke sales for Copilot+ PCs.

It’s not like you can’t just click on the Copilot icon, of course, so you’re not lost at sea with no AI assistance all of a sudden – but that’s not the point. Clearly, though, it’s a lost convenience, and it feels like a cynical move by Microsoft.

Tom’s Guide points out that you could use third-party key mapping software to restore the functionality of this particular shortcut, but the point is, you really shouldn’t have to bother jumping through such hoops. Come on, Microsoft – don’t pull stunts like this, or, if there is a good reason behind the change, share it, not some waffling soundbites about evolving Copilot.


Meta can’t stop making the Meta Quest 3’s mixed reality better with updates

June is here, and like clockwork the latest update for your Meta Quest 3 headset is ready to roll out. 

The standout upgrade for v66 is to the VR headset’s mixed reality (again) – after it was the main focus of Horizon OS v64, and got some subtle tweaks in v65 too.

We aren’t complaining though, as this improvement looks set to make the image quality even better, with reduced image distortion in general and a reduction to the warping effect that can appear around moving objects. The upshot is that you should notice that it’s easier to interact with real-world objects while in mixed reality, and the overlay that displays your virtual hands should better align with where your actual hands appear to be.

If you want to see a side-by-side, Meta has handily released a video showcasing the improvements to mixed reality.

If you’re using your hands instead of controllers, Meta is also adding new wrist buttons.

Should you choose to enable this option in the experimental settings menu, you’ll be able to tap on your right or left wrist to use the Meta or Menu buttons respectively.

According to Meta, wrist buttons will make it a lot easier to open a menu from within a game or app – either the in-game pause screen, or the system-level menu should you want to change to a different experience, take a screenshot or adjust your headset’s settings. We’ll have to try them out for ourselves, but they certainly sound like an improvement, and a similar feature could bring even more button controls to the hand-tracking experience.

A gif showing a person pinching their fingers to open the Quest menu

You’ll no longer need to pinch to open menus (Image credit: Meta)

Lastly, Meta is making it easier to enjoy background audio – so if you start audio or a video in the Browser, it’ll keep playing when you minimize the app – and is also making a few changes to Parental Supervision features. Namely, from June 27, children aged 10 to 12 who are supervised by the same parent account will automatically be able to see each other in the Family Center.

As Meta warns, however, the update is rolling out gradually – and because this month’s passthrough change is so big, it says it will be sending out updates even more slowly than usual. What’s more, some people who update to v66 might not get all the improvements right away.

So if you don’t see the option to update right away, or any passthrough improvements once you've installed v66 on your Meta Quest 3, don’t fret. You will get the upgrade eventually.


WhatsApp for Android is making it much easier to find older messages

WhatsApp users on Android just got access to a feature that iPhone owners have been making use of for a while now: the ability to search through conversations by date, which makes it much easier to dig out old chats.

The new feature was announced by Meta CEO Mark Zuckerberg on his WhatsApp channel (via TechCrunch), and is apparently rolling out to Android devices now – so if you don't have it already, you should see it soon.

To use it, head into any of your chats, then tap the three dots (top right) and Search. You should then see a calendar icon in the top-right corner, which you can tap on to jump to messages sent and received on a particular day. You can also tap on the name of the conversation at the top to find the Search option.

This is all very similar to how the 'search by date' function works on other platforms, but Android has been lagging behind in this respect – even WhatsApp for the web offers the option to search through chats by date.

Regular updates

WhatsApp date search

How the ‘search by date’ feature looks on Android (Image credit: Future)

This is of course a handy and welcome addition for WhatsApp users on Android, as it could save a serious amount of scrolling – assuming, of course, that you can remember the date when you got the message or media file you're looking for.

To give the WhatsApp team credit, it's an app that gets new features on a regular basis, though not always at the same time on Android and iOS. The app actually looks different depending on which mobile OS you're using – Android puts the navigation tabs at the top, for example, but they're underneath the chat list on iOS.

Despite these disparities, the app continues to grow in popularity as a cross-platform, secure, and reliable messaging platform. It's estimated to have around 2 billion active users worldwide, which is a fair chunk of the global population.

In recent months we've seen WhatsApp roll out upgrades for photo and video sharing, as well as test an expansion of the Chat Lock feature, making it easier to protect certain conversations across multiple devices.


Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer these nightmarish creatures with melting faces. Now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs. Motion is largely smooth as butter. Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities. 

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render those actions simultaneously, resulting in a smooth-flowing creation. 

This runs contrary to other generative platforms that first establish keyframes in clips and then fill in the gaps afterward. Doing so results in the jerky movement the tech is known for.

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit including support for multimodality. 

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has an ability called Cinemagraph which can animate highlighted portions of pictures.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues. Jerky animations are present. In other cases, subjects have limbs warping into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.


ChatGPT and other AI chatbots will never stop making stuff up, experts warn

OpenAI’s ChatGPT, Google Bard, and Microsoft Bing AI are incredibly popular for their ability to quickly generate large volumes of convincingly human text, but AI “hallucination”, also known as making stuff up, is a major problem with these chatbots. Unfortunately, experts warn, it probably always will be.

A new report from the Associated Press highlights that the problem with Large Language Model (LLM) confabulation might not be as easily fixed as many tech founders and AI proponents claim, at least according to Emily Bender, a linguistics professor at the University of Washington's Computational Linguistics Laboratory.

“This isn’t fixable,” Bender said. “It’s inherent in the mismatch between the technology and the proposed use cases.”

In some instances, the making-stuff-up problem is actually a benefit, according to Jasper AI president, Shane Orlick.

“Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas—how Jasper created takes on stories or angles that they would have never thought of themselves.”

Similarly, AI hallucinations are a huge draw for AI image generation, where models like Dall-E and Midjourney can produce striking images as a result. 

For text generation though, the problem of hallucinations remains a real issue, especially when it comes to news reporting where accuracy is vital.

“[LLMs] are designed to make things up. That’s all they do,” Bender said. “But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance. Even if they can be tuned to be right more of the time, they will still have failure modes—and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

Unfortunately, when all you have is a hammer, the whole world can look like a nail

LLMs are powerful tools that can do remarkable things, but companies and the tech industry must understand that just because something is powerful doesn't mean it's a good tool to use.

A jackhammer is the right tool for the job of breaking up a sidewalk and asphalt, but you wouldn't bring one onto an archaeological dig site. Similarly, bringing an AI chatbot into reputable news organizations and pitching these tools as a time-saving innovation for journalists is a fundamental misunderstanding of how we use language to communicate important information. Just ask the recently sanctioned lawyers who got caught out using fabricated case law produced by an AI chatbot.

As Bender noted, an LLM is built from the ground up to predict the next word in a sequence based on the prompt you give it. Every word in its training data has been given a weight – a percentage chance that it will follow any given word in a given context. What those words don't carry is actual meaning, or the context needed to ensure the output is accurate. These large language models are magnificent mimics that have no idea what they are actually saying, and treating them as anything else is bound to get you into trouble.
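To make the mechanism Bender describes concrete, here's a deliberately tiny sketch of next-word prediction: a lookup table of words and the weights of the words that may follow them, sampled purely by chance. Real LLMs use neural networks over tokens rather than literal tables, and all the words and weights below are invented for illustration – but the core point stands: the model picks plausible continuations, with no notion of truth.

```python
import random

# Toy "language model": each word maps to candidate next words with weights.
# Purely illustrative data, not from any real training corpus.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word, rng):
    """Sample the next word purely from learned co-occurrence weights."""
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None  # nothing learned for this word: stop generating
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, rng, max_len=5):
    """Extend a prompt one sampled word at a time, like an LLM does."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", random.Random(0)))
```

Whether the sampled sentence happens to be true of any actual cat or dog is, exactly as Bender says, a matter of chance: nothing in the table encodes meaning, only frequency.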

This weakness is baked into the LLM itself, and while “hallucinations” (clever technobabble designed to cover for the fact that these AI models simply produce false information purported to be factual) might be diminished in future iterations, they can't be permanently fixed, so there is always the risk of failure. 


Apple isn’t making game controllers for Vision Pro – Microsoft and Sony may have it covered

If you're wondering what Apple's official Vision Pro controllers are going to look like, just imagine something that isn't there. That's because Apple is reportedly determined to make its AR/VR headset a controller-free zone. 

The report comes via Apple watcher Mark Gurman, who wrote in a Bloomberg newsletter that Apple had experimented with a finger-based controller device. Gurman also reports that the company tried third-party VR controllers, including models from HTC, before the decision was made. For Apple, controlling the Vision Pro means hand and eye scanning and Siri voice controls, not the kind of hand controllers you get with headsets such as the HTC Vive Pro 2.

Apple had also reportedly experimented with a physical Bluetooth or Mac keyboard, but has decided instead to go with an in-air keyboard for those moments when you really have to type something, such as a password you haven't already stored in your iCloud Keychain.

Does Vision Pro support third-party controllers?

Yes and no. According to Gurman, while Apple won't make a physical controller for what's expected to be the best VR headset, it will support PS5 and Xbox controllers for gaming. 

However, Apple has no plans to make its own Vision Pro game controller, and it has no plans to support third-party VR accessories. Whether that'll change with time and Apple will find a VR equivalent of the Made for iPhone certification scheme, something that's been a nice little earner for Apple over the years, is unknown.

I don't think the lack of third-party support or a hardware handheld controller is going to be a big deal, especially based on all the early verdicts so far. When we tried the Vision Pro, we found gesture and vision tracking to work very well after a brief setup routine: “if I looked at an app like Photos, I could then pinch together my thumb and index finger to open it. To scroll in a window, I would pinch, hold and drag my hand left or right or up or down.” 

Once you get used to it, it's very simple and straightforward. And there are still many months left for Apple to refine it further, and many more before the average consumer is using an Apple headset.


YouTube is making it easier for creators to make money — here’s how

In a surprising move from the massive video platform, YouTube has announced that it is lowering the requirements for its YouTube Partner Program, making it easier for content creators to monetize their content.

Under these new requirements, YouTubers will be eligible to apply for partnership at 500 subscribers, a 50% cut from the previous 1,000 needed. Other requirements will also be lowered, such as creators only needing 3,000 valid watch hours instead of 4,000, as well as 3 million YouTube Shorts views compared to 10 million before.
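The new thresholds can be summed up in a few lines of code. This is just an illustrative sketch of the rules as the article describes them – the function name is invented, it isn't a real YouTube API, and it assumes (as YouTube's program historically has) that the watch-hours and Shorts-views requirements are alternative routes rather than both being required:

```python
def eligible_for_partnership(subscribers: int, watch_hours: int, shorts_views: int) -> bool:
    """Check the lowered YouTube Partner Program thresholds from the article:
    500 subscribers, plus either 3,000 valid watch hours or 3 million Shorts views.
    Hypothetical helper for illustration only."""
    if subscribers < 500:
        return False
    return watch_hours >= 3_000 or shorts_views >= 3_000_000

print(eligible_for_partnership(600, 3_500, 0))          # qualifies via watch hours
print(eligible_for_partnership(600, 0, 4_000_000))      # qualifies via Shorts views
print(eligible_for_partnership(400, 5_000, 5_000_000))  # too few subscribers
```

Under the old rules, that same check would have read `subscribers >= 1_000` with `watch_hours >= 4_000` or `shorts_views >= 10_000_000`.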

According to The Verge, the site is also “opening up a handful of monetization methods to smaller creators, including paid chat, tipping, channel memberships, and shopping features.”

The shopping affiliate program is especially interesting. It was previously only available by invitation to select creators, but thanks to these sweeping changes, YouTube Partner Program participants in the US with at least 20,000 subscribers can now apply to it.

These changes will be initially rolling out in the US, UK, Canada, Taiwan, and South Korea, with plans to increase the number of regions later on.

YouTube is actually doing some good (TikTok too!) 

YouTube has been rolling out some pro-creator and user-friendly changes to its site as of late. Some of these include retiring overlaying banner ads on the desktop version, YouTube Premium for iOS getting better quality videos, and harnessing the power of AI to create real-time translations for its videos.

While some changes have been well received, like the feature that lets viewers see the most-watched parts of a video via a clear graph, others, like the site's continued attempts to block ad-blockers, have been less popular.

Regardless, it’s good to see that YouTube is working to actively improve the experience. And it’s not only YouTube, as other social media platforms like TikTok have been working to make similar quality-of-life changes. 

For instance, The Verge details how TikTok’s “video paywall feature, Series, would be available to creators with more than 10,000 followers but that users with 1,000 followers who met other requirements could also apply to participate in the program.”

It’s good to see some positive news surrounding these sites, and fingers crossed that YouTube doesn’t end up in some serious hot water soon after this announcement. I’m afraid it’s a little too late for TikTok, though.

If you want to see how to make money using YouTube, or how to create a YouTube channel, we have you covered! Also, make sure you check out our best webcams guide as well.


Microsoft is finally making Edge a much more secure place to surf the web

Keeping safe online is about to get a lot easier for Edge users thanks to a major security update from Microsoft.

The software giant has revealed it is working on an upgrade for its web browser that will bring “enhanced security” as a default for users everywhere.

This includes adding additional operating system and hardware protections for Edge that the company says, when combined, will help provide “defense in depth”, making it more difficult than ever before for a malicious site to use an unpatched vulnerability to write to executable memory and attack an end user.

Edge enhanced security

Going forward, users will see an additional banner with the words “added security” in the URL navigation bar in Edge, instantly letting you know you have extra protection for that specific site.

“Microsoft Edge is adding enhanced security protections to provide an extra layer of protection when browsing the web and visiting unfamiliar sites,” the company wrote in a blog post announcing the news.

“The web platform is designed to give you a rich browsing experience using powerful technologies like JavaScript. On the other hand, that power can translate to more exposure when you visit a malicious site. With enhanced security mode, Microsoft Edge helps reduce the risk of an attack by automatically applying more conservative security settings on unfamiliar sites and adapts over time as you continue to browse.”

More security for Edge

Users will be able to create exceptions for certain trusted websites, where enhanced security can either be disabled or enabled permanently. Enterprise admins can also configure certain websites to be blocked or allowed.

In its entry on the official Microsoft 365 roadmap, the company noted enhanced security mode is being turned on by default to “Balanced” mode for x64 Windows, x64 macOS, x64 Linux, and ARM64 systems.

The update is still listed as being “in development” for the time being, but has a scheduled rollout start date of July 2023, when users across the globe will be able to access it.

Recent Statcounter figures show that Microsoft's ongoing efforts to push users towards Edge may not be having the desired effect. Its most recent report found that Edge had lost its second place in the global browser market to Apple's Safari offering, which now claims 11.87% of users, compared to Edge's 11% – although both trail far behind runaway leader Google Chrome (66.13%).


Xbox App for Windows is making PC gaming more accessible

Microsoft has just powered up the Xbox App for Windows in a new update that brings in a lot of useful changes: accessibility improvements for starters, plus richer game cards, better filtering for your games library to find what you want, and more.

Windows Central reports that the May update for the Xbox App on PC is now out, reworking accessibility settings to make them more, well, accessible, bringing all these options together in a new menu.

Xbox App for Windows Accessibility Menu

(Image credit: Microsoft)

Essentially, this acts as a one-stop-shop hub where you can access accessibility settings for the Xbox app – for example, disabling animations or background images (those are actually two new features designed to remove what might be unnecessary distractions for some folks). Also, the menu offers convenient shortcuts to other accessibility options (for Windows in general, for instance, or the Xbox Game Bar).

Another significant change has been introduced for game cards, which offer up more info. So you can now see at a glance how long a game takes to finish (typically), details on pricing, and relevant info on when the title is coming to Game Pass (or indeed being dropped).

Xbox App for Windows Filters

(Image credit: Microsoft)

There are also new options to filter your game library, so for example, it’s possible to look for games you can beat in a few hours (under five) if you just want a quick fix for your next venture into PC gaming. It’s also possible to sort games via accessibility features, too.

Microsoft has implemented tweaks on the social side for the Xbox App, too, allowing you to pop out your friends list (or a chat) into a separate window. If you have two desktops going, you can have a game running full-screen in one, and your social stuff popped onto the other.

Xbox App for Windows Social

(Image credit: Microsoft)

Analysis: Impressive steps forward

There’s some very useful stuff added here, with the extra details on game cards and additional filter options likely to prove very handy (especially the idea of looking for quick-fix games, or indeed the opposite end of the spectrum – games that will consume your life for the next month or three, perhaps). Note that the estimates of game lengths are drawn from a third-party website.

Furthermore, Microsoft continues to put its best foot forward with further efforts on the accessibility front. We’ve seen a lot of such work in Windows 11 at a broader level – with lots of progress with Voice Access in particular of late (courtesy of the Moment 3 update) – and it’s great to see this happening on the gaming side of the equation in the OS, too.

As a final note, one thing PC gamers might have missed is that Windows 11’s live captions work in games, too – and the feature does a pretty good job for those titles which don’t have native captions.
