YouTube reveals powerful new AI tools for content creators – and we’re scared, frankly

YouTube has announced a whole bunch of AI-powered tools (on top of its existing bits and pieces) that are designed to make life easier for content creators on the platform.

As The Verge spotted, one of the big reveals at the ‘Made on YouTube’ event which just took place was something called ‘Dream Screen’, an AI-powered image and video generation facility for YouTube Shorts.

This lets a video creator simply type in a description of the background they’d like – say, a panda drinking a cup of coffee – and the AI will take the reins and produce that video (or image) background for the clip.

This is how the process will be implemented to begin with – you prompt the AI, and it makes something for you – but eventually, creators will be able to remix content to produce something new, we’re told.

YouTube Studio is also getting an infusion of AI tools that will suggest video topic ideas to individual creators, based on what’s trending with viewers interested in the kind of content that creator normally deals in.

A system of AI-powered music recommendations will also come into play to furnish audio for any given video.


Analysis: Grab the shovel?

Is it us, or does this sound rather scary? Okay, so content creators may find it useful and convenient to be able to drop in AI-generated video or image backgrounds really quickly, and have some music layered on top, and so on.

But isn’t this just going to ensure a whole heap of bland – and perhaps homogeneous – content flooding onto YouTube? That seems the obvious danger, and maybe one compounded by the broader idea of content that people want to see (according to the great YouTube algorithm) being suggested to creators on the platform.

Is YouTube set to become a video platform groaning under the collective weight of content that gets quickly put together, thanks to AI tools, and shoveled out by the half-ton?

While YouTube seems highly excited about all these new AI utilities and tools, we can’t help but think it’s the beginning of the end for the video site – at least when it comes to meaningful, not generic, content.

We hope we’re wrong, but this whole brave new direction fills us with trepidation more than anything else. A tidal wave of AI-generated this, that, and the other, eclipsing everything else is clearly a prospect that should be heavily guarded against.

TechRadar – All the latest technology news

Google Bard update reveals a more powerful AI – but it might scare privacy purists

Google has built a new model for Bard which it is calling the most capable iteration of the AI yet.

Google says the new version of Bard is “more intuitive, imaginative and responsive than ever before,” offering greater levels of quality and accuracy in the chatbot’s responses.

A whole bunch of new features have been brought into the mix for Bard, starting with support for 40+ languages and some tight integration with existing Google products.

That includes giving Bard the ability to get its hooks into your emails in Gmail, and data in Google Drive and Docs, meaning you can get the AI to find info across your various files, or indeed summarize a piece of content if needed.

Bard will also be able to pull real-time data as needed from Google Maps, Google’s travel features (hotels and flights), and YouTube, all via extensions that are enabled by default (you can disable them if you wish).

Another big move here is the ability to check Bard’s answers. Not too sure about any given response from the AI? A ‘Google It’ button brings up additional info around any query, drawn from Google search (where supported), so you can see for yourself whether there’s any doubt, or difference of opinion, elsewhere online compared to what Bard is telling you.

A further fresh introduction gives Bard users the ability to share a conversation via a public link, allowing others to continue that conversation with Google’s AI themselves, should they wish.


Analysis: The distant but distinct sound of alarm bells

This is indeed a major update for Bard, and there are some useful elements in here for sure. Better quality and accuracy, and the ability to check Bard’s responses, are obviously welcome features.

Some other stuff will set alarm bells ringing for some folks, particularly the more privacy-conscious out there. Do you really want Bard’s tendrils snaking into every corner of your Google Drive, Docs, and Gmail? Doesn’t that sound like the beginning of some nightmarish overreach by the AI?

Well, Google is pretty careful here to clarify that your personal data absolutely isn’t being hoovered up to train Bard in any way. As the company puts it: “Your Google Workspace data won’t be used to train Bard’s public model and you can disable access to it at any time.”

So, the only use of the data will be to furnish you with convenient replies to queries, and that could be pretty handy. Know you’ve got a document somewhere on a certain topic, but can’t remember where it is in your Google account, or what it’s called? You should be able to prompt Bard to find it for you.

Don’t like the idea of Bard accessing your stuff in any way, shape, or form? Then you don’t have to use these abilities – they can be switched off (and the mentioned extensions don’t have to be enabled). Indeed, whatever assurances Google makes about Bard not snuffling around in your data for its own purposes, there will be folks immediately reaching for the ‘off’ switch, you can absolutely bank on it.

Microsoft quietly reveals Windows 11’s next big update could be about to arrive

If you were wondering when Windows 11’s big upgrade for this year will turn up, the answer is soon, with Microsoft now making the final preparations to deploy the 23H2 update – with a revelation apparently imminent.

As Windows Latest tells us, Microsoft just shipped a ‘Windows Configuration Update’ which is readying the toggle to allow users to select ‘Get the latest updates as soon as they’re available’ and be first in line to receive the 23H2 update.

Note that nothing is actually happening yet – this is just a piece of necessary groundwork (confirmed via an internal document from Microsoft, we’re told) ahead of the rollout of the Windows 11 23H2 update.

Okay, so when is the 23H2 update actually going to turn up? Well, Windows Latest has heard further chatter from sources that indicates Microsoft is going to announce the upgrade at an event later this week.

That would be the ‘special event’ Microsoft revealed a while back, taking place in New York on September 21 (Thursday). As well as the expected Surface hardware launches, we will also evidently get our first tease of the 23H2 update, at least in theory.


Analysis: Copilot on the horizon

An announcement this week makes sense to us, ahead of a broader rollout that’ll be coming soon enough.

As Windows Latest further points out, the 23H2 update will likely become available next month – at least in limited form. This means those who have ticked that toggle to get updates as soon as possible may receive it in October – at least some of those folks, in the usual phased deployment – before that wider rollout kicks off in November, and everyone gets the new features contained within the upgrade.

In theory, that means Windows Copilot, though we suspect the initial incarnation of the AI assistant is still going to be pretty limited. (And we do wonder why Microsoft isn’t keeping it baking until next year, but that’s a whole other argument – it seems like with AI, everything has to be done in quite a rush.)

It’s also worth bearing in mind that if you’re still on the original version of Windows 11, 21H2, you’ll need to upgrade anyway – as support for that runs out on October 10, 2023. PCs on 21H2 are being force-upgraded to 22H2 right now, although you’ll pretty much be able to skip straight to 23H2 after that, should you wish.

Meta Quest 3 firmware leak reveals big depth mapping upgrade

All the signs are that Meta is going to fully unveil the Meta Quest 3, and give us an on sale date, at its special event on September 27. Until then, we're learning more about the device through leaks – such as the latest one that reveals its depth mapping capabilities.

In a clip dug out of the Meta Quest 3 firmware and posted to Reddit (via user Samulia and UploadVR), we see a short animation visualizing how the depth mapping is going to work. In short, it looks pretty advanced, and way ahead of the Oculus Quest 2.

We see a detailed mesh covering all of the objects in the room, and there seems to be some kind of object identification going on here as well – the couch is labeled with a couch icon, for example, so the Meta Quest 3 clearly knows what it is.

The player avatar is then shown chasing a digital character around the room, as it jumps on and behind furniture. This is an example of mixed reality occlusion, where digital elements appear to be in the same world as physical elements, and it hints at some of the experiences that will be possible on the new headset.

Meta Quest 3 room mapping visualization found in the firmware (via r/OculusQuest)

A room with a view

On the current Oculus Quest 2, you're required to manually map out a free space inside a room. You can also mark out rectangular cuboids for pieces of furniture and walls, but it takes a while – and these maps aren't fully used by developers anyway.

This looks like a much slicker and more comprehensive solution, and it matches up with another clip revealed in June. Meta has made noises about the Meta Quest 3 “intelligently understanding” what’s inside a room, but that’s all that’s been made official so far.

The depth mapping, and the way that mapping is used, would appear to go even beyond the latest Meta Quest Pro headset. That device does have some automatic room mapping capabilities, but it doesn’t have a dedicated depth sensor inside it.

Meta has another of its Connect showcases scheduled for September 27, and all should be revealed by then. While you're waiting, you can check out the latest teaser trailer for the device, and everything we know about it so far.

Finally, a good use for AI: Meta reveals bot that can translate almost 100 languages

Meta might have arrived late to the AI party, but the Facebook owner is showing no signs of giving up. This week, the social media giant unveiled yet another AI tool: this time, it’s an ‘all-in-one’ translation model capable of understanding close to 100 different languages.

The new AI model, named SeamlessM4T, was detailed in a blog post from Meta, which referenced the famed ‘universal translator’ trope prevalent in a great deal of sci-fi media; in this case, the Babel Fish from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. It’s a piece of technology that has long remained out of reach beyond the bounds of fiction, but Meta considers this a vital step in making universal translators a reality.

SeamlessM4T is differentiated from existing translation AI tools since it uses a single large language model, as opposed to multiple models working in conjunction. Meta claims this improves the “efficiency and quality of the translation process”.
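To make that architectural difference concrete, here’s a minimal Python sketch contrasting the traditional “cascaded” approach (separate speech recognition, translation, and speech synthesis models chained together) with a single end-to-end model of the kind Meta describes. The function names below are hypothetical stand-ins for illustration only – they are not Meta’s actual API, and the “models” are plain string-shuffling stubs just to make the data flow visible.

```python
# Hypothetical stand-in "models" (plain functions, not real neural networks),
# used only to show how data moves through each architecture.

def asr_model(audio, lang):
    # Speech -> text in the source language
    return f"<{lang} transcript of {audio}>"

def mt_model(text, src, tgt):
    # Text -> text translation (here, just a token swap for illustration)
    return text.replace(src, tgt)

def tts_model(text, lang):
    # Text -> speech in the target language
    return f"<{lang} audio for {text}>"

def single_multimodal_model(audio, src, tgt):
    # Stand-in for one large model handling the whole task end to end
    return f"<{tgt} audio translated end-to-end from {audio}>"

def cascaded_translate(audio, src, tgt):
    """Traditional pipeline: three separate models chained together.
    Each hand-off between stages is a chance for errors to compound."""
    transcript = asr_model(audio, src)
    translated = mt_model(transcript, src, tgt)
    return tts_model(translated, tgt)

def unified_translate(audio, src, tgt):
    """SeamlessM4T-style approach: a single multimodal model maps
    source-language speech directly to target-language speech."""
    return single_multimodal_model(audio, src, tgt)
```

The cascaded version has three points where quality can degrade; the unified version has one model (and one set of errors) end to end, which is the efficiency and quality argument Meta is making.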

The new AI can read, write, listen, and talk – capable of parsing and producing both speech and text. While text and speech recognition covers almost 100 languages, SeamlessM4T is currently only able to generate its own speech in 36 output languages (including English). It was built on SeamlessAlign, which Meta calls “the biggest open multimodal translation dataset to date”, containing a whopping 270,000 hours of speech and text training data.

Speaking to machines

Meta has been going pretty hard on AI recently, producing multiple new AI models and even committing to developing its own AI chip. SeamlessM4T is the latest step in a push for language-focused AI use, following on from speech-generating AI Voicebox, which Meta (probably wisely) judged was too dangerous to release to the public right now.

SeamlessM4T (and the SeamlessAlign metadata) will be made publicly available under a research license, as part of Meta’s ongoing commitment to transparency in AI development. It’s a canny move from the tech titan, allowing it to both claim openness and fairness within the AI arena while also ensuring that it can take partial credit for future work done using its tools.

Anyone who follows my work closely will be well aware that I’ve been pretty darn critical of AI since the rise of the seemingly omnipresent ChatGPT. But, as I’ve said before, my qualms are mostly focused on the human uses of AI; I personally struggle to see the value in cramming AI into every corner of Windows, but even an AI skeptic like myself has to acknowledge the huge potential of tools such as SeamlessM4T.

I’ll be honest: despite being a writer by trade, I’m rubbish at learning other languages. That wretched Duolingo owl haunts my dreams, taunting me for my inability to properly conjugate in Spanish. But with SeamlessM4T, I’m envisaging a beautiful utopian future where I can visit any country and speak to any local in any native tongue, with their words translated in real-time by a nifty little earpiece loaded with AI tech.

I’m not crazy about the idea of needing to buy that earpiece from Mark Zuckerberg, but hey – one step at a time.

Arm reveals the hardware that will power the smartphones of 2021

Arm has unveiled its next-generation mobile CPU, the Cortex-A78, and GPU, the Mali-G78, which will be used to power the flagship smartphones of 2021.

The UK-based company provides the chip designs that Qualcomm, Huawei, Samsung and other chipmakers license and then use to create their own customized system-on-a-chip designs, found in high-end Android smartphones, tablets and now even laptops such as Microsoft's Surface Pro X.

The new Arm Cortex-A78 CPU will provide performance gains as well as greater power efficiency. According to Arm, it is the most efficient Cortex-A CPU it has ever designed for mobile devices. The Cortex-A78 will also be able to deliver more immersive 5G experiences as a result of a 20 percent increase in sustained performance over Cortex-A77-based devices within a 1-watt power budget.

The performance-per-watt of the chip will make it better suited for the greater overall computing needs of foldable devices such as the Samsung Galaxy Fold and devices with multiple screens like the LG V60.

Cortex-X Custom program and Mali-G78

Arm also announced a new engagement program, called the Cortex-X Custom program, which will give its partners more flexibility and scalability to increase performance, allowing them to develop solutions that provide the ultimate performance for specific use cases.

The Arm Cortex-X1 is the program's first CPU as well as the most powerful Cortex CPU to date. It features a 30 percent peak performance increase over the Cortex-A77 and offers an even more competitive solution for flagship smartphones as well as large-screen devices.

Last year Arm introduced the Mali-G77 GPU based on its new Valhall architecture, and the company's new Mali-G78 builds on those advancements to deliver a 25 percent increase in graphics performance over its predecessor. The new GPU supports up to 24 cores and will help extend the battery life of mobile devices.

Finally, based on demand from partners, Arm made the decision to introduce a new sub-premium tier of GPUs. The first GPU in this new tier is the Arm Mali-G68 which supports up to 6 cores and has all the latest features from the Mali-G78.

It will still be some time before Arm's partners begin to release chips based on these new designs, but based on the information the company released, flagship smartphones in 2021 will likely have improved battery life, graphics and 5G performance.

Via The Verge
