OpenAI’s impressive new Sora videos show it has serious sci-fi potential

OpenAI's Sora – the company's equivalent of image generation, but for video – sent shockwaves through the swiftly advancing world of AI last month, and we've just caught a few new videos that are even more jaw-slackening than what we'd already been treated to.

In case you somehow missed it, Sora is a text-to-video AI, meaning you can write a simple request and it'll compose a video to match (just as image generation worked before it, but obviously a much more complex endeavor).
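
Sora doesn't have a public API at the time of writing, but conceptually it boils down to a single prompt-in, video-out call. Here's a purely hypothetical sketch of what that might look like – the endpoint, parameters and response handling below are invented for illustration, not a real OpenAI API:

```python
import requests

# Hypothetical endpoint and parameters - Sora has no public API at the
# time of writing, so this only illustrates the prompt-in, video-out idea.
response = requests.post(
    "https://api.example.com/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "sora",
        "prompt": ("Fly through tour of a museum with many paintings and "
                   "sculptures and beautiful works of art in all styles."),
        "duration_seconds": 20,  # invented parameter
    },
)

# Save the returned clip to disk (the response format is also assumed).
with open("museum_flythrough.mp4", "wb") as f:
    f.write(response.content)
```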

An eye with the iris being a globe (Image credit: OpenAI)

Now OpenAI’s Sora research lead Tim Brooks has released some new content generated by Sora on X (formerly Twitter). 

This is Sora’s crack at fulfilling the following request: “Fly through tour of a museum with many paintings and sculptures and beautiful works of art in all styles.”

Pretty impressive to say the least. On top of that, Bill Peebles, also a Sora research lead, showed us a clip generated from the following prompt: “An alien blending in naturally with new york city, paranoia thriller style, 35mm film.”

An alien character walking through a street (Image credit: OpenAI)

Content creator Blaine Brown then stepped in to embellish the above clip, looping the footage to make it longer and having the alien rap, complete with lip-syncing. The music was generated by Suno AI (with the lyrics written by Brown, mind), and the lip-syncing was done with Pika Labs' AI.

Analysis: Still early days for Sora

Two people having dinner (Image credit: OpenAI)

It’s worth underlining how fast things seem to be progressing with the capabilities of AI. Image creation powers were one thing – and extremely impressive in themselves – but this is entirely another. Especially when you remember that Sora is still just in testing at OpenAI, with a limited set of ‘red teamers’ (testers hunting out bugs and smoothing over those wrinkles).

The camera work in the museum fly-through flows realistically and feels nicely imaginative in the way it swoops around (albeit with the occasional judder). And the last tweet shows how you can take a base clip and flesh it out with content including AI-generated music.

Of course, AI can write a script as well, which raises the question: how long will it be before a blue alien is starring in an AI-generated post-apocalyptic drama – or an (unintentional) comedy, perhaps?

You get the idea, and we’re getting carried away, of course, but still – what AI could be capable of in just a few years is potentially mind-blowing, frankly.

Naturally, these teasers will show us the cream of the crop of what Sora is capable of, and there have been some buggy and weird efforts aired too (just as when ChatGPT and other AI chatbots first rolled onto the scene, we saw AI hallucinations and generally unhinged behavior and replies).

Perhaps the broader worry with Sora, though, is how this might eventually displace, rather than assist, content creators. But that’s a fear to chew over on another day – not forgetting the potential for misuse with AI-created videos which we recently discussed in more depth here.

These fake Blue Screen of Death mock-ups highlight a serious problem with Windows 11

Windows 11 getting a redesigned BSOD – the dreaded Blue Screen of Death that pops up when a PC crashes – might be a joke on X (formerly Twitter) right now, but it highlights a serious issue.

OK, 'joke' might be a strong word, but the BSOD mock-ups presented by Lucia Scarlet on X are certainly tongue-in-cheek, featuring colorful emojis which are rather cutesy – not what you really want to see when your PC has just crashed and burned.

That said, the overall theme of the design, giving the BSOD a more modern look, isn’t unwelcome, even if the emojis aren’t appropriate in our book.

Meanwhile, comments in the threads of those tweets highlight how some folks are disappointed that these aren't real incoming redesigns for Windows 11. Some people would even appreciate a friendlier emoji, as opposed to the frowny face (a text-based one, mind) that has long been present on BSODs.

Analysis: The blue screen blues

That disappointment is likely, at least in part, a broader indicator of the level of dissatisfaction with the BSOD – particularly with regard to the lack of information the screen provides, and the shortfalls of the help that is supplied.

When a BSOD appears, it’s usually highly generic, and tells the Windows 11 (or Windows 10) user very little – you’ll read something like “a problem happened” with no elaboration on exactly what went wrong.

A stop code might be cited – a meaningless-looking jumble of hexadecimal digits (these can pop up elsewhere in Windows 11, too) – or perhaps a techie reference to a DLL, none of which is likely to be a jot of help in discerning what actually misfired in your system.
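
For the curious, those hexadecimal values are 'bug check' codes, and each one does map to a documented symbolic name – the BSOD just doesn't do a good job of surfacing it. A minimal illustration of that mapping, using a handful of well-known codes:

```python
# A few well-documented Windows bug check (stop) codes and their symbolic
# names - the BSOD shows the hex value, not this far more useful name.
BUGCHECK_NAMES = {
    0x0000001E: "KMODE_EXCEPTION_NOT_HANDLED",
    0x00000050: "PAGE_FAULT_IN_NONPAGED_AREA",
    0x0000007B: "INACCESSIBLE_BOOT_DEVICE",
    0x0000009F: "DRIVER_POWER_STATE_FAILURE",
    0x000000D1: "DRIVER_IRQL_NOT_LESS_OR_EQUAL",
}

def describe_stop_code(code: int) -> str:
    """Turn a raw stop code into something slightly more human-readable."""
    name = BUGCHECK_NAMES.get(code, "UNKNOWN_BUG_CHECK")
    return f"0x{code:08X} ({name})"

print(describe_stop_code(0x0000007B))  # 0x0000007B (INACCESSIBLE_BOOT_DEVICE)
```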

Never mind visual redesigns – improving the info and help provided with BSODs would be the biggest step forward Microsoft could take with these screens. We've witnessed one innovation in the form of QR codes – as seen in the mock-ups above – but these were introduced way back in 2016, haven't progressed much in the best part of a decade, and often link through to information that isn't fully relevant or up to date.

We feel there’s definitely more Microsoft could do to improve BSODs, and in fairness, a more modern touch for the visuals wouldn’t hurt – though there’s another thought that occurs. Should we still be getting full system lock-ups at this point in the evolution of desktop operating systems?

Ideally not, of course, but to be fair to Microsoft, BSODs are definitely a whole lot less common these days than in the past. For those who do encounter them, though, we have a handy Blue Screen of Death survival guide.

The latest Oculus Quest 2 update comes with a serious performance boost

The latest software update for the Oculus Quest 2 and Meta Quest Pro is here, and it’s bringing some serious performance upgrades to both of Meta’s VR headsets.

Meta teased this update following the Meta Quest 3 announcement, when a press release revealed that the Quest Pro and Quest 2 would each see their CPU speed rise by up to 26%. What's more, the Quest 2's GPU speed will, according to Meta, improve by up to 19%, while the Quest Pro's GPU will improve by 11%.

These upgrades are achievable via a software update because Meta is now allowing the CPU and GPU in each headset to run at a higher clock speed. Previously, both headsets ran underclocked – read: with maximum performance deliberately held back – to prevent them from getting too hot and causing discomfort for the player. Clearly, Meta has decided it was a bit too conservative with that approach, so it's releasing a bit more power.
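
To put those percentages in perspective, here's some back-of-the-envelope math – note that the base clock figures below are illustrative placeholders rather than Meta's published numbers:

```python
# Illustrative base clocks (placeholders, not official Meta figures)
# showing what "up to 26% CPU / 19% GPU" uplifts mean in practice.
quest2_base = {"cpu_mhz": 2000, "gpu_mhz": 500}

# Quest 2 uplift figures quoted by Meta: +26% CPU, +19% GPU.
uplift = {"cpu_mhz": 1.26, "gpu_mhz": 1.19}

for part, base in quest2_base.items():
    boosted = base * uplift[part]
    print(f"{part}: {base} MHz -> {boosted:.0f} MHz "
          f"(+{(uplift[part] - 1) * 100:.0f}%)")
```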

On top of its faster processing, Meta has announced that the Quest Pro is getting a boost to its eye-tracking accuracy. While the update post doesn't go into much detail, we can't help but feel this is Meta's first step toward helping the Quest Pro catch up to Apple's newly unveiled Vision Pro headset – which threatens to usurp Meta's spot at the top of the best VR headsets list.

The Apple Vision Pro headset on a stand at Apple's headquarters – what will Meta learn from the Apple Vision Pro? (Image credit: Future)

One innovation of Apple's headset is that it uses eye-tracking to make hand-tracking navigation more accurate: rather than awkwardly pointing at an app, you just have to look at it and then pinch your fingers.

The Quest Pro’s improved eye-tracking accuracy could allow Meta to implement a similar system to the Apple Vision Pro – and help make its eye-tracking technology more useful.
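
To see why pairing gaze with a pinch gesture is such a natural fit, here's a toy sketch of the selection logic – simplified 2D math of our own invention, not Meta's or Apple's actual code:

```python
import math

# Toy sketch of gaze-plus-pinch selection: a pinch "clicks" whichever
# target the user's gaze point is closest to. A simplified 2D stand-in
# for real gaze-ray intersection - not Meta's or Apple's actual code.
apps = {"Browser": (0.1, 0.3), "Messenger": (0.7, 0.2), "Settings": (0.5, 0.8)}

def on_pinch(gaze_point):
    """Return the app nearest to the current gaze point."""
    return min(apps, key=lambda name: math.dist(apps[name], gaze_point))

print(on_pinch((0.65, 0.25)))  # Messenger
```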

More minor changes

Beyond these performance boosts, the Meta Quest v55 update brings a few minor software improvements.

Now, when using Direct Touch hand tracking, you'll be able to swipe notifications away or tap on them like buttons, as you can with other menu items. If this doesn't make interacting with your headset feel enough like using a smartphone, Meta has also said that the full Messenger app will now launch on the Quest platform – allowing you to call and message any of your contacts through the app, not just the people who use VR.

Two new virtual environments will be made available too. The Futurescape – which was featured in the 2023 Meta Quest Gaming Showcase – combines futurism with nature, while the Great Sand Sea is a vast desert world that’s an exclusive space for people who have preordered Asgard’s Wrath 2. To change your current environment to either of these options you’ll need to go into your Quest headset Settings and find the Personalization menu. You should see the option to change your environment to either one of these new spaces or the previously released virtual homes. 

Check out our interview with one of the developers to find out how Asgard's Wrath 2 will bring out the best of the Oculus Quest 2.

Google Docs is having some serious issues with its new “inclusive language” warnings

Google is nothing if not helpful: the search giant has built its reputation on making the internet more accessible and easier to navigate. But not all of its innovations are either clever or welcome. 

Take the latest change to Google Docs, which aims to highlight examples of non-inclusive language through pop-up warnings. 

You might think this is a good idea, helping to avoid “chairman” or “fireman” and other gendered language – and you'd be right. But Google has taken things a step further than it really needed to, leading to some pretty hilarious results.

Inclusive?

A viral tweet was the first warning sign that perhaps, just perhaps, this feature was a little overeager to correct common word usages. After all, is “landlord” really an example of “words that may not be inclusive to all readers”?

As Vice has ably demonstrated, Google's latest update to Docs – while undoubtedly well-intentioned – is annoying and broken, jumping in to suggest corrections to some things while blatantly ignoring others. 
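
To see how a naive word-list approach produces exactly this kind of misfire, consider a bare-bones flagger – our own illustrative sketch, not Google's implementation – which dutifully flags everything on its list while staying silent on anything off it:

```python
# Illustrative sketch of a naive word-list flagger - not Google's code.
# It flags anything on its list and is blind to everything off it, which
# is exactly the failure mode Vice observed.
SUGGESTIONS = {
    "chairman": "chairperson",
    "fireman": "firefighter",
    "policemen": "police officers",
    "landlord": "property owner",  # the flag that struck many as overeager
}

def flag_terms(text: str):
    """Return (word, suggestion) pairs for any listed term in the text."""
    words = text.lower().split()
    return [(w, SUGGESTIONS[w]) for w in words if w in SUGGESTIONS]

print(flag_terms("The landlord called the policemen"))
# [('landlord', 'property owner'), ('policemen', 'police officers')]
```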

A good idea, poorly executed 

The idea behind the feature is well-meaning and will likely help in certain cases. The execution, on the other hand, is poor. 

Vice found that Docs suggested more inclusive language in a range of scenarios, such as for “annoyed” or “Motherboard”, but failed to suggest anything when a speech from neo-Nazi Klan leader David Duke was pasted in, containing the N-word. 

In fact, Valerie Solanas’ SCUM Manifesto – a legendary piece of literature – got more edits than Duke's speech, including suggesting “police officers” instead of “policemen”. 

All in all, it's the latest example of an AI-powered feature that seems like a good idea but in practice has more holes than Swiss cheese. 

Helping people write in a more inclusive way is a lofty goal, but the implementation leaves a lot to be desired and, ultimately, makes the process of writing harder. 

Via Vice

GitHub wants to help developers spot security issues before they get too serious

In an effort to further secure open source software, GitHub has announced that the GitHub Advisory Database is now open to community contributions.

While the company has its own teams of security researchers that carefully review all changes and help keep security advisories up to date, community members often have additional insights and intelligence on CVEs but lack a place to share this knowledge.

This is why GitHub is publishing the full contents of its Advisory Database to a new public repository to make it easier for the community to leverage this data. At the same time, the company has built a new user interface for security researchers, academics and enthusiasts to make contributions.

All of the data in the GitHub Advisory Database is licensed under a Creative Commons license and has been since the database was first created to ensure that it remains free and usable by the community.
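
Because the database now lives in an ordinary public repository (github.com/github/advisory-database), pulling the raw advisory data is as simple as hitting GitHub's standard contents API. A minimal sketch – note that the 'advisories' directory path reflects the repository's layout at the time of writing:

```python
import requests

# List the top-level advisory folders in GitHub's public advisory repo
# via the standard GitHub contents API. The "advisories" path reflects
# the repository layout at the time of writing.
url = "https://api.github.com/repos/github/advisory-database/contents/advisories"
response = requests.get(url, headers={"Accept": "application/vnd.github+json"})
response.raise_for_status()

for entry in response.json():
    print(entry["type"], entry["path"])
```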

Contributing to a security advisory

In order to provide a community contribution to a security advisory, GitHub users first need to navigate to the advisory they wish to contribute to and submit their research through the “suggest improvements for this vulnerability” workflow. Here they can suggest changes or provide more context on packages, affected versions, impacted ecosystems and more.

The form will then walk users through opening a pull request that details their suggested changes. Once this is done, security researchers from the GitHub Security Lab, as well as the maintainer of the project who filed the CVE, will be able to review the request. Contributors will also get public credit on their GitHub profile once their contribution has been merged.

In an attempt to further interoperability, advisories in the GitHub Advisory Database repository use the Open Source Vulnerabilities (OSV) format. Oliver Chang, a software engineer on Google's Open Source Security Team, provided further details on the OSV format in a blog post, saying:

“In order for vulnerability management in open source to scale, security advisories need to be broadly accessible and easily contributed to by all. OSV provides that capability.”
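
For reference, an OSV-format advisory is just a structured JSON document. Here's a trimmed-down example expressed as a Python dict – the GHSA identifier, CVE alias and package name are invented for illustration, but the field names follow the published OSV schema:

```python
# Trimmed-down advisory in the OSV format, expressed as a Python dict.
# The GHSA ID, CVE alias and package name are invented for illustration;
# the field names follow the published OSV schema.
advisory = {
    "id": "GHSA-xxxx-xxxx-xxxx",    # hypothetical identifier
    "aliases": ["CVE-2022-00000"],  # hypothetical CVE alias
    "summary": "Example vulnerability in example-package",
    "affected": [{
        "package": {"ecosystem": "npm", "name": "example-package"},
        "ranges": [{
            "type": "ECOSYSTEM",
            "events": [{"introduced": "0"}, {"fixed": "1.2.3"}],
        }],
    }],
    "references": [{"type": "ADVISORY", "url": "https://example.com/advisory"}],
}
```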

We'll likely hear more on this change to the GitHub Advisory Database once security researchers, academics and enthusiasts begin making their own contributions to the company's database.
