OpenAI’s new Sora video is an FPV drone ride through the strangest TED Talk you’ve ever seen – and I need to lie down

OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.

Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.


But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds and there's plenty of uncanny valley nightmare fuel in the background.

The result is an experience that's exhilarating, while also leaving you feeling strangely off-kilter – like touching down again after a sky dive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?

A video created by OpenAI Sora for TED Talks

(Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including “a cocktail of words that I use to make sure that it feels less like a video game and something more filmic”. Apparently these include prompts like “35 millimeter”, “anamorphic lens”, and “depth of field lens vignette”, which are needed or else Sora will “kind of default to this very digital-looking output”.
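
To illustrate how those modifiers slot together, here's a purely hypothetical prompt in that style – it's not one of Trillo's actual prompts, just an example of how the "filmic" keywords he describes might be combined:

"FPV drone shot gliding through a futuristic glass conference hall, 35 millimeter, anamorphic lens, depth of field lens vignette."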

Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently “like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it”.

This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora “currently exhibits numerous limitations as a simulator”, including the fact that “it does not accurately model the physics of many basic interactions, like glass shattering”.

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.


NVIDIA Instant NeRFs need just a few images to make 3D scenes

NVIDIA sees AI as a means of putting new tools into the hands of gamers and creators alike. NVIDIA Instant NeRF is one such tool, leveraging the power of NVIDIA’s GPUs to make complex 3D creations orders of magnitude easier to generate, particularly full 3D scenes and objects.

In effect, NVIDIA Instant NeRF takes a series of 2D images, figures out how they overlap, and uses that knowledge to create an entire 3D scene. A NeRF (or Neural Radiance Field) isn’t a new thing, but the process of creating one used to be slow. By applying machine learning techniques and specialized hardware to that process, NVIDIA was able to make it much quicker – close to instant, hence Instant NeRF.
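
To make that concrete, here's a minimal, purely conceptual sketch of what a NeRF does under the hood. This is not NVIDIA's Instant NeRF code, and the network size and sampling values are illustrative assumptions: a small neural network maps a 3D point and viewing direction to a color and a density, and each pixel is rendered by alpha-compositing samples taken along that pixel's camera ray.

# Conceptual NeRF sketch in PyTorch (illustrative only, not NVIDIA's Instant NeRF code)
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Maps a 3D position plus a 3D view direction to RGB color and volume density
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, points, dirs):
        out = self.mlp(torch.cat([points, dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.0, far=4.0, n_samples=64):
    # Sample points along the camera ray, then alpha-composite front to back
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = model(points, direction.expand(n_samples, 3))
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * transmittance                # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)     # final pixel color (R, G, B)

model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))

Training (not shown here) compares rendered pixels against the captured photos and adjusts the network accordingly; that optimization loop is the part NVIDIA's GPUs accelerate to near-instant speeds.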

Being able to snap a series of photos or even record a video of a scene and then turn it into a freely explorable 3D environment offers a new realm of creative possibility for artists. It also provides a quick way to turn a real-world object into a 3D one. 

Some artists are already realizing the potential of Instant NeRF. In a few artist showcases, NVIDIA highlights artists’ abilities to share historic artworks, capture memories, and allow viewers of the artworks to more fully immerse themselves in the scenes without being beholden to the original composition.

Karen X. Cheng explores the potential of this tool in her creation, Through the Looking Glass, which uses NVIDIA Instant NeRF to create the 3D scene through which her camera ventures, eventually slipping through a mirror into an inverted world. 

Hugues Bruyère uses Instant NeRF in his creation, Zeus, to present a historic sculpture from the Royal Ontario Museum in a new way. This gives those who may never have a chance to see it in person the ability to view it from all angles nonetheless.

An Instant NeRF of the inside of NVIDIA HQ

(Image credit: NVIDIA)

With tools like Instant NeRF, it’s clear that NVIDIA’s latest hardware has much more than just gamers in mind. With more and more dedicated AI power built into each chip, NVIDIA RTX GPUs are bringing new levels of AI performance to the table that can serve gamers and creators alike. 

The same Tensor Cores that make it possible to infer what a 4K frame in a game would look like using a 1080p frame as a reference are also making it possible to infer what a fully fleshed out 3D scene would look like using a series of 2D images. And NVIDIA’s latest GPUs put those tools right into your hands. 

Instant NeRF isn’t something you just get to hear about. It’s actually a tool you can try for yourself. Developers can dive right in with this guide, and less technical users can grab a simpler Windows installer here which even includes a demo photo set. Since Instant NeRF runs on RTX GPUs, it’s widely available, though the latest RTX 40 Series and RTX Ada GPUs can turn out results even faster. 

The ability of NVIDIA’s hardware to accelerate AI is key to powering a new generation of AI PCs. Instant NeRF is just one of many examples of how NVIDIA’s GPUs are enabling new capabilities or dramatically speeding up existing tools. To help you explore the latest developments in AI and present them in an easy-to-understand format, NVIDIA has introduced the AI Decoded blog series. You can also see all the ways NVIDIA is boosting AI performance at NVIDIA’s RTX for AI page. 


Confused about Google’s Find My Device? Here are 7 things you need to know

It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise. The update was originally announced back in May 2023, but was soon delayed with no firm launch date. Then, out of nowhere, Google decided to release the software on April 8 without major fanfare. As a result, you may feel lost, but we can help you find your way.

Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.

1. It’s a big upgrade for Google’s old Find My Device network 

Google's Find My Device feature

(Image credit: Google)

The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices – most notably Bluetooth location trackers.

Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.

Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.

2. Google was waiting for Apple to add support to iPhones 

iPhone 15 from the front

(Image credit: Future)

The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone manufacturer decided to drag its feet when it came to adding unknown tracker alerts to its own iPhone devices.

The wait may soon be over, as the iOS 17.5 beta contains lines of code suggesting that the iPhone will get these anti-stalking measures. iOS devices might then encourage users to disable unwanted Bluetooth trackers that aren't certified for Apple’s Find My network. It’s unknown when this feature will roll out, as the relevant features in the beta don’t actually do anything when enabled yet.

Given the presence of that unknown-tracker alert code within iOS 17.5, Apple's release may be imminent. Apple may have given Google the green light to roll out the Find My Device upgrade ahead of time to prepare for its own software launch.

3. It will roll out globally

Android

(Image credit: Future)

Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.

Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict. All you need is a smartphone running Android 9 or later with Bluetooth capabilities.

If you own either a Pixel 8 or Pixel 8 Pro, you’ll get an exclusive feature: the ability to find the phone through the network even if it's powered down. Google reps said these models have special hardware that keeps power flowing to their Bluetooth chip when they're off. Google is working with other manufacturers on bringing this feature to other premium Android devices.

4. You’ll receive unwanted tracker alerts

Apple AirTags

(Image credit: Apple)

Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have used them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.

For nearly a year, the OS could only seek out AirTags, but now with the upgrade, Android phones can locate Bluetooth trackers from other third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll help ensure your privacy and safety.

You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information. 

5. Chipolo and Pebblebee are launching new trackers for it soon

Chipolo's new trackers

(Image credit: Chipolo)

Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.

On May 27th, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair will also sport speakers that ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.

Pebblebee is achieving something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. These three are available for pre-order right now, although no shipping date has been given.

6. It’ll work nicely with your Nest products

Google Nest Wifi

(Image credit: Google )

For smart home users, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find said missing item. Be aware the tech won’t give you an exact location.

A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.

Below the message will be a set of tools to help you locate it. You can either play a sound from the tracker’s speakers, share the device, or mark it as lost.

7. Headphones are invited to the tracking party too

Someone wearing the Sony WH-1000XM5 headphones against a green backdrop

(Image credit: Gerald Lynch/TechRadar/Future)

Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of earbuds. The list of supported hardware is not large, as it’ll only be able to locate three specific models: the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could come at a later time.

Quite the extensive list, as you can see, but it's all important information to know, and everything works together to keep you safe.

Be sure to check out TechRadar's list of the best Android phones for 2024.


Windows 11 could soon deliver updates that don’t need a reboot

Windows 11 could soon run updates without rebooting, if the rumor mill is right – and there’s already evidence this is the path Microsoft is taking in a preview build.

This comes from a regular source of Microsoft-related leaks, namely Zac Bowden of Windows Central, who first of all spotted that Windows 11 preview build 26058 (in the Canary and Dev channels) was recently updated with an interesting change.

Microsoft is pushing out updates to testers that do nothing and are merely “designed to test our servicing pipeline for Windows 11, version 24H2.” The key part is that we’re informed that those who have VBS (Virtualization Based Security) turned on “may not experience a restart upon installing the update.”

Running an update without requiring a reboot is known as “hot patching” and this method of delivery – which is obviously far more convenient for the user – could be realized in the next major update for Windows 11 later this year (24H2), Bowden asserts.

The leaker has tapped sources for further details, and observes that we’re talking about hot patching for the monthly cumulative updates for Windows 11 here. So the bigger upgrades (the likes of 24H2) wouldn’t be hot-patched in, as clearly there’s too much work going on under the hood for that to happen.

Indeed, not every cumulative update would be applied without a reboot, Bowden further explains. This is because hot patching uses a baseline update, one that can be patched on top of, but that baseline model needs to be refreshed every few months.

Add seasoning with all this info, naturally, but based on the testing going on – which specifically mentions 24H2 – it looks like Microsoft is up to something here.


Analysis: How would this work exactly?

What does this mean for the future of Windows 11? Well, possibly nothing. After all, this is mostly chatter from the grapevine, and what’s apparently happening in early testing could simply be abandoned if it doesn’t work out.

However, hot patching is something that is already employed with Windows Server, and the Xbox console as well, so it makes sense that Microsoft would want to use the tech to benefit Windows 11 users. It’s certainly a very convenient touch, though as noted, not every cumulative update would be hot-patched.

Bowden believes the likely scenario would be quarterly cumulative updates that need a reboot, followed by hot patches in between. In other words, we’d get a reboot-laden update in January, say, followed by two hot-patched cumulative updates in February and March that could be completed quickly with no reboot needed. Then, April’s cumulative update would need a reboot, but May and June wouldn’t, and so on.

As mentioned, annual updates certainly wouldn’t be hot-patched, and neither would out-of-band security fixes for example (as the reboot-less updates rely on that baseline patch, and such a fix wouldn’t be based on that, of course).

This would be a pretty cool feature for Windows 11 users, because dropping the need to reboot – to be forced to restart in some cases – is obviously a major benefit. Is it enough to tempt upgrades from Windows 10? Well, maybe not, but it is another boon to add to the pile for those holding out on Microsoft’s older operating system. (Assuming they can upgrade to Windows 11 at all, of course, which is a stumbling block for some due to PC requirements like TPM).


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, or the Gemini website on desktop – with the paid Gemini Advanced tier layered on top via a Google One AI Premium Plan.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


Microsoft is adding ChatGPT-powered AI to its iconic Notepad app – but does it need it?

Do you think the iconic Windows Notepad app lacks flashy features? Then don’t worry – Microsoft is integrating ChatGPT AI into Notepad for Windows 11. 

Microsoft’s newest all-purpose digital AI assistant, Windows Copilot, has been around for a little while now, and it’s currently fairly limited in what it can actually do. Microsoft is no doubt working on expanding its abilities, having recently added the ability to analyze user-uploaded screenshots. Alongside Copilot, Microsoft announced a specific assistant AI bot for Paint named Cocreator, which generates images from a user-provided description.

Now, it looks like Notepad, a Windows staple and simple text editor that’s been included as default on Windows devices since 1983, is also getting a Cocreator of sorts (possibly named Cowriter). Windows Latest reports that Microsoft is testing out an AI bot powered by GPT-4, OpenAI’s large language model (LLM) and its most advanced language generation system. 

References to this feature (yet to be officially announced and released by Microsoft) have been spotted in the app package folder of Notepad by Windows enthusiasts. The updated Notepad app package reportedly has files with prefixes like “CoWriterCreditLimitDialog”, “CoWriterDropDownButton”, and “CoWriterWaitlistDialog” in their names. According to Windows Latest, these refer to user interface (UI) elements and dialogs that we could possibly see in Notepad AI’s UI.

Copilot in Windows

(Image credit: Microsoft)

Sneaking a peek at what's coming to Notepad's UI

From what we’ve seen so far, an AI-assist bot in Notepad will enable users to enlist ChatGPT-powered text generation directly in the Notepad app. That said, it looks like there will be limits in place, with the reference “CreditLimitDialog” suggesting a potential usage quota and “credit” system for how much you can use the AI feature. If it’s similar to Bing Image Generator or Cocreator in Paint, you’ll probably receive boosts (or credits) to generate unique content within Notepad. After this initial bonus amount, you might still be able to generate content with Notepad’s AI feature, but it’ll take longer than it does using the boosts.

Because Microsoft itself hasn’t announced the feature yet, we don’t know if the credits will be on a word-by-word basis. 

Other references have been spotted that might indicate what Notepad’s AI will look like in Notepad’s UI. A reference to “CoWriterDropDownButton” points to a button on the right-hand side of Notepad that allows you to open up the Notepad AI feature’s panel to use it. This was spotted by Windows Insiders – members of the Windows Insider Program, which allows enthusiasts and developers to preview upcoming Windows features and builds – who publicized their findings on X (formerly Twitter).


One other UI-type reference that was found was “CoWriterInfoButton”, which could be a button that works like a “Help” button. This could provide users with more information, such as instructions and ideas for how to use the feature, along with other help and troubleshooting information.

Windows Latest speculates that Notepad’s AI feature might start rolling out to testers (presumably Windows Insiders) very soon, but there might be a waitlist, according to references found by some Windows testers and reported by The Verge.

This isn’t the first AI-powered text editing feature that Microsoft has worked on – it introduced an Editor feature to Microsoft Edge last year that was capable of a range of text-related functions. These include spelling and grammar suggestions, autocompletion functions, help with research and formatting, and rewriting and clarity-related suggestions. 

In a similar way, Notepad’s AI tool will seek to make suggestions relevant to the context of the document and specific to the type of content you’re writing. In a promotional image for the feature, found in Notepad’s updated app package, there’s a counter in the bottom ribbon of Notepad that reads “1 of 4,” indicating that you can get multiple suggestions for a text selection and browse them to choose one to your liking. You can ask for modifications to do with “Length,” “Tone,” “Format,” and “Instructions” for a selection of text, similar to how Windows Copilot functions in Office apps like Word, PowerPoint, and Outlook.

Microsoft Office Visual Refresh

(Image credit: Microsoft)

The AI tool might be in testing – but opinions are already coming out

Vigilant observers also pointed out that there’s a “thumbs up” icon with a counter to allow users to give their opinion of the output that the AI tool produces, similar to the feedback function you can see in ChatGPT itself after it gives you a response. Feedback helps the developers of these AI tools fine-tune them to provide better responses. 

When Copilot was first introduced, Microsoft made it clear that it wants to transform how you interact with Windows altogether with the help of Copilot, and that Copilot was going to make its way through Microsoft 365’s apps and be deeply embedded in Windows 11 to help you with all kinds of tasks. This development shows just how insistent Microsoft seems to be about Copilot, and about AI-assistant bots and features in general. Some people point out that apps like Notepad and Paint are known for their straightforwardness, and that an AI-assist bot detracts more from that than it helps. The feature has not yet officially debuted for beta testing in testing channels, but Microsoft seems very keen to push forward with AI on as many fronts as possible.


What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race and launched a ChatGPT rival called Bard – an “experimental conversational AI service” – earlier this year. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like ChatGPT, it's a type of machine learning called a 'large language model' that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious to release Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models', in this case one called LaMDA. 

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn?”

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on leaps and bounds. 

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will be supported by the Content Authenticity Initiative's open-source Content Credentials technology, which will bring transparency to images that are generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally the chatbot is based on similar technology to ChatGPT, with even more tools and features coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has a very limited knowledge of facts or events after January 2022. 

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google's Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality. This allows the chatbot to answer user prompts and questions with both text and images.

Users can also do the same, with Bard able to work with Google Lens so images can be uploaded into Bard, with Bard responding in text. Multimodal functionality is a feature that was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there's also Google Bard's upcoming AI image generator, which will be powered by Adobe Firefly.


Been putting off that free Windows 11 or 10 upgrade? Windows 7 and 8 diehards need to move fast

Microsoft just implemented something we never thought we’d see the software giant do – namely closing the loophole allowing for Windows 7 and 8 users to upgrade to Windows 10 or 11 at no cost.

We need to rewind time considerably to return to the start of this particular story, all the way back to when Windows 10 was first launched, and Windows 7 and 8 users were allowed a free upgrade to the new OS.

That freebie offer only lasted for a year after the launch of Windows 10, officially, but even after the deadline expired, it actually remained in place.

In short, anyone with a valid Windows 7 or 8 key could still upgrade their PC to Windows 10 just fine (and by extension, Windows 11 too, when that emerged – assuming the various additional system requirements were met including TPM).

Essentially, this was a loophole Microsoft never bothered to close – until now, because as Windows Central spotted, the company just made an official announcement that this unofficial upgrade path is now blocked (with a caveat).

The software giant said: “Microsoft’s free upgrade offer for Windows 10 / 11 ended July 29, 2016. The installation path to obtain the Windows 7 / 8 free upgrade is now removed as well. Upgrades to Windows 11 from Windows 10 are still free.”

However, as Windows Central points out, it’s important to note that technically, an upgrade is still possible as we write this. This change has just been applied with Windows 11 preview builds for now, but it will come through to the release version of the OS before long, no doubt.

So, if you do want to avail yourself of a free upgrade from Windows 7 or 8, you better move sharpish. It may even no longer be possible by the time you read this.


Analysis: An unexpected development

This is something we didn’t believe would ever happen, frankly, simply because the free upgrade has remained in place, on the sly, for so long. As Microsoft points out, the offer officially expired in mid-2016, over seven years ago – yes, seven years.

So, we just figured, like many others, that Microsoft was happy enough to let Windows 7 and 8 users continue to upgrade at no expense. Our presumption was that bolstered adoption figures for newer versions of Windows were to be welcomed. Apparently, this is no longer a concern for Microsoft (if it ever was – but we can’t imagine why the loophole remained open if it wasn’t).

Anyhow, as we observed above, act quickly if you have been holding off on an upgrade but intend to make the move. You may not have long at all left to pull the trigger.


Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There is a lot to the Alexa update besides the LLM, as it will also be receiving a raft of new features. Below is a list of the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s SoundCloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like next year when the update launches. You can hear the new voice enunciate a lot better, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand  nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail. 

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day plus you won’t need a smartphone to configure them. It can all be done on the fly. 

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more than] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.
