Microsoft could turbocharge Edge browser’s autofill game by using AI to help fill out more complex forms

Microsoft Edge looks like it’s getting a new feature that could help you fill out forms more easily thanks to a boost from GPT-4 (the most up-to-date large language model from the creators of ChatGPT, OpenAI).

Browsers like Edge already have auto-fill assistance features to help fill out fields asking for personal information that’s requested frequently, and this ability could see even more improvement thanks to GPT-4’s technology.

The digital assistant currently on offer from Microsoft, Copilot, is also powered by GPT-4, and has seen considerable integration into Edge already. In theory, the new GPT-4-driven form-filling feature will help Edge users tackle more complex or unusual questions, rather than the typical basic fields (name, address, email, etc.) that existing auto-fill functionality handles just fine.

However, right now this supercharged auto-fill is a feature hidden within the Edge codebase (it’s called “msEdgeAutofillUseGPTForAISuggestions”), so it’s not yet active even in testing. Windows Latest did attempt to activate the new feature, but with no luck – so it’s yet to be seen how the feature works in action. 
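Since the feature can’t be activated yet, we can only guess at how it will work. To illustrate the underlying idea, here’s a purely hypothetical sketch in Python of the problem an AI-assisted autofill has to solve: deciding whether a form field’s label matches stored profile data, or is an open-ended question that needs a generated answer. This is not Edge’s actual implementation – the profile keys and function names are invented, and a fuzzy string match stands in for the GPT-4 call.

```python
from difflib import get_close_matches

# Hypothetical stored profile - not Edge's actual data model.
PROFILE = {
    "full name": "Jane Doe",
    "email address": "jane@example.com",
    "shipping address": "1 Main St, Springfield",
    "phone number": "555-0100",
}

def suggest_fill(field_label):
    """Fuzzy-match a form field's label against known profile keys.

    Returns a stored value for familiar fields, or None for complex
    questions that classic autofill can't answer - the gap an
    LLM-backed suggestion feature would be asked to fill.
    """
    match = get_close_matches(field_label.lower(), PROFILE, n=1, cutoff=0.5)
    return PROFILE[match[0]] if match else None

print(suggest_fill("E-mail Address"))             # "jane@example.com"
print(suggest_fill("Why do you want this job?"))  # None - hand off to the AI
```

The point of the sketch is the fallback path: when no stored answer fits, that’s where a model like GPT-4 could step in with a drafted response.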

A close-up of a woman typing on a laptop

(Image credit: Shutterstock/Gorodenkoff)

Bolstering the powers of Edge and Copilot

Of course, as noted, Edge’s current auto-fill feature is sufficient for most form-filling needs, but it won’t help with form fields that require more complex or longer answers. As Windows Latest observes, what you can do, if you wish, is paste those kinds of questions directly into Edge’s Copilot sidebar, and the AI can help you craft an answer that way. You could also experiment with different conversation modes to obtain different answers.

This pepped-up auto-fill could be a useful addition for Edge, and Microsoft is clearly trying to develop both its browser, and the Copilot AI itself, to be more helpful and generally smarter.

That said, it’s hard to say how much Microsoft is prioritizing user satisfaction, as it’s equally busy implementing measures that look set to annoy some users. We’re thinking of its recent aggressive advertising strategy and its curbing of access to settings if your copy of Windows is unactivated, to pick a couple of examples. Not forgetting the fast-approaching end-of-support date for Windows 10 (its most popular operating system).

Copilot was presented as an all-purpose assistant, but the AI still leaves a lot to be desired. However, it’s gradually seeing improvements and integration into existing Microsoft products, and we’ll have to see if the big bet on Copilot pans out as envisioned. 


TechRadar – All the latest technology news


OpenAI’s big Google Search rival could launch within days – and kickstart a new era for search

When OpenAI launched ChatGPT in 2022, it set off alarm bells at Google HQ about what OpenAI’s artificial intelligence (AI) tool could mean for Google’s lucrative search business. Now, those fears seem to be coming true, as OpenAI is set for a surprise announcement next week that could upend the search world forever.

According to Reuters, OpenAI plans to launch a Google search competitor that would be underpinned by its large language model (LLM) tech. The big scoop here is the date that OpenAI has apparently set for the unveiling: Monday, May 13.

Intriguingly, that’s just one day before the mammoth Google I/O 2024 show, which is usually one of the biggest Google events of the year. Google often uses the event to promote its latest advances in search and AI, so it will have little time to react to whatever OpenAI decides to reveal the day before.

The timing suggests that OpenAI is really gunning for Google’s crown and aims to upstage the search giant on its home turf. The stakes, therefore, could not be higher for both firms.

OpenAI vs Google

OpenAI logo on wall

(Image credit: Shutterstock.com / rafapress)

We’ve heard rumors before that OpenAI has an AI-based search engine up its sleeve. Bloomberg, for example, recently reported that OpenAI’s search engine will be able to pull in data from the web and include citations in its results. News outlet The Information, meanwhile, has made similar claims that OpenAI is “developing a web search product”, and there has been a near-constant stream of whispers to this effect for months.

But even without the direct leaks and rumors, it has been clear for a while that tools like ChatGPT present an alternative way of sourcing information to the more traditional search engines. You can ask ChatGPT to fetch information on almost any topic you can think of and it will bring up the answers in seconds (albeit sometimes with factual inaccuracies). ChatGPT Plus can access information on the web if you’re a paid subscriber, and it looks like this will soon be joined by OpenAI’s dedicated search engine.

Of course, Google isn’t going to go down without a fight. The company has been pumping out updates to its Gemini chatbot, as well as incorporating various AI features into its existing search engine, including AI-generated answers in a box on the results page.

Whether OpenAI’s search engine will be enough to knock Google off its perch is anyone’s guess, but it’s clear that the company’s success with ChatGPT has prompted Google to radically rethink its search offering. Come next week, we might get a clearer picture of how the future of search will look.


Windows 11 could get a shiny new feature to share files and links with QR codes, because apparently copy and paste is so last year

Windows 11’s Share menu is getting a new feature – the ability to share links as QR codes that a smartphone or other suitable device can scan (you can check out our guides on how to scan QR codes with an iPhone or with an Android).

The Share menu isn’t the most widely used, especially outside of Microsoft’s own apps and services, but Microsoft looks like it’s hoping to boost its popularity by making the sharing of web pages more seamless, especially across different devices. 

This feature is part of a new preview version, Windows 11 build 26212, available to Windows Insiders through the Canary Channel. The build introduces a button in the Share menu dialog box that generates a QR code, and it will apply to Microsoft Edge and other supported apps. You can generate QR codes for URLs and cloud files in the Windows 11 Share menu, which is opened in most apps by clicking the share button in the app’s toolbar.

Once you have the preview build installed and you follow the process to generate a QR code, you can then open the Camera app or dedicated QR scanner on your device, and hold it up to the screen. 

A man holding a smartphone and pointing his finger

(Image credit: Shutterstock/pongsuk sapukdee)

More about the new Share window

Writing in a blog post publicizing the development, Microsoft explains that the Share menu will not close if you accidentally (or deliberately) click outside of it. To close it, you’ll have to click the close button in the top right corner.

There’s also an added provision if you use your Gmail address for your Microsoft Account: you can send yourself an email from the share window and receive it in your Gmail inbox (instead of just Outlook/Hotmail accounts).

A similar process already exists in Windows 11 for people who have Phone Link set up on multiple devices. These users can send a link via the Share menu, but this development makes it even easier to share things across devices as you don’t have to log in or set up anything after installing the preview build. 

We’ll have to see if this makes the Share menu more popular with users, as most people are used to the clipboard functions in Windows for moving information from one place to another, or they just save the data to the device they’re currently using and retrieve it when they need it.

This isn’t a dramatic change, which means it can be easy to adopt, but also easy to miss. It’s also still in the testing stage, so we’ll have to wait and see if and when Microsoft chooses to fully adopt it in a future Windows 11 update.


Tim Cook explains why Apple’s generative AI could be the best on smartphones – and he might have a point

It’s an open secret that Apple is going to unveil a whole host of new artificial intelligence (AI) software features in the coming weeks, with major overhauls planned for iOS 18, macOS 15, and more. But it’s not just new features that Apple is hoping to hype up – it’s the way in which those AI tools are put to use.

Tim Cook has just let slip that Apple’s generative AI will have some major “advantages” over its rivals. While the Apple CEO didn’t explain exactly what Apple’s generative AI will entail (we can expect to hear about that at WWDC in June), what he did say makes a whole lot of sense.

Speaking on Apple’s latest earnings call yesterday, Cook said: “We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create.”

Cook also said Apple is making “significant investments” in generative AI, and that he has “some very exciting things” to unveil in the near future. “We continue to feel very bullish about our opportunity in generative AI,” he added.

Why Tim Cook might be right

Siri

(Image credit: Unsplash [Omid Armin])

There are plenty of reasons why Apple’s AI implementation could be an improvement over what's come before it, not least of which is Apple’s strong track record when it comes to privacy. The company often prefers to encrypt data and run tasks on your device, rather than sending anything to the cloud, which helps ensure that it can’t be accessed by nefarious third parties – and when it comes to AI, it looks like this approach might play out again.

Bloomberg's Mark Gurman, for example, has reported that Apple’s upcoming AI features will work entirely on your device, thereby continuing Apple’s commitment to privacy, amid concerns that the rapid development of AI is putting security and privacy at risk. If successful, it could also be a more ethical approach to AI than that employed by Apple’s rivals.

In addition, the fact that Apple creates both the hardware and software in its products allows them to be seamlessly integrated in ways most of its competitors can’t match. It also means devices can be designed with specific use cases in mind that rely on hardware and software working together, rather than Apple having to rely on outside manufacturers to play ball. When it comes to AI, that could result in all kinds of benefits, from performance improvements to new app features.

We’ll find out for sure in the coming weeks. Apple is hosting an iPad event on May 7, which reports have suggested Apple might use to hint at upcoming AI capabilities. Beyond that, the company’s Worldwide Developers Conference (WWDC) lands on June 10, where Apple is expected to devote significant energy to its AI efforts. Watch this space.


Could generative AI work without online data theft? Nvidia’s ChatRTX aims to prove it can

Nvidia continues to invest in AI initiatives, and ChatRTX is no exception thanks to its latest update.

ChatRTX is, according to the tech giant, a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” This content comprises your PC’s local documents, files, folders, and so on, and essentially builds a custom AI chatbot from that information.

Because it doesn’t require an internet connection, it gives users speedy access to answers that might be buried under all those computer files. With the latest update, it has access to even more data and LLMs, including Google Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It can also search locally for photos, and has Whisper support, allowing users to converse with ChatRTX through an AI-automated speech recognition program.

Nvidia uses TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because it’s local, it’s far more secure than online AI chatbots. You can download ChatRTX here to try it out for free.
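The retrieval idea behind this – answering from your own files rather than the open web – can be sketched in a few lines of Python. This is a toy illustration, not Nvidia’s code: a real system like ChatRTX uses embeddings and an on-device LLM, whereas this stand-in just scores keyword overlap against hypothetical local documents.

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word counts, ignoring punctuation and digits."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_local_match(query, docs):
    """Return the name of the document whose words best overlap the query.

    A real local assistant would then feed the winning document to an
    LLM to phrase the answer; this sketch stops at retrieval.
    """
    q = tokens(query)
    return max(docs, key=lambda name: sum(
        min(q[w], count) for w, count in tokens(docs[name]).items()))

# Hypothetical local files, represented as strings for the example.
docs = {
    "recipes.txt": "banana bread recipe flour sugar butter",
    "taxes.txt": "2023 tax return deduction receipts income",
}
print(best_local_match("where are my tax receipts", docs))  # taxes.txt
```

Because everything above runs on the machine that holds the files, nothing ever has to leave the device – which is the security argument Nvidia is making.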

Can AI escape its ethical dilemma?

The concept of an AI chatbot using local data from your PC, instead of training on (read: stealing) other people’s online works, is rather intriguing. It seems to solve the ethical dilemma of using copyrighted works without permission and hoarding them. It also seems to solve another long-term problem that’s plagued many a PC user: actually finding long-buried files in your file explorer, or at least the information trapped within them.

However, there’s the obvious question of how the extremely limited data pool could negatively impact the chatbot. Unless the user is particularly skilled at training AI, it could end up becoming a serious issue in the future. Of course, only using it to locate information on your PC is perfectly fine and most likely the proper use. 

But the point of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time when we could have done that without the rampant theft, but corporations have powered their AI with words scraped from other sites, and now the two are irrevocably tied.

Given that data theft is, unethically, the part of the process that currently makes chatbots well-rounded enough not to get trapped in feedback loops, Nvidia could be the middle ground for generative AI. If fully developed, ChatRTX could prove that we don’t need that ethical transgression to power and shape these models, so here’s hoping Nvidia can get it right.


A cheaper Apple Vision Pro might not land until 2026 – and Samsung’s XR/VR headset could steal its lunch

The current Apple Vision Pro is a fantastic bit of mixed reality kit, blending impressive hardware and an innovative user interface. But as you’ll see in our Apple Vision Pro review, it’s far from perfect; throw in a $3,499 price tag and other early-adopter woes, and the headset isn’t for most people.

As such, Apple has been tipped to be working on next-generation and potentially cheaper versions of the Vision Pro. But Bloomberg's Mark Gurman, who’s a renowned and accurate Apple tipster, has said the Cupertino crew is some 18 months away from releasing a ‘Vision Pro 2’, with a roadmap that reportedly won't see a second-generation model ready until the end of 2026. 

Apparently, Apple will try to bring the cheaper version to market before then, but Gurman says, per his sources, that Apple is “flummoxed” about how exactly to bring the headset’s cost down.

So that arguably leaves a gap in the mixed reality (or XR, for extended reality) market, one that Apple has stirred up interest in and that others could now step into. Enter Samsung.

Samsung, Sony and Snapdragon

The South Korean tech giant is working on an XR headset that’s likely to come with some impressive specs. We’re talking about a Sony-made micro-OLED display with a resolution of 3,840 x 3,552 pixels, a 90Hz refresh rate, and a peak brightness of 1,000 nits; those are Vision Pro-challenging screen specs. A Snapdragon XR2 Plus Gen 2 chipset is set to power the Samsung XR/VR headset, which could arrive at some point this year.

While Apple has a knack for creating slick interconnected product ecosystems, Samsung has got a lot better at building out its device ecosystem, in addition to having its phones, tablets and other gadgets play nice with Windows 11. So it could make an impressive XR headset that arguably has more flexibility than the Vision Pro by working with more devices and a wider range of laptops.

Now that’s all speculation on my part, but Samsung has made VR headsets in the past and worked closely with Microsoft, which could give it an ace up its sleeve by working well with Microsoft’s MR platform and perhaps SteamVR; the latter would arguably give it a gaming advantage over the Vision Pro.

The Samsung Gear VR headset on a red desk

The Samsung Gear VR – you needed a phone to operate it (Image credit: Samsung)

Working with a more open-ended platform like Windows 11 could potentially make it easier for more developers to get on board with making XR/MR apps and services. That would make jumping into XR a more appealing prospect if would-be buyers could be assured of plenty of apps and software compatibility.

Furthermore, Samsung is potentially closer to supply chains than Apple – not least of all because it has its own display arm – so could stand to make a high-end XR headset that undercuts the Vision Pro.

While I need to be convinced that extended and mixed reality (which blends virtual and augmented reality) has a viable spot in the future of computing, I’m keen to see Apple have some clear competition in the area. There are other MR headsets, but they haven’t really grabbed the limelight or developed a system to compete with Apple’s visionOS – and that’s not counting the likes of the Meta Quest 3.

Samsung basically competes with Apple in the smartphone arena, so I see no reason why it can't lock horns in the XR world, and with a reported wait for a next-gen Vision Pro, Samsung could take a bite out of the MR pie for itself.


iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I’ll dig into the implications of that further down, but for now, let’s explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained on CoreNet (previously CVNets), a massive library of data used for training AI language models, while the other four have been ‘instruction-tuned’ by Apple, a process by which an AI model’s learning parameters are carefully honed to respond to specific prompts.

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to “empower and enrich” public AI research by releasing the OpenELMs to the wider AI community.

So what does this actually mean for users?

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8’s AI-powered Tensor chip and Qualcomm’s latest AI chip coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software – something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It’s worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company’s A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).

In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it’s used for clever and unique new features, rather than Microsoft’s constant Copilot nagging.


These are the Meta Quest alternatives we could get soon with Horizon OS, according to Mark Zuckerberg

Meta is making its Horizon OS – the operating system its Quest headsets run on – available to third-party XR devices (XR is a catchall for virtual, augmented, and mixed reality), and it might be the biggest VR announcement anyone makes this decade.

The first batch will include headsets from Asus, Lenovo, and Xbox, and while we have an idea what these gadgets might offer, Meta CEO Mark Zuckerberg may have just provided us with a few more details, or outlined other non-Quest hardware we might see running Horizon OS in the future.

To get you up to speed, the three devices that were teased in the Horizon OS-sharing announcement are a “performance gaming headset” from Asus, “mixed reality devices for productivity” from Lenovo, and a more Quest-like headset from Xbox.

And in Meta’s Q1 2024 earnings call, Zuckerberg discussed the recent announcement by explaining the sorts of diverse XR hardware we might see by providing some pretty specific examples.

One was a “work-focused headset” that’s “lighter” and “less designed for motion;” you plug it into a laptop to use it, and this could be something we see from laptop-manufacturer Lenovo’s device. Another Zuckerberg description was for a “gaming-focused headset” that prioritizes “peripherals and haptics,” which could be the Asus headset.

Then there was a device that would be packaged with “Xbox controllers and a Game Pass subscription out of the box” – with Zuckerberg specifically connecting it to the announced Xbox device.

More Horizon OS headsets incoming?

The Meta Quest 3 being used while someone boxes in a home gym

A fitness-focused VR headset could be coming (Image credit: Meta)

He also detailed two devices which haven’t yet been teased: “An entertainment-focused headset” designed with the “highest-resolution displays” at the expense of its other specs, and “a fitness-focused headset” that’s “lighter with sweat-wicking materials.”

It’s possible that these suggestions were simply Zuckerberg riffing on potential Horizon OS devices rather than gadgets that are currently in the works. But given how plausible they sound, we wouldn’t be surprised to hear that these gadgets are on their way; it may simply be that the partners aren’t yet ready to reveal what they’re working on.

There’s also the question of other already-announced XR devices like the Samsung XR headset and if they’ll run on Horizon OS. Given Samsung has partnered with Google on its upcoming headset – which we assume is for software support – we posit that won't be the case. But we’ll have to wait and see what’s announced when the device is revealed (hopefully later this year).

All we can say is that Meta’s approach already looks to be paving the way for more diverse hardware than we’ve seen before in VR, finally giving us some fantastic options that aren’t just the Meta Quest 3 or whatever comes next to replace it.

Speaking of which, Zuckerberg added in the investor call that he thinks that Meta’s “first-party Quest devices will continue to be the most popular headsets as we see today” and that the company plans to keep making its state-of-the-art VR tech accessible to everyone. So if you do want to wait for the next Quest – perhaps the rumored Meta Quest 3 Lite or Meta Quest Pro 2 – then know that it’s likely still on the way.


AI Explorer could revolutionize Windows 11, but can your PC run it? Here’s how to check

Microsoft is going full speed ahead with its upcoming Windows 11 update 24H2, also known as Hudson Valley. It's bringing more artificial intelligence (AI) features to the operating system, including AI Explorer, and as such Microsoft will be adding a feature that can tell users whether their PC will support it. Or you can look that information up for yourself.

Update 24H2, most likely launching in September or October 2024, will require not only PopCnt but also SSE4.2 support in the underlying code. This update will feature some truly great AI tools and enhancements to many Windows apps and programs, like the Windows Copilot and Cocreator AI-powered assistants for apps like Notepad and Paint.

Of course, the biggest feature is the aforementioned AI Explorer, which will make records of a user’s previous actions and transform them into ‘searchable moments’, allowing users to search through and revisit them.

Over on X, Albacore found (and Neowin reported) that “a cautionary message will be displayed on such systems not meeting the requirements.” It’s a handy system for those who aren’t sure whether their PCs can handle these new AI features. However, instead of waiting until the update drops later in 2024, users can check whether their PCs are AI-enabled right now.


Is your PC AI-enabled? Check now

Dell published a support page that instructs users how to check this themselves. A key aspect here is that your computer needs a built-in neural processing unit (NPU): a specialized processor designed for handling AI-based tasks, doing so more efficiently and using less power than a CPU.

Dedicated NPUs are often found in PCs featuring Intel's 14th-Gen processors, AMD's Ryzen 7000 and 8000 series, and Qualcomm's Snapdragon 8cx Gen 2 or Snapdragon X Elite and newer.

Below are the steps for checking for an NPU in general, as well as how to check for the required drivers on Intel and AMD processors. As for Qualcomm processors, all AI hardware drivers come pre-installed and are updated via Windows Update.

Check for NPUs on your computer

(Image: © Dell)

Open the Task Manager.

Click Performance.

Confirm that NPU is listed.

Check for correct drivers installed in Intel processors:

(Image: © Dell)

Open the Device Manager.

Expand Neural processors and then select Intel(R) AI Boost.

On the View menu, click Devices by connection.

Now Windows Studio Effects Driver should be seen under Intel(R) AI Boost.

Check for correct drivers installed in AMD processors:

(Image: © Dell)

Open the Device Manager.

Expand System devices > AMD IPU Device.

On the View menu, click Devices by connection.

Now Windows Studio Effects Driver should be seen under AMD IPU Device.


Gemini’s next evolution could let you use the AI while you browse the internet

Gemini may receive a big update on mobile in the near future, gaining several new features including a text box overlay. Details of the upgrade come from industry insider AssembleDebug, who shared his findings with a couple of publications.

PiunikaWeb gained insight into the overlay and it’s quite fascinating seeing it in action. It converts the AI’s input box into a small floating window located at the bottom of a smartphone display, staying there even if you close the app. You could, for example, talk to Gemini while browsing the internet or checking your email. 

AssembleDebug was able to activate the window and get it working on his phone while on X (the platform formerly known as Twitter). His demo video shows it behaving exactly like the Gemini app. You ask the AI a question, and after a few seconds a response comes out complete with source links, images, and YouTube videos if the inquiry calls for them. Answers have the potential to obscure the app behind them.

AssembleDebug’s video reveals that the window’s length depends on whether the question requires a long-form answer. We should mention that the overlay is multimodal, so you can write out an inquiry, verbally command the AI, or upload an image.

Smarter AI

The other notable changes were shared with Android Authority. First, Gemini on Android will gain the ability to accept different types of files besides photographs. Images show a tester uploading a PDF, then asking the AI to summarize the text inside it. Apparently, the feature is present in the current version of Gemini; however, activating it doesn’t do anything. Android Authority speculates the update may be exclusive to Google Workspace or Gemini Advanced, or maybe both. It’s hard to tell at the moment.

Second is a pretty basic, but useful, tool called Select Text. The way Gemini works right now, you’re forced to copy a whole block of text even if you just want a small portion. Select Text solves this issue by allowing you to grab a specific line or paragraph.

Yeah, it’s not a flashy upgrade; almost every app in the world has the same capability. Yet the tool has “huge implications for Gemini’s usability”, greatly improving the AI’s ease of use by not being so restrictive.


A fourth, smaller update was found by AssembleDebug: Real-time Responses. The descriptor text found alongside it claims the tool lets you see answers being written out in real time. However, as PiunikaWeb points out, it’s only an animation change, with no “practical benefits.” Instead of waiting for Gemini to generate a response as one solid block, you can choose to see the AI write everything out line by line, similar to its desktop counterpart.

Google I/O 2024 kicks off in about three weeks on May 14. No word on when these features will roll out, but we'll learn a lot more during the event.

While you wait, check out TechRadar's roundup of the best Android smartphones for 2024 if you're looking to upgrade.
