Apple is forging a path towards more ethical generative AI – something sorely needed in today’s AI-powered world

Copyright is something of a minefield right now when it comes to AI, and there’s a new report claiming that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only ones to have been both legally and ethically trained. It’s claimed that Apple is trying to uphold privacy and legality standards by adopting innovative training methods. 

Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) using copyrighted works, typically not disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works. 

The current justification for why copyrighted material is so widely used as far as some of these companies to train their LLMs is that, not dissimilar to humans, these models need a substantial amount of information (called training data for LLMs) to learn and generate coherent and convincing responses – and as far as these companies are concerned, copyrighted materials are fair game.

Many critics of generative AI consider it copyright infringement if tech companies use works in training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn’t put tech companies off from doing exactly that, and it’s assumed to be the case for most AI tools, garnering a growing pool of resentment towards the companies in the generative AI space.  

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

There have even been a growing number of legal challenges mounted in these tech companies’ direction. OpenAI and Microsoft have actually been sued by the New York Times for copyright infringement back in December 2023, with the publisher accusing the two companies of training their LLMs on millions of New York Times articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. In July of 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, Alphabet, and others, calling on leaders of the tech industry to protect writers, calling on these companies to properly credit and compensate authors for their works when using them to train generative AI models. 

In April of this year, The Register reported that Amazon was hit with a lawsuit by an ex-employee alleging she faced mistreatment, discrimination, and harassment, and in the process, she testified about her experience when it came to issues of copyright infringement.  This employee alleges that she was told to deliberately ignore and violate copyright law to improve Amazon’s products to make them more competitive, and that her supervisor told her that “everyone else is doing it” when it came to copyright violations. Apple Insider echoes this claim, stating that this seems to be an accepted industry standard. 

As we’ve seen with many other novel technologies, the legislation and ethical frameworks always arrive after an initial delay, but it looks like this is becoming a more problematic aspect of generative AI models that the companies responsible for them will have to respond to.

A man editing a photo on a Mac Mini

(Image credit: Apple)

The Apple approach to ethical AI training (that we know of so far)

It looks like at least one major tech player might be trying to take the more careful and considered route to avoid as many legal (and moral!) challenges as possible – and somewhat surprisingly, it’s Apple. According to Apple Insider, Apple has been pursuing diligently licensing major news publications’ works when looking for AI training material. Back in December, Apple petitioned to license the archives of several major publishers to use these as training material for its own LLM, known internally as Ajax. 

It’s speculated that Ajax will be the software for basic on-device functionality for future Apple products, and it might instead license software like Google’s Gemini for more advanced features, such as those requiring an internet connection. Apple Insider writes that this allows Apple to avoid certain copyright infringement liabilities as Apple wouldn’t be responsible for copyright infringement by, say, Google Gemini. 

A paper published in March detailed how Apple intends to train its in-house LLM: a carefully chosen selection of images, image-text, and text-based input. In its methods, Apple simultaneously prioritized better image captioning and multi-step reasoning, at the same time as paying attention to preserving privacy. The last of these factors is made all the more possible for the Ajax LLM by it being entirely on-device and therefore not requiring an internet connection. There is a trade-off, as this does mean that Ajax won’t be able to check for copyrighted content and plagiarism itself, as it won’t be able to connect to online databases that store copyrighted material. 

There is one other caveat that Apple Insider reveals about this when speaking to sources who are familiar with Apple’s AI testing environments: there don’t currently seem to be many, if any, restrictions on users utilizing copyrighted material themselves as the input for on-device test environments. It's also worth noting that Apple isn't technically the only company taking a rights-first approach: art AI tool Adobe Firefly is also claimed to be completely copyright-compliant, so hopefully more AI startups will be wise enough to follow Apple and Adobe's lead.

I personally welcome this approach from Apple as I think human creativity is one of the most incredible capabilities we have, and I think it should be rewarded and celebrated – not fed to an AI. We’ll have to wait to know more about what Apple’s regulations regarding copyright and training its AI look like, but I agree with Apple Insider’s assessment that this definitely sounds like an improvement – especially since some AIs have been documented regurgitating copyrighted material word-for-word. We can look forward to learning more about Apple’s generative AI efforts very soon, which is expected to be a key driver for its developer-focused software conference, WWDC 2024

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

Google might have a new AI-powered password-generating trick up its sleeve – but can Gemini keep your secrets safe?

If you’ve been using Google Chrome for the past few years, you may have noticed that whenever you’ve had to think up a new password, or change your existing one, for a site or app, a little “Suggest strong password” dialog box would pop up – and it looks like it could soon offer AI-powered password suggestions. 

A keen-eyed software development observer has spotted that Google might be gearing up to infuse this feature with the capabilities of Gemini, its latest large language model (LLM).

The discovery was made by @Leopeva64 on X. They found references to Gemini in patches of Gerrit, a web-based code review system developed by Google and used in the development of Google products like Android

These findings appear to be backed up by screenshots that show glimpses of how Gemini could be incorporated into Chrome to give you even better password suggestions when you’re looking to create a new password or change from one you’ve previously set.

See more

Gemini guesswork

One line of code that caught my attention is that “deleting all passwords will turn this feature off.” I wonder if this does what it says on the tin: shutting the feature off if a user deletes all of their passwords, or if this just means all of the passwords generated by the “Suggest strong passwords” feature. 

The final screenshot that @Leopeva64 provides is also intriguing as it seems to show the prompt that Google engineers have included to get Gemini to generate a suitable password. 

This is a really interesting move by Google and it could play out well for Chrome users who use the strong password suggestion feature. I’m a little wary of the potential risks associated with this method of password generation, similar to risks you find with many such methods. LLMs are susceptible to information leaks caused by prompt or injection hacks. These hacks are designed to trick the AI models to give out information that their creators, individuals, or organizations might want to keep private, like someone’s login information.

A woman working on a laptop in a shared working space sitting next to a man working at a computer

(Image credit: Shutterstock/Gorodenkoff)

An important security consideration 

Now, that sounds scary and as far as we know, this hasn’t happened yet with any widely-deployed LLM, including Gemini. It’s a theoretical fear and there are standard password security practices that tech organizations like Google employ to prevent data breaches. 

These include encryption technologies, which encode data so that only authorized parties can access it for multiple stages of the password generation and storage process, and hashing, a one-way data conversion process that’s intended to make data reverse-engineering hard to do. 

You could also use any other LLM like ChatGPT to generate a strong password manually, although I feel like Google knows more about how to do this, and I’d only advise experimenting with that if you’re a software data professional. 

It’s not a bad idea as a proposition and a use of AI that could actually be very beneficial for users, but Google will have to put an equal (if not greater) amount of effort into making sure Gemini is bolted down and as impenetrable to outside attacks as can be. If it implements this and by some chance it does cause a huge data breach, that will likely damage people’s trust of LLMs and could impact the reputations of the tech companies, including Google, who are championing them.

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

Meta AR glasses: everything we know about the AI-powered AR smart glasses

After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech).

While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package.

We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it.

Meta AR glasses: Price

We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either.

Current smart glasses like the Ray-Ban Meta Smart Glasses, or the Xreal Air 2 AR smart glasses will set you back between $ 300 to $ 500 / £300 to £500 / AU$ 450 to AU$ 800; Meta’s teased specs, however, sound more advanced than what we have currently.

Lance Ulanoff showing off Google Glass

Meta’s glasses could cost as much as Google Glass (Image credit: Future)

As such, the Meta AR glasses might cost nearer $ 1,500 (around £1,200 / AU$ 2300)  – which is what the Google Glass smart glasses launched at.

A higher price seems more likely given the AR glasses novelty, and the fact Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices.

We’ll have to wait and see what gets leaked and officially revealed in the future.

Meta AR glasses: Release date

Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027.

That’s according to a leaked Meta internal roadmap shared by  The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027.

RayBan Meta Smart Glasses close up with the camera flashing

(Image credit: Meta)

In February 2024  Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company takes the opportunity to show off stuff coming further down the pipeline too. So, its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kind of teasers.

Obviously, leaks should be taken with a pinch of salt. Meta could have brought the release of its specs forward, or pushed it back depending on a multitude of technological factors – we won’t know until Meta officially announces more details. Considering it has teased the specs suggests their release is at least a matter of when not if.

Meta AR glasses: Specs and features

We haven't heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships.

Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect they’ll bring OLED screens to its AR glasses too. OLED displays appear in other AR smart glasses so it would make sense if Meta followed suit.

Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027 it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The AR glasses could let you bust ghost wherever you go (Image credit: Meta)

As for features, Meta’s already teased the two standouts: AR and AI abilities.

What this means in actual terms is yet to be seen but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play.

AI-wise, Meta is giving us a sneak peek of what's coming via its current smart glasses. That is you can speak to its Meta AI to ask it a variety of questions and for advice just as you can other generative AI but in a more conversational way as you use your voice.

It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. This allows the specs to snap a picture of what’s in front of you to inform your question, allowing you to ask it to translate a sign you can see, for a recipe using ingredients in your fridge, or what the name of a plant is so you can find out how best to care for it.

The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027 when the AR specs are expected to arrive.

Meta AR glasses: What we want to see

A slick Ray-Ban-like design 

RayBan Meta Smart Glasses

The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta)

While Meta’s smart specs aren't amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight, super comfy to wear all day, and the charging case is not only practical, it's gorgeous.

While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore.

If the partnership does end, we'd like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point.

Swappable lenses 

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

We want to change our lenses Meta! (Image credit: Meta)

While we will rave about Meta’s smart glasses design we’ll admit there’s one flaw that we hope future models (like the AR glasses) improve on; they need easily swappable lenses.

While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather.

As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses) they need to be a gadget we can wear all year round, not just when the sun's out.

Speakers you can (quietly) rave too 

JBL Soundgear Sense

These open ear headphones are amazing, Meta take notes (Image credit: Future)

Hardware-wise the main upgrade we want to see in Meta’s AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they can leak a fair amount of noise, the bass is practically nonexistent and the overall sonic performance is put to shame by even basic over-the-ears headphones.

We know open-ear designs can be a struggle to get the balance right with. But when we’ve been spoiled by open-ear options like the JBL SoundGear Sense – that have an astounding ability to deliver great sound and let you hear the real world clearly (we often forget we’re wearing them) – we’ve come to expect a lot and are disappointed when gadgets don’t deliver.

The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities.

You might also like

TechRadar – All the latest technology news

Read More

Samsung Galaxy Ring could help cook up AI-powered meal plans to boost your diet

As we get closer to the full launch of the Samsung Galaxy Ring, we're slowly learning more about its many talents – and some fresh rumors suggest these could include planning meals to improve your diet.

According to the Korean site Chosun Biz (via GSMArena), Samsung plans to integrate the Galaxy Ring with its new Samsung Food app, launched in August 2023

Samsung calls this app an “AI-powered food and recipe platform”, as it can whip up tailored meal plans and even give you step-by-step guides to making specific dishes. The exact integration with the Galaxy Ring isn't clear, but according to the Korean site, the wearable will help make dietary suggestions based on your calorie consumption and body mass index (BMI).

The ultimate aim is apparently to integrate this system with smart appliances (made by Samsung, of course) like refrigerators and ovens. While they aren't yet widely available, appliances like Samsung Bespoke 4-Door Flex Refrigerator and Bespoke AI Oven include cameras that can design or cook recipes based on your dietary needs.

It sounds like the Galaxy Ring, and presumably smartwatches like the incoming Galaxy Watch 7 series, are the missing links in a system that can monitor your health and feed that info into the Samsung Food app, which you can download now for Android and iOS.

The Ring's role in this process will presumably be more limited than smartwatches, whose screens can help you log meals and more. But the rumors hint at how big Samsung's ambitions are for its long-awaited ring, which will be a strong new challenger in our best smart rings guide when it lands (most likely in July).

Hungry for data

A phone on a grey background showing the Samsung Food app

(Image credit: Samsung)

During our early hands-on with the Galaxy Ring, it was clear that Samsung is mostly focusing on its sleep-tracking potential. It goes beyond Samsung's smartwatches here, offering unique insights including night movement, resting heart rate during sleep, and sleep latency (the time it takes to fall asleep).

But Samsung has also talked up the Galaxy Ring's broader health potential more recently. It'll apparently be able to generate a My Vitality Score in Samsung's Health app (by crunching together data like your activity and heart rate) and eventually integrate with appliances like smart fridges.

This means it's no surprise to hear that the Galaxy Ring could also play nice with the Samsung Food app. That said, the ring's hardware limitations mean this will likely be a minor feature initially, as its tracking is more focused on sleep and exercise. 

We're actually more excited about the Ring's potential to control our smart home than integrate with appliances like smart ovens, but more features are never a bad thing – as long as you're happy to give up significant amounts of health data to Samsung.

You might also like

TechRadar – All the latest technology news

Read More

Meta’s Ray-Ban smart glasses are becoming AI-powered tour guides

While Meta’s most recognizable hardware is its Quest VR headsets, its smart glasses created in collaboration with Ray-Ban are proving to be popular thanks to their sleek design and unique AI tools – tools that are getting an upgrade to turn them into a wearable tourist guide.

In a post on Threads – Meta’s Twitter-like Instagram spinoff – Meta CTO Andrew Bosworth showed off a new Look and Ask feature that can recognize landmarks and tell you facts about them. Bosworth demonstrated it using examples from San Francisco such as the Golden Gate Bridge, the Painted Ladies, and Coit Tower.

As with other Look and Ask prompts, you give a command like “Look and tell me a cool fact about this bridge.” The Ray-Ban Meta Smart Glasses then use their in-built camera to scan the scene in front of you, and cross-reference the image with info in the Meta AI’s knowledge database (which includes access to the Bing search engine). 

The specs then respond with the cool fact you requested – in this case explaining the Golden Gate Bridge (which it recognized in the photo it took) is painted “International Orange” so that it would be more visible in foggy conditions.

Screen shots from Threads showing the Meta Ray-Ban Smart Glasses being used to give the suer information about San Francisco landmarks

(Image credit: Andrew Bosworth / Threads)

Bosworth added in a follow-up message that other improvements are being rolled out, including new voice commands so you can share your latest Meta AI interaction on WhatsApp and Messenger. 

Down the line, Bosworth says you’ll also be able to change the speed of Meta AI readouts in the voice settings menu to have them go faster or slower.

Still not for everyone 

One huge caveat is that – much like the glasses’ other Look and Ask AI features – this new landmark recognition feature is still only in beta. As such, it might not always be the most accurate – so take its tourist guidance with a pinch of salt.

Orange RayBan Meta Smart Glasses

(Image credit: Meta)

The good news is Meta has at least opened up its waitlist to join the beta so more of us can try these experimental features. Go to the official page, input your glasses serial number, and wait to get contacted – though this option is only available if you’re based in the US.

In his post Bosworth did say that the team is working to “make this available to more people,” but neither he nor Meta have given a precise timeline of when the impressive AI features will be more widely available.

You might also like

TechRadar – All the latest technology news

Read More

Oppo’s new AI-powered AR smart glasses give us a glimpse of the next tech revolution


  • Oppo has shown off its Air Glass 3 AR glasses at MWC 2024
  • They’re powered by its AndesGPT AI model and can answer questions
  • They’re just a prototype, but the tech might not be far from launching

While there’s a slight weirdness to the Meta Ray-Ban Smart Glasses – they are a wearable camera, after all – the onboard AI is pretty neat, even if some of its best features are still in beta. So it’s unsurprising that other companies are looking to launch their own AI-powered specs, with Oppo being the latest in unveiling its new Air Glass 3 at MWC 2024.

In a demo video, Oppo shows how the specs have seemingly revolutionized someone's working day. When they boot up, the Air Glass 3's 1,000-nit displays show the user a breakdown of their schedule, and while making a coffee ahead of a meeting they get a message saying that it's started early.

While in the meeting the specs pick up on a question that’s been asked, and Oppo's AndesGPT AI model (which runs on a connected smartphone) is able to provide some possible answers. Later it uses the design details that have been discussed to create an image of a possible prototype design which the wearer then brings to life.

After a good day’s work they can kick back to some of their favorite tunes that play through the glasses’ in-built speakers. All of this is crammed into a 50g design. 

Now, the big caveat here is the Air Glass 3 AR glasses are just a prototype. What’s more, neither of the previous Air Glass models were released outside of China – so there’s a higher than likely chance the Air Glass 3 won’t be either.

But what Oppo is showing off isn’t far from being mimicked by its rivals, and a lot of it is pretty much possible in tech that you can go out and buy today – including those Meta Ray-Ban Smart Glasses.

The future is now

The Ray-Ban Meta Smart Glasses already have an AI that can answer questions like a voice-controlled ChatGPT

They can also scan the environment around you using the camera to get context for questions – for example, “what meal can I make with these ingredients?” – via their 'Look and Ask' feature. These tools are currently in beta, but the tech is working and the AI features will hopefully be more widely available soon.

They can also alert you to texts and calls that you’re getting and play music, just like the Oppo Air Glass 3 concept.

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

The Ray-Ban Meta glasses ooze style and have neat AI tools (Image credit: Meta)

Then there’s the likes of the Xreal Air 2. While their AR display is a little more distracting than the screen found on the Oppo Air Glass 3, they are a consumer product that isn’t mind-blowingly expensive to buy – just $ 399 / £399 for the base model.

If you combine these two glasses then you’re already very close to Oppo’s concept; you’d just need to clean up the design a little, and probably splash out a little more as I expect lenses with built-in displays won’t come cheap.

The only thing I can’t see happening soon is the AI creating a working prototype product design for you. It might be able to provide some inspiration for a designer to work off, but reliably creating a fully functional model seems more than a little beyond existing AI image generation tools' capabilities.

While the Oppo Air Glass 3 certainly look like a promising glimpse of the future, we'll have to see what they're actually capable of if and when they launch outside China.

You might also like

TechRadar – All the latest technology news

Read More

Microsoft does DLSS? Look out world, AI-powered upscaling feature for PC games has been spotted in Windows 11

Windows 11’s big update for this year could come with an operating system-wide upscaling feature for PC games in the same vein as Nvidia DLSS or AMD FSR (or Intel XeSS).

The idea would be to get smoother frame rates by upscaling the game’s resolution. In other words, running at a lower resolution, and artificially ramping it up to a higher level of detail, but with a greater level of fluidity than running natively, all of which would be driven by AI.

The ‘Automatic Super Resolution’ option is currently hidden in test builds of Windows 11 (version 26052 to be precise). Leaker PhantomOfEarth enabled the feature and shared some screenshots of what it looks like in the Graphics panel in the Settings app.

See more

There’s a system-wide toggle for Microsoft’s own take on AI upscaling, and per-app settings if you wish to be a bit more judicious about how the tech is applied.

In theory, this will be ushered in with Windows 11 24H2 – which is now confirmed by Microsoft as the major update for its desktop OS this year. (There’ll be no Windows 12 in 2024, as older rumors had suggested was a possibility).

We don’t know that Automatic Super Resolution will be in 24H2 for sure, though, as it could be intended for a later release, or indeed it might be a concept that’s scrapped during the testing process.


A PC gamer looking happy

(Image credit: Shutterstock)

Analysis: Microsoft’s angle

This is still in its very early stages, of course – and not even officially in testing yet – so there are a lot of questions about how it will work.

In theory, it should be a widely applicable upscaling feature for games that leverages the power of AI, either via a Neural Processing Unit – the NPUs now included in Intel’s new Meteor Lake CPUs, or AMD’s Ryzen 8000 silicon – or the GPU itself (employing Nvidia’s Tensor cores, for example, which are used to drive its own DLSS).

As noted, though, we can’t be sure exactly how this will be applied, though it’s certainly a game-targeted feature – the text accompanying it tells us that much – likely to be used for older PC games, or those not supported by Nvidia DLSS, AMD FSR, or Intel XeSS for that matter.

We don’t expect Microsoft will try and butt heads with Nvidia in terms of attempting to outdo Team Green’s own upscaling, but rather supply a more broadly supported alternative, one which won’t be as good. The trade-off is that wider level of support, much as already seen with AMD’s Radeon Super Resolution (RSR), which is, in all likelihood, what this Windows 11 feature will resemble the most.

Outside of gaming, Automatic Super Resolution may also be applicable to videos, and perhaps other apps – video chatting, maybe, at a guess – to provide some AI supercharging for the provided footage.

Again, there are already features from Nvidia and AMD (the latter is still incoming) that do video upscaling, but again Microsoft would offer broader coverage (as the name suggests, Nvidia’s RTX Video Super Resolution is only supported by RTX graphics cards, so other GPUs are left out in the cold).

We expect Automatic Super Resolution is something Microsoft will certainly be looking to implement, more likely than not, to complement other OS-wide technologies for PC gamers. That includes Auto HDR, which brings HDR (or an approximation of it) to SDR games. (And funnily enough, it looks like Nvidia is working on its own take on that ability, building on RTX Video HDR which is already here for video playback).

As you may have noticed at this point, there are a lot of this kind of performance-enhancing technologies around these days, which is telling in itself. Perhaps part of Microsoft’s angle is a simple system-level switch that confused users can just turn on for upscaling trickery across the board, and ‘it just works’ to quote another famous tech giant.

You might also like…

TechRadar – All the latest technology news

Read More

Apple working on a new AI-powered editing tool and you can try out the demo now

Apple says it plans on introducing generative AI features to iPhones later this year. It’s unknown what they are, however, a recently published research paper indicates one of them may be a new type of editing software that can alter images via text prompts.

It’s called MGIE, or MLLM-Guided (multimodal large language model) Image Editing. The tech is the result of a collaboration between Apple and researchers from the University of California, Santa Barbara. The paper states MGIE is capable of “Photoshop-style [modifications]” ranging from simple tweaks like cropping to more complex edits such as removing objects from a picture. This is made possible by the MLLM (multimodal large language model), a type of AI capable of processing both “ text and images” at the same time.

VentureBeat in their report explains MLLMs show “remarkable capabilities in cross-model understanding”, although they have not been widely implemented in image editing software despite their supposed efficacy.

Public demonstration

The way MGIE works is pretty straightforward. You upload an image to the AI engine and give it clear, concise instructions on the changes you want it to make. VentureBeat says people will need to “provide explicit guidance”. As an example, you can upload a picture of a bright, sunny day and tell MGIE to “make the sky more blue.” It’ll proceed to saturate the color of the sky a bit, but it may not be as vivid as you would like. You’ll have to guide it further to get the results you want. 

MGIE is currently available on GitHub as an open-source project. The researchers are offering “code, data, [pre-trained models]”, as well as a notebook teaching people how to use the AI for editing tasks. There’s also a web demo available to the public on the collaborative tech platform Hugging Face. With access to this demo, we decided to take Apple’s AI out for a spin.

Image 1 of 3

Cat picture new background on MGIE

(Image credit: Cédric VT/Unsplash/Apple)
Image 2 of 3

Cat picture lightning background on MGIE

(Image credit: Cédric VT/Unsplash/Apple)
Image 3 of 3

Cat picture on MGIE

(Image credit: Cédric VT/Unsplash/Apple)

In our test, we uploaded a picture of a cat that we got from Unsplash and then proceeded to instruct MGIE to make several changes. And in our experience, it did okay. In one instance, we told it to change the background from blue to red. However, MGIE instead made the background a darker shade of blue with static-like texturing. On another, we prompted the engine to add a purple background with lightning strikes and it created something much more dynamic.

Inclusion in future iPhones

At the time of this writing, you may experience long queue times while attempting to generate content. If it doesn’t work, the Hugging Face page has a link to the same AI hosted over on Gradio which is the one we used. There doesn't appear to be any difference between the two.

Now the question is: will this technology come out to a future iPhone or iOS 18? Maybe. As alluded to at the beginning, company CEO Tim Cook told investors AI tools are coming to its devices later on in the year but didn’t give any specifics. Personally, we can see MGIE morph into the iPhone version of Google’s Magic Editor; a feature that can completely alter the contents of a picture. If you read the research paper on arXiv, that certainly seems to be the path Apple is taking with its AI.

MGIE is still a work in progress. Outputs are not perfect. One of the sample images shows the kitten turn into a monstrosity. But we do expect all the bugs to be worked out down the line. If you prefer a more hands-on approach, check out TechRadar's guide on the best photo editors for 2024.

You might also like

TechRadar – All the latest technology news

Read More

This new AI-powered iPhone browser trumps Safari by searching the web for you

Sick of struggling to find the answers to your search queries in Safari, Chrome, or any of the other best browsers on iOS? A new alternative has just emerged that uses artificial intelligence (AI) to do the searching for you, potentially helping you find accurate results much more quickly.

Called Arc Search, the app is made by The Browser Company of New York, an outfit that has also made the desktop Arc browser that captured headlines in 2023. With Arc Search, the developer has added a bunch of interesting features that could see it supplant your current favorite browser on your iPhone.

First among them is the app’s 'Browse for Me' feature. When you enter a search query, you can view a standard page of results in your search engine of choice, or you can instead tap the Browse for Me button. This uses AI to gather information from six different sources, then builds a custom web page that displays all the key information you need to answer your search query.

This can include useful photos and videos, bullet-pointed text, and more. It’s a clever way to pull in information from a variety of sources and ensure you stand a good chance of getting what you need at the first attempt, without having to endlessly scroll through useless information and unhelpful websites.

Privacy protections

The Arc Search web browser for iOS running on an iPhone, with various search results displayed.

(Image credit: Future)

Arc Search comes with other handy features besides Browse for Me. For instance, you can tell it to block ads, trackers and GDPR cookie banners on all websites. That’s a great way to protect your privacy by default, although it’s not clear if the app actually opts out of cookies on GDPR banners or simply hides them.

Arc Search will also automatically archive inactive tabs after one day, which might come in handy for people who struggle to control their tab overload (such as yours truly). And there’s a reader mode that strips out unnecessary visual elements to give you a more focused experience.

Some of features aren’t available in Browse for Me, though. For instance, you can’t share your custom pages or copy a link to them, nor can you view them in reader mode. Perhaps these tools will come later.

Regardless, Arc Search is an intriguing alternative to the usual suspects when it comes to iOS browsers, and could make its own claim to the best browser title if it continues to add interesting features. If you want to try it out, it’s free to download on the iOS App Store with no subscriptions or in-app purchases to worry about.

You might also like

TechRadar – All the latest technology news

Read More

Windows 11’s AI-powered Voice Clarity feature improves your video chats, plus setup has a new look (finally)

Windows 11 has a new preview build out that improves audio quality for your video chats and more besides.

Windows 11 preview build 26040 has been released in the Canary channel (the earliest test builds) complete with the Voice Clarity feature which was previously exclusive to owners of Surface devices.

Voice Clarity leverages AI to improve audio chat on your end, canceling out echo, reducing reverberation or other unwanted effects, and suppressing any intrusive background noises. In short, it helps you to be heard better, and your voice to be clearer.

The catch is that apps need to use Communications Signal Processing Mode to have the benefit of this feature, which is unsurprisingly what Microsoft’s own Phone Link app uses. WhatsApp is another example, plus some PC games will be good to go with this tech, so you can shout at your teammates and be crystal clear when doing so.

Voice Clarity is on by default – after all, there’s no real downside here, save for using a bit of CPU juice – but you can turn it off if you want.

Another smart addition here is a hook-up between your Android phone and Windows 11 PC for editing photos. Whenever you take a photo on your smartphone, it’ll be available on the desktop PC straight away (you’ll get a notification), and you can edit it in the Snipping Tool (rather than struggling to deal with the image on your handset).

For the full list of changes in build 26040, see Microsoft’s blog post, but another of the bigger introductions worth highlighting here is that the Windows 11 setup experience has been given a long overdue lick of paint.

Windows 11 Setup

(Image credit: Microsoft)

Analysis: Setting the scene

It’s about time Windows setup got some attention, as it has had the same basic look for a long time now. It’d be nice for the modernization to get a touch more sparkle, we reckon, though the improvement is a good one, and it’s not exactly a crucial part of the interface (given that you don’t see it after you’ve installed the operating system, anyway).

We have already seen the capability for Android phone photos to be piped to the Snipping Tool appear in the Dev channel last week, but it’s good to see a broader rollout to Canary testers. It is only rolling out, though, so bear in mind that you might not see it yet if you’re a denizen of the Canary channel.

As for Voice Clarity, clearly that’s a welcome touch of AI for all Windows 11 users. Whether you’re chatting to your family to catch up at the weekend, or you work remotely and use your Windows 11 PC for meetings, being able to be heard better by the person (or people) on the other end of the call is obviously a good thing.

You might also like…

TechRadar – All the latest technology news

Read More