Apple is forging a path towards more ethical generative AI – something sorely needed in today’s AI-powered world

Copyright is something of a minefield right now when it comes to AI, and there’s a new report claiming that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only ones to have been both legally and ethically trained. It’s claimed that Apple is trying to uphold privacy and legality standards by adopting innovative training methods. 

Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) using copyrighted works, typically not disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works. 

The justification some of these companies offer for training their LLMs so heavily on copyrighted material is that, much like humans, these models need a substantial amount of information (called training data) to learn and generate coherent, convincing responses – and as far as these companies are concerned, copyrighted materials are fair game.

Many critics of generative AI consider it copyright infringement when tech companies use protected works in the training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn’t put tech companies off from doing exactly that – it’s assumed to be standard practice for most AI tools – and resentment towards companies in the generative AI space is growing as a result.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

A growing number of legal challenges have also been mounted in these tech companies’ direction. The New York Times sued OpenAI and Microsoft for copyright infringement back in December 2023, accusing the two companies of training their LLMs on millions of its articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. And in July 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, and Alphabet, calling on the leaders of the tech industry to protect writers and to properly credit and compensate authors whose works are used to train generative AI models.

In April of this year, The Register reported that Amazon had been hit with a lawsuit by an ex-employee alleging mistreatment, discrimination, and harassment – and in the process, she testified about her experience with copyright infringement. She alleges that she was told to deliberately ignore and violate copyright law to make Amazon’s products more competitive, and that her supervisor told her that “everyone else is doing it” when it came to copyright violations. Apple Insider echoes this claim, stating that ignoring copyright seems to be an accepted industry standard.

As we’ve seen with many other novel technologies, legislation and ethical frameworks tend to arrive only after a delay, but copyright is becoming an increasingly problematic aspect of generative AI that the companies responsible for these models will have to respond to.

A man editing a photo on a Mac Mini

(Image credit: Apple)

The Apple approach to ethical AI training (that we know of so far)

It looks like at least one major tech player might be taking a more careful and considered route to avoid as many legal (and moral!) challenges as possible – and somewhat surprisingly, it’s Apple. According to Apple Insider, Apple has been diligently pursuing licenses for major news publications’ works when looking for AI training material. Back in December, Apple reportedly sought to license the archives of several major publishers to use as training material for its own LLM, known internally as Ajax.

It’s speculated that Ajax will handle basic on-device functionality for future Apple products, while Apple might license software like Google’s Gemini for more advanced features, such as those requiring an internet connection. Apple Insider writes that this would also let Apple sidestep certain copyright liabilities, since it wouldn’t be responsible for any infringement by, say, Google Gemini.

A paper published in March detailed how Apple intends to train its in-house LLM: on a carefully chosen mix of images, image-text pairs, and text-based input. In its methods, Apple prioritized better image captioning and multi-step reasoning while also paying attention to preserving privacy. The last of these is made all the more achievable by the Ajax LLM running entirely on-device, with no internet connection required. There is a trade-off, though: Ajax won’t be able to check for copyrighted content and plagiarism itself, as it can’t connect to online databases that store copyrighted material.
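
To give a rough sense of what a “carefully chosen mix” of training data can mean in practice, here’s a minimal, purely illustrative Python sketch of a weighted sampler drawing from image-caption, interleaved image-text, and text-only sources – the source names and proportions are invented for illustration, and aren’t taken from Apple’s paper.

```python
import random

# Illustrative only: invented source names and mixture weights, not Apple's actual recipe.
TRAINING_MIX = {
    "image_caption_pairs": 0.45,     # e.g. licensed photo + caption pairs
    "interleaved_image_text": 0.10,  # documents mixing pictures and prose
    "text_only": 0.45,               # e.g. licensed news archives
}

def sample_source(mix: dict) -> str:
    """Pick a data source for the next training example, weighted by the mixture."""
    sources, weights = zip(*mix.items())
    return random.choices(sources, weights=weights, k=1)[0]

def next_training_example(datasets: dict):
    """Draw one example from whichever source the mixture selects."""
    source = sample_source(TRAINING_MIX)
    return random.choice(datasets[source])
```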

Apple Insider reveals one other caveat, via sources familiar with Apple’s AI testing environments: there don’t currently seem to be many, if any, restrictions on users themselves feeding copyrighted material into on-device test environments. It's also worth noting that Apple isn't technically the only company taking a rights-first approach: Adobe's generative art tool Firefly is also claimed to be completely copyright-compliant, so hopefully more AI companies will be wise enough to follow Apple and Adobe's lead.

I personally welcome this approach from Apple, as I think human creativity is one of the most incredible capabilities we have, and it should be rewarded and celebrated – not simply fed to an AI. We’ll have to wait to learn more about what Apple’s rules around copyright and AI training look like, but I agree with Apple Insider’s assessment that this sounds like an improvement – especially since some AIs have been documented regurgitating copyrighted material word-for-word. We should find out more about Apple’s generative AI efforts very soon, as they’re expected to be a key focus of its developer-focused software conference, WWDC 2024.

New Rabbit R1 demo promises a world without apps – and a lot more talking to your tech

We’ve already talked about the Rabbit R1 here on TechRadar: an ambitious little pocket-friendly device that contains an AI-powered personal assistant, capable of doing everything from curating a music playlist to booking you a last-minute flight to Rome. Now, the pint-sized companion tool has been shown demonstrating its note-taking capabilities.

The latest demo comes from Rabbit Inc. founder and CEO Jesse Lyu on X, and shows how the R1 can be used for note-taking and transcription via some simple voice controls. The video shows that note-taking can be started with a short voice command and ended with a single button press.

It’s a relatively early tech demo – Lyu notes that it “still need bit of touch” [sic] – but it’s a solid demonstration of Rabbit Inc.’s objectives when it comes to user simplicity. The R1 has very little in terms of a physical interface, and doubles down by having as basic a software interface as possible: there’s no Android-style app grid in sight here, just an AI capable of connecting to web apps to carry out tasks.

Once you’ve recorded your notes, you can view a full transcription, see an AI-generated summary, or replay the audio recording (the last of which requires you to access a web portal). The Rabbit R1 is primarily driven by cloud computing, meaning that you’ll need a constant internet connection to get the full experience.
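
For a sense of how a cloud-driven voice note flow like this typically hangs together, here’s a minimal Python sketch – the endpoints and response fields are hypothetical stand-ins for illustration, not Rabbit’s actual (unpublished) API.

```python
import requests  # assumes the 'requests' package is installed

# Hypothetical endpoints, for illustration only.
TRANSCRIBE_URL = "https://example.com/transcribe"
SUMMARIZE_URL = "https://example.com/summarize"

def save_voice_note(audio_path: str) -> dict:
    """Upload a recording, then fetch a transcript and an AI summary from the cloud."""
    with open(audio_path, "rb") as f:
        transcript = requests.post(TRANSCRIBE_URL, files={"audio": f}).json()["text"]
    summary = requests.post(SUMMARIZE_URL, json={"text": transcript}).json()["summary"]
    # The device keeps the audio so the full recording can be replayed later (via a web portal).
    return {"audio": audio_path, "transcript": transcript, "summary": summary}
```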

Opinion: A nifty gadget that might not hold up to criticism

As someone who personally spent a lot of time interviewing people and frantically scribbling down notes in my early journo days, I can definitely see the value of a tool like the Rabbit R1. I’m also a sucker for purpose-built hardware, so despite my frequent reservations about AI, I truly like the concept of the R1 as a ‘one-stop shop’ for your AI chatbot needs.

My main issue is that this latest tech demo doesn’t actually do anything I can’t do with my phone. I’ve got a Google Pixel 8, and nowadays I use the Otter.ai app for interview transcriptions and voice notes. It’s not a perfect tool, but it does the job as well as the R1 can right now.

Rabbit R1

The Rabbit R1’s simplicity is part of its appeal – though it does still have a touchscreen. (Image credit: Rabbit)

As much as I love the Rabbit R1’s charming analog design, it’s still going to cost $199 (£159 / around AU$300) – and I just don’t see the point in spending that money when the phone I’ve already paid for can do all the same tasks. An AI-powered pocket companion sounds like an excellent idea on paper, but when you take a look at the current widespread proliferation of AI tools like Windows Copilot and Google Gemini in our existing tech products, it feels a tad redundant.

The big players such as Google and Microsoft aren’t about to stop cramming AI features into our everyday hardware anytime soon, so dedicated AI gadgets like Rabbit Inc.’s dinky pocket helper will need to work hard to prove themselves. The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future. And yet, as our Editor-in-Chief Lance Ulanoff puts it, I might still end up loving the R1…

Feeling lost in the concrete jungles of the world? Fear not, Google Maps introduces a new feature to help you find entrances and exits

Picture this: you’re using Google Maps to navigate to a place you’ve never been and time is pressing, but you’ve made it! You’ve found the location, but there’s a problem: you don’t know how to get into whatever building you’re trying to access, and panic sets in. Maybe that’s just me, but if you can relate, it looks like we’re getting some good news – Google Maps is testing a feature that shows you exactly where you can enter buildings.

According to Android Police, Google Maps is working on a feature that shows users entrance indicator icons for selected buildings. I can immediately see how this could make it easier to find your way in and out of a location. Loading markers like this would require a lot of internet data if it were done for every suitable building in a given area, especially in metropolitan and densely packed areas, but it seems Google has accounted for this: the entrance icons only become visible when you select a precise location and zoom in closely.
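
As a rough illustration of that data-saving approach, the logic presumably looks something like the Python sketch below – the zoom threshold and field names are assumptions on our part, not Google’s implementation.

```python
# Illustrative logic only – not Google's code.
ZOOM_THRESHOLD = 18  # a typical 'street-level' zoom; the real cutoff isn't public

def visible_entrance_markers(selected_place, zoom_level: int) -> list:
    """Return entrance markers only when a place is selected and the map is zoomed in close."""
    if selected_place is None or zoom_level < ZOOM_THRESHOLD:
        return []  # nothing fetched or drawn, saving data on wide-area views
    return selected_place.get("entrances", [])
```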

Google Maps is an immensely popular app for navigation, as well as for looking up recommendations for various activities, like finding attractions or places to eat. If you’ve ever actually done this in practice, you’ve possibly had a situation like the one I described above, especially if you’re trying to find your way around a larger attraction or building. Trying to find the correct entrance to an expo center or sports stadium can be a nightmare, and places like these often have multiple entrances with different accessibility options – as do underground train stations that stretch across several streets.

Google's experimentation should help users manage those parts of their journeys better. For now it’s limited to certain users and certain buildings, displaying icons that indicate both where you can enter a place and where you can exit it (useful where there are entrance-only or exit-only doors, for example). It follows Google Maps’ recent addition of indicators showing public transport users the best station entrances and exits.

Google Maps being used to travel across New York

(Image credit: Shutterstock / TY Lim)

The present state of the new feature

Android Police tested the new feature on Google Maps version 11.17.0101 on a Google Pixel 7a. As Google seemingly intended, Google Maps showed entrances for a place only when it was selected and the user zoomed in on it, displaying a white circle with an ‘entry’ symbol on it. That said, Android Police wasn’t able to use the feature on other devices running the latest version of Google Maps in different regions, which suggests that Google is rolling the feature out gradually following limited and measured testing.

While using the Google Pixel 7a, Android Police tested various types of buildings, including hotels, doctors’ offices, supermarkets, hardware stores, cafes, and restaurants, in cities including New York City, Las Vegas, San Francisco, and Berlin. Some places had the new entrance and exit markers and some didn’t, which probably means Google is still gathering accurate and up-to-date information on these places, most likely via its Street View imagery. Another issue was that some of the indicated entrances weren’t in the right place, but teething issues are inevitable, and this problem seemed more common for smaller buildings, where it’s easier to find the entrance once you’re there in person.

The entrances were sometimes marked by a green arrow instead of a white circle, and it’s not clear at this point what the difference between the two indicators means. Google Maps has a reputation as a very helpful, functional, and dependable app, so whatever new features are rolled out, Google will probably want to make sure they’re up to a certain standard. I hope it completes the necessary stages of experimentation and implementation for this new feature, and I look forward to using it as soon as I can.

Microsoft does DLSS? Look out world, AI-powered upscaling feature for PC games has been spotted in Windows 11

Windows 11’s big update for this year could come with an operating system-wide upscaling feature for PC games in the same vein as Nvidia DLSS or AMD FSR (or Intel XeSS).

The idea is to get smoother frame rates through upscaling: the game is rendered at a lower resolution, then artificially ramped up to a higher level of detail, delivering greater fluidity than running natively at that higher resolution – all of which would be driven by AI.
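
Conceptually, the render loop for this kind of upscaling looks something like the Python sketch below – the scale factor and the object methods are illustrative assumptions, not Microsoft’s actual implementation.

```python
# Conceptual sketch of AI upscaling – not Microsoft's implementation.
RENDER_SCALE = 0.67  # render at roughly two-thirds of the display resolution (illustrative value)

def render_frame(game, display_w: int, display_h: int, upscaler):
    """Render cheaply at a lower resolution, then let an AI model fill in the detail."""
    low_w, low_h = int(display_w * RENDER_SCALE), int(display_h * RENDER_SCALE)
    low_res_frame = game.render(low_w, low_h)  # cheaper than rendering natively
    return upscaler.upscale(low_res_frame, display_w, display_h)  # AI reconstructs the detail
```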

The ‘Automatic Super Resolution’ option is currently hidden in test builds of Windows 11 (version 26052 to be precise). Leaker PhantomOfEarth enabled the feature and shared some screenshots of what it looks like in the Graphics panel in the Settings app.

There’s a system-wide toggle for Microsoft’s own take on AI upscaling, and per-app settings if you wish to be a bit more judicious about how the tech is applied.

In theory, this will be ushered in with Windows 11 24H2 – which is now confirmed by Microsoft as the major update for its desktop OS this year. (There’ll be no Windows 12 in 2024, as older rumors had suggested was a possibility).

We don’t know that Automatic Super Resolution will be in 24H2 for sure, though, as it could be intended for a later release, or indeed it might be a concept that’s scrapped during the testing process.


A PC gamer looking happy

(Image credit: Shutterstock)

Analysis: Microsoft’s angle

This is still in its very early stages, of course – and not even officially in testing yet – so there are a lot of questions about how it will work.

In theory, it should be a widely applicable upscaling feature for games that leverages the power of AI, either via a Neural Processing Unit – the NPUs now included in Intel’s new Meteor Lake CPUs, or AMD’s Ryzen 8000 silicon – or the GPU itself (employing Nvidia’s Tensor cores, for example, which are used to drive its own DLSS).

As noted, we can’t be sure exactly how this will be applied, though it’s certainly a game-targeted feature – the accompanying text tells us that much – and it’s likely to be used for older PC games, or those not supported by Nvidia DLSS, AMD FSR, or Intel XeSS.

We don’t expect Microsoft to butt heads with Nvidia by attempting to outdo Team Green’s own upscaling, but rather to supply a more broadly supported alternative – one that won’t be as good, with the trade-off being that wider level of support. That’s much as we’ve already seen with AMD’s Radeon Super Resolution (RSR), which is, in all likelihood, what this Windows 11 feature will resemble most.

Outside of gaming, Automatic Super Resolution may also be applicable to videos, and perhaps other apps – video chatting, maybe, at a guess – to give the footage some AI supercharging.

There are already features from Nvidia and AMD (the latter’s is still incoming) that do video upscaling, but again, Microsoft would offer broader coverage (as the name suggests, Nvidia’s RTX Video Super Resolution is only supported by RTX graphics cards, so other GPUs are left out in the cold).

We expect Automatic Super Resolution is something Microsoft will, more likely than not, be looking to implement to complement other OS-wide technologies for PC gamers. That includes Auto HDR, which brings HDR (or an approximation of it) to SDR games. (Funnily enough, it looks like Nvidia is working on its own take on that ability, building on RTX Video HDR, which is already here for video playback.)

As you may have noticed by this point, there are a lot of performance-enhancing technologies of this kind around these days, which is telling in itself. Perhaps part of Microsoft’s angle is a simple system-level switch that confused users can just turn on for upscaling trickery across the board – and ‘it just works’, to quote another famous tech giant.

These new smart glasses can teach people about the world thanks to generative AI

It was only a matter of time before someone added generative AI to an AR headset, and taking the plunge is startup Brilliant Labs with its recently revealed Frame smart glasses.

Looking like a pair of Where’s Waldo glasses (or Where’s Wally to our UK readers), the Frame houses a multimodal digital assistant called Noa, which consists of multiple AI models from other brands working in unison to help users learn about the world around them. All you have to do is look at something and then issue a command. Let’s say you want to know more about the nutritional value of a raspberry: thanks to OpenAI tech, you can ask Noa to perform a “visual analysis” of the subject, and the read-out appears on the outer AR lens. Additionally, Noa can offer real-time language translation via Whisper AI.
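
To picture how a “visual analysis” request like that flows through a multimodal assistant, here’s a minimal Python sketch – the camera, model, and display objects are hypothetical stand-ins for illustration, not Brilliant Labs’ actual code.

```python
# Illustrative sketch of a multimodal 'visual analysis' flow – names and models are assumptions.

def visual_analysis(camera, vision_model, display):
    """Capture what the wearer is looking at, describe it, and show the result on the lens."""
    frame = camera.capture()                    # e.g. the raspberry you're looking at
    description = vision_model.describe(frame)  # a GPT-4V-style image query
    display.show(description)                   # rendered on the outer AR lens

def translate_speech(microphone, speech_model, display, target_language="en"):
    """Real-time translation in the style of a Whisper-based pipeline."""
    audio = microphone.listen()
    display.show(speech_model.translate(audio, target_language))
```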

The Frame can also search the internet via its Perplexity AI model, and search results will even provide price tags for potential purchases. In a recent VentureBeat article, Brilliant Labs claims Noa can provide instantaneous price checks for clothes just by scanning the garment, or fish out listings for new houses on the market – all you have to do is look at the house in question. It can even generate images on the fly through Stable Diffusion, according to ZDNET.

Evolving assistant

Going back to VentureBeat, its report offers deeper insight into how Noa works.

The digital assistant is always on, constantly taking in information from its environment, and it’ll apparently “adopt a unique personality” over time. The publication explains that when it’s activated for the first time, Noa appears as an “egg” on the display; owners have to answer a series of questions, and once they’ve finished, the egg hatches into a character avatar whose personality reflects the user. As the Frame is used, Noa analyzes the interactions between itself and the user, evolving to become better at tackling tasks.

Brilliant Labs Frame exploded view

(Image credit: Brilliant Labs)

An exploded view of the Frame can be found on Brilliant Labs’ official website, providing interesting insight into how the tech works. On-screen content is projected by a micro-OLED display onto a “geometric prism” in the lens – 9to5Google points out this is reminiscent of how Google Glass worked. On the nose bridge is the Frame’s camera, sitting on a PCBA (printed circuit board assembly).

At the end of the stems, you have the batteries inside two big hubs. Brilliant Labs states the frames can last a whole day, and to charge them, you’ll have to plug in the Mister Power dongle, inadvertently turning the glasses into a high-tech Groucho Marx impersonation.

Brilliant Labs Frame with Mister Power

(Image credit: Brilliant Labs)

Availability

Currently open for pre-order, the Frame will run you $350 a pair. It’ll be available in three colors: Smokey Black, Cool Gray, and the transparent H20. You can opt for prescription lenses, though doing so bumps the price tag to $448. There's a chance Brilliant Labs won’t have your exact prescription, in which case it recommends selecting the option that most closely matches it. Shipping is free, and the first batch rolls out April 15.

It appears all of the AI features are subject to a daily usage cap, though Brilliant Labs plans to launch a subscription service lifting the limit. We’ve reached out to the company for clarification, and have asked several other questions, such as exactly how the Frame receives input. This story will be updated at a later time.

Until then, check out TechRadar's list of the best VR headsets for 2024.

Google Maps is getting a big accessibility update that could change how people connect with the world

Google is introducing new accessibility features to several of its platforms to help people with disabilities get around town more easily.

A few of the six changes will be exclusive to smartphones. Search with Live View on Google Maps will receive “screen reader capabilities… [giving] auditory feedback of the place around you”. This tool is meant to help “people who are blind or low-vision” get helpful info like the name or category of a location and how far away it is from their current position. All users have to do to activate it is tap the camera icon in the Google Maps search bar and then aim the rear camera at whatever is around them.  
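
To illustrate the kind of spoken feedback described, here’s a tiny Python sketch of how a place’s name, category, and distance might be assembled into an announcement – it’s purely illustrative, not Google’s code.

```python
# Rough illustration of the kind of auditory feedback described – not Google's implementation.
def describe_place(place: dict, distance_m: float) -> str:
    """Build the sort of spoken summary a screen reader could read aloud."""
    return f"{place['name']}, a {place['category']}, about {round(distance_m)} meters ahead"

# Example: describe_place({"name": "Joe's Coffee", "category": "cafe"}, 42.7)
# -> "Joe's Coffee, a cafe, about 43 meters ahead"
```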

Google Maps screen reader

(Image credit: Google)

The screen reader is making its way to iOS starting today, with the Android version rolling out in the coming months. Also coming to mobile, the Chrome app’s address bar will be able to detect typos in text and display “suggested websites” according to what the browser thinks you’re looking for. This second tool is meant to help people with dyslexia find the content they’re looking for.

Google points out that these two features build on the recently released accessibility features on Pixel phones, like the Magnifier app and the upgraded Guided Frame. The latter can help blind people take selfies by utilizing a “combination of audio cues, high-contrast animations, and haptic feedback”.

Guided Frame is available on the Pixel 8 and 8 Pro with plans to expand it to the Pixel 6 and Pixel 7 by the end of the year.

Magnifier on Google Pixel

(Image credit: Google)

Easier navigation

The rest of the update consists of minor tweaks to select apps.

First, Google Maps on mobile is adding a “wheelchair-accessible transit” option for people looking for locations that don’t have any stairs at the entrance, as well as buildings that are wheelchair-friendly. Similarly, Maps for Android Auto will indicate “wheelchair-accessible places” on the screen with a little blue icon next to relevant results. Additionally, local businesses have the opportunity to label themselves as “Disabled-owned” on Google Search, in case you want to support them directly.

The last change sees Assistant Routines on Google Home become more like the company's Action Blocks app, as users can configure the icons on the main screen however they want; for example, the on-screen icons can be increased in size, and you can alter the thumbnail image for one of the blocks.

A Google representative told us this batch is currently rolling out, so keep an eye out for the update when it arrives.

We recommend checking out TechRadar’s list of the best text-to-speech software for 2023 if you’re looking for other ways to help you navigate the internet. 
