Here’s more proof Apple is going big with AI this year

The fact that Apple is going to debut a new generative artificial intelligence (AI) tool in iOS 18 this year is probably one of the worst-kept secrets in tech at the moment. Now, another morsel surrounding Apple’s future AI plans has leaked out, and it could shed light on what sort of AI features Apple fans might soon get to experience.

As first reported by Bloomberg, Apple bought Canadian startup DarwinAI earlier this year, with dozens of the company’s workers joining Apple once the deal was completed. It’s thought that Apple made this move to bolster its AI capabilities in the final months before iOS 18 is revealed, which is expected to happen at the company’s Worldwide Developers Conference (WWDC) in June.

Bloomberg’s report says that DarwinAI “has developed AI technology for visually inspecting components during the manufacturing process.” One of its “core technologies,” however, is making AI faster and more efficient, and that could be the reason Apple chose to open its wallet. Apple intends its AI to run entirely on-device, presumably to protect your privacy by not sharing AI inputs with the cloud, and this would benefit from DarwinAI’s tech. After all, Apple won’t want its flagship AI features to result in sluggish iPhone performance.

Apple’s AI plans

Siri

(Image credit: Unsplash [Omid Armin])

This is apparently just the latest move Apple has made in the AI arena. Thanks to a series of leaks and statements from Apple CEO Tim Cook, the company is known to be making serious efforts to challenge AI market leaders like OpenAI and Microsoft.

For instance, it’s been widely reported that Apple will soon unveil its own generative AI tool, which has been dubbed Ajax and AppleGPT during its development process. This could give a major boost to Apple’s Siri assistant, which has long lagged behind competitors such as Amazon Alexa and Google Assistant. As well as that, we could see generative AI tools debut in apps like Pages and Apple Music, rivaling products like Microsoft’s Copilot and Spotify’s AI DJ.

Tim Cook has dropped several hints regarding Apple’s plans, saying customers can expect to see a host of AI features “later this year.” The Apple chief has called AI a “huge opportunity” for his company and has said that Apple intends to “break new ground” in this area. When it comes to specifics, though, Cook has been far less forthcoming, presumably preferring to reveal all at WWDC.

It’s unknown whether Apple will have time to properly integrate DarwinAI’s tools into iOS 18 before it is announced to the world, but it seems certain it will make use of them over the coming months and years. It could be just one more piece of the AI puzzle that Apple is attempting to solve.

You might also like

TechRadar – All the latest technology news

Read More

YouTube TV's refreshed UI makes video watching more engaging for users

YouTube is redesigning its smart TV app to increase interactivity between people and their favorite channels.

In a recent blog post, YouTube described how the updated UI shrinks the main video slightly to make room for an information column housing a video’s view count, number of likes, description, and comments. Yes, despite the internet’s advice, people do read the YouTube comments section. The current layout has the same column, but it obscures the right side of the screen. YouTube states in its announcement that the redesign allows users to enjoy content “without interrupting [or ruining] the viewing experience.”

Don’t worry about this becoming the new normal. The Verge notes in its coverage that the full-screen view will remain; it won’t be supplanted by the refresh or removed as the default setting. You can switch to the revamped interface at any time from within the video player screen, so it’s entirely up to viewers how they want to curate their experience.

Varying content

What you see on the UI’s column can differ depending on the type of content being watched. In the announcement, YouTube demonstrates how the layout works by playing a video about beauty products. Below the comments, viewers can check out the specific products mentioned in the clip and buy them directly.

Shopping on YouTube TV may appear seamless; however, The Verge claims it’ll be a little awkward. Instead of buying items directly from a channel, you'll have to scan a QR code that shows up on the screen. From there, you’ll be taken to a web page where you can complete the transaction. We contacted YouTube to double-check, and a company representative confirmed that is how it’ll work.

Besides shopping, the far-right column will also display live scores and stats for sports games. It’ll be part of the existing “Views” suite of features, all of which can be found by triggering the correct on-screen filter.

The update will be released to all YouTube TV subscribers in the coming weeks. It won’t happen all at once, so keep an eye out for the patch when it arrives.

Be sure to check out TechRadar's recommendations for the best TVs for 2024 if you're looking to upgrade.


Apple’s Vision Pro successfully helps nurse assist in spinal surgery – and there’s more mixed-reality medical work on the way

In a fascinating adoption of technology, a surgical team in the UK recently used Apple’s Vision Pro to help with a medical procedure.

It wasn’t a surgeon who donned the headset, but Suvi Verho, the lead scrub nurse (also known as a theater nurse) at the Cromwell Hospital in London. Scrub nurses help surgeons by providing them with all the equipment and support they need to complete an operation – in this case, it was a spinal surgery. 

Verho told The Daily Mail that the Vision Pro used an app made by software developer eXeX to float “superimposed virtual screens in front of [her displaying] vital information”. The report adds that the mixed reality headset was used to help her prepare, keep track of the surgery, and choose which tools to hand to the surgeon. There’s even a photograph of the operation itself in the publication. 

Vision Pro inside surgery room

(Image credit: Cromwell Hospital/The Daily Mail)

Verho sounds like a big fan of the Vision Pro, stating, perhaps somewhat hyperbolically, “It eliminates human error… [and] guesswork”. Even so, anything that ensures operations go as smoothly as possible is A-OK in our book.

Syed Aftab, the surgeon who led the procedure, also had several words of praise. He had never worked with Verho before, yet he said the headset turned an unfamiliar scrub nurse “into someone with ten years’ experience” working alongside him.

Mixed reality support

eXeX specializes in upgrading hospitals by implementing mixed reality, and this isn’t the first time one of its products has been used in an operating room. Last month, American surgeon Dr. Robert Masson used the Vision Pro with eXeX’s app to help him perform a spinal procedure. Again, it doesn’t appear he physically wore the headset, although his assistants did. They used the device to follow procedural guides from inside a sterile environment, something that was previously deemed “impossible.”

Dr. Masson had his own words of praise, stating that the combination of the Vision Pro and the eXeX tool enabled an “undistracted workflow” for his team. It hasn’t been confirmed exactly which software was used; however, the company’s website suggests that both Dr. Masson’s team and Nurse Verho utilized ExperienceX, a mixed-reality app giving technicians “a touch-free heads up display”.

Apple's future in medicine

The Vision Pro’s future in medicine won’t just be for spinal surgeries. In a recent blog post, Apple highlighted several other medical apps harnessing visionOS. Medical corporation Stryker created myMako to help doctors plan for their patients’ joint replacement surgeries. For medical students, Cinematic Reality by Siemens Healthineers offers “interactive holograms of the human body”.

These two and more are available for download from the App Store, although some of the software requires a connection to the developer’s platform to work. You can download them if you want to, but keep in mind they’re primarily for medical professionals.

If you're looking for a headset with a wider range of usability, check out TechRadar's list of the best VR headsets for 2024.


New Rabbit R1 demo promises a world without apps – and a lot more talking to your tech

We’ve talked about the Rabbit R1 before here on TechRadar: an ambitious little pocket-friendly device that contains an AI-powered personal assistant, capable of doing everything from curating a music playlist to booking you a last-minute flight to Rome. Now, the pint-sized companion tool has been shown demonstrating its note-taking capabilities.

The latest demo comes from Rabbit Inc. founder and CEO Jesse Lyu on X, and shows how the R1 can be used for note-taking and transcription via some simple voice controls. The video shows that note-taking can be started with a short voice command and ended with a single button press.


It’s a relatively early tech demo – Lyu notes that it “still need bit of touch” [sic] – but it’s a solid demonstration of Rabbit Inc.’s objectives when it comes to user simplicity. The R1 has very little in terms of a physical interface, and doubles down by having as basic a software interface as possible: there’s no Android-style app grid in sight here, just an AI capable of connecting to web apps to carry out tasks.

Once you’ve recorded your notes, you can either view a full transcription, see an AI-generated summary, or replay the audio recording (the latter of which requires you to access a web portal). The Rabbit R1 is primarily driven by cloud computing, meaning that you’ll need a constant internet connection to get the full experience.

Opinion: A nifty gadget that might not hold up to criticism

As someone who personally spent a lot of time interviewing people and frantically scribbling down notes in my early journo days, I can definitely see the value of a tool like the Rabbit R1. I’m also a sucker for purpose-built hardware, so despite my frequent reservations about AI, I truly like the concept of the R1 as a ‘one-stop shop’ for your AI chatbot needs.

My main issue is that this latest tech demo doesn’t actually do anything I can’t do with my phone. I’ve got a Google Pixel 8, and nowadays I use the Otter.ai app for interview transcriptions and voice notes. It’s not a perfect tool, but it does the job as well as the R1 can right now.

Rabbit R1

The Rabbit R1’s simplicity is part of its appeal – though it does still have a touchscreen. (Image credit: Rabbit)

As much as I love the Rabbit R1’s charming analog design, it’s still going to cost $199 (£159 / around AU$300) – and I just don’t see the point in spending that money when the phone I’ve already paid for can do all the same tasks. An AI-powered pocket companion sounds like an excellent idea on paper, but when you look at the current widespread proliferation of AI tools like Windows Copilot and Google Gemini in our existing tech products, it feels a tad redundant.

The big players such as Google and Microsoft aren’t about to stop cramming AI features into our everyday hardware anytime soon, so dedicated AI gadgets like Rabbit Inc.’s dinky pocket helper will need to work hard to prove themselves. The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future. And yet, as our Editor-in-Chief Lance Ulanoff puts it, I might still end up loving the R1…


Windows Copilot will soon allow you to edit photos, shop instantly, and more

Ever since its reveal and launch, Microsoft Copilot has been getting a steady stream of features and an upcoming update will add even more. The latest update, detailed in the official Windows blog, will arrive in late March 2024 and will introduce tons of new skills and tools. 

For instance, you'll be able to type commands to activate certain PC features. Simply type something like “enable battery saver” or “turn off battery saver” and Copilot will take the appropriate action and confirm its completion.

screenshot of Windows Copilot features

(Image credit: Microsoft)

There’s also a new Generative Erase feature in the Photos app that allows you to select and remove unwanted objects or imperfections from your images. Copilot will also receive new accessibility features including Voice Shortcuts, which lets you create custom commands using just a single phrase. You can also now use voice commands on a multi-display setup to better navigate between displays or move files and apps.

New plugins are also coming to Copilot, allowing easy access to various applications in an instant. Shopify, Klarna and Kayak will be added in March, adding to the Copilot features offered via OpenTable and Instacart.

Windows Copilot is finally getting there…

Some previous updates to Windows Copilot have given the tool some serious utility. For instance, you can now use it to generate and edit AI images using text-to-image prompts, powered by DALL-E. An update to this tool, Designer, takes it even further by letting you make tweaks to generated content, like highlighting certain aspects, blurring the background, or adding a unique filter.

There was also another very useful plugin added to Copilot recently, Power Automate. It lets users automate repetitive and tedious tasks like creating and manipulating entries in Excel, managing PDFs, and other file management.

Slowly but surely, Windows Copilot is getting more useful, with tons of new features and improvements that make it worth having around. Maybe it will even make Windows 11 a worthwhile upgrade for those who haven’t taken the plunge yet and are still looking at Windows 10.


The Meta Quest Pro 2 could be a wearable LG OLED TV, and I couldn’t be more excited


  • Meta and LG confirm collaboration on “extended reality (XR) ventures”
  • This could mean a future Meta Quest Pro 2 uses an LG display
  • Announcement also hints at team-up for “content/service capabilities”

Following months of speculation and rumors – the most recent of which came literally days ahead of an official announcement – Meta and LG have confirmed that they’ll be collaborating on next-gen XR hardware (with XR standing for 'extended reality' and being a catchall for VR, MR and AR). And I couldn’t be more excited to see what their Apple Vision Pro rival looks like.

While they didn’t expressly outline what the collaboration entails, or what hardware they’ll be working on together, it seems all but guaranteed that Meta’s next VR headset – likely the Meta Quest Pro 2, but maybe the Meta Quest 4 or another future model – will use LG’s display tech for its screens. This means Meta might finally release another OLED Quest headset, which promises some superb visuals for our favorite VR software.

Unfortunately, there’s also no mention of a timescale, so we don’t know when the first LG and Meta headset will be released. But several recent rumors have suggested that the next Quest headset (probably the Pro 2) will launch in 2025, so we could see LG tech in Meta hardware next year if we’re lucky.

The Meta Quest Pro

Forget an iPad for your face, the next Meta headset could be an LG TV (Image credit: Meta)

We should always take rumors with a pinch of salt, but these same leaks teased the LG collaboration – so there’s a good chance that they’re on the money for the release date, too.

Beyond the potential for OLED Quest headsets, what’s particularly interesting is a line in the press release that mentions the desire for the companies to bring together “Meta’s platform with [LG’s] content/service capabilities.” To me, that hints at more than simply working together on hardware, but also bringing the LG TV software experience to your Meta headset as well.

More than just an OLED screen

Exactly what this means is yet to be seen, but it could result in a whole host of TV apps reimagined for VR. For Meta, this could importantly mean finally getting VR apps for the best streaming services, including Disney Plus, Paramount Plus and Apple TV Plus – as well as working apps for Netflix, Prime Video and other services whose Quest software is practically non-functional.

These kinds of streaming apps are the one massive software area in which Meta has no answer to the Apple Vision Pro.

I’ve previously asked Meta if it had plans to bring more streaming services to Quest and a representative told me it had “no additional information to share at this time.” I hoped this meant it had some kind of reveal on the way in the near future, and it appears this LG announcement has answered my calls.

The Disney app running on the Apple Vision Pro

Disney Plus is a sight to behold on the Vision Pro (Image credit: Apple)

That said, while the press release certainly teases some interesting collaborations, until we actually see something in action there’s no telling what form Meta and LG’s partnership will take because the announcement is (no doubt intentionally) a little vague.

There’s also a chance the LG-powered TV apps won’t offer the same 3D movie selection or immersive environments found on Vision Pro. Depending on how the apps are implemented, 3D video might not be possible – or perhaps Apple has an exclusive deal for content with these apps on its Vision Pro platform.

Regardless, I’m pretty excited by the potential this announcement brings, as it appears to answer two of my four biggest Meta Quest Pro 2 feature requests. Here’s hoping the other two features follow suit. If they do (and the device isn’t astronomically pricey), the Meta Quest Pro 2 could be my new favorite VR headset by a landslide.


Microsoft Paint update could make it even more Photoshop-like with handy new tools

Microsoft Paint received a plethora of new features late last year, introducing layers, a dark mode, and AI-powered image generation. These updates brought Microsoft Paint up to speed with the rest of Windows 11's modern look and feel after years of virtually no meaningful upgrades, and it looks like Microsoft still has plans to add even more features to the humble art tool.

X user @PhantomOfEarth made a post highlighting potential changes spotted in the Canary Development channel, and we could see these new features implemented in Microsoft Paint very soon. The Canary Dev channel is part of the Microsoft Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way. 


We do have to take the features we see in such developer channels with a pinch of salt, as it’s common for a cool upgrade or new piece of software to appear in the channel but never actually make it out of the development stage. That being said, PhantomOfEarth originally spotted the big changes that came to Windows 11 Paint last year in the same channel, so there’s a good chance that the brush size slider and layer panel update now present in the Canary build will come to fruition in a public update soon.

Show my girl Paint some love

It’s great to see Microsoft continue to show some love for the iconic Paint app, as it had been somewhat forgotten about for quite some time. It seems like the company has finally taken note of the app's charm, as many of us can certainly admit to holding a soft spot for Paint and would hate to see it abandoned. I have many memories of using Paint: as a child in IT class learning to use a computer for the first time, or firing it up to do some casual scribbles while waiting for my family’s slow Wi-Fi to connect.

These proposed features won’t make Paint the next Photoshop (at least for now), but they do bring the app closer to being a simple, free art tool that most everyday people will have access to. Cast your mind back to the middle of last year, when Photoshop introduced image generation capabilities – if you wanted to use them, you’d have to have paid for Adobe Firefly access or a Photoshop license. Now, if you’re looking to do something quick and simple with AI image-gen, you can do it in Paint. 

Better brush size control and layers may not seem like the most important or exciting new features, especially compared to last year's overhaul of Windows Paint, but it is proof that the team at Microsoft is still thinking about Paint. In fact, the addition of a proper layers panel will do a lot to justify the program’s worth to digital artists. It could also be the beginning of a new direction for Paint if more people flock back to the revamped app. I hope that Microsoft continues to improve it – just so long as it remains a free feature of Windows.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built on the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new open-source model, Google has also put out a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, which have both been pre-trained to filter out sensitive or personal information. Both versions of the model have also been tested with reinforcement learning from human feedback, to significantly reduce the likelihood of any chatbots built on Gemma spitting out harmful content.

A step in the right direction

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.


Google has fixed an annoying Gemini voice assistant problem – and more upgrades are coming soon

Last week, Google rebranded its Bard AI bot as Gemini (matching the name of the model it runs on) and pushed out an Android app in the US. While the new app has brought a few frustrations with it, Google is now busy trying to fix the major ones.

You can, if you want, use Google Gemini as a replacement for Google Assistant on your Android phone – and Google has made this possible even though Gemini lacks a lot of the basic digital assistant features that users have come to rely on.

One problem has now been fixed: originally, when chatting to Gemini using your voice, you had to manually tap on the 'send' arrow to submit your command or question – when you're trying to keep up a conversation with your phone, that really slows everything down.

As per 9to5Google, that's no longer the case, and Google Gemini will now realize that you've stopped talking (and respond accordingly) in the same way that Google Assistant always has. It makes the app a lot more intuitive to use.

Updates on the way


What's more, Google Gemini team member Jack Krawczyk has posted a list of features that engineers are currently working on – covering some pretty basic functionality, including the ability to interact with your Google Calendar and reminders.

A coding interpreter is apparently also on the roadmap, which means Gemini would not just be able to produce programming code, but also to emulate how it would run – all within the same app. Additionally, the Google Gemini team is working to remove some of the “preachy guardrails” that the AI bot currently has.

The “top priority” is apparently refusals – instances where Gemini declines to complete a task or answer a question. We've seen Reddit posts suggesting the AI bot will sometimes apologetically report that it can't help with a particular prompt – something that's clearly on Google's radar in terms of rolling fixes out.

Krawczyk says the Android app is coming to more countries in the coming days and weeks, and will be available in Europe “ASAP” – and he's also encouraging users to keep the feedback to the Google team coming.


iOS 17.4 might give you more options for turning off those FaceTime reactions

The FaceTime video reactions Apple introduced in iOS 17 are kind of cool – fireworks when you show two thumbs up, and so on – but you don't necessarily want them going off on every call. Now it looks as though Apple is about to make the feature less prominent.

As per MacRumors, with the introduction of iOS 17.4 and iPadOS 17.4, third-party video calling apps will be able to turn the reactions off by default. In other words, you won't suddenly find balloons filling the screen on a serious call with your boss.

That “by default” is the crucial bit – at the moment, whenever you fire up FaceTime or another video app for the first time, these reactions will be enabled. You can turn them off (and they will then stay off for that app), but you need to remember to do it.

The move also means third-party developers get more control over the effects that are applied at a system level. As The Verge reports, one telehealth provider has already taken the step of informing users that it has no control over these reactions.

Coming soon

FaceTime reactions

A thumbs down is another reaction you can use (Image credit: Apple)

This extra flexibility is made possible through what's called an API, or Application Programming Interface – a way for apps to interact with operating systems. It would mean the iOS or iPadOS setting no longer dictates the setting for every other video app.

The changes have been spotted in the latest beta versions of iOS 17.4 and iPadOS 17.4, though there's no guarantee that they'll stay there when the final version of the software rolls out. As yet it's not clear if the same update will be applied to macOS.

iOS 17.3 was pushed out on January 22, so we shouldn't have too much longer to wait to see its successor. Among the iOS 17.4 features in the pipeline, based on the beta version, we've got game streaming apps and automatic transcripts for your podcasts.

Apple will be hoping that a new version helps to encourage more people to actually install iOS 17 too. Uptake has been slower than it was with iOS 16, with users citing bugs and a lack of new features as reasons not to apply the update.
