Apple could be working on a new AI tool that animates your images based on text prompts

Apple may be working on a new artificial intelligence tool that will let you create basic animations from your photos using a simple text prompt. If the tool comes to fruition, you’ll be able to turn any static image into a brief animation just by typing in what you want it to look like. 

According to 9to5Mac, Apple researchers have published a paper that details techniques for manipulating and animating image graphics using text commands. The tool, dubbed Keyframer, would let you steer the proposed AI system with natural-language instructions telling it how to manipulate a given image and animate it. 

Say you have a photo of the view from your window, with trees in the background and even cars driving past. From what the paper suggests, you’ll be able to type commands such as ‘make the leaves move as if windy’ into the Keyframer tool, which will then animate the specified part of your photo.
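To make that concrete, here's a minimal sketch of how a Keyframer-style pipeline could work, assuming (as the paper's prototype reportedly does) that the input is a vector image and the output is ordinary CSS keyframe animation generated by a language model. The `call_llm` stub and the prompt wording are illustrative placeholders, not Apple's actual implementation:

```python
# Illustrative sketch of a Keyframer-style text-to-animation pipeline.
# call_llm and the prompt wording are hypothetical, not Apple's code.

def call_llm(prompt: str) -> str:
    """Stand-in for whatever language model generates the animation code."""
    raise NotImplementedError("plug in a language model here")

def animate_svg(svg_source: str, user_request: str) -> str:
    """Ask the model for CSS @keyframes targeting the SVG's element ids."""
    prompt = (
        "Given this SVG and a request, return only CSS keyframe "
        "animations that target the SVG's element ids.\n\n"
        f"SVG:\n{svg_source}\n\nRequest: {user_request}"
    )
    css = call_llm(prompt)
    # Inline the generated CSS so the SVG animates when rendered.
    return svg_source.replace("</svg>", f"<style>{css}</style></svg>")

# 'make the leaves move as if windy' might plausibly come back as:
#   #leaves { animation: sway 2s ease-in-out infinite alternate; }
#   @keyframes sway { from { transform: rotate(-2deg); } to { transform: rotate(2deg); } }
```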

You may recognize the term ‘keyframe’ if you’re an Apple user, as a similar idea already exists in Apple’s Live Photos feature – which lets you scrub through the short clip captured around your shot and pick the frame, the key photo, that serves as the actual still image. 

Better late than never? 

Apple has been notably slow to jump on the AI bandwagon, but that’s not exactly surprising. The company is known to play the long game and let others iron out the kinks before it makes its move, as we’ve seen with its recent foray into mixed reality with the Apple Vision Pro (this is also why I have hope for a foldable iPhone coming soon). 

I’m quite excited for the Keyframer tool if it does come to fruition, because it’ll put basic animation tools into the hands of every iPhone user – including those who might not know where to even start with animation, let alone how to make their photos move.

Overall, the direction Apple seems to be taking with AI tools is a positive one. Keyframer comes hot on the heels of Apple’s AI-powered image editing tool, reinforcing a focus on improving the user experience rather than simply mirroring the competition from companies like OpenAI, Microsoft, and Google.

I’m personally glad to see that Apple’s dive into the world of artificial intelligence isn’t just another AI chatbot like ChatGPT or Google Gemini, but rather a focus on tools that offer genuinely new features for iOS and macOS. While this project is still in its very early stages, I’m pretty hyped about the idea of making funny little clips of my cat being silly, or creating moving memories of my friends, with just a few word prompts. 

As for when we’ll get our hands on Keyframer, unfortunately there’s no release date in sight just yet – but based on previous feature launches, Apple willingly revealing details at this stage indicates that it’s probably not too far off, and more importantly isn’t likely to get tossed aside. After all, Apple isn’t Google.


An Apple Vision Pro Pencil could be on the way, based on a new patent

We now know that the Apple Vision Pro is going to be available to buy from February 2, with preorders opening on January 19, but a newly discovered patent suggests that Apple engineers are busy looking to the future of the mixed reality headset.

The patent, spotted by Patently Apple, shows what looks like a giant Apple Pencil. The idea is that the implement could be used to write or draw in virtual space, or to select and manipulate items that exist in virtual or augmented reality.

“A handheld controller with a marker-shaped housing may have an elongated housing that spans across the width of a user's hand and that can be held like a pen, pencil, marker, wand, or tool,” reads part of the patent application.

And while a “head-mounted device” like the Vision Pro is extensively referenced in the documentation, this peripheral could also be used with phones, tablets, laptops, smartwatches, and other devices, according to the patent.

[Image: Apple Vision Pro Pencil patent diagram – like the Apple Pencil… but bigger (Image credit: USPTO/Apple)]

Virtual paint brushes

One of the cool little tricks this implement might offer, per the patent, is the ability to display a range of “brush heads” that are only visible in AR – so inside the Vision Pro headset it could switch from looking like a paintbrush to a spray can, for instance.

The filing mentions swipes, waves, and shakes, as well as writing and drawing, so there's clearly plenty of potential for whatever this device turns out to be. All of these movements would be measured via sensors built into the pencil itself.
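The patent doesn't describe how those movements would be recognized, but as a purely illustrative sketch (the thresholds and gesture names below are invented, not from the filing), gesture detection from an in-pencil motion sensor can be as simple as looking at the level and variability of acceleration over a short window:

```python
# Purely illustrative: a toy classifier for pencil gestures from a short
# window of accelerometer magnitudes (in g). Thresholds are invented.
from statistics import mean, stdev

def classify_gesture(accel_magnitudes: list[float]) -> str:
    """Guess a gesture from the level and variability of recent motion."""
    if len(accel_magnitudes) < 2:
        return "idle"
    avg, spread = mean(accel_magnitudes), stdev(accel_magnitudes)
    if avg > 2.5 and spread > 1.0:
        return "shake"  # large, erratic motion
    if spread > 0.5:
        return "swipe"  # one quick directional burst
    if avg > 1.2:
        return "wave"   # sustained gentle motion
    return "idle"

print(classify_gesture([3.0, 3.4, 0.4, 3.1, 3.5]))  # -> shake
```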

Apple has made a point of saying that you only need your fingers and hands to manipulate the software environment inside the Vision Pro headset, but we've also seen patents suggesting that alternative input methods are in the works.

The usual patent disclaimers apply here: these filings give us some ideas about the tech that companies are developing and thinking about, but at the same time there's no guarantee that an actual product will result from them.


Meta Builder Bot concept happily builds virtual worlds based on voice description

The Metaverse – that immersive virtual world where Meta (née Facebook) imagines we'll work, play, and interact with friends and family – is also where we may someday build entire worlds with nothing but our voice.

During an online AI development update delivered, in part, by Meta/Facebook Founder and CEO Mark Zuckerberg on Wednesday (February 23), the company offered a glimpse of Builder Bot, an AI concept that allows the user to build entire virtual experiences using their voice.

Standing in what looked like a stripped-down version of Facebook's Horizon Worlds, Zuckerberg's and a co-worker's avatars asked a virtual bot to add an island, some furniture, clouds, a catamaran, and even a boombox that could play real music in the environment. In the demonstration, the command phrasing was natural and the 3D virtual imagery appeared instantly, though it did look a bit like the graphics you'd find in Nintendo's Animal Crossing: New Horizons.
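Meta hasn't said how Builder Bot turns speech into scene edits, but at its core the demo amounts to intent parsing: transcribe the command, pick out an action and a known asset, and spawn the matching 3D object. Here's a deliberately naive sketch – the asset catalog, `Scene` class, and matching logic are all invented for illustration:

```python
# Toy illustration of voice-command-to-scene-edit parsing; not Meta's code.
# The asset catalog, Scene class, and matching logic are invented.

ASSET_CATALOG = {"island", "furniture", "clouds", "catamaran", "boombox"}

class Scene:
    def __init__(self) -> None:
        self.objects: list[str] = []

    def add(self, asset: str) -> None:
        self.objects.append(asset)
        print(f"Spawned '{asset}' into the world")

def handle_command(scene: Scene, transcript: str) -> None:
    """Very naive intent parsing: an action word plus a known asset name."""
    words = transcript.lower().split()
    if not {"add", "put", "create"} & set(words):
        return  # no build intent detected
    for asset in ASSET_CATALOG:
        if asset in words:
            scene.add(asset)

scene = Scene()
handle_command(scene, "Can you add an island over there?")  # -> Spawned 'island'
handle_command(scene, "Now put a boombox on the beach")     # -> Spawned 'boombox'
```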

The development of Builder Bot is part of a larger AI initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants. 

[Image: Meta's Builder Bot concept – Mark Zuckerberg’s legless avatar and Builder Bot (Image credit: Future)]

Zuckerberg explained that current technology is not yet equipped to help us explore an immersive version of the internet that will ultimately live in the Metaverse. While that will require updates across a whole range of hardware and software, Meta believes AI is the key to unlocking the advances that will lead to, as Zuckerberg put it, “a new generation of assistants that will help us explore new worlds”.

“When we’re wearing [smart] Glasses, it will be the first time an AI system will be able to see the world from our perspective,” he added. A key goal is for the AI Meta is developing to see the world as we do and, more importantly, to learn about the world as we do.

It's unclear whether Builder Bot will ever become a real part of the burgeoning Metaverse, but its real-time language processing and its grasp of how the parts of an environment should fit together clearly draw on Meta's wider AI research.

[Image: Mark Zuckerberg talks AI translation (Image credit: Future)]

Zuckerberg outlined a handful of other related AI projects, all of which will eventually feed into a Metaverse that can be accessed and used by anyone in the world.

These include “No Language Left Behind,” which, unlike traditional translation systems that often route through English as an intermediate step, can translate directly from the source language to the target language. There's also the very Star Trek-like “Universal Speech Translator”, which would provide instantaneous speech-to-speech translation across all languages, including those that are primarily spoken and lack a standard writing system.
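The difference is easy to show in code. Routing through English composes two model calls and compounds their errors, while a direct many-to-many model makes a single hop. The `translate` stub below is a hypothetical stand-in, since the models themselves aren't callable here:

```python
# Conceptual contrast between pivot and direct translation. The translate()
# stub is hypothetical; real systems would be trained many-to-many models.

def translate(text: str, src: str, tgt: str) -> str:
    """Stand-in for a machine translation model call."""
    return f"[{src}->{tgt}] {text}"

def pivot_translate(text: str, src: str, tgt: str) -> str:
    # Traditional approach: route through English, compounding errors.
    return translate(translate(text, src, "en"), "en", tgt)

def direct_translate(text: str, src: str, tgt: str) -> str:
    # No Language Left Behind-style approach: one hop, source to target.
    return translate(text, src, tgt)

print(pivot_translate("hola", "es", "sw"))   # [en->sw] [es->en] hola
print(direct_translate("hola", "es", "sw"))  # [es->sw] hola
```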

“AI is going to deliver that in our lifetimes,” said Zuckerberg.

[Image: Mark Zuckerberg talks image abstraction (Image credit: Future)]

Meta is also investing heavily in self-supervised learning (SSL) to build human-like cognition into AI systems. Instead of training on mountains of labeled examples to help the AI identify patterns, the system is fed raw, unlabeled data with portions hidden, and asked to predict the missing parts. Eventually, the AI learns how to build abstract representations.
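In code, that masked-prediction recipe is compact. Here's a minimal sketch in plain PyTorch – the tiny network, mask ratio, and random "images" are all stand-ins for illustration, not Meta's actual architecture or data:

```python
# Minimal masked-prediction loop in PyTorch: hide patches, reconstruct them.
import torch
import torch.nn as nn

# Toy encoder-decoder over 32x32 RGB images; real SSL models are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    images = torch.rand(8, 3, 32, 32)                 # stand-in for raw, unlabeled data
    mask = (torch.rand(8, 1, 32, 32) > 0.25).float()  # 1 = visible, 0 = hidden (~25% hidden)
    corrupted = images * mask                         # zero out the hidden regions
    recon = model(corrupted)
    # Score the model only on the parts it couldn't see.
    loss = ((recon - images) ** 2 * (1 - mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```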

An AI that can understand abstraction could complete an image just from a few pieces of visual information, or generate the next frame of a video it's never seen. It could also build a visually pleasing virtual world with only your words to guide it.

For those full-on freaked out by Meta's Metaverse ambitions, Zuckerberg said that the company is building the Metaverse for everyone, and that it is “committed to build openly and responsibly” while protecting privacy and preventing harm.

It's unlikely anyone will take his word for it, but we look forward to watching the Metaverse's development.
