Microsoft’s VASA-1 AI video generation system can make lifelike avatars that speak volumes from a single photo

AI-generated video is already a reality, and now another player has joined the fray: Microsoft. The tech giant has developed a generative AI system that can whip up a realistic talking avatar from a single picture and an audio clip. The tool is named VASA-1, and it goes beyond mimicking mouth movement: it can capture lifelike emotions and produce natural-looking head movements as well.

The system lets users adjust the subject’s eye gaze, the perceived distance to the camera, and the emotions expressed. VASA-1 is the first model in what is rumored to be a series of AI tools, and MSPowerUser reports that it can conjure up specific facial expressions, achieve highly accurate lip synchronization, and produce human-like head motions.

It offers a wide range of emotions to choose from and can generate subtle facial nuances, which sounds like it could make for a scarily convincing result.

How VASA-1 works and what it's capable of

Seemingly taking a cue from how human 3D animators and modelers work, VASA-1 uses a process its creators call ‘disentanglement’, which lets the system control and edit facial expressions, the 3D head position, and facial features independently of one another. This separation is what powers VASA-1’s realism.
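Microsoft hasn’t released VASA-1’s code, so purely as a conceptual sketch, here’s roughly what a disentangled control interface could look like in Python – every name below (ControlLatents, with_new_pose, and so on) is hypothetical, not Microsoft’s API:

```python
from dataclasses import dataclass, replace

# Hypothetical illustration of 'disentangled' controls: each factor
# occupies its own latent slot, so editing one leaves the rest intact.
# None of these names come from Microsoft's actual system.

@dataclass(frozen=True)
class ControlLatents:
    identity: tuple    # appearance, extracted once from the source photo
    expression: tuple  # facial expression (smile, frown, ...)
    head_pose: tuple   # 3D head position and rotation
    gaze: tuple        # eye direction

def with_new_pose(latents: ControlLatents, pose: tuple) -> ControlLatents:
    # Because the factors are independent, swapping the head pose
    # cannot accidentally change identity, expression, or gaze.
    return replace(latents, head_pose=pose)
```

The point of the sketch is the design idea itself: if each property lives in its own slot, an editor can dial one up or down without the others drifting, which is why the results stay coherent.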

As you might be imagining already, this has seismic potential, and could totally change how we experience digital apps and interfaces. According to MSPowerUser, VASA-1 can also produce videos unlike anything it was trained on: the system wasn’t trained on artistic photos, singing voices, or non-English speech, but if you request a video featuring one of these, it’ll oblige.

The Microsoft researchers behind VASA-1 highlight its real-time efficiency, stating that the system can produce fairly high-resolution videos (512×512 pixels) at high frame rates. Frame rate, measured in frames per second (fps), is the frequency at which a series of images (frames) is captured or displayed in succession. The researchers claim that VASA-1 can generate video at 45 fps in offline mode and 40 fps in online (streaming) generation.
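To put those numbers in perspective, here’s a little back-of-the-envelope arithmetic (mine, not Microsoft’s) showing the time budget each frame implies:

```python
# Per-frame time budget implied by the reported frame rates.
# Simple arithmetic based on the claimed figures, not a benchmark.

for mode, fps in [("offline", 45), ("online", 40)]:
    budget_ms = 1000 / fps  # milliseconds available per frame
    print(f"{mode}: ~{budget_ms:.1f} ms per frame, {fps * 10} frames per 10 s clip")
```

In other words, the system has only around 22–25 milliseconds to synthesize each 512×512 frame, which is why the researchers can describe it as real-time capable.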

You can check out the state of VASA-1 and learn more about it on Microsoft’s dedicated webpage for the project. The page has several demonstrations and links to further information, and it ends with a section headlined ‘Risks and responsible AI considerations.’

Works like magic – but is it a miracle spell or a recipe for disaster?

In that closing section, Microsoft acknowledges that a tool like this has plenty of scope for misuse, but the researchers try to emphasize VASA-1’s potential positives. They’re not wrong; a technology like this could mean next-level educational experiences that are available to more students than ever before, better assistance for people who have difficulty communicating, the capability to provide companionship, and improved digital therapeutic support.

All of that said, it would be foolish to ignore the potential for harm and wrongdoing with something like this. Microsoft states that it has no plans to make VASA-1 publicly available in any form until it’s confident that “the technology will be used responsibly and in accordance with proper regulations.” If Microsoft sticks to this ethos, I think it could be a long wait.

All in all, I think it’s becoming hard to deny that generative AI video tools are going to become more commonplace, and the countdown to when they saturate our lives has begun. Google has been working on an analogous AI system called VLOGGER, and recently put out a paper detailing how VLOGGER can create realistic videos of people moving, speaking, and gesturing from a single photo.

OpenAI also made headlines recently by introducing its own AI video generation tool, Sora, which can generate videos from text descriptions. OpenAI explained how Sora works on a dedicated page, and provided demonstrations that impressed a lot of people – and worried even more. 

I am wary of what these innovations will enable us to do, and I’m glad that, as far as we know, all three of these new tools are being kept tightly under wraps. I think realistically the best guardrails we have against the misuse of technologies like these are airtight regulations, but I’m doubtful that all governments will take these steps in time. 


This scary AI breakthrough means you can run but not hide – how AI can guess your location from a single image

There’s no question that artificial intelligence (AI) is in the process of upending society, with ChatGPT and its rivals already changing the way we live our lives. But a new AI project has just emerged that can pinpoint where almost any photo was taken – and it has the potential to become a privacy nightmare.

The project, dubbed Predicting Image Geolocations (or PIGEON for short), was created by three students at Stanford University and was designed to find where images from Google Street View were taken. But when fed personal photos it had never seen before, it was able to locate them too, usually with a high degree of accuracy.
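PIGEON’s code hasn’t been released, but systems of this kind commonly treat geolocation as a matching problem over a grid of geographic cells, picking the cell whose learned prototype best fits the image’s embedding. The sketch below illustrates that general idea only – the embed() stand-in and the handful of cells are hypothetical, not PIGEON’s actual components:

```python
import numpy as np

# Illustrative only: geolocation framed as picking the most likely
# geographic cell for an image embedding. embed() is a stand-in for
# a real image encoder; the cells are toy examples, not PIGEON's.

CELL_CENTROIDS = {            # (latitude, longitude) per coarse cell
    "paris":  (48.86, 2.35),
    "tokyo":  (35.68, 139.69),
    "sydney": (-33.87, 151.21),
}

def embed(image_path: str) -> np.ndarray:
    # Stand-in for a real image encoder (e.g. a CLIP-style model).
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.standard_normal(512)

CELL_PROTOTYPES = {name: embed(name) for name in CELL_CENTROIDS}

def guess_location(image_path: str) -> tuple:
    query = embed(image_path)
    best = max(
        CELL_PROTOTYPES,
        key=lambda cell: float(query @ CELL_PROTOTYPES[cell]),  # similarity
    )
    return CELL_CENTROIDS[best]
```

The unsettling part is that nothing here depends on metadata: the prediction comes entirely from what’s visible in the image.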

According to NPR, Jay Stanley of the American Civil Liberties Union says the technology has serious privacy implications, including government surveillance, corporate tracking, and stalking. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.

Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”

A double-edged sword

Before we start getting the pitchforks ready, it’s worth remembering that this technology might also have a range of positive uses, if deployed responsibly. For instance, it could be used to identify places in need of roadworks or other maintenance. Or it could help you plan a holiday: where in the world could you go to see landscapes like those in your photos? There are other uses, too, from education to monitoring biodiversity.

Like many recent advances in AI, it’s a double-edged sword. Generative AI can be used to help a programmer debug code to great effect, but could also be used by a hacker to refine their malware. It could help you drum up ideas for a novel, but might assist someone who wants to cheat on their college coursework.

But anything that helps identify a person’s location in this way could be extremely problematic in terms of personal privacy – and have big ramifications for social media. As Stanley argued, it’s long been possible to remove geolocation data from photos before you upload them. Now, that might not matter anymore.
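As a practical aside, stripping that geolocation data remains easy to do yourself. One way, using the widely available Pillow library, is to re-save just the pixels so the EXIF block (GPS coordinates included) is left behind – though, as Stanley notes, the image content itself may now be enough to give you away:

```python
from PIL import Image  # pip install Pillow

def strip_metadata(src: str, dst: str) -> None:
    """Save a copy of an image with no EXIF (including GPS) metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixel data only
        clean.save(dst)

strip_metadata("holiday.jpg", "holiday_clean.jpg")  # example filenames
```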

What’s clear is that some sort of regulation is desperately needed to prevent wider abuses, while the companies making AI tech must work to prevent damage caused by their products. Until that happens, it’s likely we’ll continue to see concerns raised over AI and its abilities.
