This scary AI breakthrough means you can run but not hide – how AI can guess your location from a single image

There’s no question that artificial intelligence (AI) is in the process of upending society, with ChatGPT and its rivals already changing the way we live our lives. But a new AI project has just emerged that can pinpoint where almost any photo was taken – and it has the potential to become a privacy nightmare.

The project, dubbed Predicting Image Geolocations (or PIGEON for short), was created by three students at Stanford University and was designed to work out where images from Google Street View were taken. But when fed personal photos it had never seen before, it was even able to locate those too, usually with a high degree of accuracy.

According to NPR, Jay Stanley of the American Civil Liberties Union says the technology has serious privacy implications, including government surveillance, corporate tracking and stalking. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.

Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”

A double-edged sword

(Image: Google Maps. Credit: Google)

Before we start getting the pitchforks ready, it’s worth remembering that this technology might also have a range of positive uses, if deployed responsibly. For instance, it could be used to identify places in need of roadworks or other maintenance. Or it could help you plan a holiday: where in the world could you go to see landscapes like those in your photos? There are other uses, too, from education to monitoring biodiversity.

Like many recent advances in AI, it’s a double-edged sword. Generative AI can be used to help a programmer debug code to great effect, but could also be used by a hacker to refine their malware. It could help you drum up ideas for a novel, but might assist someone who wants to cheat on their college coursework.

But anything that helps identify a person’s location in this way could be extremely problematic in terms of personal privacy – and have big ramifications for social media. As Stanley argued, it’s long been possible to remove geolocation data from photos before you upload them. Now, that might not matter anymore.
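
To illustrate what that metadata scrubbing looks like in practice, here's a minimal sketch, assuming Python with the Pillow imaging library installed (the file names are placeholders): it re-saves a photo from its pixel data alone, so the EXIF block – including any GPS tags – never makes it into the copy you share.

```python
# Minimal sketch (assumes the Pillow library): strip EXIF metadata, including
# GPS location tags, by rebuilding the image from its pixels alone.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # fresh image, no metadata attached
        clean.putdata(list(img.getdata()))      # copy pixel data only
        clean.save(dst_path)

# Placeholder file names, purely for illustration
strip_metadata("holiday_photo.jpg", "holiday_photo_scrubbed.jpg")
```

Rebuilding the image from raw pixels is a blunt approach – it drops every tag, not just the location – but that is exactly why it reliably removes the coordinates. The point of PIGEON-style models, of course, is that the pixels themselves may now be enough to give you away.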

What’s clear is that some sort of regulation is desperately needed to prevent wider abuses, while the companies making AI tech must work to prevent damage caused by their products. Until that happens, it’s likely we’ll continue to see concerns raised over AI and its abilities.


OpenAI’s reported ‘superintelligence’ breakthrough is so big it nearly destroyed the company, and ChatGPT

It now seems entirely possible that ChatGPT parent company OpenAI has solved the 'superintelligence' problem and is grappling with the implications for humanity.

In the aftermath of OpenAI's firing and rehiring of its co-founder and CEO Sam Altman, revelations about what sparked the move keep coming. A new report in The Information pins at least some of the internal disruption on a significant generative AI breakthrough, one that could lead to the development of something called 'superintelligence' within this decade or sooner.

Superintelligence is, as you might have guessed, intelligence that outstrips humanity, and the development of AI that's capable of such intelligence without proper safeguards is, naturally, a major red flag.

According to The Information, the breakthrough was spearheaded by OpenAI Chief Scientist (and full-of-regrets board member) Ilya Sutskever. 

The breakthrough reportedly allows an AI to use cleaner, computer-generated data to solve problems it has never seen before. In other words, the model is trained not on many different versions of the same problem, but on information that isn't directly related to the problem. Solving problems that way – usually math or science problems – requires reasoning: something humans do, and today's AIs don't.

OpenAI's primary consumer-facing product, ChatGPT (powered by the GPT large language model, or LLM), may seem so smart that it must be using reason to craft its responses. Spend enough time with ChatGPT, however, and you soon realize it's just regurgitating what it's learned from the vast swaths of data it's been fed, making mostly accurate guesses about how to craft sentences that make sense and apply to your query. There is no reasoning involved here.
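
To make that concrete, here's a deliberately tiny sketch – nothing like GPT's scale or architecture, just an illustration of the underlying idea – of a program that 'writes' by counting which word tends to follow which in its training text and then guessing. There's no reasoning step anywhere in the loop.

```python
# Toy illustration of pattern-based next-word prediction (not how GPT is built,
# but the same basic idea: guess the next token from patterns in training text).
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat chased the mouse".split()

# Record which words follow which in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 6) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # a statistical guess, not a reasoned choice
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

A real LLM works with probabilities over tens of thousands of tokens and billions of learned weights rather than a word-count table, but the core loop – predict the next token from patterns in the training data – is the same, which is why genuine reasoning would be such a leap.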

The Information claims, though, that this breakthrough sent shockwaves throughout OpenAI – and Altman may have alluded to it in a recent conference appearance, saying: “on a personal note, just in the last couple of weeks, I have gotten to be in the room, when we sort of like push the sort of the veil of ignorance back and the frontier of discovery forward.”

Managing the threat

While there's no sign of superintelligence in ChatGPT right now, OpenAI is surely working to integrate some of this power into at least some of its premium products, such as GPT-4 Turbo and its GPTs chatbot agents (and future 'intelligent agents').

Connecting superintelligence to the board's recent actions, which Sutskever initially supported, might be a stretch. The breakthrough reportedly came months ago, and prompted Sutskever and another OpenAI scientist, Jan Leike, to form a new OpenAI research group called Superalignment, with the goal of developing superintelligence safeguards.

Yes, you heard that right. The company working on developing superintelligence is simultaneously building tools to protect us from superintelligence. Imagine Doctor Frankenstein equipping the villagers with flamethrowers, and you get the idea.

What's not clear from the report is how internal concerns about the rapid development of superintelligence may have triggered the Altman firing. Perhaps it doesn't matter.

At the time of writing, Altman is on his way back to OpenAI, the board has been refashioned, and the work to build superintelligence – and to protect us from it – will continue.

If all of this is confusing, I suggest you ask ChatGPT to explain it to you.


Tape could replace hard drives – in some cases – thanks to this breakthrough

Fujitsu has announced a new technology called Virtual Integrated File System that it says could help magnetic tape storage compete with hard disk drives as a low-cost, large-capacity storage alternative.

With the feud between Sony and Fujifilm over LTO media resolved late last year, all eyes are now on LTO-9, which is expected to be delivered in 2020. This iteration will deliver capacities of up to 26.1TB (uncompressed) and raw throughput of up to 708MB/sec.

That's a higher capacity than the largest hard drive on the market (currently 20TB), and it should be faster and likely cheaper per terabyte too. Add in on-the-fly compression capabilities and, suddenly, it's all looking rosy for the venerable tape.
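
As a quick back-of-the-envelope check using the figures quoted above (26.1TB uncompressed and 708MB/sec raw throughput, both projected rather than measured on shipping hardware), here's roughly how long it would take to fill one cartridge at full speed:

```python
# Back-of-the-envelope: time to fill one LTO-9 cartridge at the quoted rates.
capacity_tb = 26.1          # projected uncompressed capacity, in terabytes
throughput_mb_s = 708       # projected raw throughput, in MB per second

capacity_mb = capacity_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal units)
hours = capacity_mb / throughput_mb_s / 3600
print(f"~{hours:.1f} hours to fill one cartridge")   # ~10.2 hours
```

Ten-plus hours per cartridge is no obstacle for archival workloads, but it underlines why tape competes on capacity and cost rather than responsiveness.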

Hacking the file system

Fujitsu's Virtual Integrated File System (VIFS) allows “multiple tape cartridges to be consolidated into one”, which means users can access data without having to worry about which individual cartridge holds it.

It sounds a little like RAID, but for tapes, which means you'll likely need multiple tape drives or a tape library. That limits the product to enterprises and large businesses, where storage demands are usually measured in petabytes and exabytes.
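
Fujitsu hasn't published how VIFS works under the hood, so the following is purely an illustrative sketch of the general idea – a catalogue that presents a pool of cartridges as one logical volume, so callers never have to name a specific tape. None of the class or method names below come from Fujitsu.

```python
# Illustrative sketch only: the general idea of consolidating several tape
# cartridges into one logical volume. Not Fujitsu's VIFS API (unpublished).
from dataclasses import dataclass

@dataclass
class Cartridge:
    label: str
    capacity_gb: int
    used_gb: int = 0

class VirtualTapeVolume:
    """Presents a pool of cartridges as a single namespace of files."""

    def __init__(self, cartridges):
        self.cartridges = cartridges
        self.catalogue = {}                  # file name -> cartridge label

    def write(self, name: str, size_gb: int) -> None:
        # Naive placement: first cartridge with enough free space.
        for cart in self.cartridges:
            if cart.capacity_gb - cart.used_gb >= size_gb:
                cart.used_gb += size_gb
                self.catalogue[name] = cart.label
                return
        raise OSError("virtual volume full")

    def locate(self, name: str) -> str:
        # Callers never pick a cartridge; the catalogue resolves it for them.
        return self.catalogue[name]

pool = VirtualTapeVolume([Cartridge("TAPE01", 26_100), Cartridge("TAPE02", 26_100)])
pool.write("archive_2020_q1.tar", 20_000)
pool.write("archive_2020_q2.tar", 10_000)
print(pool.locate("archive_2020_q2.tar"))    # -> TAPE02
```

The real system presumably does far more – placement across drives, read scheduling to avoid tape seeks, and so on – but hiding individual cartridges behind a single catalogue is the core of the “consolidated into one” idea.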

The Japanese company claims to have improved read speeds more than fourfold in one trial run, while another test yielded a speed improvement of nearly 2x.

“This technology enables high-speed tape access performance, such as random reads and writes of various sizes occurring in archive applications, and is expected to provide a cost-effective data archiving infrastructure for long-term archiving of large volumes of data,” Fujitsu added.
