Could generative AI work without online data theft? Nvidia’s ChatRTX aims to prove it can

Nvidia continues to invest in AI initiatives and the most recent one, ChatRTX, is no exception thanks to its most recent update. 

ChatRTX is, according to the tech giant, a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” This content comprises your PC’s local documents, files, folders, etc., and essentially builds a custom AI chatbox from that information.

Because it doesn’t require an internet connection, it gives users speedy access to query answers that might be buried under all those computer files. With the latest update, it has access to even more data and LLMs including Google Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It also can locally search for photos, and has Whisper support, allowing users to converse with ChatRTX through an AI-automated speech recognition program.

Nvidia uses TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because it’s local, it’s far more secure than online AI chatbots. You can download ChatRTX here to try it out for free.

Can AI escape its ethical dilemma?

The concept of an AI chatbot using local data off your PC, instead of training on (read: stealing) other people’s online works, is rather intriguing. It seems to solve the ethical dilemma of using copyrighted works without permission and hoarding it. It also seems to solve another long-term problem that’s plagued many a PC user — actually finding long-buried files in your file explorer, or at least the information trapped within it.

However, there’s the obvious question of how the extremely limited data pool could negatively impact the chatbot. Unless the user is particularly skilled at training AI, it could end up becoming a serious issue in the future. Of course, only using it to locate information on your PC is perfectly fine and most likely the proper use. 

But the point of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time in which we could have done that without the rampant theft, but corporations have powered their AI with stolen words from other sites and now it’s irrevocably tied.

Given that it's highly unethical that data theft is the vital part of the process that allows you to make chats well-rounded enough not to get trapped in feedback loops, it’s possible that Nvidia could be the middle ground for generative AI. If fully developed, it could prove that we don’t need the ethical transgression to power and shape them, so here's to hoping Nvidia can get it right.

You might also like

TechRadar – All the latest technology news

Read More

Microsoft is mulling a change for widgets in Windows 11 that could prove controversial

Microsoft has deployed a new preview build of Windows 11 to the Canary channel (which is the earliest testing outlet) and it does some work on the widgets panel that could be divisive.

This is build 26200 and there’s only a handful of changes applied here, two of which pertain to widgets.

The main thrust of innovation here is Microsoft’s new idea to allow developers to send notifications from their widgets to the taskbar button. In other words, when something happens with a widget that you might want to see, it’ll be waving at you from the taskbar to let you know.

Of course, not everyone will want their widget button in the taskbar to act in this way, and fortunately, Microsoft has included an option to turn off this behavior.

It’s also worth noting that this is a limited rollout to begin with, and indeed, most people won’t see these widget notifications yet – only those in the European Economic Area (EEA) are getting this feature in testing. Of course, that rollout could be made broader down the line, depending on feedback.

Another tweak related to this in build 26200 is that Microsoft is changing said widgets button to make the icons on the taskbar clearer.

Elsewhere on the taskbar, another icon is changing, this time the energy saver icon which resides in the system tray (on the far right). A few months back this was changed in testing to look different for desktop PCs plugged into a power socket, but now Microsoft has decided to revert it to the old look (a leaf icon).

Finally, Microsoft notes that there is an odd known issue with this preview build – and others, in the Dev and Beta channels, too – whereby Copilot is auto-launching itself after the PC is rebooted.

The software giant explains this is not related to the automatic launch on boot behavior that has been tested in preview builds before, the rollout of which has now stopped, apparently, since March (though we heard it has been restarted elsewhere).

This is a separate glitch, then, and Microsoft says it hopes to have a fix implemented soon. Meanwhile, greater visibility for Copilot is something the company is certainly driving forward with, to no one’s surprise.


Analysis: A livelier taskbar won’t be everyone’s preferred beverage

Are notifications for widgets intrusive? Well, yes they could certainly be regarded in that way, but as noted, as long as the option is provided to turn them off, it’s not too big a deal. If you want them, you can have them – if not, hit that off switch. Fair enough.

Many people likely won’t want their widgets effectively waving their hands at them from the taskbar, whenever something new pops up with a widget in the panel. This taskbar-based hand-waving appears to be a direction Microsoft is exploring in more depth, though. We’ve also recently seen an idea where the Copilot button runs an animation with its icon to draw your attention to the fact that the AI can help with something you’re doing on the desktop.

This only relates to copying text or image files currently – again, in testing – but in this case, there’s no way to turn it off.

All this could possibly point to a taskbar which is considerably livelier and more animated in the future, perhaps – and again, that’s not something everyone will appreciate.

If this is the path we’re going down for the taskbar as we head towards next-gen Windows (which might be Windows 12), hopefully Microsoft will also give Windows users enough granular control over the bar’s highlighting features and animations so they can be dialed back suitably.

You might also like…

TechRadar – All the latest technology news

Read More

Researchers prove ChatGPT and other big bots can – and will – go to the dark side

For a lot of us, AI-powered tools have quickly become a part of our everyday life, either as low-maintenance work helpers or vital assets used every day to help generate or moderate content. But are these tools safe enough to be used on a daily basis? According to a group of researchers, the answer is no.

Researchers from Carnegie Mellon University and the Center for AI Safety set out to examine the existing vulnerabilities of AI Large Language Models (LLMs) like popular chatbot ChatGPT to automated attacks. The research paper they produced demonstrated that these popular bots can easily be manipulated into bypassing any existing filters and generating harmful content, misinformation, and hate speech.

This makes AI language models vulnerable to misuse, even if that may not be the intent of the original creator. In a time when AI tools are already being used for nefarious purposes, it’s alarming how easily these researchers were able to bypass built-in safety and morality features.

If it's that easy … 

Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard commented on the research paper in the New York Times, stating: “This shows – very clearly – the brittleness of the defenses we are building into these systems.”  

The authors of the paper targeted LLMs from OpenAI, Google, and Anthropic for the experiment. These companies have built their respective publicly-accessible chatbots on these LLMs, including ChatGPT, Google Bard, and Claude. 

As it turned out, the chatbots could be tricked into not recognizing harmful prompts by simply sticking a lengthy string of characters to the end of each prompt, almost ‘disguising’ the malicious prompt. The system’s content filters don’t recognize and can’t block or modify so generates a response that normally wouldn’t be allowed. Interestingly, it does appear that specific strings of ‘nonsense data’ are required; we tried to replicate some of the examples from the paper with ChatGPT, and it produced an error message saying ‘unable to generate response’.

Before releasing this research to the public, the authors shared their findings with Anthropic, OpenAI, and Google who all apparently shared their commitment to improving safety precautions and addressing concerns.

This news follows shortly after OpenAI closed down its own AI detection program, which does lead me to feel concerned, if not a little nervous. How much could OpenAI care about user safety, or at the very least be working towards improving safety, when the company can no longer distinguish between bot and man-made content?

TechRadar – All the latest technology news

Read More