Could generative AI work without online data theft? Nvidia’s ChatRTX aims to prove it can

Nvidia continues to invest in AI initiatives and the most recent one, ChatRTX, is no exception thanks to its most recent update. 

ChatRTX is, according to the tech giant, a “demo app that lets you personalize a GPT large language model (LLM) connected to your own content.” That content comprises your PC’s local documents, files, and folders, and the app essentially builds a custom AI chatbot from that information.

Because it doesn’t require an internet connection, ChatRTX gives users speedy answers to queries about information that might be buried under all those computer files. With the latest update, it supports even more data types and LLMs, including Google Gemma and ChatGLM3, an open, bilingual (English and Chinese) LLM. It can also search for photos locally, and it has Whisper support, allowing users to converse with ChatRTX through an AI-powered automatic speech recognition program.

Nvidia uses TensorRT-LLM software and RTX graphics cards to power ChatRTX’s AI. And because it’s local, it’s far more secure than online AI chatbots. You can download ChatRTX here to try it out for free.
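Under the hood, chatbots like this typically follow a retrieval-augmented generation (RAG) pattern: index the local files, retrieve the passages most relevant to a question, and feed them to the LLM as context. The Python sketch below is purely illustrative and is not Nvidia's implementation; it uses naive keyword overlap where a real system like ChatRTX would use vector embeddings and TensorRT-LLM:

```python
from collections import Counter
from pathlib import Path


def build_index(folder: str) -> dict[str, str]:
    """Read every .txt file under the folder into a simple in-memory index."""
    return {
        str(path): path.read_text(encoding="utf-8", errors="ignore")
        for path in Path(folder).rglob("*.txt")
    }


def retrieve(index: dict[str, str], query: str, top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query and
    return the paths of the top_k best matches."""
    query_words = set(query.lower().split())
    scored = []
    for name, text in index.items():
        counts = Counter(text.lower().split())
        score = sum(counts[w] for w in query_words)
        if score > 0:
            scored.append((score, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

In a full pipeline, the retrieved text would then be prepended to the user's question as context for the locally running LLM, which is what keeps everything on the PC.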

Can AI escape its ethical dilemma?

The concept of an AI chatbot using local data from your PC, instead of training on (read: stealing) other people’s online works, is rather intriguing. It seems to solve the ethical dilemma of using copyrighted works without permission and hoarding them. It also seems to solve another long-standing problem that’s plagued many a PC user: actually finding long-buried files in your file explorer, or at least the information trapped within them.

However, there’s the obvious question of how such an extremely limited data pool could negatively impact the chatbot. Unless the user is particularly skilled at training AI, that limitation could become a serious issue down the line. Of course, only using it to locate information on your PC is perfectly fine, and most likely the proper use.

But the point of an AI chatbot is to have unique and meaningful conversations. Maybe there was a time when we could have done that without the rampant theft, but corporations have powered their AI with words stolen from other sites, and now the two are irrevocably tied.

It’s deeply unethical that data theft has become the vital ingredient that makes chatbots well-rounded enough not to get trapped in feedback loops. Nvidia could offer a middle ground for generative AI: if fully developed, ChatRTX could prove that we don’t need that ethical transgression to power and shape these models. Here’s hoping Nvidia can get it right.

TechRadar – All the latest technology news

Google’s new generative AI aims to help you get those creative juices flowing

It’s a big day for Google AI as the tech giant has launched a new image-generation engine aimed at fostering people’s creativity.

The tool is called ImageFX, and it runs on Imagen 2, Google's “latest text-to-image model,” which Google claims can deliver the company's “highest-quality images yet.” Like so many generative AIs before it, it generates content by having users enter a command into the text box. What’s unique about the engine is its “Expressive Chips”: dropdown menus over keywords that let you quickly alter content with adjacent ideas. For example, ImageFX gave us a sample prompt of a dress carved out of deadwood, complete with foliage. After it made a series of pictures, the AI offered the opportunity to change certain aspects, turning a beautiful forest-inspired dress into an ugly shirt made out of plastic and flowers.

[Images: ImageFX-generated dress and shirt (Image credit: Future)]

Options in the Expressive Chips don’t change; they remain fixed to the initial prompt, although you can add more to the list by selecting the tags down at the bottom. There doesn’t appear to be a way to remove tags, so users will have to click the Start Over button to begin anew. If the AI manages to create something you enjoy, it can be downloaded or shared on social media.
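Mechanically, chips like these can be modeled as named slots in a fixed prompt template, with each dropdown swapping a new value into its slot. The sketch below is our own illustration of that idea, not Google's actual implementation; the template and slot names are hypothetical:

```python
def apply_chips(template: str, selections: dict[str, str]) -> str:
    """Fill the template's {slot} placeholders with the selected chip values."""
    return template.format(**selections)


# Hypothetical template inspired by the sample prompt above.
template = "{look} {garment} carved out of {material}, complete with {trim}"

original = apply_chips(template, {
    "look": "beautiful", "garment": "dress",
    "material": "deadwood", "trim": "foliage",
})

# Swapping chip values pivots the prompt to an adjacent idea.
variant = apply_chips(template, {
    "look": "ugly", "garment": "shirt",
    "material": "plastic", "trim": "flowers",
})
```

This also explains why the options stay fixed to the initial prompt: the template never changes, only the values dropped into its slots.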

Be creative

This obviously isn’t the first time Google has released a text-to-image generative AI; in fact, Bard just received the same ability. The main difference with ImageFX is, again, its encouragement of creativity. The chips can help spark inspiration by suggesting ways to direct the engine, ideas that you may never have thought of. Bard’s feature, on the other hand, offers little to no guidance, and because it’s less user-friendly, directing Bard’s image generation will be trickier.

ImageFX is free to use on Google’s AI Test Kitchen. Do keep in mind it’s still a work in progress. Upon visiting the page for the first time, you’ll be met with a warning message telling you the AI “may display inaccurate info”, and in some cases, offensive content. If this happens to you, the company asks that you report it to them by clicking the flag icon. 

Also, Google wants people to keep things clean: it links to its Generative AI Prohibited Use Policy in the warning, listing out what you can’t do with ImageFX.

AI updates

In addition to ImageFX, Google made several updates to past experimental AIs. 

MusicFX, the brand’s text-to-music engine, now allows users to generate songs up to 70 seconds long and alter their speed. The tool received Expressive Chips too, helping people get those creative juices flowing, plus a performance boost that lets it pump out content faster than before. TextFX, on the other hand, didn’t see a major upgrade or new features; Google mainly updated the website so it’s easier to navigate.

[Image: MusicFX's new layout (Image credit: Future)]

Everything you see here is available to users in the US, New Zealand, Kenya, and Australia. There’s no word on whether the AI will roll out elsewhere, although we did ask. This story will be updated at a later time.

Until then, check out TechRadar's roundup of the best AI art generators for 2024 where we compare them to each other. There's no clear winner, but they do have their specialties. 


Hate taxes? H&R Block’s new AI chatbot aims to reduce your tax frustrations

Like a spoonful of mustard or mayonnaise — or perhaps you prefer sriracha? — it seems no software recipe is complete these days without a dollop of generative AI. The groundbreaking technology plays games better than you, writes songs for you … heck, it can even be your girlfriend (to each their own). And now it can help you with life’s most vexing challenge: taxes.

In mid-December, H&R Block quietly announced AI Tax Assist, a generative AI tool powered by Microsoft’s Azure OpenAI Service. The new offering leverages the tax giant’s decades of experience in tax prep and leans on its stable of more than 60,000 tax pros to answer your thorniest questions about U.S. and state tax laws: Can I deduct this new laptop? Is this a personal expense or a business one? How many roads must a man walk down, before travel is officially part of his job?

We see AI as one of the defining technologies of our time.

Sarah Bird, Responsible AI, Microsoft

In a press preview on Thursday, January 25, in New York City, H&R Block announced another new tool designed to simplify importing last year’s tax returns from the competition. And it offered TechRadar the opportunity to try out the new AI assistant software and talk about the future of tax prep.

“We see AI as one of the defining technologies of our time … but only if we do it responsibly,” explained Sarah Bird, who serves as global lead for responsible AI at Microsoft. (Bird attended the event virtually, a result of possible exposure to Covid.) Microsoft’s Azure OpenAI Service lets developers easily build generative AI experiences, using a set of prebuilt and curated models from OpenAI, Meta and beyond. Her team helps companies like H&R Block ensure that their tools use the highest-quality training data, guide you with clever prompts, and so on. “It’s really a best practices implementation in terms of responsible AI,” she said.

H&R Block notes that the tech will not file or even fill out your forms; it merely answers questions. But if you’re hesitant to ask an AI for tax advice, you have good reason. The tech is notorious for hallucinations, where it simply invents answers when it can’t find the right one. Some experts worry that problem may never be solved. “This isn’t fixable,” Emily Bender, a linguistics professor at the University of Washington’s Computational Linguistics Laboratory, told the Associated Press last fall. “It’s inherent in the mismatch between the technology and the proposed use cases.”

One answer to the problem is starting with the right training data. If the AI has reliable, trustworthy sources of data to pore through, it can find the right answer, saving you from hunting through the Kafka-esque bowels of the IRS to find the instruction form for Schedule B or whatever. And off-the-shelf large language models (LLMs) simply don’t have that data, explained Aditya Thadani, VP of Artificial Intelligence Platforms for H&R Block. Ask ChatGPT 4 a question, he noted, and you risk missing out on what’s new: The cut-off date for that LLM's data sources is April of 2023.

“The IRS has released a number of changes since then,” he told attendees at the H&R Block event. “They’re making changes well into December and January, well into the tax season. We’re making sure you get all that information.”

To try out the new system, TechRadar sat down with some sample data and asked a few test prompts: Am I missing any deductions? Can I deduct a car as a business expense? And so on. The chatbot offered reasonable prompts: A few paragraphs of information culled from H&R Block’s deep catalog of data, links to find more information, and so on. The company says it can answer tax theory questions, clarify tax terms, and give guidance on specific tax rules. And crucial to the entire experience: Live, human beings — CPAs even! — are always just a click away.

“When we are not absolutely sure? Don’t guess. Give a response that we are actually confident in,” Thadani said. “And if you don’t get the response you are looking for from the AI, you can get it from the tax pro.”
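That “don’t guess” policy can be thought of as a confidence gate in front of the model: answer only when confidence clears a threshold, and otherwise route the question to a human. The sketch below is our own illustration of the pattern; the threshold, function names, and scores are assumptions, not H&R Block's actual system:

```python
from typing import Callable

HUMAN_HANDOFF = "Let me connect you with a tax pro for that one."


def answer_or_escalate(
    question: str,
    ask_model: Callable[[str], tuple[str, float]],
    threshold: float = 0.8,  # assumed cutoff, not a published value
) -> str:
    """Return the model's answer only when its confidence clears the
    threshold; otherwise hand the question off to a human tax pro."""
    answer, confidence = ask_model(question)
    return answer if confidence >= threshold else HUMAN_HANDOFF
```

The design choice here is that a low-confidence answer is treated as worse than no answer at all, which is exactly the failure mode hallucinations create.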

Privacy matters: Who will see your data?

It’s hard to discuss any emerging technology without touching on privacy, and both Microsoft and H&R Block are very aware of the risks. After all, a person’s tax returns are highly personal and confidential – one reason they became such a hot-button issue in US presidential elections. Should a company be allowed to train an LLM on your data?

“We’re sitting on a lot of really personal, private information,” Thadani admitted. “As much as we want to use that to answer questions effectively, we have to continue to find the balance.” So the assistant won’t remember you. It won’t ingest your tax forms to answer the questions you pose. And by design, other people won’t benefit from your questions down the road.

The new software also navigates one of the quirks of our modern software assistants: we don’t necessarily talk to them like adults. We’ve been trained to ask Alexa or the Google Assistant halting half-words and phrases, while chatbots can converse in natural language. H&R Block’s tool works fine either way, Bird explained.

“It’s incredibly enabling because it allows people to speak in words they’re comfortable with,” she said. There’s the real power of technology in a nutshell: “Make a complex thing more accessible to people because the technology meets them where they’re at.”

Now if it could only help us deduct a few holiday pounds from the waistline.


An iOS app aims to preserve the Hmong dialect for future generations

While you may enjoy apps that help you tackle the day’s tasks, or that scratch your daily itch in the latest game on Apple Arcade, a few apps serve a more important purpose.

The Hmong people are one of the most marginalized Asian American groups in the US, and their language is in danger of being relegated to the history books.

This is where Hmong Phrases comes in. Its developer, Annie Vang, wants to preserve the Hmong language that has been in her family for generations. Alongside this, Vang also hosts a YouTube channel to showcase foods in the Hmong culture, as well as her other favorite foods.

It's available for iPhone and iPad devices running iOS 14 / iPadOS 14 or later for $0.99 / £0.69 / AU$1.09, and it can also run on a Mac with Apple silicon. You can scroll through the different conversations and hear from Vang herself how to pronounce various words.

It feels personal and yet educational – you know that Vang has put everything into this app, and, having recently spoken to her, we know she isn't done yet.

What could be next for the app?

[Image: Hmong Phrases app icon (Image credit: Hmong Phrases)]

The app has an elegant layout with a colorful scheme throughout its menus. The list of phrases may seem overwhelming to some at first, but you get used to it. You can use the search bar to find what you want.

While the app is mainly used on iOS, we asked Vang whether there are any plans to add newer widgets, alongside an Apple Watch version, in the future.

Practicing phrases and words in Hmong on your wrist could appeal to many, especially as later Apple Watch models can use the speaker with some apps.

Vang was enthusiastic about these two ideas, and there's potentially a chance we could see them later in the year.

But whatever occurs in a future update, the app is already a great effort to revive a language, and a culture, that should be preserved for future generations.


Windows 11 update has system-wide live captions to help boost its accessibility aims

Despite Windows 11 being sequestered behind hardware requirements such as TPM, Microsoft is doing its best to make its latest OS as accessible as possible for the deaf and hard of hearing communities, with all-new system-wide live captions.

Available today (April 5) following Microsoft's event, the brand-new live captions feature allows users who are deaf or hard of hearing, or those who simply like subtitles, to easily access captions across all audio experiences and apps on Windows.

Live Captions will also work on web-based audio, allowing users to view auto-generated captions on websites and streaming services that might not otherwise support or have the best captions. 

Unfortunately, it is currently unclear whether Microsoft will bring the live captions feature to Windows 10 to let as many users as possible take advantage of this useful accessibility feature.


Analysis: an accessibility win that is not accessible for everyone

There is no denying that more accessibility options are a good thing, regardless of whether you use them yourself. However, Microsoft deserves as much criticism as praise for this new feature because, for now, it’s keeping it exclusive to Windows 11.

With Windows 11’s growth recently shown to have dramatically stalled in March, it makes sense that Microsoft’s latest OS may need some more killer features to tempt users into upgrading from Windows 10. However, holding accessibility features to ransom certainly is not the way to do it.

While holding this feature to ransom would be bad enough if upgrading were a simple one-click process, Windows 11 does not make things that easy: it infamously requires TPM 2.0, a feature that many computers manufactured before 2017 do not have.

Mercifully, captioning services are becoming more and more common across web pages and streaming services (you can even listen to all of our articles, for instance). However, these services all have their potential problems and require individual setup, so they are far from a perfect solution.

With Microsoft having only just announced this feature for Windows 11 during its hybrid work event, we can only hope it is not too long before the tech giant sees sense and brings live captions to older versions of Windows, to benefit all users rather than just those on Windows 11.



Google Meet aims to tear down the language barrier, but falls short

Google has rolled out an update for video conferencing software Meet that will help workers communicate more effectively with multi-lingual colleagues.

In a blog post, the company announced that its live translation feature has now entered general availability, across all Google Meet platforms.

Launched in beta last year, the feature introduces the ability to translate spoken English into foreign language captions in real-time. At launch, supported languages include French, German, Portuguese and Spanish.

Google Meet translation

Among the various opportunities brought about by the transition to remote working is the ability to recruit from an international pool of talent. However, businesses will clearly need a way to address the communication barriers this may create.

At the moment, Google is pitching the translation feature as a way to overcome disparities in language proficiency, rather than a way to facilitate communication between people who do not share a common language.

“Translated captions help make Google Meet video calls more inclusive and collaborative by removing language proficiency barriers. When meeting participants consume content in their preferred language, this helps equalize information sharing, learning, and collaboration and ensures your meetings are as effective as possible for everyone,” explained Google.

However, if the idea is taken to its logical conclusion, it’s easy to imagine the feature being extended in future to support omnidirectional translation between a variety of different languages. This way, workers could communicate freely with colleagues and partners from across the globe.

The feature as it exists today will roll out over the course of the next two weeks, but only to Google Workspace customers that subscribe to the Business Plus plan and beyond.

TechRadar Pro has asked Google whether customers on cheaper plans can expect to receive access to live translation at a later date, and whether the feature will be capable of translating other languages into English in future.
