Are you a Reddit user? Google’s about to feed all your posts to a hungry AI, and there’s nothing you can do about it

Google and Reddit have announced a huge content licensing deal, reportedly worth a whopping $60 million – but Reddit users are pissed.

Why, you might ask? Well, the deal involves Google using content posted by users on Reddit to train its AI models, chiefly its newly launched Google Gemini AI suite. It makes sense; Reddit contains a wealth of information and users typically talk colloquially, which Google is probably hoping will make for a more intelligent and more conversational AI service. However, this also essentially means that anything you post on Reddit now becomes fuel for the AI engine, something many users are taking umbrage at.

While the very first thing that came to mind was MIT’s insane Reddit-trained ‘psychopath AI’ from years ago, it’s fair to say that AI model training has come a long way since then – so hooking it up to Reddit hopefully won’t turn Gemini into a raving lunatic.

The deal, announced yesterday by Reddit in a blog post, will have other benefits as well: since many people specifically append ‘reddit’ to their search queries when looking for the answer to a question, Google aims to make getting to the relevant content on Reddit easier. Reddit plans to use Google’s Vertex AI to improve its own internal site search functionality, too, so Reddit users will enjoy a boost to the user experience – rather than getting absolutely nothing in return for their training data. 

Do Redditors deserve a cut of that $60 million?

A lot of Reddit users have been complaining about the deal in various threads on the site, for a wide variety of reasons. Some users have privacy worries, some voiced concerns about the quality of output from an AI trained on Reddit content (which, let’s be honest, can get pretty toxic), and others simply don’t want their posts ‘stolen’ to train an AI.

Unfortunately for any unhappy Redditors, the site’s Terms of Service do mean that Reddit can (within reason) do whatever it wants with your posts and comments. Calling the content ‘stolen’ is inaccurate: if you’re a Reddit user, you’re the product, and Reddit is the one selling. 

Personally, I’m glad to see a company actually getting paid for providing AI training data, unlike the legal grey-area dodginess of previous chatbots and AI art tools that were trained on data scraped from the internet for free without user consent. By agreeing to the Reddit TOS, you’re essentially consenting to your data being used for this.

Google Gemini could stand to benefit hugely from the training data produced by this content use deal. (Image credit: Google)

Some users are positively incensed by this though, claiming that if they’re the ones making the content, surely they should be entitled to a slice of the AI pie. I’m going to hand out some tough love here: that’s a ridiculous and naive argument. Do these people believe they deserve a cut of ad revenue too, since they made a hit post that drew thousands of people to Reddit? This isn’t the same as AI creators quietly nabbing work from independent artists on Twitter.

At the end of the day, you’re never going to please everyone. If this deal has actual potential to improve not just Google Gemini, but Google Search in general (as well as Reddit’s site search), then the benefits arguably outweigh the costs – although I do think Reddit has a moral obligation to ensure that all of its users are fully informed about the use of their data. 

A few paragraphs in the TOS aren’t enough, guys: you know full well nobody reads those.


TechRadar – All the latest technology news


ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane at some point in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman, pictured at Microsoft's February 7, 2023 event, doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to the degree of focus and creative control the AI exerts over the text it produces. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
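The effect is easy to see in a toy sampling function. The sketch below is a simplified illustration of how temperature scaling generally works in language models – not OpenAI's actual implementation, whose details aren't public – dividing a model's raw scores (logits) by the temperature before converting them to probabilities: low values concentrate probability on the most likely token, while high values flatten the distribution towards a coin flip.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token index from raw scores after temperature scaling."""
    # Dividing by temperature: values < 1 sharpen the distribution,
    # values > 1 flatten it.
    scaled = [score / temperature for score in logits]
    # Softmax (subtracting the max keeps exp() numerically stable).
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token index according to the resulting probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits for four candidate tokens; token 0 is the model's favourite.
logits = [2.0, 1.0, 0.5, 0.1]

# At a very low temperature, the top token nearly always wins;
# at a high temperature, the choice approaches uniform randomness.
low = [sample_token(logits, temperature=0.1) for _ in range(1000)]
high = [sample_token(logits, temperature=10.0) for _ in range(1000)]
print(low.count(0) / 1000)   # close to 1.0
print(high.count(0) / 1000)  # much closer to 0.25
```

A runaway temperature setting would be consistent with the nonsense loops users reported, though – to be clear – that remains speculation from paying users, not a confirmed cause.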

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?


Windows 11’s ChatGPT-powered Bing AI taskbar is nothing more than a pointless ad

Microsoft released a new Windows 11 update on Wednesday, March 1, and all anyone is talking about is how the update emphasises putting artificial intelligence first… and how severely it falls short of that.

The AI-powered search box is now set up in the taskbar by default, which may or may not be helpful depending on your disposition towards AI and ‘helpful’ chatbots. The taskbar change is among many other improvements packaged into the recent Windows 11 update, so it’ll be hard to avoid or ignore if you’re not a fan of ChatGPT.

ChatGPT is the AI-powered chatbot developed by OpenAI that lets users ask it to do anything from brainstorming recipes and breaking down complex ideas to writing and editing large blocks of text – or just having a little chat. The bot uses machine learning to analyse the prompts users give it, generating responses based on the prompt and the patterns in its training data. Microsoft launched its collaboration with OpenAI early last month, and its ChatGPT-powered Bing has had its share of meltdowns and inaccuracies since then.

It’s a little too early to gauge how successful this new Windows 11 update has been at integrating ChatGPT-powered AI search, but so far the taskbar change doesn't seem to have been well received. In fact, I would argue it’s just a heavy-handed advertisement for Bing, Microsoft’s largely unloved search engine, and that it takes away consumers' ability to decide whether or not they want to dabble in AI. This is not to bash ChatGPT and its fans, but more a finger wag at a mass implementation that removes the ability to choose.

Say you’re a sceptic, or someone who doesn’t know much about ChatGPT or Bing AI: you don’t really get a choice about whether you want access to Bing AI, and there doesn’t seem to be a way to remove it from your Windows 11 desktop.

The announcement from Microsoft gives off the impression that the entire search experience on Windows 11 will now be supercharged by AI, but that’s far from the case. 

There’s no quick search in the taskbar that’ll spit out intelligently thought-out results, and fans or curious users looking to try Bing’s AI search engine don’t get the level of Windows 11 integration seemingly promised by yesterday's announcement. The scale of AI integration that was promised simply doesn't match what we've actually got.

Instead, users can now launch Bing’s new chatbot without having to type ‘bing.com’ into a web browser first. That’s it. The blog post says users get “the amazing capabilities of the new AI-powered Bing directly into the taskbar”, which is not true at all. You get a banner for Bing on the Windows search page and two suggested prompts; click any of the related buttons and you’re whisked off to Microsoft's Edge browser, in what feels like a calculated attempt to force more people to use it.

Once Microsoft Edge is open, you can use Bing as you please if you’re registered. I was taken to the login/registration page, since I was yet to make an account – and it’s incredibly annoying to be sold the idea of accessing Bing AI’s chatbot from the comfort of your desktop, only to be shunted off to a separate program and webpage. Windows itself isn’t doing anything AI-related here: contrary to what you might think, Microsoft hasn’t added AI to Windows search in the new feature drop, which makes the ChatGPT-powered version of Bing in Windows 11 feel like an empty advertisement.


Analysis: Who is this for? 

This definitely feels like a manifestation of something a lot of people were worried about when Microsoft announced its partnership with OpenAI and built ChatGPT tech into Bing: essentially, another way for Microsoft to push people into using Bing and Edge instead of the software they’d actually choose.

We’ve all seen the pathetic little banners that pop up in Edge while you’re setting up your PC and trying to download Chrome or Firefox, and this definitely feels like Microsoft putting its metaphorical foot down: if you want to use your taskbar search or try out Bing AI, you’re going to have to do it on Microsoft's terms.

Regardless of how you feel about AI chatbots, or AI technology in general, there’s no denying that the update to the taskbar is less than useful. A lot of more interesting and useful feature updates have been overshadowed by the glaring blip that is the Bing AI taskbar change.

The lack of a clear opt-out option does seem to solidify the idea that not only is the ‘shortcut to Bing AI’ here to stay, but it’s only to be accessed on Microsoft’s terms.
