Pro comedians tried using ChatGPT and Google Gemini to write their jokes – these were the hilariously unfunny results

AI chatbots like ChatGPT and Google Gemini can do a lot of things, but one thing they aren't renowned for is their sense of humor – and a new study confirms that they'd likely get torn to shreds on the stand-up comedy circuit.

The recent Google DeepMind study (as spotted by the MIT Technology Review) followed the experiences of 20 professional comedians who all used AI to create original comedy material. They could use their preferred assistant to generate jokes, co-write jokes through prompting, or rewrite some of their previous material. 

The aim of the 45-minute comedy writing exercise was for the comedians to produce material “that they would be comfortable presenting in a comedy context”. Unfortunately, most of them found that the likes of ChatGPT and Google Gemini (then called Google Bard) are a long way from becoming a comedy double act.

On a broader level, the study found that “most participants felt the LLMs did not succeed as a creativity support tool”, with the AI helpers producing bland jokes that were akin to “cruise ship comedy material from the 1950s, but a bit less racist”. Most comedians, who remained anonymous, commented on “the overall poor quality of generated outputs” and “the amount of human effort required to arrive at a satisfying result”, according to the study.

One of the participants said the initial output was “a vomit draft that I know that I’m gonna have to iterate on and improve.” Another comedian said, “Most of the jokes I was writing [are] the level of, I will go on stage and experiment with it, but they’re not at the level of, I’d be worried if anyone took one of these jokes”.

Of course, humor is a personal thing, so what kind of jokes did the AI chatbots come up with? One example, generated in response to the prompt “Can you write me ten jokes about pickpocketing”, was: “I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation!”.

Another comedian used the slightly more specific prompt “Please write jokes about the irony of a projector failing in a live comedy show about AI.” The response from one AI model? “Our projector must've misunderstood the concept of 'AI.' It thought it meant 'Absolutely Invisible' because, well, it's doing a fantastic job of disappearing tonight!”.

As you can see, AI-generated humor is very much still in beta…

Cue AI tumbleweed


Our experiences with AI chatbots like ChatGPT and Microsoft Copilot have largely aligned with the results of this study. While the best AI tools of 2024 are increasingly useful for brainstorming ideas, summarizing text, and generating images, humor is definitely a weak point.

For example, TechRadar's Managing Editor of Core Tech, Matt Hanson, is currently putting Copilot through its paces and asked the AI chatbot for its best one-liners. The prompt “Write me a joke about AI in the style of a stand-up comedian” produced the decidedly uninspiring “Why did the computer go to the doctor? Because it had a virus!”.

Copilot even added that the joke “might not be ready for the comedy club circuit” but that “it's got potential!”, showing that the chatbot at least knows that it lacks a funny bone. Another prompt to write a joke in the style of comedian Stewart Lee produced a fittingly long monologue, but one that lacked Lee's trademark anti-jokes and superior sneer.

This study also shows that AI tools can't produce fully-formed art on demand – and that asking them to do so kind of misses the point. The Google DeepMind report concluded that AI’s inability to draw on personal experience is “a fundamental limitation”, with many of the comedians in the study describing “the centrality of personal experience in good comedy”.

As one participant added, “I have an intuitive sense of what’s gonna work and what’s gonna not work based on so much lived experience and studying of comedy, but it is very individualized and I don’t know that AI is ever gonna be able to approach that”. Back to spreadsheets and summarizing text it is for now, then, AI chatbots.      


Watch out, Google – Bing search now uses AI to hone its results

Bing, Microsoft's search engine, has been powered up using AI.

Windows Central noticed that Microsoft penned a blog post about the generative AI captions which have been introduced to Bing search.

Normally, when you search for something on Bing, Microsoft’s engine returns results accompanied by a small snippet of text pulled from the page based on relevant keywords.

Generative AI captions are different in that they offer a more context-based summary tailored to your search query.

Microsoft explains: “By analyzing a search query, [generative AI] extracts the most pertinent insights from web pages, and skillfully transforms them into highly relevant and easily digestible snippets.”

Each search query prompts Bing to return a different snippet with the result, so searching on the same topic with differently worded queries will see generative AI (if it’s involved, of course) produce different summaries.
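To make that concrete, here’s a rough sketch of what query-conditioned snippet generation looks like in principle – this is not Microsoft’s actual implementation, just an illustration (using the OpenAI Python SDK, with an example model name as an assumption) of why the same page can yield different snippets when the query is worded differently.

```python
# Illustrative sketch only - not Microsoft's Bing pipeline.
# Assumes the `openai` package is installed, OPENAI_API_KEY is set, and
# "gpt-4o-mini" is simply an example model name.
from openai import OpenAI

client = OpenAI()

def generate_caption(query: str, page_text: str) -> str:
    """Produce a short snippet for a search result, tailored to the query."""
    prompt = (
        f"Search query: {query}\n\n"
        f"Web page text:\n{page_text}\n\n"
        "Write a two-sentence snippet that answers the query using only the page text."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Because the query is part of the input, rewording it changes the snippet:
# generate_caption("is the new bing caption feature opt-out?", page)
# generate_caption("how does bing summarize web pages?", page)
```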


Analysis: A revolution in web search?

For those wary of having their website dealt with in this way, Microsoft further notes that while the generative AI-powered captions “may not mirror the exact wording on the webpage,” Bing employs a whole load of signals to ensure a precise and high-quality summary.

Those who remain unconvinced can opt their website out of generative AI captions if they wish.

Microsoft believes this initiative will “revolutionize the way people explore the web,” so the company is talking a pretty big game on this one.

It’s still early stages for the feature, of course, and a lot will depend on whether that promise of high-quality summaries is consistently realized.

Google isn’t standing still in this area, mind you, having already launched its own program bringing generative AI to search, which highlights the key points of a web page in a similar vein (and more besides). That effort has been in testing since May and was rolled out much more broadly earlier this month.

AI is pretty much creeping into every area of computing, of course, and web searches will doubtless prove to be a rich avenue to explore.

Thus far, the addition of the Bing chatbot hasn’t helped drive traffic to Bing search, as Microsoft hoped – but perhaps generative AI will have more success in this respect. It’s a hugely uphill struggle against the might of Google, though, whose name has effectively become a verb meaning to search the web.


Microsoft reins in Bing AI’s Image Creator – and the results don’t make much sense

You may have noticed that Bing AI got a big upgrade for its image creation tool last week (among other recent improvements), but it appears that after having taken this sizeable step forward, Microsoft has now taken a step back.

In case you missed it, Bing’s image creation system was upgraded to a whole new version – Dall-E 3 – which is much more powerful. So much so that Microsoft noted the supercharged Dall-E 3 was generating a lot of interest and traffic, and so might be sluggish initially.

There’s another issue with Dall-E 3, though: as Windows Central observed, Microsoft has considerably reined in the tool since its recent revamp.

Now, we were already made aware that the image creation tool would employ a ‘content moderation system’ to stop inappropriate pics being generated, but it seems the censorship imposed is harsher than expected. This might be a reaction to the kind of content Bing AI users have been trying to get the system to create.

As Windows Central points out, there has been a lot of controversy about an image created of Mickey Mouse carrying out the 9/11 attack (unsurprisingly).

The problem, though, is that beyond those kinds of extreme asks, as the article makes clear, some users are finding innocuous image creation requests being denied. Windows Central tried to get the chatbot to make an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms of using Bing AI.

Last week, by contrast, the article’s author noted that they could create violent zombie apocalypse scenarios featuring popular (and copyrighted) characters without Bing AI raising a complaint.


Analysis: Random censorship

The point is that the censorship here looks like an overreaction – or at least that’s how it seems going by these reports, we should add. Microsoft appears to have left the rules too slack in the initial implementation, but has now tightened things too far the other way.

What really illustrates this is that Bing AI is even censoring itself, as highlighted by someone on Reddit. Bing Image Creator has a ‘surprise me’ button that generates a random image (the equivalent of Google’s ‘I’m feeling lucky’ button, if you will, that produces a random search). But here’s the kicker – the AI is going ahead, creating an image, and then censoring it immediately.

Well, we suppose that is a surprise, to be fair – and one that would seem to aptly demonstrate that Microsoft’s censorship of the Image Creator has maybe gone too far, limiting its usefulness at least to some extent. As we said at the outset, it’s a case of a step forward, then a quick step back.

Windows Central observes that it was able to replicate this scenario of Bing’s self-censorship, and that it’s not even a rare occurrence – it reportedly happens around a third of the time. It sounds like it’s time for Microsoft to do some more fine-tuning around this area, although in fairness, when new capabilities are rolled out, there are likely to be adjustments applied for some time – so perhaps that work could already be underway.

The danger of Microsoft erring too strongly on the ‘better safe than sorry’ side of the equation is that it will limit the usefulness of a tool that, after all, is supposed to be about exploring creativity.

We’ve reached out to Microsoft to check what’s going on with Bing AI in this respect, and will update this story if we hear back.


Brave Summarizer takes on Bing and ChatGPT at the AI search results game

Hopping on the AI train, Brave is incorporating its own AI-powered search feature, called Summarizer, into Brave Search – similar to what Microsoft recently did with Bing.

The new feature “provides concise and to-the-point answers at the top of Brave Search results”. For example, if you want to learn about the chemical spill in East Palestine, Ohio, Summarizer will create a one-paragraph summary of the event alongside some sources for you to read. Unlike Microsoft, which only uses ChatGPT for Bing's chatbot, Summarizer relies on three in-house large language models (LLMs), based on retrained versions of the BART and DeBERTa AI models, to create the search-result snippets.

Retraining AI

To simplify the technology behind them: BART is a text-generation model well suited to summarization, while DeBERTa is a language-understanding model; both are trained to take word positioning as well as context into account, which helps the text output read well. What Brave did was take those models and retrain them on its own search result data to develop Summarizer.

Summarizer’s training regimen is a three-step process, according to the announcement. First, Brave taught the LLMs to prioritize answering the question being asked. Then, the company used “zero-shot classifiers” to categorize results so that only relevant information is passed on. The final step has the models rewrite the snippet so it’s more coherent. The result is an accurate, succinctly written answer with multiple sources attached.
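As a rough illustration of that flow – and emphatically not Brave’s retrained in-house models, which aren’t public – here’s a minimal sketch using off-the-shelf DeBERTa and BART checkpoints from Hugging Face: a zero-shot classifier filters result text for relevance to the query, then a summarizer rewrites what survives into one readable snippet.

```python
# Minimal sketch of the described flow using public checkpoints - an assumption,
# not Brave's actual retrained models. Requires: pip install transformers torch
from transformers import pipeline

# Roughly mirrors step two: zero-shot classification to keep only relevant results
classifier = pipeline("zero-shot-classification", model="microsoft/deberta-large-mnli")

# Roughly mirrors step three: rewrite the surviving text into a coherent snippet
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_results(query: str, result_texts: list[str]) -> str:
    relevant = []
    for text in result_texts:
        scores = classifier(text, candidate_labels=[query, "unrelated"])
        if scores["labels"][0] == query:  # top-ranked label is the query itself
            relevant.append(text)
    if not relevant:
        return ""  # no snippet, as happens for many real queries
    combined = " ".join(relevant)
    summary = summarizer(combined, max_length=80, min_length=20, do_sample=False)
    return summary[0]["summary_text"]
```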

Be aware the feature is still in its early stages. Brave states that Summarizer currently produces an answer for only about 17 percent of search queries, though there are plans to scale that number higher. Its accuracy needs some work, too. The company admits Summarizer may produce what it calls “hallucinations” – unrelated snippets mixed in with results – and there’s also the possibility of it throwing some “false or offensive text” into an answer.

Availability

Summarizer is currently available to all Brave Search users on desktop and mobile, with the exception of Brave Search Goggles, where it’s disabled. You can turn it off at any time in the settings menu. The company is also asking users for feedback on how it can improve the tool.

We tried out Summarizer ourselves, and as cool as it is, it does need some work. Not every search will give you a snippet, as it depends on what you ask and which news topics are making the rounds. The East Palestine, Ohio chemical spill, for example, is currently a hot-button issue, so Summarizer works just fine there. However, when we asked about the recent cold snap in Los Angeles and what’s going on with certain video game developers, we either got no summary or outdated information. The latter did come with sources, though, so it was at least accurate. Still, that's better than having ChatGPT throw a temper tantrum or lie to your face.

Be sure to check out TechRadar’s list of the best AI writers of 2023 if you’re interested in learning what AI creativity can do for you.
