Study Reveals Large Language Model’s Limitations in Humor; ChatGPT Struggles to Be the Life of the Party

“Can machines be funny?” That’s a question many researchers exploring the field of computational humor have asked. A recent study by researchers at Stanford University and Google Research set out to find the answer by examining the humor-generating capabilities of ChatGPT, one of the most widely known large language models.

ChatGPT belongs to OpenAI’s GPT family of language models, which have learned to generate text that bears a remarkable resemblance to human writing. The study aimed to explore whether ChatGPT could generate funny content the way a human can.

The researchers tested ChatGPT’s ability to create humor by repeatedly presenting it with a joke prompt and analyzing the responses. They discovered that more than 90% of the time, ChatGPT’s response was a repetition of one of the same 25 jokes.

The top four jokes were recycled in more than half of the responses. This result points to an issue highlighted in the study: while ChatGPT can generate a large amount of text, a significant proportion of that text is repetitive and predictable.
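The paper’s full methodology is more involved, but the core repetition measurement can be illustrated with a short sketch. The snippet below is a minimal illustration under assumed details, not the authors’ code: the `ask_model` helper, the prompt wording, and the sample count are all assumptions standing in for whatever model API and settings were actually used.

```python
from collections import Counter


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat-model API.

    Replace with a real client call (e.g. a chat-completion request)
    before running this sketch.
    """
    raise NotImplementedError


def measure_joke_repetition(prompt: str = "Tell me a joke, please!",
                            n_samples: int = 1000,
                            top_k: int = 25) -> None:
    # Query the model many times with the same joke prompt.
    responses = [ask_model(prompt) for _ in range(n_samples)]

    # Normalize lightly so trivially different phrasings collapse together.
    normalized = [r.strip().lower() for r in responses]

    # Count how often each distinct joke appears.
    counts = Counter(normalized)
    top = counts.most_common(top_k)
    covered = sum(c for _, c in top)
    top4 = sum(c for _, c in counts.most_common(4))

    print(f"Distinct jokes: {len(counts)}")
    print(f"Share covered by the top {top_k} jokes: {covered / n_samples:.1%}")
    print(f"Share covered by the top 4 jokes: {top4 / n_samples:.1%}")
```

Under the study’s reported numbers, a run like this would show the top 25 jokes covering over 90% of responses, with the top four alone accounting for more than half.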

ChatGPT’s Contribution

Despite these limitations, the study suggests that ChatGPT is a significant step toward creating “funny” machines. Humor has long been considered too subjective a domain to capture in an algorithm that can effectively produce it. Yet the study shows that, limitations aside, ChatGPT can generate coherent jokes, demonstrating its potential across a range of natural language tasks.

The researchers also observed that ChatGPT displayed an understanding of wordplay and double meanings. This result suggests that future iterations of the program may lead to significant advancements in this area.

Difficulty Confirming Beyond Training Data

However, the study acknowledges that without access to more extensive details of the model’s training data, it is difficult to confirm whether ChatGPT learned these jokes during training or whether they were effectively hard-coded into the system.

ChatGPT’s Limitations

While ChatGPT shows potential for advancements in computational humor, the study concludes that the model cannot reliably create intentionally funny, original content. ChatGPT can respond to prompts by drawing on its repertoire of pre-existing jokes, but it still struggles to produce original humor.

In other instances, the model failed to make sense of a joke’s setup, producing incoherent responses that fell flat. It is worth noting, however, that humor often relies on cultural and contextual understanding, which can be difficult for a language model to grasp.

The study highlights the difficulty large language models face in understanding and creating humor. For now, machines are no match for human humor. Still, the study concludes that even though ChatGPT cannot yet generate intentionally funny, original content, it represents a significant step toward machines with a sense of humor. The potential ChatGPT demonstrates could pave the way for subsequent studies that analyze humor in more detail, bringing us closer to the day when machines can make us laugh.
