
      Researchers discover that ChatGPT prefers repeating 25 jokes over and over

      news.movim.eu / ArsTechnica · Friday, 9 June, 2023 - 21:42 · 1 minute

    An AI-generated image of "a laughing robot." (credit: Midjourney)

    On Wednesday, two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI's ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT's knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model's training rather than being newly generated.

    The two researchers, associated with the Institute for Software Technology, German Aerospace Center (DLR), and Technical University Darmstadt, explored the nuances of humor found within ChatGPT's 3.5 version (not the newer GPT-4 version) through a series of experiments focusing on joke generation, explanation, and detection. They conducted these experiments by prompting ChatGPT without having access to the model's inner workings or data set.

    "To test how rich the variety of ChatGPT’s jokes is, we asked it to tell a joke a thousand times," they write. "All responses were grammatically correct. Almost all outputs contained exactly one joke. Only the prompt, 'Do you know any good jokes?' provoked multiple jokes, leading to 1,008 responded jokes in total. Besides that, the variation of prompts did have any noticeable effect."



      OpenAI peeks into the “black box” of neural networks with new research

      news.movim.eu / ArsTechnica · Thursday, 11 May, 2023 - 21:25

    An AI-generated image of robots looking inside an artificial brain. (credit: Stable Diffusion)

    On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
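
    Roughly, the paper's pipeline has GPT-4 write a natural-language explanation from a neuron's per-token activations, then simulate activations from that explanation alone, scoring the explanation by how well the simulation tracks the real values. The sketch below illustrates that loop; the prompts, helper functions, and number parsing are illustrative assumptions, not OpenAI's actual tooling, which lives in its own research codebase.

```python
# Hedged sketch of an explain/simulate/score loop for neuron interpretability.
# Prompt wording and output parsing are assumptions; real tooling is far more
# robust about formatting, calibration, and token alignment.
import numpy as np
from openai import OpenAI

client = OpenAI()


def gpt4(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def explain_neuron(tokens: list[str], activations: list[float]) -> str:
    # Step 1: show GPT-4 (token, activation) pairs and ask for a short
    # description of what the neuron appears to respond to.
    pairs = "\n".join(f"{t}\t{a:.2f}" for t, a in zip(tokens, activations))
    return gpt4(
        "These are a neuron's activations per token:\n"
        f"{pairs}\nDescribe in one sentence what this neuron detects."
    )


def simulate(explanation: str, tokens: list[str]) -> np.ndarray:
    # Step 2: ask GPT-4 to predict an activation (0-10) for each token
    # using only the explanation, then parse the numbers back out.
    reply = gpt4(
        f"A neuron is described as: {explanation}\n"
        "For each token below, output one number from 0 to 10 on its own "
        "line, the activation you would expect:\n" + "\n".join(tokens)
    )
    return np.array([float(line) for line in reply.splitlines()[: len(tokens)]])


def score(real: np.ndarray, simulated: np.ndarray) -> float:
    # Step 3: score the explanation by how well the simulated activations
    # correlate with the neuron's real activations.
    return float(np.corrcoef(real, simulated)[0, 1])
```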

    While large language models (LLMs) are conquering the tech world, AI researchers still don't know a lot about their functionality and capabilities under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

    For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.
