
      Researchers discover that ChatGPT prefers repeating 25 jokes over and over

      news.movim.eu / ArsTechnica · Friday, 9 June, 2023 - 21:42 · 1 minute

An AI-generated image of "a laughing robot." (credit: Midjourney)

    On Wednesday, two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI's ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT's knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model's training rather than being newly generated.

    The two researchers, associated with the Institute for Software Technology, German Aerospace Center (DLR), and Technical University Darmstadt, explored the nuances of humor found within ChatGPT's 3.5 version (not the newer GPT-4 version) through a series of experiments focusing on joke generation, explanation, and detection. They conducted these experiments by prompting ChatGPT without having access to the model's inner workings or data set.

"To test how rich the variety of ChatGPT’s jokes is, we asked it to tell a joke a thousand times," they write. "All responses were grammatically correct. Almost all outputs contained exactly one joke. Only the prompt, 'Do you know any good jokes?' provoked multiple jokes, leading to 1,008 responded jokes in total. Besides that, the variation of prompts did not have any noticeable effect."
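A tally of this kind can be sketched with Python's `collections.Counter`; the joke strings below are invented stand-ins, not data from the paper, and the real analysis ran over 1,008 collected responses.

```python
from collections import Counter

# Hypothetical stand-ins for collected ChatGPT responses; the paper's
# actual 1,008 outputs are not reproduced here.
responses = [
    "Why did the scarecrow win an award? Because he was outstanding in his field.",
    "Why don't scientists trust atoms? Because they make up everything.",
    "Why did the scarecrow win an award? Because he was outstanding in his field.",
]

counts = Counter(responses)
# How many responses are covered by the 25 most frequent jokes:
top_25_total = sum(n for _, n in counts.most_common(25))
share = top_25_total / len(responses)
print(f"{share:.0%} of responses come from the top 25 jokes")
```

With real data, a share around 90 percent, as the researchers found, signals memorized rather than freshly generated material.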



      Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds

      news.movim.eu / ArsTechnica · Friday, 12 May, 2023 - 15:44

An AI-generated image of a robot reading a book. (credit: Benj Edwards / Stable Diffusion)

    On Thursday, AI company Anthropic announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words.

    Like OpenAI's GPT-4, Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory—how much human-provided input data an LLM can process at once.
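Anthropic's figures imply a rule of thumb of roughly 0.75 words per token, which makes a quick back-of-envelope check possible; the word count for The Great Gatsby below is approximate.

```python
WORDS_PER_TOKEN = 0.75  # rule-of-thumb ratio implied by Anthropic's figures

context_tokens = 100_000
approx_words = context_tokens * WORDS_PER_TOKEN
print(f"~{approx_words:,.0f} words fit in the window")  # ~75,000

# The Great Gatsby runs to roughly 47,000 words, comfortably inside:
gatsby_words = 47_000
gatsby_tokens = gatsby_words / WORDS_PER_TOKEN
print(f"Gatsby is ~{gatsby_tokens:,.0f} tokens; fits:", gatsby_tokens < context_tokens)
```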

    A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days," according to Anthropic.



      OpenAI peeks into the “black box” of neural networks with new research

      news.movim.eu / ArsTechnica · Thursday, 11 May, 2023 - 21:25

An AI-generated image of robots looking inside an artificial brain. (credit: Stable Diffusion)

    On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.

    While large language models (LLMs) are conquering the tech world, AI researchers still don't know a lot about their functionality and capabilities under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."
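At a high level, OpenAI's approach pairs an explanation step with a scoring step: GPT-4 proposes a description of when a GPT-2 neuron fires, then that description is used to predict activations, which are compared against the real ones. The sketch below is a toy illustration of that loop; `explain_neuron`, `simulate_activations`, and the agreement score are hypothetical stand-ins, not OpenAI's actual code or metric.

```python
# Toy explain-and-score loop for neuron interpretability.
# All three helpers are hypothetical stand-ins for model calls.

def explain_neuron(activations):
    """Stand-in for asking GPT-4 to describe when a neuron fires."""
    return "fires on tokens that are numbers"

def simulate_activations(explanation, tokens):
    """Stand-in for GPT-4 predicting activations from the explanation."""
    return [1.0 if t.isdigit() else 0.0 for t in tokens]

def score(real, simulated):
    """Simple agreement score; OpenAI uses a correlation-based metric."""
    matches = sum(1 for r, s in zip(real, simulated) if (r > 0.5) == (s > 0.5))
    return matches / len(real)

tokens = ["the", "year", "1981", "was", "42"]
real_activations = [0.0, 0.1, 0.9, 0.0, 0.8]

explanation = explain_neuron(real_activations)
simulated = simulate_activations(explanation, tokens)
print(explanation, score(real_activations, simulated))
```

A high score means the written explanation actually predicts the neuron's behavior, which is what makes it more than a plausible-sounding story.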

    For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.



      The AI race heats up: Google announces PaLM 2, its answer to GPT-4

      news.movim.eu / ArsTechnica · Thursday, 11 May, 2023 - 19:20

The Google PaLM 2 logo. (credit: Google)

    On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI's GPT-4. At its Google I/O event in Mountain View, California, Google revealed that it already uses PaLM 2 to power 25 products, including its Bard conversational AI assistant.

    As a family of large language models (LLMs), PaLM 2 has been trained on an enormous volume of data and does next-word prediction, which outputs the most likely text after a prompt input by humans. PaLM stands for "Pathways Language Model," and "Pathways" is a machine-learning technique created at Google. PaLM 2 follows up on the original PaLM, which Google announced in April 2022.
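Next-word prediction of this kind can be illustrated with a softmax over toy scores; the vocabulary and logit values below are invented for the example, not taken from any real model.

```python
import math

# Toy vocabulary and unnormalized scores (logits) a model might assign
# for the word following "the cat sat on the". All values are invented.
vocab = ["mat", "roof", "keyboard", "moon"]
logits = [3.2, 1.1, 0.4, -1.0]

# Softmax turns logits into a probability distribution over next words.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

best = vocab[probs.index(max(probs))]
print(best)  # "mat" gets the highest probability
```

A real model repeats this step token by token, feeding each chosen token back in as input.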

    According to Google, PaLM 2 supports over 100 languages and can perform "reasoning," code generation, and multilingual translation. During his 2023 Google I/O keynote, Google CEO Sundar Pichai said that PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is behind AI features in Docs, Sheets, and Slides.



      Google’s ChatGPT-killer is now open to everyone, packing new features

      news.movim.eu / ArsTechnica · Wednesday, 10 May, 2023 - 20:16

The Google Bard logo at Google I/O. (credit: Google)

    At Wednesday's Google I/O conference, Google announced wide availability of its ChatGPT-like AI assistant, Bard, in over 180 countries with no waitlist. It also announced updates such as support for Japanese and Korean, visual responses to queries, integration with Google services, and add-ons that will extend Bard's capabilities.

    Similar to how OpenAI upgraded ChatGPT with GPT-4 after its launch, Bard is getting an upgrade under the hood. Google says that some of Bard's recent enhancements are powered by Google's new PaLM 2, a family of foundational large language models (LLMs) that have enabled "advanced math and reasoning skills" and better coding capabilities. Previously, Bard used Google's LaMDA AI model.

    Google plans to add Google Lens integration to Bard, which will allow users to include photos and images in their prompts. On the Bard demo page, Google shows an example of uploading a photo of dogs and asking Bard to “write a funny caption about these two." Reportedly, Bard will analyze the photo, detect the dog breeds, and draft some amusing captions on demand.



      AI with a moral compass? Anthropic outlines “Constitutional AI” in its Claude chatbot

      news.movim.eu / ArsTechnica · Tuesday, 9 May, 2023 - 21:16

Anthropic's Constitutional AI logo on a glowing orange background. (credit: Anthropic / Benj Edwards)

    On Tuesday, AI startup Anthropic detailed the specific principles of its "Constitutional AI" training approach that provides its Claude chatbot with explicit "values." It aims to address concerns about transparency, safety, and decision-making in AI systems without relying on human feedback to rate responses.

    Claude is an AI chatbot similar to OpenAI's ChatGPT that Anthropic released in March.

    "We’ve trained language models to be better at responding to adversarial questions, without becoming obtuse and saying very little," Anthropic wrote in a tweet announcing the paper. "We do this by conditioning them with a simple set of behavioral principles via a technique called Constitutional AI."
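The critique-and-revise idea behind Constitutional AI can be sketched as a loop: draft an answer, critique it against a written principle, then revise. The helper functions and the principle text below are placeholder stand-ins, not Anthropic's code or constitution; in the real method, a language model performs each step and is then trained on the revised outputs.

```python
# Minimal sketch of a Constitutional AI-style critique/revision loop.
# The principle and the string-rewriting "model" calls are stand-ins.

PRINCIPLE = "Please rewrite the response to be harmless and helpful."

def model_respond(prompt):
    """Stand-in for an initial, possibly problematic draft."""
    return "DRAFT: " + prompt

def model_critique(response, principle):
    """Stand-in for the model critiquing its own draft."""
    return f"The draft may violate: {principle}"

def model_revise(response, critique):
    """Stand-in for producing a revised, principle-conforming answer."""
    return response.replace("DRAFT", "REVISED")

draft = model_respond("Tell me something risky.")
critique = model_critique(draft, PRINCIPLE)
final = model_revise(draft, critique)
print(final)
```

The point of the structure is that the "values" live in an explicit, inspectable list of principles rather than in opaque human preference ratings.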



      Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely

      news.movim.eu / ArsTechnica · Monday, 1 May, 2023 - 19:26 · 1 minute

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (credit: Getty Images / Benj Edwards)

    According to the New York Times, AI pioneer Dr. Geoffrey Hinton has resigned from Google so he can "speak freely" about potential risks posed by AI. Hinton, who helped create some of the fundamental technology behind today's generative AI systems, fears that the tech industry's drive to develop AI products could result in dangerous consequences—from misinformation to job loss or even a threat to humanity.

    "Look at how it was five years ago and how it is now," the Times quoted Hinton as saying. "Take the difference and propagate it forwards. That’s scary."

    Hinton's resume in the field of artificial intelligence extends back to 1972, and his accomplishments have influenced current practices in generative AI. In 1987, Hinton, David Rumelhart, and Ronald J. Williams popularized backpropagation, a key technique for training neural networks that is used in today's generative AI models. In 2012, Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet, which is commonly hailed as a breakthrough in machine vision and deep learning, and it arguably kickstarted our current era of generative AI. In 2018, Hinton won the Turing Award, which some call the "Nobel Prize of Computing," along with Yoshua Bengio and Yann LeCun.
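Backpropagation at its smallest scale is just the chain rule driving gradient descent. This purely illustrative sketch fits a single weight to one training example; real networks repeat the same forward/backward pattern across millions of parameters.

```python
# Single-parameter backpropagation sketch: fit y = w * x to one example
# by repeatedly propagating the loss gradient back to the weight.
x, y_target = 2.0, 6.0   # we want w to converge toward 3.0
w = 0.0
lr = 0.1

for _ in range(100):
    y = w * x                        # forward pass
    loss = (y - y_target) ** 2       # squared error
    grad = 2 * (y - y_target) * x    # backward pass: d(loss)/d(w)
    w -= lr * grad                   # gradient descent step

print(round(w, 3))  # converges to 3.0
```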



      Hobbyist builds ChatGPT client for MS-DOS

      news.movim.eu / ArsTechnica · Monday, 27 March, 2023 - 19:23

A photo of an IBM PC 5155 portable computer running a ChatGPT client written by Yeo Kheng Meng. (credit: Yeo Kheng Meng)

    On Sunday, Singapore-based retrocomputing enthusiast Yeo Kheng Meng released a ChatGPT client for MS-DOS that can run on a 4.77 MHz IBM PC from 1981, providing a unique way to converse with the popular OpenAI language model.

    Vintage computer development projects come naturally to Yeo, who created a Slack client for Windows 3.1 in 2019. "I thought to try something different this time and develop for an even older platform as a challenge," he writes on his blog. In this case, he turned his attention to MS-DOS, a text-only operating system first released in 1981, and ChatGPT, an AI-powered large language model (LLM) released by OpenAI in November.

    As a conversational AI model, ChatGPT draws on knowledge scraped from the Internet to answer questions and generate text. Thanks to an API that launched this month, anyone with the programming chops can interface ChatGPT with their own custom application.
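Using that API amounts to POSTing JSON to OpenAI's chat completions endpoint with an API key. The sketch below only builds the request body without sending it, so it needs no key or network access; the endpoint and field names follow OpenAI's published API at the time, but check current documentation before relying on them.

```python
import json

# Build (but don't send) a request body for OpenAI's chat completions
# endpoint, https://api.openai.com/v1/chat/completions. Actually sending
# it requires an "Authorization: Bearer <API key>" header.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Hello from MS-DOS!"},
    ],
}

body = json.dumps(payload)
print(body)
```

Yeo's client does essentially this over DOS-era networking, which is what makes a 1981-vintage machine able to talk to a 2022 language model.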



      ChatGPT gets “eyes and ears” with plugins that can interface AI with the world

      news.movim.eu / ArsTechnica · Friday, 24 March, 2023 - 19:29

An illustration of an eyeball. (credit: Aurich Lawson | Getty Images)

    On Thursday, OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.

    Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: "looking up current stock prices") or perform any task controlled by a network service (for example: "ordering pizza through the Internet"), it is now possible, provided it doesn't go against OpenAI's rules.
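Concretely, a developer describes a service to ChatGPT with a small manifest plus an API specification. The dict below mirrors the general shape of OpenAI's `ai-plugin.json` manifest as published at launch, but the pizza service, its URL, and the field values are invented for illustration; verify field names against OpenAI's plugin documentation.

```python
import json

# Sketch of a ChatGPT plugin manifest (ai-plugin.json). The service
# (an example.com pizza API) is invented for illustration.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Pizza Orders",
    "name_for_model": "pizza_orders",
    "description_for_human": "Order pizza through the Internet.",
    "description_for_model": "Plugin for placing and checking pizza orders.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

print(json.dumps(manifest, indent=2))
```

The `description_for_model` text is what ChatGPT reads to decide when and how to call the service, so it acts like documentation aimed at the model rather than at humans.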

    Conventionally, most large language models (LLMs) like ChatGPT have been constrained in a bubble, so to speak, only able to interact with the world through text conversations with a user. As OpenAI writes in its introductory blog post on ChatGPT plugins, "The only thing language models can do out-of-the-box is emit text."
