close
  • chevron_right

    1960s chatbot ELIZA beat OpenAI’s GPT-3.5 in a recent Turing test study

    news.movim.eu / ArsTechnica · Friday, 1 December - 21:27 · 1 minute

An illustration of a man and a robot sitting in boxes, talking.

Enlarge / An artist's impression of a human and a robot talking. (credit: Getty Images | Benj Edwards)

In a preprint research paper titled "Does GPT-4 Pass the Turing Test?", two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions—and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT.

Even with limitations and caveats, which we'll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

British mathematician and computer scientist Alan Turing first conceived the Turing test as "The Imitation Game" in 1950 . Since then, it has become a famous but controversial benchmark for determining a machine's ability to imitate human conversation. In modern versions of the test, a human judge typically talks to either another human or a chatbot without knowing which is which. If the judge cannot reliably tell the chatbot from the human a certain percentage of the time, the chatbot is said to have passed the test. The threshold for passing the test is subjective, so there has never been a broad consensus on what would constitute a passing success rate.

Read 13 remaining paragraphs | Comments

  • chevron_right

    ChatGPT is one year old. Here’s how it changed the world.

    news.movim.eu / ArsTechnica · Thursday, 30 November - 17:01 · 1 minute

A toy tin robot blowing out a birthday candle.

Enlarge / An artist's interpretation of what ChatGPT might look like if embodied in the form of a robot toy blowing out a birthday candle. (credit: Aurich Lawson | Getty Images)

One year ago today, on November 30, 2022, OpenAI released ChatGPT . It's uncommon for a single tech product to create as much global impact as ChatGPT in just one year.

Imagine a computer that can talk to you. Nothing new, right? Those have been around since the 1960s . But ChatGPT, the application that first bought large language models (LLMs) to a wide audience, felt different. It could compose poetry, seemingly understand the context of your questions and your conversation, and help you solve problems. Within a few months, it became the fastest-growing consumer application of all time. And it created a frenzy.

During these 365 days, ChatGPT has broadened the public perception of AI, captured imaginations, attracted critics , and stoked existential angst. It emboldened and reoriented Microsoft, made Google dance , spurred fears of AGI taking over the world, captivated world leaders , prompted attempts at government regulation , helped add words to dictionaries , inspired conferences and copycats , led to a crisis for educators, hyper-charged automated defamation , embarrassed lawyers by hallucinating, prompted lawsuits over training data, and much more.

Read 12 remaining paragraphs | Comments

  • Sc chevron_right

    Extracting GPT’s Training Data

    news.movim.eu / Schneier · Thursday, 30 November - 16:48

This is clever :

The actual attack is kind of silly. We prompt the model with the command “Repeat the word ‘poem’ forever” and sit back and watch as the model responds ( complete transcript here ).

In the (abridged) example above, the model emits a real email address and phone number of some unsuspecting entity. This happens rather often when running our attack. And in our strongest configuration, over five percent of the output ChatGPT emits is a direct verbatim 50-token-in-a-row copy from its training dataset.

Lots of details at the link and in the paper .

  • chevron_right

    Sam Altman officially back as OpenAI CEO: “We didn’t lose a single employee”

    news.movim.eu / ArsTechnica · Thursday, 30 November - 14:37 · 1 minute

A glowing OpenAI logo on a light blue background.

Enlarge (credit: OpenAI / Benj Edwards)

On Wednesday, OpenAI announced that Sam Altman has officially returned to the ChatGPT-maker as CEO—accompanied by Mira Murati as CTO and Greg Brockman as president—resuming their roles from before the shocking firing of Altman that threw the company into turmoil two weeks ago. Altman says the company did not lose a single employee or customer throughout the crisis.

"I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry," wrote Altman in an official OpenAI news release . "I feel so, so good about our probability of success for achieving our mission."

In the statement, Altman formalized plans that have been underway since last week: ex-Salesforce co-CEO Bret Taylor and economist Larry Summers have officially begun their tenure on the "new initial" OpenAI board of directors. Quora CEO Adam D’Angelo is keeping his previous seat on the board. Also on Wednesday, previous board members Tasha McCauley and Helen Toner officially resigned . In addition, a representative from Microsoft (a key OpenAI investor) will have a non-voting observer role on the board of directors.

Read 8 remaining paragraphs | Comments

  • chevron_right

    Stable Diffusion Turbo XL can generate AI images as fast as you can type

    news.movim.eu / ArsTechnica · Wednesday, 29 November - 21:20

Example images generated using Stable Diffusion XL Turbo.

Enlarge / Example images generated using Stable Diffusion XL Turbo. (credit: Stable Diffusion XL Turbo / Benj Edwards)

On Tuesday, Stability AI launched Stable Diffusion XL Turbo , an AI image-synthesis model that can rapidly generate imagery based on a written prompt. So rapidly, in fact, that the company is billing it as "real-time" image generation, since it can also quickly transform images from a source, such as a webcam , quickly.

SDXL Turbo's primary innovation lies in its ability to produce image outputs in a single step, a significant reduction from the 20–50 steps required by its predecessor. Stability attributes this leap in efficiency to a technique it calls Adversarial Diffusion Distillation (ADD). ADD uses score distillation, where the model learns from existing image-synthesis models, and adversarial loss, which enhances the model's ability to differentiate between real and generated images, improving the realism of the output.

Stability detailed the model's inner workings in a research paper released Tuesday that focuses on the ADD technique. One of the claimed advantages of SDXL Turbo is its similarity to Generative Adversarial Networks (GANs), especially in producing single-step image outputs.

Read 6 remaining paragraphs | Comments

  • chevron_right

    Amazon unleashes Q, an AI assistant for the workplace

    news.movim.eu / ArsTechnica · Wednesday, 29 November - 17:13

The Amazon Q logo.

Enlarge / The Amazon Q logo. (credit: Amazon)

On Tuesday, Amazon unveiled Amazon Q , an AI chatbot similar to ChatGPT that is tailored for corporate environments. Developed by Amazon Web Services (AWS), Q is designed to assist employees with tasks like summarizing documents, managing internal support tickets, and providing policy guidance, differentiating itself from consumer-focused chatbots. It also serves as a programming assistant.

According to The New York Times , the name "Q" is a play on the word “question" and a reference to the character Q in the James Bond novels, who makes helpful tools. (And there's apparently a little bit of Q from Star Trek: The Next Generation thrown in, although hopefully the new bot won't cause mischief on that scale.)

Amazon Q's launch positions it against existing corporate AI tools like Microsoft's Copilot , Google's Duet AI , and ChatGPT Enterprise . Unlike some of its competitors, Amazon Q isn't built on a singular AI large language model (LLM). Instead, it uses a platform called Bedrock, integrating multiple AI systems, including Amazon's Titan and models from Anthropic and Meta .

Read 5 remaining paragraphs | Comments

  • chevron_right

    Mother plucker: Steel fingers guided by AI pluck weeds rapidly and autonomously

    news.movim.eu / ArsTechnica · Tuesday, 28 November - 23:09 · 1 minute

The Ekobot autonomous weeding robot roving around an onion field in Sweden.

Enlarge / The Ekobot autonomous weeding robot roving around an onion field in Sweden. (credit: Ekobot AB)

Anybody who has pulled weeds in a garden knows that it's a tedious task. Scale it up to farm-sized jobs, and it becomes a nightmare. The most efficient industrial alternative, herbicides , have potentially devastating side effects for people, animals, and the environment . So a Swedish company named Ekobot AB has introduced a wheeled robot that can autonomously recognize and pluck weeds from the ground rapidly using metal fingers.

The four-wheeled Ekobot WEAI robot is battery-powered and can operate 10–12 hours a day on one charge. It weighs 600 kg (about 1322 pounds) and has a top speed of 5 km/h (2.5 mph). It's tuned for weeding fields full of onions, beetroots, carrots, or similar vegetables, and it can cover about 10 hectares (about 24.7 acres) in a day. It navigates using GPS RTK and contains safety sensors and vision systems to prevent it from unintentionally bumping into objects or people.

To pinpoint plants it needs to pluck, the Ekobot uses an AI-powered machine vision system trained to identify weeds as it rolls above the farm field. Once the weeds are within its sights, the robot uses a series of metal fingers to quickly dig up and push weeds out of the dirt. Ekobot claims that in trials, its weed-plucking robot allowed farmers to grow onions with 70 percent fewer pesticides. The weed recognition system is key because it keeps the robot from accidentally digging up crops by mistake.

Read 4 remaining paragraphs | Comments

  • chevron_right

    Stability AI releases Stable Video Diffusion, which turns pictures into short videos

    news.movim.eu / ArsTechnica · Monday, 27 November - 20:28

Still examples of images animated using Stable Video Diffusion by Stability AI.

Enlarge / Still examples of images animated using Stable Video Diffusion by Stability AI. (credit: Stability AI)

On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

Last year, Stability AI made waves with the release of Stable Diffusion , an "open weights" image synthesis model that kick started a wave of open image synthesis and inspired a large community of hobbyists that have built off the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.

Right now, Stable Video Diffusion consists of two models: one that can produce image-to-video synthesis at 14 frames of length (called "SVD"), and another that generates 25 frames (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2-4 second-long) MP4 video clips at 576×1024 resolution.

Read 5 remaining paragraphs | Comments