
    Fake Pentagon “explosion” photo sows confusion on Twitter / ArsTechnica · Tuesday, 23 May, 2023 - 21:01 · 1 minute

A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon caused brief confusion, including a reported small dip in the stock market. It originated from a verified Twitter account named "Bloomberg Feed" that is unaffiliated with the well-known Bloomberg media company, and it was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

The fake image depicted a large plume of black smoke beside a building vaguely reminiscent of the Pentagon, accompanied by the tweet "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report." Local authorities confirmed that the image was not an accurate representation of the Pentagon. Its blurry fence bars and building columns also mark it as a fairly sloppy AI-generated image, likely created by a model such as Stable Diffusion.

Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or the motives behind sharing the false image. In addition to Bloomberg Feed, other accounts that shared the false report include "Walter Bloomberg" and "Breaking Market News," both unaffiliated with the real Bloomberg organization.


    Stone-hearted researchers gleefully push over adorable soccer-playing robots / ArsTechnica · Monday, 1 May, 2023 - 21:22 · 1 minute

In a still from a DeepMind demo video, a researcher pushes a small humanoid robot to the ground. (credit: DeepMind)

On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in efficient performance in a simulated one-on-one soccer game.

But few paid attention to those details, because alongside the paper the researchers also released a 27-second video showing one experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.

DeepMind's "Robustness to pushes" demonstration video.

On the demo website for "Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning," the researchers frame the merciless toppling of the robots as a key part of a "robustness to pushes" evaluation, writing, "Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way."


    Why ChatGPT and Bing Chat are so good at making things up / ArsTechnica · Thursday, 6 April, 2023 - 15:58

(credit: Aurich Lawson | Getty Images)

Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can easily present convincing false information, making them unreliable sources of factual information and potential sources of defamation.

Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

“Hallucinations”—a loaded term in AI

AI chatbots such as OpenAI's ChatGPT rely on a type of AI called a "large language model" (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate "natural language" text—language as humans would naturally write or talk. Unfortunately, they can also make mistakes.
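The generation mechanism described above can be sketched with a toy model. Everything below (the word table, the probabilities, and the `generate` function) is invented purely for illustration; a real LLM learns billions of parameters rather than a hand-written lookup table. But the core loop is conceptually similar: repeatedly sample the next word from a learned probability distribution, with no built-in check that the resulting sentence is true.

```python
import random

# Toy illustration only: a hand-written table of next-word probabilities
# standing in for what a real LLM learns from its training text.
NEXT_WORD_PROBS = {
    "<start>": {"The": 1.0},
    "The": {"Eiffel": 0.6, "Pentagon": 0.4},
    "Eiffel": {"Tower": 1.0},
    "Tower": {"is": 1.0},
    "is": {"in": 0.7, "near": 0.3},
    "in": {"Paris": 0.5, "Rome": 0.5},  # both read fluently; only one is true
    "near": {"Paris": 1.0},
    "Paris": {"<end>": 1.0},
    "Rome": {"<end>": 1.0},
    "Pentagon": {"is": 1.0},
}

def generate(seed=None):
    """Sample one sentence word by word, the basic loop of any LLM."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while word != "<end>":
        successors = NEXT_WORD_PROBS[word]
        # Pick the next word in proportion to its learned probability.
        word = rng.choices(list(successors), weights=list(successors.values()))[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)
```

Note that nothing in the loop distinguishes "The Eiffel Tower is in Paris" from "The Eiffel Tower is in Rome": both are high-probability word sequences to the model, which is one intuition for why fluent output can still be factually wrong.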


    Ethical AI art generation? Adobe Firefly may be the answer / ArsTechnica · Wednesday, 22 March, 2023 - 17:27 · 1 minute

An Adobe Firefly AI image generator example. (credit: Adobe)

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output safe for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

To avoid those legal and ethical issues, Adobe created an AI art generator trained solely on Adobe Stock images, openly licensed content, and public domain content, ensuring the generated content is safe for commercial use. Adobe goes into more detail in its news release:
