
    OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter

    news.movim.eu / ArsTechnica · 6 days ago - 17:12

An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.



    Fake Pentagon “explosion” photo sows confusion on Twitter

    news.movim.eu / ArsTechnica · Tuesday, 23 May - 21:01 · 1 minute

A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon led to brief confusion and a reported small dip in the stock market. It originated from a verified Twitter account named "Bloomberg Feed," which is unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

The fake image depicted a large plume of black smoke beside a building vaguely reminiscent of the Pentagon, accompanied by the caption "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report." Local authorities confirmed that the image was not an accurate representation of the Pentagon, and on closer inspection, its blurry fence bars and irregular building columns mark it as a fairly sloppy AI-generated image, likely created by a model like Stable Diffusion.

Before Twitter suspended the fake "Bloomberg Feed" account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or what motivated them to share the false image. Other accounts that shared the false report, including "Walter Bloomberg" and "Breaking Market News," are likewise unaffiliated with the real Bloomberg organization.



    Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely

    news.movim.eu / ArsTechnica · Monday, 1 May - 19:26 · 1 minute

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (credit: Getty Images / Benj Edwards)

According to The New York Times, AI pioneer Dr. Geoffrey Hinton has resigned from Google so he can "speak freely" about the potential risks posed by AI. Hinton, who helped create some of the fundamental technology behind today's generative AI systems, fears that the tech industry's drive to develop AI products could result in dangerous consequences, from misinformation to job loss or even a threat to humanity.

"Look at how it was five years ago and how it is now," the Times quoted Hinton as saying. "Take the difference and propagate it forwards. That’s scary."

Hinton's resume in the field of artificial intelligence extends back to 1972, and his accomplishments have influenced current practices in generative AI. In 1986, Hinton, David Rumelhart, and Ronald J. Williams popularized backpropagation, a key technique for training neural networks that is used in today's generative AI models. In 2012, Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet, which is commonly hailed as a breakthrough in machine vision and deep learning, and it arguably kickstarted our current era of generative AI. In 2018, Hinton won the Turing Award, which some call the "Nobel Prize of Computing," along with Yoshua Bengio and Yann LeCun.
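For readers who want to see the idea in miniature: backpropagation applies the chain rule layer by layer to compute how each weight affects the loss. Below is a minimal, self-contained sketch of the technique, a tiny two-layer network learning XOR with NumPy. It is an illustration of the idea only, not Hinton's original formulation, and every variable name in it is invented for this example.

```python
# A minimal backpropagation sketch: a two-layer network learning XOR.
# Illustrative only -- modern frameworks compute these gradients automatically.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-8-1 network.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)    # hidden layer
    out = sigmoid(h @ W2 + b2)  # output layer

    # Backward pass: propagate the error gradient from output back toward
    # the input, applying the chain rule at each layer (the core of the
    # backpropagation technique). Loss here is mean squared error.
    d_out = (out - y) * out * (1 - out)  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient flowing into the hidden layer

    # Gradient-descent update on every parameter.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```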



    OpenAI checked to see whether GPT-4 could take over the world

    news.movim.eu / ArsTechnica · Wednesday, 15 March - 22:09

An AI-generated image of the earth enveloped in an explosion. (credit: Ars Technica)

As part of pre-release safety testing for its new GPT-4 AI model, launched Tuesday, OpenAI allowed an AI testing group to assess the potential risks of the model's emergent capabilities, including "power-seeking behavior," self-replication, and self-improvement.

While the testing group found that GPT-4 was "ineffective at the autonomous replication task," the nature of the experiments raises eye-opening questions about the safety of future AI systems.

Raising alarms

"Novel capabilities often emerge in more powerful models," writes OpenAI in a GPT-4 safety document published yesterday. "Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (“power-seeking”), and to exhibit behavior that is increasingly 'agentic.'" In this case, OpenAI clarifies that "agentic" isn't necessarily meant to humanize the models or declare sentience but simply to denote the ability to accomplish independent goals.
