  • OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter

    news.movim.eu / ArsTechnica · 4 days ago - 17:12

An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.


  • The lightning onset of AI—what suddenly changed? An Ars Frontiers 2023 recap

    news.movim.eu / ArsTechnica · Wednesday, 24 May - 23:31 · 1 minute

On May 22, Benj Edwards (left) moderated a panel featuring Paige Bailey (center) and Haiyan Zhang (right) for the Ars Frontiers 2023 session titled "The Lightning Onset of AI—What Suddenly Changed?" (credit: Ars Technica)

On Monday, Ars Technica hosted our Ars Frontiers virtual conference. In our fifth panel, we covered "The Lightning Onset of AI—What Suddenly Changed?" The panel featured a conversation with Paige Bailey, lead product manager for Generative Models at Google DeepMind, and Haiyan Zhang, general manager of Gaming AI at Xbox, moderated by Ars Technica's AI reporter, Benj Edwards.

The panel originally streamed live, and you can now watch a recording of the entire event on YouTube. The introduction to the "Lightning Onset of AI" panel begins at the 2:26:05 mark in the broadcast.

Ars Frontiers 2023 livestream recording.

With "AI" being a nebulous term, meaning different things in different contexts, we began the discussion by considering the definition of AI and what it means to the panelists. Bailey said, "I like to think of AI as helping derive patterns from data and use it to predict insights ... it's not anything more than just deriving insights from data and using it to make predictions and to make even more useful information."


  • Fake Pentagon “explosion” photo sows confusion on Twitter

    news.movim.eu / ArsTechnica · Tuesday, 23 May - 21:01 · 1 minute

A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon led to brief confusion, including a reported small dip in the stock market. The image originated from a verified Twitter account named "Bloomberg Feed," which is unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

The fake image depicted a large plume of black smoke alongside a building vaguely reminiscent of the Pentagon, with the tweet "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report." Local authorities confirmed that the image was not an accurate representation of the Pentagon, and with its blurry fence bars and building columns, it looks like a fairly sloppy AI-generated image created by a model like Stable Diffusion.

Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or the motives behind sharing the false image. In addition to Bloomberg Feed, other accounts that shared the false report include “Walter Bloomberg” and “Breaking Market News,” both unaffiliated with the real Bloomberg organization.


  • Adobe Photoshop’s new “Generative Fill” AI tool lets you manipulate photos with text

    news.movim.eu / ArsTechnica · Tuesday, 23 May - 19:07 · 1 minute

An example of a 1983 file photo of the Apple Lisa computer that has been significantly enhanced by the new "Generative Fill" AI tool in the Adobe Photoshop beta. (credit: Apple / Benj Edwards / Adobe)

On Tuesday, Adobe added a new tool to its Photoshop beta called "Generative Fill," which uses cloud-based image synthesis to fill selected areas of an image with new AI-generated content based on a text description. Powered by Adobe Firefly, Generative Fill works similarly to a technique called "inpainting" used in DALL-E and Stable Diffusion releases since last year.

At the core of Generative Fill is Adobe Firefly, Adobe's custom image-synthesis model. As a deep learning AI model, Firefly has been trained on millions of images in Adobe's stock library to associate certain imagery with text descriptions. Now that it's part of Photoshop, people can type in what they want to see (e.g., "a clown on a computer monitor"), and Firefly will synthesize several options for the user to choose from. Generative Fill uses a well-known AI technique called "inpainting" to create a context-aware generation that can seamlessly blend synthesized imagery into an existing image.

To use Generative Fill, users select an area of an existing image they want to modify. After selecting it, a "Contextual Task Bar" pops up that allows users to type in a description of what they want to see generated in the selected area. Photoshop sends this data to Adobe's servers for processing, then returns results in the app. After generating, the user has the option to select between several options of generations or to create more options to browse through.
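The select-describe-generate flow above comes down to classic inpainting compositing. Here is a minimal conceptual sketch (not Adobe's actual implementation) of how synthesized pixels can be blended into only the selected region while the rest of the image stays untouched:

```python
import numpy as np

def composite_inpaint(original: np.ndarray,
                      generated: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend AI-generated pixels into the selected region of an image.

    original, generated: float arrays of shape (H, W, 3)
    mask: float array of shape (H, W, 1); 1.0 inside the user's selection,
          0.0 outside. Fractional values give soft edges.
    """
    return mask * generated + (1.0 - mask) * original

original = np.zeros((4, 4, 3))   # existing image (all black, for illustration)
generated = np.ones((4, 4, 3))   # model output for the selection (all white)
mask = np.zeros((4, 4, 1))
mask[1:3, 1:3] = 1.0             # the user's selected area

result = composite_inpaint(original, generated, mask)
```

Real inpainting models also condition the generation on the surrounding pixels (that's the "context-aware" part); this sketch only shows the final compositing step.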


  • Hypersensitive robot hand is eerily human in how it can feel things

    news.movim.eu / ArsTechnica · Monday, 22 May - 16:53

Robotic fingers gripping a mirrored disco ball. (credit: Columbia University ROAM Lab)

From bionic limbs to sentient androids, robotic entities in science fiction blur the boundaries between biology and machine. Real-life robots are far behind in comparison. While we aren’t going to reach the level of Star Trek’s Data anytime soon, there is now a robot hand with a sense of touch that is almost human.

One thing robots have not been able to achieve is a level of sensitivity and dexterity high enough to feel and handle things as humans do. Enter a robot hand developed by a team of researchers at Columbia University. (We covered their work five years ago, when this achievement was still a concept.)

This hand doesn’t just pick things up and put them down on command. It is so sensitive that it can actually “feel” what it is touching, and it's dexterous enough to easily change the position of its fingers to better hold objects, a maneuver known as "finger gaiting." It can even do all this in the dark, figuring everything out by touch.


  • Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds

    news.movim.eu / ArsTechnica · Friday, 12 May - 15:44

An AI-generated image of a robot reading a book. (credit: Benj Edwards / Stable Diffusion)

On Thursday, AI company Anthropic announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words.

Like OpenAI's GPT-4 , Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory—how much human-provided input data an LLM can process at once.
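Using the article's figures (100,000 tokens ≈ 75,000 words), a rough back-of-the-envelope check of whether a text fits in the context window might look like this. The 4-tokens-per-3-words ratio is an approximation for English text, not an exact tokenizer:

```python
# Rule of thumb from the article's figures: 100,000 tokens ~ 75,000 words,
# i.e. roughly 4 tokens for every 3 words of English text.
def estimate_tokens(word_count: int) -> int:
    return round(word_count * 4 / 3)

def fits_in_context(word_count: int, context_window: int = 100_000) -> bool:
    return estimate_tokens(word_count) <= context_window

# The Great Gatsby is roughly 47,000 words:
print(fits_in_context(47_000))   # True: ~62,700 tokens
print(fits_in_context(90_000))   # False: ~120,000 tokens
```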

A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days," according to Anthropic.


  • OpenAI peeks into the “black box” of neural networks with new research

    news.movim.eu / ArsTechnica · Thursday, 11 May - 21:25

An AI-generated image of robots looking inside an artificial brain. (credit: Stable Diffusion)

On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.

While large language models (LLMs) are conquering the tech world, AI researchers still don't know a lot about their functionality and capabilities under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.
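OpenAI's pipeline asks GPT-4 to write a natural-language explanation of a neuron, simulates the neuron's activations based on that explanation, and then rates the explanation by how well the simulated activations match the real ones. A toy sketch of just that scoring step, with made-up activation values (the real method operates over text tokens at much larger scale):

```python
import numpy as np

def explanation_score(real: np.ndarray, simulated: np.ndarray) -> float:
    """Rate an explanation by the correlation between the neuron's real
    activations and activations simulated from the explanation."""
    return float(np.corrcoef(real, simulated)[0, 1])

real = np.array([0.1, 0.9, 0.2, 0.8, 0.0])       # neuron's actual activations
good_sim = np.array([0.2, 1.0, 0.3, 0.9, 0.1])   # tracks the real pattern
bad_sim = np.array([0.9, 0.1, 0.8, 0.2, 1.0])    # anti-correlated

print(explanation_score(real, good_sim))  # close to 1.0
print(explanation_score(real, bad_sim))   # strongly negative
```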


  • The AI race heats up: Google announces PaLM 2, its answer to GPT-4

    news.movim.eu / ArsTechnica · Thursday, 11 May - 19:20

The Google PaLM 2 logo. (credit: Google)

On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI's GPT-4. At its Google I/O event in Mountain View, California, Google revealed that it already uses PaLM 2 to power 25 products, including its Bard conversational AI assistant.

As a family of large language models (LLMs), PaLM 2 has been trained on an enormous volume of data and performs next-word prediction, outputting the most likely text to follow a human-provided prompt. PaLM stands for "Pathways Language Model," and "Pathways" is a machine-learning technique created at Google. PaLM 2 follows up on the original PaLM, which Google announced in April 2022.
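Next-word prediction itself is simple to sketch: the model assigns a score (logit) to every word in its vocabulary, and a softmax turns those scores into probabilities. A minimal illustration with a hypothetical three-word vocabulary and made-up scores (nothing here reflects PaLM 2's actual internals):

```python
import numpy as np

vocab = ["mat", "moon", "banana"]
logits = np.array([3.2, 1.1, -0.5])  # model scores for "The cat sat on the ..."

# Softmax: subtract the max for numerical stability, exponentiate, normalize.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(vocab[int(np.argmax(probs))])  # "mat" is the most likely next word
```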

According to Google, PaLM 2 supports over 100 languages and can perform "reasoning," code generation, and multilingual translation. During his 2023 Google I/O keynote, Google CEO Sundar Pichai said that PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is behind AI features in Docs, Sheets, and Slides.
