    OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter / ArsTechnica · 4 days ago - 17:12

An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.

    Built-in ChatGPT-driven Copilot will transform Windows 11 starting in June / ArsTechnica · Tuesday, 23 May - 17:08 · 1 minute

Windows Copilot is an AI-assisted feature coming to Windows 11 preview builds starting in June. (credit: Microsoft)

A couple of months ago, Microsoft added generative AI features to Windows 11 in the form of a taskbar-mounted version of the Bing chatbot. Starting this summer, the company will be going even further, adding a new ChatGPT-driven Copilot feature that can be used alongside your other Windows apps. The company announced the change at its Build developer conference alongside another new batch of Windows 11 updates due later this year. Windows Copilot will be available to Windows Insiders starting in June.

Like the Microsoft 365 Copilot, Windows Copilot is a separate window that opens up along the right side of your screen and assists with various tasks based on what you ask it to do. A Microsoft demo video shows Copilot changing Windows settings, rearranging windows with Snap Layouts, summarizing and rewriting documents that were dragged into it, and opening apps like Spotify, Adobe Express, and Teams. Copilot is launched with a dedicated button on the taskbar.

"Once open, the Windows Copilot side bar stays consistent across your apps, programs and windows, always available to act as your personal assistant. It makes every user a power user, helping you take action, customize your settings, and seamlessly connect across your favorite apps," wrote Microsoft Chief Product Officer Panos Panay.

    Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds / ArsTechnica · Friday, 12 May - 15:44

An AI-generated image of a robot reading a book. (credit: Benj Edwards / Stable Diffusion)

On Thursday, AI company Anthropic announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words.

Like OpenAI's GPT-4, Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory—how much human-provided input data an LLM can process at once.
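As a rough back-of-the-envelope illustration (not Anthropic's actual tokenizer), the relationship between the two figures above can be sanity-checked with a simple word-count heuristic; the 0.75 words-per-token constant comes from the article's own numbers:

```python
# Rough heuristic, not a real tokenizer: per the figures above,
# 100,000 tokens correspond to about 75,000 words of English text.
WORDS_PER_TOKEN = 0.75

def estimated_tokens(text: str) -> int:
    """Approximate a text's token count from its word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def fits_in_context(text: str, context_window: int = 100_000) -> bool:
    """Check whether a text would plausibly fit in the context window."""
    return estimated_tokens(text) <= context_window
```

By this estimate, a 75,000-word book comes out to about 100,000 tokens, right at the limit. Real tokenizers vary with language and vocabulary, so actual counts will differ.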

A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days," according to Anthropic:

    OpenAI peeks into the “black box” of neural networks with new research / ArsTechnica · Thursday, 11 May - 21:25

An AI-generated image of robots looking inside an artificial brain. (credit: Stable Diffusion)

On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
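The article doesn't reproduce the technique in detail, but its core shape is showing a stronger "explainer" model which tokens most activate a neuron and asking it to describe the pattern. In this sketch, `ask_explainer` is a hypothetical stand-in for the GPT-4 call, and the prompt format is illustrative, not OpenAI's actual one:

```python
def ask_explainer(prompt: str) -> str:
    # Stand-in: a real system would query a stronger model (e.g. GPT-4) here.
    return "This neuron seems to fire on superhero-related words."

def explain_neuron(token_activations: list[tuple[str, float]]) -> str:
    """Build an explanation request from (token, activation) pairs."""
    lines = [f"{tok}\t{act:.2f}" for tok, act in token_activations]
    prompt = ("Here are tokens and a neuron's activation on each.\n"
              "Describe in one sentence what the neuron responds to:\n"
              + "\n".join(lines))
    return ask_explainer(prompt)

explanation = explain_neuron([("Marvel", 0.91), ("comics", 0.88), ("the", 0.02)])
```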

While large language models (LLMs) are conquering the tech world, AI researchers still don't know a lot about their functionality and capabilities under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.

    “Meaningful harm” from AI necessary before regulation, says Microsoft exec / ArsTechnica · Thursday, 11 May - 19:48

(credit: HJBC | iStock Editorial / Getty Images Plus)

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, fellow panelist and CNN anchor Zain Asher stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"

"I would say yes," Schwarz said, likening regulating AI before "a little bit of harm" is caused to passing driver's license laws before people died in car accidents.

    The AI race heats up: Google announces PaLM 2, its answer to GPT-4 / ArsTechnica · Thursday, 11 May - 19:20

The Google PaLM 2 logo. (credit: Google)

On Wednesday, Google introduced PaLM 2, a family of foundational language models comparable to OpenAI's GPT-4. At its Google I/O event in Mountain View, California, Google revealed that it already uses PaLM 2 to power 25 products, including its Bard conversational AI assistant.

As a family of large language models (LLMs), PaLM 2 has been trained on an enormous volume of data and does next-word prediction, which outputs the most likely text after a prompt input by humans. PaLM stands for "Pathways Language Model," and "Pathways" is a machine-learning technique created at Google. PaLM 2 follows up on the original PaLM, which Google announced in April 2022.
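Next-word prediction itself is easy to demonstrate in miniature. The toy bigram model below simply picks the most frequent continuation seen in a tiny corpus; PaLM 2 does something vastly more sophisticated, but the input-to-most-likely-next-token shape is the same:

```python
from collections import Counter, defaultdict

# A tiny corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely word to follow `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than any other word
```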

According to Google, PaLM 2 supports over 100 languages and can perform "reasoning," code generation, and multilingual translation. During his 2023 Google I/O keynote, Google CEO Sundar Pichai said that PaLM 2 comes in four sizes: Gecko, Otter, Bison, and Unicorn. Gecko is the smallest and can reportedly run on a mobile device. Aside from Bard, PaLM 2 is behind AI features in Docs, Sheets, and Slides.

    Google’s ChatGPT-killer is now open to everyone, packing new features / ArsTechnica · Wednesday, 10 May - 20:16

The Google Bard logo at Google I/O. (credit: Google)

At Wednesday's Google I/O conference, Google announced wide availability of its ChatGPT-like AI assistant, Bard, in over 180 countries with no waitlist. It also announced updates such as support for Japanese and Korean, visual responses to queries, integration with Google services, and add-ons that will extend Bard's capabilities.

Similar to how OpenAI upgraded ChatGPT with GPT-4 after its launch, Bard is getting an upgrade under the hood. Google says that some of Bard's recent enhancements are powered by Google's new PaLM 2, a family of foundational large language models (LLMs) that have enabled "advanced math and reasoning skills" and better coding capabilities. Previously, Bard used Google's LaMDA AI model.

Google plans to add Google Lens integration to Bard, which will allow users to include photos and images in their prompts. On the Bard demo page, Google shows an example of uploading a photo of dogs and asking Bard to "write a funny caption about these two." Reportedly, Bard will analyze the photo, detect the dog breeds, and draft some amusing captions on demand.

    AI with a moral compass? Anthropic outlines “Constitutional AI” in its Claude chatbot / ArsTechnica · Tuesday, 9 May - 21:16

Anthropic's Constitutional AI logo on a glowing orange background. (credit: Anthropic / Benj Edwards)

On Tuesday, AI startup Anthropic detailed the specific principles of its "Constitutional AI" training approach that provides its Claude chatbot with explicit "values." It aims to address concerns about transparency, safety, and decision-making in AI systems without relying on human feedback to rate responses.

Claude is an AI chatbot similar to OpenAI's ChatGPT that Anthropic released in March.

"We’ve trained language models to be better at responding to adversarial questions, without becoming obtuse and saying very little," Anthropic wrote in a tweet announcing the paper. "We do this by conditioning them with a simple set of behavioral principles via a technique called Constitutional AI."
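Per Anthropic's description, the technique conditions the model on written principles rather than on per-response human ratings. A minimal sketch of that critique-and-revise loop, with `generate` as a hypothetical stand-in for a real model call and an illustrative (not Anthropic's actual) principle string, might look like:

```python
# Illustrative principle string; Anthropic's actual constitution
# contains many specific principles.
PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def generate(prompt: str) -> str:
    # Stand-in: a real system would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(question: str) -> str:
    """Draft a response, critique it against a principle, then revise."""
    draft = generate(question)
    critique = generate(
        f"Critique this response against the principle: {PRINCIPLE}\n"
        f"Response: {draft}")
    revised = generate(
        f"Rewrite the response to address the critique.\n"
        f"Critique: {critique}\nOriginal: {draft}")
    return revised
```

The revised responses can then be used as training data, so the model learns the principles without a human rating each individual output.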
