
      Google’s DeepMind finds 2.2M crystal structures in materials science win

      news.movim.eu / ArsTechnica · Wednesday, 29 November - 18:42

    The researchers identified novel materials by using machine learning to first generate candidate structures and then gauge their likely stability. (credit: Marilyn Sargent/Berkeley Lab)

    Google DeepMind researchers have discovered 2.2 million crystal structures that could unlock progress in fields from renewable energy to advanced computation, and that show the power of artificial intelligence to discover novel materials.

    The trove of theoretically stable but experimentally unrealized combinations identified using an AI tool known as GNoME is more than 45 times larger than the number of such substances unearthed in the history of science, according to a paper published in Nature on Wednesday.

    The researchers plan to make 381,000 of the most promising structures available to fellow scientists to make and test their viability in fields from solar cells to superconductors. The venture underscores how harnessing AI can shortcut years of experimental graft—and potentially deliver improved products and processes.
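
    As a rough illustration of the generate-then-screen workflow the researchers describe (propose candidate structures, then use a learned model to estimate stability), here is a minimal sketch in Python. All function names, the random scoring, and the threshold are placeholders invented for illustration; this is not DeepMind's GNoME code.

        import random

        def propose_candidates(known_structures, n=1000):
            # Stand-in for candidate generation, e.g. element substitution
            # in known crystal prototypes.
            return [f"{random.choice(known_structures)}-variant-{i}" for i in range(n)]

        def predict_energy_above_hull(structure):
            # Stand-in for a learned stability model returning a predicted
            # energy above the convex hull, in eV/atom.
            return random.uniform(-0.1, 0.5)

        def screen(known_structures, threshold=0.0):
            # Keep only candidates the model predicts to be at or below the hull.
            return [c for c in propose_candidates(known_structures)
                    if predict_energy_above_hull(c) <= threshold]

        if __name__ == "__main__":
            hits = screen(["NaCl", "LiFePO4", "GaN"])
            print(f"{len(hits)} candidates predicted stable")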



      Google’s RT-2 AI model brings us one step closer to WALL-E

      news.movim.eu / ArsTechnica · Friday, 28 July, 2023 - 21:32

    A Google robot controlled by RT-2. (credit: Google)

    On Friday, Google DeepMind announced Robotic Transformer 2 (RT-2), a "first-of-its-kind" vision-language-action (VLA) model that uses data scraped from the Internet to enable better robotic control through plain language commands. The ultimate goal is to create general-purpose robots that can navigate human environments, similar to fictional robots like WALL-E or C-3PO.

    When humans want to learn a task, we often read and observe. In a similar way, RT-2 utilizes a large language model (the tech behind ChatGPT) that has been trained on text and images found online. RT-2 uses this information to recognize patterns and perform actions even if the robot hasn't been specifically trained to do those tasks—a concept called generalization.

    For example, Google says that RT-2 can allow a robot to recognize and throw away trash without having been specifically trained to do so. It uses its understanding of what trash is and how it is usually disposed of to guide its actions. RT-2 even sees discarded food packaging or banana peels as trash, despite the potential ambiguity.
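
    To make the idea of a vision-language-action model concrete, here is a toy sketch of the interface such a policy exposes: an image and a plain-language instruction go in, a low-level robot action comes out. The Action fields and the placeholder policy are invented for illustration and bear no relation to RT-2's real architecture.

        import random
        from dataclasses import dataclass

        @dataclass
        class Action:
            dx: float        # end-effector translation deltas
            dy: float
            dz: float
            gripper: float   # 1.0 = close, 0.0 = open

        def vla_policy(image_pixels, instruction):
            # A real VLA model would jointly encode the image and the text and
            # decode action tokens; here we just return a deterministic toy action.
            rng = random.Random(instruction)
            return Action(rng.uniform(-1, 1), rng.uniform(-1, 1),
                          rng.uniform(-1, 1), gripper=1.0)

        action = vla_policy(image_pixels=[[0] * 64 for _ in range(64)],
                            instruction="pick up the banana peel and throw it away")
        print(action)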



      OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter

      news.movim.eu / ArsTechnica · Tuesday, 30 May, 2023 - 17:12

    An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

    On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

    The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

    High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.



      Stone-hearted researchers gleefully push over adorable soccer-playing robots

      news.movim.eu / ArsTechnica · Monday, 1 May, 2023 - 21:22 · 1 minute

    In a still from a DeepMind demo video, a researcher pushes a small humanoid robot to the ground. (credit: DeepMind)

    On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in efficient performance in a simulated one-on-one soccer game.

    But few paid attention to the details because to accompany the paper, the researchers also released a 27-second video showing one experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.

    DeepMind's "Robustness to pushes" demonstration video.

    On the demo website for "Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning," the researchers frame the merciless toppling of the robots as a key part of a "robustness to pushes" evaluation, writing, "Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way."
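
    The "basic regularization of the behavior during training" quoted above is the kind of reward shaping commonly used in deep reinforcement learning: the task reward is combined with penalty terms that discourage falling over and violent joint motion. The sketch below is an illustrative, made-up example of such a shaped reward, not the term structure used in the actual paper.

        def shaped_reward(scored_goal, torso_uprightness, joint_velocities,
                          w_goal=10.0, w_upright=0.5, w_smooth=0.01):
            # scored_goal: bool, did the agent score on this step
            # torso_uprightness: ~1.0 when standing, ~0.0 when fallen
            # joint_velocities: list of joint speeds; large values mean jerky motion
            smoothness_penalty = sum(v * v for v in joint_velocities)
            return (w_goal * float(scored_goal)
                    + w_upright * torso_uprightness
                    - w_smooth * smoothness_penalty)

        print(shaped_reward(scored_goal=True, torso_uprightness=0.9,
                            joint_velocities=[0.2, -0.4, 0.1]))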



      Google develops an AI that can learn both chess and Pac-Man

      John Timmer · news.movim.eu / ArsTechnica · Thursday, 24 December, 2020 - 13:00

    The first major conquest of artificial intelligence was chess. The game has a dizzying number of possible combinations, but it was relatively tractable because it was structured by a set of clear rules. An algorithm could always have perfect knowledge of the state of the game and know every possible move that both it and its opponent could make. The state of the game could be evaluated just by looking at the board.

    But many other games aren't that simple. If you take something like Pac-Man, then figuring out the ideal move would involve considering the shape of the maze, the location of the ghosts, the location of any additional areas to clear, the availability of power-ups, etc., and the best plan can end up in disaster if Blinky or Clyde makes an unexpected move. We've developed AIs that can tackle these games, too, but they have had to take a very different approach from the ones that conquered chess and Go.

    At least until now. Today, however, Google's DeepMind division published a paper describing the structure of an AI that can tackle both chess and Atari classics.
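
    The core idea that lets one system handle both kinds of game is planning with a learned model: instead of being given the rules, the agent learns a function that predicts what happens after each action, then searches over imagined futures. The toy sketch below shows that pattern with a fake dynamics function and a brute-force lookahead; the paper's actual algorithm is far more sophisticated.

        import random

        ACTIONS = ["up", "down", "left", "right"]

        def learned_dynamics(state, action):
            # Stand-in for a neural network predicting (next_state, reward)
            # without access to the game's real rules.
            return state + ACTIONS.index(action) + 1, random.uniform(-1, 1)

        def plan(state, depth=3):
            # Exhaustive lookahead over imagined futures; returns (value, action).
            if depth == 0:
                return 0.0, None
            best_value, best_action = float("-inf"), None
            for action in ACTIONS:
                next_state, reward = learned_dynamics(state, action)
                future_value, _ = plan(next_state, depth - 1)
                if reward + future_value > best_value:
                    best_value, best_action = reward + future_value, action
            return best_value, best_action

        value, action = plan(state=0)
        print(f"best first move: {action} (predicted return {value:.2f})")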



      Has Google just solved a 50-year-old problem in biology?

      François Manens · news.movim.eu / Numerama · Tuesday, 1 December, 2020 - 17:34

    DeepMind, a subsidiary of Alphabet (Google), has announced that its AlphaFold tool can predict the shape of proteins with unprecedented accuracy. AlphaFold could open new avenues in the understanding of proteins, and therefore of diseases and materials.



      DeepMind AI handles protein folding, which humbled previous software

      John Timmer · news.movim.eu / ArsTechnica · Monday, 30 November, 2020 - 22:10 · 1 minute

    Proteins rapidly form complicated structures which had proven difficult to predict. (credit: Argonne National Lab)

    Today, DeepMind announced that it had seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

    The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper, and has only made a blog post and some press releases available. But it clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.

    Between the folds

    To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.
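
    The paragraph above is easy to picture in code: a protein is just a string over a 20-letter amino-acid alphabet, and each letter carries side-chain properties that shape how the chain folds. The snippet below tabulates a handful of residues by a standard coarse classification (the table is abbreviated and the fragment analyzed is arbitrary).

        # One-letter amino-acid codes mapped to a coarse side-chain class.
        SIDE_CHAIN = {
            "D": "acidic", "E": "acidic",
            "K": "basic", "R": "basic", "H": "basic",
            "S": "polar", "T": "polar", "Q": "polar", "N": "polar",
            "A": "nonpolar", "L": "nonpolar", "V": "nonpolar", "I": "nonpolar", "G": "nonpolar",
        }

        def summarize(sequence):
            # Count residues in each property class along the chain.
            counts = {}
            for residue in sequence:
                kind = SIDE_CHAIN.get(residue, "other")
                counts[kind] = counts.get(kind, 0) + 1
            return counts

        print(summarize("MKTAYIAKQR"))  # an arbitrary 10-residue fragment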

