
      Two ways of performing massively parallel AI calculations using light

      John Timmer · news.movim.eu / ArsTechnica · Thursday, 7 January, 2021 - 20:48 · 1 minute

    Image: The output of two optical frequency combs, showing the light appearing at evenly spaced wavelengths. (credit: ESO)

    AI and machine-learning techniques have become a major focus of everything from cloud computing services to cell phone manufacturers. Unfortunately, our existing processors are a bad match for the sort of algorithms that many of these techniques are based on, in part because they require frequent round trips between the processor and memory. To deal with this bottleneck, researchers have figured out how to perform calculations in memory and designed chips where each processing unit has a bit of memory attached.
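    To make the bottleneck concrete: the core of most neural-network inference is a matrix-vector multiply, in which the same stored weights must be fetched from memory over and over. A toy NumPy sketch (purely illustrative; the variable names are ours, not from either paper):

```python
import numpy as np

# One neural-network layer is essentially a matrix-vector product:
# every output element needs a full row of stored weights, so the
# weights are fetched from memory again and again on each call --
# the round trips that compute-in-memory designs try to eliminate.
rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 256))  # stored parameters
inputs = rng.normal(size=256)          # incoming activations

outputs = weights @ inputs             # shape (512,)
```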

    Now, two different teams of researchers have figured out ways of performing calculations with light in a way that both merges memory and calculations and allows for massive parallelism. Despite the differences in implementation, the hardware designed by these teams has a common feature: it allows the same piece of hardware to simultaneously perform different calculations using different frequencies of light. While they're not yet at the level of performance of some dedicated processors, the approach can scale easily and can be implemented using on-chip hardware, raising the prospect of using it as a dedicated co-processor.
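    As a rough mental model of that wavelength parallelism, imagine one fixed set of weights shared by several data streams, one per comb frequency. The sketch below simply batches the channels to show that they are independent; in the optical hardware they genuinely run at once (this is our illustration, not either team's design):

```python
import numpy as np

# Toy model of wavelength multiplexing: one shared set of "hardware"
# weights, several input vectors riding on different comb frequencies.
# Each channel carries its own data through the same physical element,
# so N channels yield N independent matrix-vector products in one pass.
rng = np.random.default_rng(1)
shared_weights = rng.normal(size=(4, 8))  # one physical weight array
channels = rng.normal(size=(3, 8))        # one input vector per wavelength

# The optics do this simultaneously; batching here just shows that
# the channels never interact.
results = channels @ shared_weights.T     # shape (3, 4): one output per channel
```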

    A fine-toothed comb

    The new work relies on hardware called a frequency comb, a technology that won some of its creators the 2005 Nobel Prize in Physics. While a lot of interesting physics is behind how the combs work (which you can read more about here if you're curious), what we care about is the outcome of that physics. While there are several ways to produce a frequency comb, they all produce the same thing: a beam of light that is composed of evenly spaced frequencies. So, a frequency comb in visible wavelengths might be composed of light with a wavelength of 500 nanometers, 510nm, 520nm, and so on.
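    The spacing is the whole point: the comb's teeth form an arithmetic progression, so the example above is one line of arithmetic (real combs are characterized by an offset frequency and a repetition rate; this is just the article's simplified picture):

```python
# The comb from the article's example: a tooth every 10 nm from 500 nm.
start_nm, spacing_nm, teeth = 500, 10, 6
wavelengths = [start_nm + n * spacing_nm for n in range(teeth)]
print(wavelengths)  # [500, 510, 520, 530, 540, 550]
```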



      Google develops an AI that can learn both chess and Pac-Man

      John Timmer · news.movim.eu / ArsTechnica · Thursday, 24 December, 2020 - 13:00

    The first major conquest of artificial intelligence was chess. The game has a dizzying number of possible combinations, but it was relatively tractable because it was structured by a set of clear rules. An algorithm could always have perfect knowledge of the state of the game and know every possible move that both it and its opponent could make. The state of the game could be evaluated just by looking at the board.
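    In algorithmic terms, that perfect knowledge is what makes exhaustive look-ahead such as minimax possible. A generic sketch (not any particular chess engine's code):

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustive look-ahead, which is only possible because the full
    state and every legal move are known to both players."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

# Trivial demo "game": players alternately add -1 or +1 to a counter,
# and the maximizer wants the final counter as large as possible.
print(minimax(0, depth=2, maximizing=True,
              moves=lambda s: [-1, 1],
              apply_move=lambda s, m: s + m,
              evaluate=lambda s: s))  # 0: the minimizer cancels the gain
```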

    But many other games aren't that simple. If you take something like Pac-Man, then figuring out the ideal move would involve considering the shape of the maze, the location of the ghosts, the location of any additional areas to clear, the availability of power-ups, etc., and the best plan can end up in disaster if Blinky or Clyde makes an unexpected move. We've developed AIs that can tackle these games, too, but they have had to take a very different approach from the ones that conquered chess and Go.

    At least until now. Today, however, Google's DeepMind division published a paper describing the structure of an AI that can tackle both chess and Atari classics.
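    The article doesn't detail the architecture, but the broad idea of pairing a learned model with planning can be sketched schematically. Every function below is a stand-in of ours, not DeepMind's API: the agent maps an observation to a latent state, imagines transitions with a learned dynamics function, and picks the action whose imagined rollout scores best.

```python
# Schematic of planning with a learned model: instead of a simulator of
# the real game, the agent learns its own latent dynamics and plans in
# that latent space. All functions here are toy stand-ins.

def represent(observation):    # observation -> latent state
    return observation

def dynamics(latent, action):  # latent transition + predicted reward
    return latent + action, 0.0

def predict(latent):           # value estimate for a latent state
    return -abs(latent)        # toy goal: steer the latent toward 0

def plan(observation, actions, depth=2):
    """Pick the action whose imagined rollout scores best."""
    root = represent(observation)
    def rollout(latent, d):
        if d == 0:
            return predict(latent)
        return max(rollout(dynamics(latent, a)[0], d - 1) for a in actions)
    return max(actions, key=lambda a: rollout(dynamics(root, a)[0], depth - 1))

print(plan(observation=3, actions=[-1, 0, 1]))  # -1: moves the latent toward 0
```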



      DeepMind AI handles protein folding, which humbled previous software

      John Timmer · news.movim.eu / ArsTechnica · Monday, 30 November, 2020 - 22:10 · 1 minute

    Image: Proteins rapidly form complicated structures which had proven difficult to predict. (credit: Argonne National Lab)

    Today, DeepMind announced that it had seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

    The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper, and has only made a blog post and some press releases available. But it clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.

    Between the folds

    To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.
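    A toy encoding of those side-chain properties makes the point. The handful of residues and traits below are standard biochemistry, simplified for illustration; the function names are ours:

```python
# A few of the 20 standard amino acids with the side-chain traits the
# article describes (charged or neutral, acidic or basic, polar or not).
RESIDUES = {
    "D": {"name": "aspartate", "charge": -1, "polar": True},   # acidic
    "K": {"name": "lysine",    "charge": +1, "polar": True},   # basic
    "S": {"name": "serine",    "charge":  0, "polar": True},
    "L": {"name": "leucine",   "charge":  0, "polar": False},  # hydrophobic
}

def net_charge(sequence):
    """Sum the side-chain charges along the backbone-linked chain."""
    return sum(RESIDUES[aa]["charge"] for aa in sequence)

print(net_charge("KDLSK"))  # +1: two basic, one acidic, two neutral
```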



      D-Wave releases its next-generation quantum annealing chip

      John Timmer · news.movim.eu / ArsTechnica · Tuesday, 29 September, 2020 - 18:13 · 1 minute

    Image: A chip surrounded by complicated support hardware.

    Today, quantum computing company D-Wave is announcing the availability of its next-generation quantum annealer, a specialized processor that uses quantum effects to solve optimization and minimization problems. The hardware itself isn't much of a surprise—D-Wave was discussing its details months ago —but D-Wave talked with Ars about the challenges of building a chip with over a million individual quantum devices. And the company is coupling the hardware's release to the availability of a new software stack that functions a bit like middleware between the quantum hardware and classical computers.

    Quantum annealing

    Quantum computers being built by companies like Google and IBM are general purpose, gate-based machines. They can solve any problem and should show a vast acceleration for specific classes of problems. Or they will, as soon as the gate count gets high enough. Right now, these quantum computers are limited to a few dozen qubits and have no error correction. Bringing them up to the scale needed presents a series of difficult technical challenges.

    D-Wave's machine is not general-purpose; it's technically a quantum annealer, not a quantum computer. It performs calculations that find low-energy states for different configurations of the hardware's quantum devices. As such, it will only work if a computing problem can be translated into an energy-minimization problem in one of the chip's possible configurations. That's not as limiting as it might sound, since many forms of optimization can be translated to an energy minimization problem, including things like complicated scheduling issues and protein structures.
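    For a sense of what such a translation looks like, here is a tiny energy-minimization problem of the general form an annealer accepts, brute-forced classically. The coefficients are invented for illustration; on real hardware they would be programmed into qubit biases and couplers:

```python
from itertools import product

# Tiny energy-minimization problem of the kind an annealer solves:
# E(x) = sum_i h_i * x_i + sum_{i<j} J_ij * x_i * x_j, with x_i in {0, 1}.
# The h and J coefficients below are made up for illustration.
h = [1.0, -2.0, 1.5]
J = {(0, 1): -1.0, (1, 2): 2.0}

def energy(x):
    return sum(hi * xi for hi, xi in zip(h, x)) + \
           sum(Jij * x[i] * x[j] for (i, j), Jij in J.items())

# Classically we can brute-force 3 bits; the annealer instead relaxes
# its quantum devices into a low-energy configuration directly.
best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (0, 1, 0) -2.0
```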

