
      Quantum computing progress: Higher temps, better error correction

      news.movim.eu / ArsTechnica · Wednesday, 27 March - 22:24 · 1 minute

    Conceptual graphic of symbols representing quantum states floating above a stylized computer chip. (credit: vital)

    There's a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have each committed to a different technology, while a collection of startups are exploring an even wider range of potential solutions.

    We probably won't have a clearer picture of what's likely to work for a few years. But there's going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we're going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.

    Hot stuff

    Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we'd need tens of thousands of hardware qubits before we could do anything practical.
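
    To see the pattern in miniature, here is a toy Python sketch built around a classical three-bit repetition code standing in for a logical qubit: one bit of information is spread across three "hardware" bits, parity checks play the part of the monitoring qubits by revealing where an error sits without reading out the data, and a majority vote applies the fix. Real quantum codes such as the surface code have to protect quantum states and need far more qubits, so treat this as a cartoon of the redundancy-plus-monitoring idea, not a working error-correction scheme.

        import random

        def encode(bit):
            # Spread one "logical" bit across three "hardware" bits.
            return [bit, bit, bit]

        def apply_noise(bits, p_flip=0.05):
            # Each hardware bit independently suffers a flip with probability p_flip.
            return [b ^ 1 if random.random() < p_flip else b for b in bits]

        def syndrome(bits):
            # Parity checks between neighboring bits: the job of the extra
            # monitoring qubits. They locate an error without revealing the data.
            return (bits[0] ^ bits[1], bits[1] ^ bits[2])

        def correct(bits):
            s = syndrome(bits)
            if s == (1, 0):
                bits[0] ^= 1   # first bit flipped
            elif s == (1, 1):
                bits[1] ^= 1   # middle bit flipped
            elif s == (0, 1):
                bits[2] ^= 1   # last bit flipped
            return bits

        def decode(bits):
            # Majority vote recovers the logical bit.
            return 1 if sum(bits) >= 2 else 0

        # The protected bit survives single flips that would ruin a bare bit.
        trials = 100_000
        bare_errors = sum(apply_noise([0])[0] for _ in range(trials))
        logical_errors = sum(decode(correct(apply_noise(encode(0)))) for _ in range(trials))
        print(f"bare error rate:    {bare_errors / trials:.4f}")
        print(f"logical error rate: {logical_errors / trials:.4f}")

    Running it shows the corrected version failing far less often than a single unprotected bit, which is exactly the trade the hardware makes: many physical qubits spent to make one logical qubit trustworthy.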


      IBM adds error correction to updated quantum computing roadmap

      news.movim.eu / ArsTechnica · Monday, 4 December - 15:40 · 1 minute

    The family portrait of IBM's quantum processors, with the two new arrivals (Heron and Condor) at right. (credit: IBM)

    On Monday, IBM announced that it has produced the two quantum systems that its roadmap had slated for release in 2023. One of these is based on a chip named Condor, which is the largest transmon-based quantum processor yet released, with 1,121 functioning qubits. The second is based on a combination of three Heron chips, each of which has 133 qubits. Smaller chips like Heron and its successor, Flamingo, will play a critical role in IBM's quantum roadmap—which also got a major update today.

    Based on the update, IBM will have error-corrected qubits working by the end of the decade, enabled by improvements to individual qubits made over several iterations of the Flamingo chip. While these systems probably won't place things like existing encryption schemes at risk, they should be able to reliably execute quantum algorithms that are far more complex than anything we can do today.

    We talked with IBM's Jay Gambetta about everything the company is announcing today, including existing processors, future roadmaps, what the machines might be used for over the next few years, and the software that makes it all possible. But to understand what the company is doing, we have to back up a bit to look at where the field as a whole is moving.


      What does it take to get AI to work like a scientist?

      news.movim.eu / ArsTechnica · Tuesday, 8 August, 2023 - 18:27

    Digitally generated image of glowing dots connected into a brain icon inside an abstract digital space. (credit: Andriy Onufriyenko)

    As machine-learning algorithms grow more sophisticated, artificial intelligence seems poised to revolutionize the practice of science itself. In part, this will come from the software enabling scientists to work more effectively. But some advocates are hoping for a fundamental transformation in the process of science. The Nobel Turing Challenge, issued in 2021 by noted computer scientist Hiroaki Kitano, tasked the scientific community with producing a computer program capable of making a discovery worthy of a Nobel Prize by 2050.

    Part of the work of scientists is to uncover laws of nature—basic principles that distill the fundamental workings of our Universe. Many of them, like Newton’s laws of motion or the law of conservation of mass in chemical reactions, are expressed in a rigorous mathematical form. Others, like the law of natural selection or Mendel’s law of genetic inheritance, are more conceptual.

    The scientific community consists of theorists, data analysts, and experimentalists who collaborate to uncover these laws. The dream behind the Nobel Turing Challenge is to offload the tasks of all three onto artificial intelligence.


      Is distributed computing dying, or just fading into the backdrop?

      news.movim.eu / ArsTechnica · Tuesday, 11 July, 2023 - 13:44 · 1 minute

    A series of bar graphs in multiple colors; this image has a warm, nostalgic feel for many of us. (credit: SETI Institute)

    Distributed computing erupted onto the scene in 1999 with the release of SETI@home, a nifty program and screensaver (back when people still used those) that sifted through radio telescope signals for signs of alien life.

    The concept of distributed computing is simple enough: You take a very large project, slice it up into pieces, and send out individual pieces to PCs for processing. There is no inter-PC connection or communication; it’s all done through a central server. Each piece of the project is independent of the others; a distributed computing project wouldn't work if a process needed the results of a prior process to continue. SETI@home was a prime candidate for distributed computing: Each individual work unit was a unique moment in time and space as seen by a radio telescope.
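
    The whole arrangement fits in a few lines of code. The sketch below is a toy, single-machine illustration of that model, not SETI@home's actual protocol: a central "server" slices one big job into independent work units, hands them to workers that never talk to one another, and is the only place where the results come back together.

        from multiprocessing import Pool

        def make_work_units(data, size):
            # The central server slices one large job into independent pieces.
            return [data[i:i + size] for i in range(0, len(data), size)]

        def process_unit(unit):
            # A volunteer PC crunches its piece in isolation; no unit depends
            # on any other, and results flow only back to the server.
            return sum(x * x for x in unit)

        if __name__ == "__main__":
            job = list(range(1_000_000))          # the "very large project"
            units = make_work_units(job, 10_000)  # 100 independent work units

            with Pool() as workers:               # stand-ins for volunteer PCs
                partial_results = workers.map(process_unit, units)

            # Only the server combines the pieces.
            print("combined result:", sum(partial_results))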

    Twenty-one years later, SETI@home shut down, having found nothing. An incalculable number of PC cycles and an untold amount of electricity had been spent for nothing. We have no way of knowing all the reasons people quit (feel free to tell us in the comments section), but having nothing to show for it is a pretty good reason.


      Hypersensitive robot hand is eerily human in how it can feel things

      news.movim.eu / ArsTechnica · Monday, 22 May, 2023 - 16:53

    Robotic fingers gripping a mirrored disco ball with light reflected off it. (credit: Columbia University ROAM Lab)

    From bionic limbs to sentient androids, robotic entities in science fiction blur the boundaries between biology and machine. Real-life robots are far behind in comparison. While we aren’t going to reach the level of Star Trek’s Data anytime soon, there is now a robot hand with a sense of touch that is almost human.

    One thing robots have not been able to achieve is a level of sensitivity and dexterity high enough to feel and handle things as humans do. Enter a robot hand developed by a team of researchers at Columbia University. (We covered their work five years ago, back when this achievement was still just a concept.)

    This hand doesn’t just pick things up and put them down on command. It is sensitive enough to actually “feel” what it is touching, and dexterous enough to easily change the position of its fingers so it can better hold objects, a maneuver known as "finger gaiting." It can even do all of this in the dark, figuring everything out by touch.


      Large language models also work for protein structures

      news.movim.eu / ArsTechnica · Thursday, 16 March, 2023 - 19:01 · 1 minute

    Artist's rendering of a collection of protein structures floating in space. (credit: Christoph Burgstedt/Science Photo Library)

    The success of ChatGPT and its competitors is based on what's termed emergent behavior. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts. Their complex behavior emerged from far simpler training.
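
    "Tracking the statistics of word usage" boils down to learning, for each context, which token tends to come next. The toy below builds a one-word-of-context (bigram) table from a few sentences and samples continuations from it; a real LLM replaces the table with a neural network conditioned on far longer contexts, but the training signal is the same in spirit.

        import random
        from collections import Counter, defaultdict

        corpus = (
            "quantum computers need error correction . "
            "large language models track the statistics of word usage . "
            "large training sets make complex behavior emerge ."
        ).split()

        # Count, for every word, which words follow it and how often.
        following = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            following[current][nxt] += 1

        def next_word(word):
            # Sample the next token in proportion to how often it followed `word`.
            counts = following[word]
            return random.choices(list(counts), weights=counts.values())[0]

        # Generate a short continuation from a seed word.
        word, output = "large", ["large"]
        for _ in range(6):
            word = next_word(word)
            output.append(word)
        print(" ".join(output))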

    A team at Meta has now reasoned that this sort of emergent understanding shouldn't be limited to languages. So it has trained an LLM on the statistics of the appearance of amino acids within proteins and used the system's internal representation of what it learned to extract information about the structure of those proteins. The result is not quite as good as the best competing AI systems for predicting protein structures, but it's considerably faster and still getting better.

    LLMs: Not just for language

    The first thing you need to know to understand this work is that the "language" in the name "LLM" refers only to the models' original development for language processing tasks. The "Large" is far more informative, in that all LLMs have a large number of nodes (the "neurons" in a neural network) and an even larger number of values that describe the weights of the connections among those nodes. While they were first developed to process language, they can potentially be put to work on a variety of other tasks.
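
    To put "an even larger number of values" in perspective, here's a back-of-the-envelope sketch with made-up layer sizes (not any real model's architecture), counting nodes versus connection weights in a small fully connected network:

        # Hypothetical fully connected network; the layer sizes are illustrative only.
        layer_sizes = [512, 2048, 2048, 2048, 512]

        nodes = sum(layer_sizes)
        # Every node in one layer connects to every node in the next,
        # and each connection carries its own weight.
        weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

        print(f"nodes:   {nodes:,}")    # 7,168
        print(f"weights: {weights:,}")  # 10,485,760

    Even this modest toy carries over a thousand weights per node; production LLMs push the same ratio into the billions of parameters.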


      Do better coders swear more, or does C just do that to good programmers?

      news.movim.eu / ArsTechnica · Tuesday, 14 March, 2023 - 18:35

    A person screaming at his computer. (credit: dasilvafa)

    Ever find yourself staring at a tricky coding problem and thinking, “shit”?

    If those thoughts make their way into your code or the associated comments, you’re in good company. When undergraduate student Jan Strehmel from Karlsruhe Institute of Technology analyzed open source code written in the programming language C, he found no shortage of obscenity. While that might be expected, Strehmel’s overall finding might not be: The average quality of code containing swears was significantly higher than the average quality of code that did not.
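
    Strehmel's exact pipeline and quality metric aren't described in this excerpt, but the shape of such an analysis is easy to sketch: flag C source files that contain any word from a profanity list, score every file with some quality metric, and compare the two groups' averages. Everything below, from the word list to the scoring function, is a placeholder rather than the thesis's actual method.

        from pathlib import Path
        from statistics import mean

        SWEARS = {"shit", "damn", "crap", "hell"}  # placeholder list

        def contains_swear(source: str) -> bool:
            words = source.lower().split()
            return any(w.strip('.,;:!?()"') in SWEARS for w in words)

        def quality_score(source: str) -> float:
            # Hypothetical stand-in metric: fraction of lines carrying a comment.
            lines = source.splitlines() or [""]
            commented = sum(1 for line in lines if "//" in line or "/*" in line)
            return commented / len(lines)

        def compare(repo_root: str) -> None:
            sweary, clean = [], []
            for path in Path(repo_root).rglob("*.c"):
                source = path.read_text(errors="ignore")
                (sweary if contains_swear(source) else clean).append(quality_score(source))
            if sweary and clean:
                print(f"mean score, files with swears:    {mean(sweary):.3f}")
                print(f"mean score, files without swears: {mean(clean):.3f}")

        compare("path/to/c/sources")  # point it at any directory of C files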

    “The results are quite surprising!” Strehmel said. Programmers and scientists may have a lot of follow-up questions. Are the researchers sure there aren’t certain profanity-prone programmers skewing the results? What about other programming languages? And, most importantly, why would swears correlate with high-quality code? The work is ongoing, but even without all the answers, one thing’s for sure: Strehmel just wrote one hell of a bachelor’s thesis.


      Is the future of computing biological?

      news.movim.eu / ArsTechnica · Wednesday, 1 March, 2023 - 16:30

    Neurons glowing blue against a black background. (credit: Andriy Onufriyenko)

    Trying to make computers more like human brains isn’t a new phenomenon. However, a team of researchers from Johns Hopkins University argues that there could be many benefits in taking this concept a bit more literally by using actual neurons, though there are some hurdles to clear before we get there.

    In a recent paper, the team laid out a roadmap of what's needed before we can create biocomputers powered by human brain cells (not taken from human brains, though). Further, according to one of the researchers, there are some clear benefits the proposed “organoid intelligence” would have over current computers.

    “We have always tried to make our computers more brain-like,” Thomas Hartung, a researcher at Johns Hopkins University’s Environmental Health and Engineering department and one of the paper’s authors, told Ars. “At least theoretically, the brain is essentially unmatched as a computer.”


      Programming a robot to teach itself how to move

      John Timmer · news.movim.eu / ArsTechnica · Tuesday, 11 May, 2021 - 16:19 · 1 minute

    The robotic train: three small pieces of hardware connected by tubes. (credit: Oliveri et al.)

    One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and StarCraft. Given that self-teaching capability, it's tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we're not there yet. But it should be much easier with a simpler system, right?

    Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

    Roving robots

    The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.
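
    The paper's learning algorithm isn't detailed in this excerpt, but the learn-by-doing idea can be sketched as a simple trial-and-error loop: each unit inflates and deflates on a repeating cycle with some phase offset, the robot measures how far a candidate set of offsets carries it, keeps changes that help, and discards the rest. The displacement function below is a noisy, made-up stand-in for the friction-dominated physics the real robot had to contend with.

        import math
        import random

        N_UNITS = 3
        random.seed(0)

        def measure_displacement(phases):
            # Stand-in for an actual trial run: displacement is best when
            # neighboring units pump out of phase (a crude wave gait), plus
            # noise for the messiness of the real world.
            ideal = sum(math.sin(phases[i + 1] - phases[i]) for i in range(N_UNITS - 1))
            return ideal + random.gauss(0, 0.1)

        def learn_by_doing(trials=200, step=0.3):
            phases = [0.0] * N_UNITS                # start with every pump in sync
            best = measure_displacement(phases)
            for _ in range(trials):
                candidate = [p + random.uniform(-step, step) for p in phases]
                score = measure_displacement(candidate)
                if score > best:                    # keep changes that went farther
                    phases, best = candidate, score
            return phases, best

        phases, best = learn_by_doing()
        print("learned phase offsets:", [round(p, 2) for p in phases])
        print("displacement per cycle:", round(best, 2))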
