
    What does it take to get AI to work like a scientist? / ArsTechnica · Tuesday, 8 August - 18:27

[Image: glowing dots connected into a brain icon inside abstract digital space. Credit: Andriy Onufriyenko]

As machine-learning algorithms grow more sophisticated, artificial intelligence seems poised to revolutionize the practice of science itself. In part, this will come from the software enabling scientists to work more effectively. But some advocates are hoping for a fundamental transformation in the process of science. The Nobel Turing Challenge, issued in 2021 by noted computer scientist Hiroaki Kitano, tasked the scientific community with producing a computer program capable of making a discovery worthy of a Nobel Prize by 2050.

Part of the work of scientists is to uncover laws of nature—basic principles that distill the fundamental workings of our Universe. Many of them, like Newton’s laws of motion or the law of conservation of mass in chemical reactions, are expressed in a rigorous mathematical form. Others, like the law of natural selection or Mendel’s law of genetic inheritance, are more conceptual.
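Law-finding of the mathematical kind can be pictured with a toy fit: given orbital data, a simple regression recovers the exponent in Kepler's third law. The rounded planetary values and the log-log fitting approach below are just an illustrative stand-in for automated discovery, not any system proposed for the challenge.

```python
import numpy as np

# Toy illustration of "rediscovering" Kepler's third law (T^2 ∝ a^3)
# from data alone. Semi-major axes (AU) and orbital periods (years)
# for six planets, rounded.
a = np.array([0.387, 0.723, 1.0, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.0, 1.881, 11.862, 29.457])

# A power law T = k * a^p is linear in log space:
# log T = p * log a + log k, so a straight-line fit recovers p.
p, log_k = np.polyfit(np.log(a), np.log(T), 1)
print(round(p, 2))  # exponent close to 1.5, i.e. T^2 ∝ a^3
```

A real discovery system would also have to propose which variables matter in the first place; here that choice was handed to the fit.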

The scientific community consists of theorists, data analysts, and experimentalists who collaborate to uncover these laws. The dream behind the Nobel Turing Challenge is to offload the tasks of all three onto artificial intelligence.



    Is distributed computing dying, or just fading into the backdrop? / ArsTechnica · Tuesday, 11 July - 13:44 · 1 minute

[Image: bar graphs in multiple colors. Caption: This image has a warm, nostalgic feel for many of us. Credit: SETI Institute]

Distributed computing erupted onto the scene in 1999 with the release of SETI@home, a nifty program and screensaver (back when people still used those) that sifted through radio telescope signals for signs of alien life.

The concept of distributed computing is simple enough: You take a very large project, slice it up into pieces, and send out individual pieces to PCs for processing. There is no inter-PC connection or communication; it’s all done through a central server. Each piece of the project is independent of the others; a distributed computing project wouldn't work if a process needed the results of a prior process to continue. SETI@home was a prime candidate for distributed computing: Each individual work unit was a unique moment in time and space as seen by a radio telescope.
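The work-unit model described above can be sketched in a few lines. Everything here is illustrative (the chunking, the per-client computation, and the queue standing in for a central server); the point is only that units are independent, so results can come back in any order.

```python
from queue import Queue

def make_work_units(data, chunk_size):
    """Slice the full dataset into independent work units."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def client_process(unit):
    """Stand-in for per-PC analysis; units never talk to each other."""
    return sum(x * x for x in unit)

# "Server" side: enqueue units, then collect results as clients finish.
pending = Queue()
for unit in make_work_units(list(range(100)), chunk_size=10):
    pending.put(unit)

results = []
while not pending.empty():
    results.append(client_process(pending.get()))

print(sum(results))  # same total regardless of processing order
```

Because no unit depends on another's output, a slow or vanished client costs nothing but a reissued work unit, which is what made the model practical over dial-up-era internet connections.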

Twenty-one years later, SETI@home shut down, having found nothing: an incalculable number of CPU cycles and a great deal of electricity spent without a single detection. We have no way of knowing all the reasons people quit (feel free to tell us in the comments section), but having nothing to show for the effort is a pretty good one.



    Hypersensitive robot hand is eerily human in how it can feel things / ArsTechnica · Monday, 22 May - 16:53

[Image: robotic fingers gripping a mirrored disco ball. Credit: Columbia University ROAM Lab]

From bionic limbs to sentient androids, robotic entities in science fiction blur the boundaries between biology and machine. Real-life robots are far behind in comparison. While we aren’t going to reach the level of Star Trek’s Data anytime soon, there is now a robot hand with a sense of touch that is almost human.

One thing robots have not been able to achieve is a level of sensitivity and dexterity high enough to feel and handle things as humans do. Enter a robot hand developed by a team of researchers at Columbia University. (We covered their work five years ago, when this achievement was still only a concept.)

This hand doesn’t just pick things up and put them down on command. It is sensitive enough to actually “feel” what it is touching, and dexterous enough to easily reposition its fingers to better hold objects, a maneuver known as "finger gaiting." It can even do all this in the dark, figuring everything out by touch.
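The basic idea of grasping by feel can be caricatured as a feedback loop: squeeze until the tactile sensors stop reporting slip. The sketch below is a made-up toy, not the Columbia controller; the sensor function, thresholds, and step sizes are all assumptions for illustration.

```python
# Hypothetical touch-driven grip loop: tighten until the slip signal
# falls below a threshold, so the hand holds by feel, not by sight.
def grip_by_touch(read_slip, force=0.1, step=0.05, max_force=1.0,
                  slip_threshold=0.02):
    """read_slip(force) -> slip magnitude reported by tactile sensors."""
    while read_slip(force) > slip_threshold and force < max_force:
        force += step  # squeeze a little harder
    return force

# Fake sensor for the demo: slip shrinks as grip force grows.
final = grip_by_touch(lambda f: max(0.0, 0.5 - f))
print(round(final, 2))
```

A real tactile hand closes this loop hundreds of times a second and uses the same signals to decide when to walk its fingers to a new grip.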



    Large language models also work for protein structures / ArsTechnica · Thursday, 16 March, 2023 - 19:01 · 1 minute

[Image: artist's rendering of a collection of protein structures floating in space]


The success of ChatGPT and its competitors is based on what's termed emergent behaviors. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts. Their complex behavior emerged from a far simpler training.
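"Tracking the statistics of word usage" can be seen in miniature with a bigram counter: record which word follows which, then predict the most common continuation. Real LLMs do this with neural networks at vastly larger scale; nothing below is their actual architecture, just the statistical idea at its smallest.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model sees trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the raw statistics of usage.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Most likely word after "the", purely from counts:
print(follows["the"].most_common(1)[0][0])  # "cat"
```

The leap from this to an LLM is replacing the count table with a neural network that compresses the statistics into an internal representation, which is where the emergent behavior comes from.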

A team at Meta has now reasoned that this sort of emergent understanding shouldn't be limited to languages. So it has trained an LLM on the statistics of the appearance of amino acids within proteins and used the system's internal representation of what it learned to extract information about the structure of those proteins. The result is not quite as good as the best competing AI systems for predicting protein structures, but it's considerably faster and still getting better.

LLMs: Not just for language

The first thing you need to know to understand this work is that, while the "language" in "LLM" refers to the models' original development for language-processing tasks, they can potentially be used for a variety of purposes. In fact, the term "large" is far more informative: all LLMs have a large number of nodes—the "neurons" in a neural network—and an even larger number of values that describe the weights of the connections among those nodes.



    Do better coders swear more, or does C just do that to good programmers? / ArsTechnica · Tuesday, 14 March, 2023 - 18:35

[Image: a person screaming at his computer. Credit: dasilvafa]

Ever find yourself staring at a tricky coding problem and thinking, “shit”?

If those thoughts make their way into your code or the associated comments, you’re in good company. When undergraduate student Jan Strehmel from Karlsruhe Institute of Technology analyzed open source code written in the programming language C, he found no shortage of obscenity. While that might be expected, Strehmel’s overall finding might not be: The average quality of code containing swears was significantly higher than the average quality of code that did not.

“The results are quite surprising!” Strehmel said. Programmers and scientists may have a lot of follow-up questions. Are the researchers sure there aren’t certain profanity-prone programmers skewing the results? What about other programming languages? And, most importantly, why would swears correlate with high-quality code? The work is ongoing, but even without all the answers, one thing’s for sure: Strehmel just wrote one hell of a bachelor’s thesis.



    Is the future of computing biological? / ArsTechnica · Wednesday, 1 March, 2023 - 16:30

[Image: neurons glowing blue against a black background. Credit: Andriy Onufriyenko]

Trying to make computers more like human brains isn’t a new phenomenon. However, a team of researchers from Johns Hopkins University argues that there could be many benefits in taking the concept a bit more literally by using actual neurons, though there are some hurdles to clear before we get there.

In a recent paper, the team laid out a roadmap of what's needed before we can create biocomputers powered by human brain cells (not taken from human brains, though). Further, according to one of the researchers, the proposed “organoid intelligence” would have some clear benefits over current computers.

“We have always tried to make our computers more brain-like,” Thomas Hartung, a researcher at Johns Hopkins University’s Environmental Health and Engineering department and one of the paper’s authors, told Ars. “At least theoretically, the brain is essentially unmatched as a computer.”



    Programming a robot to teach itself how to move / ArsTechnica · Tuesday, 11 May, 2021 - 16:19 · 1 minute

[Image: three small hardware units connected by tubes. Caption: The robotic train. Credit: Oliveri et al.]

One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and StarCraft. Given that self-teaching capability, it's tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we're not there yet. But it should be much easier with a simpler system, right?

Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

Roving robots

The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.
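A learn-by-doing search over pump timings can be caricatured as hill climbing: perturb the phase offsets, keep whatever moves the robot farther. The forward model below is a made-up stand-in (not the Amsterdam group's simulation), with friction crudely modeled as a fixed fractional loss per stroke.

```python
import random

random.seed(0)  # make the trial-and-error run repeatable

def displacement(phases, friction=0.3):
    """Toy forward model: pumps out of phase push the body along;
    friction eats a fixed fraction of every stroke."""
    travel = 0.0
    for a, b in zip(phases, phases[1:]):
        stroke = abs(a - b)          # timing mismatch between neighbors
        travel += stroke * (1 - friction)
    return travel

phases = [0.5, 0.5, 0.5]             # all pumps in sync: no net motion
best = displacement(phases)
for _ in range(200):                 # hill climbing by trial and error
    trial = [max(0, min(1, p + random.uniform(-0.1, 0.1))) for p in phases]
    if displacement(trial) > best:
        phases, best = trial, displacement(trial)

print(best > 0)  # the robot "learned" a gait that moves it
```

The catch the researchers hit is exactly what this toy hides: real friction is not a clean fractional loss, so gradients learned in one spot of the floor can stop working a meter away.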



    Two ways of performing massively parallel AI calculations using light / ArsTechnica · Thursday, 7 January, 2021 - 20:48 · 1 minute

[Image: parallel lines in different colors. Caption: The output of two optical frequency combs, showing the light appearing at evenly spaced wavelengths. Credit: ESO]

AI and machine-learning techniques have become a major focus of everything from cloud computing services to cell phone manufacturers. Unfortunately, our existing processors are a bad match for the sort of algorithms that many of these techniques are based on, in part because they require frequent round trips between the processor and memory. To deal with this bottleneck, researchers have figured out how to perform calculations in memory and designed chips where each processing unit has a bit of memory attached.

Now, two different teams of researchers have figured out ways of performing calculations with light in a way that both merges memory and calculations and allows for massive parallelism. Despite the differences in implementation, the hardware designed by these teams has a common feature: it allows the same piece of hardware to simultaneously perform different calculations using different frequencies of light. While they're not yet at the level of performance of some dedicated processors, the approach can scale easily and can be implemented using on-chip hardware, raising the prospect of using it as a dedicated co-processor.
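The parallelism can be pictured with ordinary matrix math (plain NumPy here; no photonic hardware or library is being modeled): each wavelength channel carries its own input vector through the same set of weights, so one pass performs many multiply-accumulates at once.

```python
import numpy as np

weights = np.array([0.2, 0.5, 0.3])          # shared "optical" weights

# One input vector per wavelength channel (rows = channels).
channels = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0]])

# Both channels are weighted and summed in a single operation,
# mimicking simultaneous computation at different frequencies.
outputs = channels @ weights
print(outputs)  # one multiply-accumulate result per wavelength
```

In the optical version, the "rows" travel through the device at the same instant on different colors of light, which is why adding comb teeth adds throughput without adding hardware.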

A fine-toothed comb

The new work relies on hardware called a frequency comb, a technology that won some of its creators the 2005 Nobel Prize in Physics. While a lot of interesting physics is behind how the combs work (which you can read more about here if you're curious), what we care about is the outcome of that physics. While there are several ways to produce a frequency comb, they all produce the same thing: a beam of light that is composed of evenly spaced frequencies. So, a frequency comb in visible wavelengths might be composed of light with wavelengths of 500 nm, 510 nm, 520 nm, and so on.
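Strictly speaking, comb teeth are evenly spaced in frequency, and over a narrow band that looks roughly evenly spaced in wavelength too, as a quick conversion of the example above (f = c / λ) shows:

```python
# Convert the example wavelengths to frequencies: f = c / lambda.
c = 299_792_458  # speed of light, m/s

for wavelength_nm in (500, 510, 520):
    f_thz = c / (wavelength_nm * 1e-9) / 1e12
    print(f"{wavelength_nm} nm -> {f_thz:.1f} THz")
```

The resulting tooth spacings (about 11.8 and 11.3 THz here) are close but not identical, which is why comb specs are quoted in frequency rather than wavelength.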
