
    Hypersensitive robot hand is eerily human in how it can feel things

    news.movim.eu / ArsTechnica · 7 days ago - 16:53

Image of robotic fingers gripping a mirrored disco ball with light reflected off it.

(credit: Columbia University ROAM Lab)

From bionic limbs to sentient androids, robotic entities in science fiction blur the boundaries between biology and machine. Real-life robots are far behind in comparison. While we aren’t going to reach the level of Star Trek’s Data anytime soon, there is now a robot hand with a sense of touch that is almost human.

One thing robots have not been able to achieve is a level of sensitivity and dexterity high enough to feel and handle things as humans do. Enter a robot hand developed by a team of researchers at Columbia University. (Five years ago, we covered their work back when this achievement was still a concept.)

This hand doesn’t just pick things up and put them down on command. It is sensitive enough to actually “feel” what it is touching, and dexterous enough to easily reposition its fingers for a better grip, a maneuver known as "finger gaiting." It can even do all of this in the dark, figuring everything out by touch.



    Large language models also work for protein structures

    news.movim.eu / ArsTechnica · Thursday, 16 March - 19:01 · 1 minute

Artist's rendering of a collection of protein structures floating in space

(credit: CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY)

The success of ChatGPT and its competitors is based on what are termed emergent behaviors. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts. Their complex behavior emerged from a far simpler training.

A team at Meta has now reasoned that this sort of emergent understanding shouldn't be limited to languages. So it has trained an LLM on the statistics of the appearance of amino acids within proteins and used the system's internal representation of what it learned to extract information about the structure of those proteins. The result is not quite as good as the best competing AI systems for predicting protein structures, but it's considerably faster and still getting better.
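The idea of treating protein sequences like text can be sketched with a toy example: counting which amino acid tends to follow which, the crudest form of the sequence statistics a real model learns at vastly greater scale. The corpus and code below are purely illustrative, not Meta's actual pipeline.

```python
from collections import Counter, defaultdict

# Toy corpus of protein fragments (one-letter amino acid codes).
# Real models train on hundreds of millions of sequences.
sequences = ["MKTAYIAKQR", "MKLVTGAYIA", "MKTAYQRLVT"]

# Count how often each amino acid follows another (bigram statistics).
follows = defaultdict(Counter)
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        follows[a][b] += 1

def most_likely_next(aa):
    """Predict the most frequent successor of an amino acid."""
    return follows[aa].most_common(1)[0][0]

print(most_likely_next("M"))  # 'K' in this toy corpus
```

A real LLM replaces these raw counts with a deep network, and it is that network's internal representation, not the predictions themselves, that carries the structural information.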

LLMs: Not just for language

The first thing you need to know to understand this work is that the term "language" in "LLM" only reflects these models' original development for language-processing tasks; nothing restricts them to language. In fact, the term "Large" is far more informative, in that all LLMs have a large number of nodes—the "neurons" in a neural network—and an even larger number of values that describe the weights of the connections among those nodes. While they were first developed to process language, they can potentially be used for a variety of tasks.



    Do better coders swear more, or does C just do that to good programmers?

    news.movim.eu / ArsTechnica · Tuesday, 14 March - 18:35

A person screaming at his computer.

(credit: dasilvafa)

Ever find yourself staring at a tricky coding problem and thinking, “shit”?

If those thoughts make their way into your code or the associated comments, you’re in good company. When undergraduate student Jan Strehmel from Karlsruhe Institute of Technology analyzed open source code written in the programming language C, he found no shortage of obscenity. While that might be expected, Strehmel’s overall finding might not be: The average quality of code containing swears was significantly higher than the average quality of code that did not.

“The results are quite surprising!” Strehmel said. Programmers and scientists may have a lot of follow-up questions. Are the researchers sure there aren’t certain profanity-prone programmers skewing the results? What about other programming languages? And, most importantly, why would swears correlate with high-quality code? The work is ongoing, but even without all the answers, one thing’s for sure: Strehmel just wrote one hell of a bachelor’s thesis.
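The core of such an analysis can be sketched in a few lines: flag source files containing profanity, then compare average quality scores between the two groups. Everything here—the snippets, the quality scores, and the tiny swear list—is hypothetical stand-in data, not Strehmel's dataset or metric.

```python
import re

# Hypothetical data: (C source snippet, quality score on a 0-10 scale).
# The thesis used real open source C code and a code-quality metric;
# these toy values are purely illustrative.
samples = [
    ("/* this damn pointer keeps dangling */ int *p;", 8.1),
    ("int add(int a, int b) { return a + b; }", 6.0),
    ("// shit, off-by-one again\nfor (i = 0; i <= n; i++)", 7.5),
    ("void f() { /* TODO */ }", 5.2),
]

SWEARS = re.compile(r"\b(damn|shit|hell)\b", re.IGNORECASE)  # tiny stand-in list

sweary = [q for code, q in samples if SWEARS.search(code)]
clean  = [q for code, q in samples if not SWEARS.search(code)]

print(round(sum(sweary) / len(sweary), 2))  # 7.8
print(round(sum(clean) / len(clean), 2))    # 5.6
```

A real study would also need to control for confounders—for instance, whether a few prolific, profanity-prone authors account for most of the sweary files.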



    Is the future of computing biological?

    news.movim.eu / ArsTechnica · Wednesday, 1 March - 16:30

Image of neurons glowing blue against a black background

(credit: Andriy Onufriyenko)

Trying to make computers more like human brains isn’t a new phenomenon. However, a team of researchers from Johns Hopkins University argues that there could be many benefits to taking this concept a bit more literally by using actual neurons, though there are some hurdles to clear before we get there.

In a recent paper, the team laid out a roadmap of what's needed before we can create biocomputers powered by human brain cells (not taken from human brains, though). Further, according to one of the researchers, there are some clear benefits the proposed “organoid intelligence” would have over current computers.

“We have always tried to make our computers more brain-like,” Thomas Hartung, a researcher at Johns Hopkins University’s Environmental Health and Engineering department and one of the paper’s authors, told Ars. “At least theoretically, the brain is essentially unmatched as a computer.”



    Programming a robot to teach itself how to move

    news.movim.eu / ArsTechnica · Tuesday, 11 May, 2021 - 16:19 · 1 minute

image of three small pieces of hardware connected by tubes.

The robotic train. (credit: Oliveri et al.)

One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and StarCraft. Given that self-teaching capability, it's tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we're not there yet. But it should be much easier with a simpler system, right?

Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

Roving robots

The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.
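A learn-by-doing loop of this kind can be sketched very simply: perturb one unit's pump timing at random, keep the change if the robot moved farther, and repeat. The displacement function below is a toy stand-in for a real-world trial (the actual robots measured movement with their onboard sensors), so the numbers are illustrative only.

```python
import random

random.seed(0)

N_UNITS = 3  # pump-driven units in the robot train

def displacement(phases):
    """Toy stand-in for a physical trial: reward neighboring units
    that pump out of phase with each other, since perfectly in-phase
    inflation and deflation tends to cancel out net motion."""
    score = 0.0
    for i in range(N_UNITS - 1):
        diff = abs(phases[i] - phases[i + 1]) % 1.0
        score += min(diff, 1.0 - diff)  # peaks when neighbors are out of phase
    return score

# Learn by doing: randomly perturb one unit's pump timing, keep improvements.
phases = [0.0] * N_UNITS
best = displacement(phases)
for _ in range(500):
    trial = phases[:]
    trial[random.randrange(N_UNITS)] = random.random()
    d = displacement(trial)
    if d > best:
        phases, best = trial, d

print(round(best, 2))  # approaches the optimum of 1.0 for three units
```

The catch the researchers hit is exactly what this sketch glosses over: in the real world, the "score" of a trial is contaminated by friction and other effects that a clean objective function ignores.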



    Two ways of performing massively parallel AI calculations using light

    news.movim.eu / ArsTechnica · Thursday, 7 January, 2021 - 20:48 · 1 minute

Image of a series of parallel lines in different colors.

The output of two optical frequency combs, showing the light appearing at evenly spaced wavelengths. (credit: ESO)

AI and machine-learning techniques have become a major focus of everything from cloud computing services to cell phone manufacturers. Unfortunately, our existing processors are a bad match for the sort of algorithms that many of these techniques are based on, in part because they require frequent round trips between the processor and memory. To deal with this bottleneck, researchers have figured out how to perform calculations in memory and designed chips where each processing unit has a bit of memory attached.

Now, two different teams of researchers have figured out ways of performing calculations with light in a way that both merges memory and calculations and allows for massive parallelism. Despite the differences in implementation, the hardware designed by these teams has a common feature: it allows the same piece of hardware to simultaneously perform different calculations using different frequencies of light. While they're not yet at the level of performance of some dedicated processors, the approach can scale easily and can be implemented using on-chip hardware, raising the prospect of using it as a dedicated co-processor.

A fine-toothed comb

The new work relies on hardware called a frequency comb, a technology that won some of its creators the 2005 Nobel Prize in Physics. While a lot of interesting physics is behind how the combs work (which you can read more about here if you're curious), what we care about is the outcome of that physics. While there are several ways to produce a frequency comb, they all produce the same thing: a beam of light that is composed of evenly spaced frequencies. So, a frequency comb in visible wavelengths might be composed of light with a wavelength of 500 nanometers, 510nm, 520nm, and so on.
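The payoff of those evenly spaced comb lines is wavelength multiplexing: each line can carry one input value, a modulator applies one weight per wavelength, and a photodetector sums all the products at once—a multiply-accumulate done in parallel. The sketch below only models the arithmetic; the channel count and values are illustrative, not either team's hardware.

```python
# Toy model of a wavelength-multiplexed multiply-accumulate.
wavelengths_nm = [500.0, 510.0, 520.0, 530.0]  # one channel per comb line
inputs  = [0.2, 0.5, 0.1, 0.9]  # encoded as the intensity of each line
weights = [0.4, 0.3, 0.8, 0.1]  # applied by per-wavelength modulators

# In the optical hardware all four multiplications happen simultaneously
# and the detector sums them; this loop only reproduces the result.
mac = sum(x * w for x, w in zip(inputs, weights))
print(round(mac, 2))  # 0.4
```

Since a neural network layer is essentially many such multiply-accumulates, adding comb lines scales the parallelism without adding more processing hardware.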



    Google develops an AI that can learn both chess and Pac-Man

    news.movim.eu / ArsTechnica · Thursday, 24 December, 2020 - 13:00

The first major conquest of artificial intelligence was chess. The game has a dizzying number of possible combinations, but it was relatively tractable because it was structured by a set of clear rules. An algorithm could always have perfect knowledge of the state of the game and know every possible move that both it and its opponent could make. The state of the game could be evaluated just by looking at the board.
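"Perfect knowledge of the state of the game" is what makes exhaustive search possible: an algorithm can enumerate every move for both sides and evaluate the resulting positions. Chess is far too large to search this way in full, but a minimax sketch on a tiny Nim-like game (a made-up example, not anything from DeepMind's work) shows the principle:

```python
def minimax(stones, maximizing):
    """Exhaustive search of a tiny perfect-information game:
    players alternately take 1-3 stones; taking the last stone wins.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1 if maximizing else 1  # previous player took the last stone
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

# With 4 stones on the table, the player to move always loses against
# perfect play; with 5, they can force a win.
print(minimax(4, True))  # -1
print(minimax(5, True))  # 1
```

Nothing like this works for Pac-Man, where the ghosts' behavior means the algorithm cannot enumerate a fixed game tree with certainty.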

But many other games aren't that simple. If you take something like Pac-Man, then figuring out the ideal move would involve considering the shape of the maze, the location of the ghosts, the location of any additional areas to clear, the availability of power-ups, etc., and the best plan can end in disaster if Blinky or Clyde makes an unexpected move. We've developed AIs that can tackle these games, too, but they have had to take a very different approach from the ones that conquered chess and Go.

At least until now. Today, Google's DeepMind division published a paper describing the structure of an AI that can tackle both chess and Atari classics.



    DeepMind AI handles protein folding, which humbled previous software

    news.movim.eu / ArsTechnica · Monday, 30 November, 2020 - 22:10 · 1 minute

Proteins rapidly form complicated structures that had proven difficult to predict. (credit: Argonne National Lab)

Today, DeepMind announced that it had seemingly solved one of biology's outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It's a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

The limitations of the system aren't yet clear—DeepMind says it's currently planning on a peer-reviewed paper, and has only made a blog post and some press releases available. But it clearly performs better than anything that's come before it, after having more than doubled the performance of the best system in just four years. Even if it's not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.

Between the folds

To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.
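Those side-chain properties are exactly the kind of per-residue chemistry a folding model has to account for. A toy sketch can make the idea concrete: map one-letter amino acid codes to a property class and profile a chain. The property table below is a deliberately simplified subset—the real determinants of folding involve all 20 residues and far more chemistry than one label each.

```python
# Illustrative subset of amino-acid side-chain properties (one-letter codes).
PROPERTIES = {
    "D": "acidic",  "E": "acidic",
    "K": "basic",   "R": "basic",  "H": "basic",
    "S": "polar",   "T": "polar",
    "A": "hydrophobic", "L": "hydrophobic", "V": "hydrophobic",
}

def profile(chain):
    """Count side-chain property classes along a peptide chain."""
    counts = {}
    for residue in chain:
        label = PROPERTIES.get(residue, "other")
        counts[label] = counts.get(label, 0) + 1
    return counts

print(profile("MKTALDE"))
# {'other': 1, 'basic': 1, 'polar': 1, 'hydrophobic': 2, 'acidic': 2}
```

The folding problem is hard because these local interactions compound: charged residues attract or repel, hydrophobic ones bury themselves away from water, and the chain's final shape emerges from all of it at once.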

