    • T-Mobile discloses 2nd data breach of 2023, this one leaking account PINs and more

      news.movim.eu / ArsTechnica • 1 May, 2023

    A bird sits on top of a T-Mobile sign outside a mobile phone store. (credit: Getty Images | Bloomberg)

    T-Mobile on Monday said it experienced a hack that exposed account PINs and other customer data in the company's second network intrusion this year and the ninth since 2018.

    The intrusion, which started on February 24 and lasted until March 30, affected 836 customers, according to a notification on the website of Maine Attorney General Aaron Frey.

    “The information obtained for each customer varied but may have included full name, contact information, account number and associated phone numbers, T-Mobile account PIN, social security number, government ID, date of birth, balance due, internal codes that T-Mobile uses to service customer accounts (for example, rate plan and feature codes), and the number of lines,” the company wrote in a letter sent to affected customers. Account PINs, which customers use to swap out SIM cards and authorize other important changes to their accounts, were reset once T-Mobile discovered the breach on March 27.

    • Stone-hearted researchers gleefully push over adorable soccer-playing robots

      news.movim.eu / ArsTechnica • 1 May, 2023 • 1 minute

    In a still from a DeepMind demo video, a researcher pushes a small humanoid robot to the ground. (credit: DeepMind)

    On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in efficient performance in a simulated one-on-one soccer game.

    But few paid attention to the details because, to accompany the paper, the researchers also released a 27-second video showing one experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.

    DeepMind's "Robustness to pushes" demonstration video.

    On the demo website for "Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning," the researchers frame the merciless toppling of the robots as a key part of a "robustness to pushes" evaluation, writing, "Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way."

    • Apple uses iOS and macOS Rapid Security Response feature for the first time

      news.movim.eu / ArsTechnica • 1 May, 2023

    Macs running macOS Ventura. (credit: Apple)

    When it announced iOS 16, iPadOS 16, and macOS Ventura at its Worldwide Developers Conference last summer, one of the features Apple introduced was something called "Rapid Security Response." The feature is meant to enable quicker and more frequent security patches for Apple's newest operating systems, especially for WebKit-related flaws that affect Safari and other apps that use Apple's built-in browser engine.

    Nearly a year after that WWDC and more than seven months after releasing iOS 16 in September, Apple has finally issued a Rapid Security Response update. Available for iOS and iPadOS devices running version 16.4.1 or Macs running version 13.3.1, the update adds an (a) to your OS version to denote that it's been installed.

    At this point, it's unclear whether Apple intends to release more information about the specific bugs patched by this Security Response update; the support page linked in the update is just a general description of Rapid Security Response updates and how they work, and Apple's Security Updates page hasn't been updated with more information as of this writing.

    • Warning of AI’s danger, pioneer Geoffrey Hinton quits Google to speak freely

      news.movim.eu / ArsTechnica • 1 May, 2023 • 1 minute

    Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Thursday, Sept. 5, 2019. (credit: Getty Images / Benj Edwards)

    According to the New York Times, AI pioneer Dr. Geoffrey Hinton has resigned from Google so he can "speak freely" about potential risks posed by AI. Hinton, who helped create some of the fundamental technology behind today's generative AI systems, fears that the tech industry's drive to develop AI products could result in dangerous consequences—from misinformation to job loss or even a threat to humanity.

    "Look at how it was five years ago and how it is now," the Times quoted Hinton as saying. "Take the difference and propagate it forwards. That’s scary."

    Hinton's resume in the field of artificial intelligence extends back to 1972, and his accomplishments have influenced current practices in generative AI. In 1987, Hinton, David Rumelhart, and Ronald J. Williams popularized backpropagation, a key technique for training neural networks that is used in today's generative AI models. In 2012, Hinton, Alex Krizhevsky, and Ilya Sutskever created AlexNet, which is commonly hailed as a breakthrough in machine vision and deep learning, and it arguably kickstarted our current era of generative AI. In 2018, Hinton won the Turing Award, which some call the "Nobel Prize of Computing," along with Yoshua Bengio and Yann LeCun.
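
    The backpropagation technique mentioned above can be sketched in a few lines. The following is a minimal, illustrative NumPy example (the toy task and all names are our own, not anything from Hinton's papers): a one-hidden-layer network trained by applying the chain rule layer by layer.

```python
import numpy as np

# Minimal backpropagation sketch: a one-hidden-layer network fit to a toy
# target by gradient descent. Illustrative only, not Hinton's original code.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))            # toy inputs
y = 0.5 * (X[:, :1] + X[:, 1:])         # target: mean of the two inputs

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.1
for _ in range(300):
    h = np.tanh(X @ W1 + b1)            # forward pass: hidden layer
    pred = h @ W2 + b2                  # forward pass: output layer
    err = pred - y                      # gradient of 0.5 * squared error
    losses.append(float((err ** 2).mean()))
    # Backward pass: propagate the error gradient through each layer
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # chain rule through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")  # loss should shrink
```

    The same update rule, scaled to billions of parameters and run on text, is what trains the generative models the article discusses.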

    • Two core Unix-like utilities, sudo and su, are getting rewrites in Rust

      news.movim.eu / ArsTechnica • 1 May, 2023

    Two of the most fundamental tools of the modern Unix-like command line, sudo and su, are being rewritten in the modern language Rust as part of a wider effort to get critical but aging infrastructure pieces replaced by memory-safe counterparts.

    As detailed at Prossimo, a joint team from Ferrous Systems and Tweede Golf, with support from Amazon Web Services, is reimplementing sudo and su. These utilities allow a user to perform actions with the privileges of another user (typically a higher-level superuser) without having to learn and enter that other user's password. Given their age and wide usage, the Prossimo team believes it's time for a rework.

    "Sudo was first developed in the 1980s. Over the decades, it has become an essential tool for performing changes while minimizing risk to an operating system," writes Josh Aas. "But because it's written in C, sudo has experienced many vulnerabilities related to memory safety issues."

    • Those scary warnings of juice jacking in airports and hotels? They’re nonsense

      news.movim.eu / ArsTechnica • 1 May, 2023 • 1 minute

    (credit: Aurich Lawson | Getty Images)

    Federal authorities, tech pundits, and news outlets want you to be on the lookout for a scary cyberattack that can hack your phone when you do nothing more than plug it into a public charging station. These warnings of “juice jacking,” as the threat has come to be known, have been circulating for more than a decade.

    Earlier this month, though, juice jacking fears hit a new high when the FBI and Federal Communications Commission issued new, baseless warnings that generated ominous-sounding news reports from hundreds of outlets. NPR reported that the crime is "becoming more prevalent, possibly due to the increase in travel." The Washington Post said it's a “significant privacy hazard” that can identify loaded webpages in less than 10 seconds. CNN warned that just by plugging into a malicious charger, "your device is now infected." And a Fortune headline admonished readers: "Don’t let a free USB charge drain your bank account."

    The Halley’s Comet of cybersecurity scares

    The scenario for juice jacking looks something like this: A hacker sets up equipment at an airport, shopping mall, or hotel. The equipment mimics the look and functions of normal charging stations, which allow people to recharge their mobile phones when they're low on power. Unbeknownst to the users, the charging station surreptitiously sends commands over the charging cord’s USB or Lightning connector and steals contacts and emails, installs malware, and does all kinds of other nefarious things.

    • Artists astound with AI-generated film stills from a parallel universe

      news.movim.eu / ArsTechnica • 7 April, 2023

    An AI-generated image from an #aicinema still series called "Vinyl Vengeance" by Julie Wieland, created using Midjourney. (credit: Julie Wieland / Midjourney)

    Since last year, a group of artists have been using an AI image generator called Midjourney to create still photos of films that don't exist. They call the trend "AI cinema." We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls "synthography," short for synthetic photography.

    The origins of “AI cinema” as a still image art form

    Last year, image synthesis models like DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a "prompt") to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and run with them.

    While anyone with a prompt can make an AI-generated image, it soon became clear that some people possessed a special talent for finessing these new AI tools to produce better content. As with painting or photography, the human creative spark is still necessary to produce notable results consistently.

    • There’s a new form of keyless car theft that works in under 2 minutes

      news.movim.eu / ArsTechnica • 7 April, 2023

    Infrared image of a person jimmying open a vehicle. (credit: Getty Images)

    When a London man discovered the front left-side bumper of his Toyota RAV4 torn off and the headlight partially dismantled not once but twice in three months last year, he suspected the acts were senseless vandalism. When the vehicle went missing a few days after the second incident, and a neighbor's Toyota Land Cruiser disappeared shortly afterward, he discovered the incidents were part of a new and sophisticated technique for performing keyless thefts.

    It just so happened that the owner, Ian Tabor, is a cybersecurity researcher specializing in automobiles. While investigating how his RAV4 was taken, he stumbled on a new technique called CAN injection attacks.

    The case of the malfunctioning CAN

    Tabor began by poring over the “MyT” telematics system that Toyota uses to track vehicle anomalies known as DTCs (Diagnostic Trouble Codes). It turned out his vehicle had recorded many DTCs around the time of the theft.

    • Why ChatGPT and Bing Chat are so good at making things up

      news.movim.eu / ArsTechnica • 6 April, 2023

    (credit: Aurich Lawson | Getty Images)

    Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

    Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

    “Hallucinations”—a loaded term in AI

    AI chatbots such as OpenAI's ChatGPT rely on a type of AI called a "large language model" (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate "natural language" text—language as humans would naturally write or talk. Unfortunately, they can also make mistakes.
