      Chinese state hackers infect critical infrastructure throughout the US and Guam

      news.movim.eu / ArsTechnica • 24 May, 2023

    (Image credit: peterschreiber.media | Getty Images)

    A Chinese government hacking group has acquired a significant foothold inside critical infrastructure environments throughout the US and Guam and is stealing network credentials and sensitive data while remaining largely undetectable, Microsoft and governments from the US and four other countries said on Wednesday.

    The group, tracked by Microsoft under the name Volt Typhoon, has been active for at least two years with a focus on espionage and information gathering for the People’s Republic of China, Microsoft said. To remain stealthy, the hackers rely on tools that are already installed on or built into infected devices and operate them manually rather than through automated malware, a technique known as "living off the land." In addition to being revealed by Microsoft, the campaign was also documented in an advisory jointly published by:

    • US Cybersecurity and Infrastructure Security Agency (CISA)
    • US Federal Bureau of Investigation (FBI)
    • Australian Cyber Security Centre (ACSC)
    • Canadian Centre for Cyber Security (CCCS)
    • New Zealand National Cyber Security Centre (NCSC-NZ)
    • United Kingdom National Cyber Security Centre (NCSC-UK)
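
    Because "living off the land" activity rides on legitimate, built-in administrative tools, defenders typically hunt for suspicious command lines rather than malware binaries. As a rough illustration, a minimal Python sketch of that kind of pattern matching might look like the following (the patterns here are hypothetical examples in the spirit of the advisory, not its actual indicator list):

```python
import re

# Hypothetical command-line patterns in the spirit of "living off the land"
# tradecraft (proxying traffic with netsh, dumping Active Directory data with
# ntdsutil, spawning processes via wmic). Illustrative only -- not the joint
# advisory's actual detection list.
SUSPICIOUS_PATTERNS = [
    re.compile(r"netsh\s+interface\s+portproxy", re.IGNORECASE),
    re.compile(r"ntdsutil.*\bifm\b", re.IGNORECASE),
    re.compile(r"wmic\s+process\s+call\s+create", re.IGNORECASE),
]

def flag_commands(command_lines):
    """Return the observed command lines that match any suspicious pattern."""
    return [c for c in command_lines
            if any(p.search(c) for p in SUSPICIOUS_PATTERNS)]

observed = [
    "netsh interface portproxy add v4tov4 listenport=9999 connectport=8443",
    "notepad.exe report.txt",
    'ntdsutil "ac i ntds" "ifm" "create full C:\\temp" q q',
]
print(flag_commands(observed))  # flags the netsh and ntdsutil lines
```

    The advisory itself contains the authoritative indicators; this sketch only shows the general shape of command-line hunting.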

    Read 7 remaining paragraphs | Comments

      App with 50,000 Google Play installs sent attackers mic recordings every 15 minutes

      news.movim.eu / ArsTechnica • 24 May, 2023 • 1 minute

    (Image credit: Getty Images)

    An app that had more than 50,000 downloads from Google Play surreptitiously recorded nearby audio every 15 minutes and sent it to the app developer, a researcher from security firm ESET said.

    The app, titled iRecorder Screen Recorder, started life on Google Play in September 2021 as a benign app that allowed users to record the screens of their Android devices, ESET researcher Lukas Stefanko said in a post published on Tuesday. Eleven months later, the legitimate app was updated to add entirely new functionality. It included the ability to remotely turn on the device mic and record sound, connect to an attacker-controlled server, and upload the audio and other sensitive files that were stored on the device.

    Surreptitious recording every 15 minutes

    The secret espionage functions were implemented using code from AhMyth, an open source RAT—short for remote access trojan—that has been incorporated into several other Android apps in recent years. Once the RAT was added to iRecorder, all users of the previously benign app received updates that allowed their phones to record nearby audio and send it to a developer-designated server through an encrypted channel. As time went on, code taken from AhMyth was heavily modified, an indication that the developer became more adept with the open source RAT. ESET named the newly modified RAT in iRecorder AhRat.
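
    The timer-driven capture-and-exfiltrate cadence ESET describes is a common RAT pattern. The scheduling logic can be sketched harmlessly in Python, with stand-in stubs instead of any real microphone or network access (all names here are illustrative, not AhRat's actual code):

```python
RECORD_INTERVAL_SECONDS = 15 * 60  # the 15-minute cadence ESET reported

def run_cycles(n_cycles, sleep=lambda seconds: None):
    """Simulate n timer-driven capture/upload cycles.

    `sleep` is injectable so the simulation runs instantly; the capture and
    upload steps are stand-in stubs, not real microphone or network access.
    """
    uploads = []
    for _ in range(n_cycles):
        sleep(RECORD_INTERVAL_SECONDS)         # wait out the 15-minute interval
        audio = b"...captured audio bytes..."  # stub: mic recording would go here
        uploads.append(len(audio))             # stub: upload to a remote server
    return uploads

print(len(run_cycles(3)))  # → 3 simulated capture/upload events
```

    Fixed-interval beaconing like this is also why such behavior can show up clearly in network traffic logs.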

    Read 15 remaining paragraphs | Comments

      Fake Pentagon “explosion” photo sows confusion on Twitter

      news.movim.eu / ArsTechnica • 23 May, 2023 • 1 minute

    A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (Image credit: Twitter)

    On Monday, an AI-generated image suggesting a large explosion at the Pentagon was tweeted and briefly caused confusion, including a reported small dip in the stock market. The image originated from a verified Twitter account named "Bloomberg Feed," which is unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

    The fake image depicted a large plume of black smoke beside a building vaguely reminiscent of the Pentagon, accompanied by the tweet "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report" [sic]. Local authorities confirmed that the image was not an accurate representation of the Pentagon, and with its blurry fence bars and building columns, it looks like a fairly sloppy AI-generated image created by a model like Stable Diffusion.

    Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or the motives behind sharing the false image. In addition to Bloomberg Feed, other accounts that shared the false report include “Walter Bloomberg” and “Breaking Market News,” both unaffiliated with the real Bloomberg organization.

    Read 6 remaining paragraphs | Comments

      Adobe Photoshop’s new “Generative Fill” AI tool lets you manipulate photos with text

      news.movim.eu / ArsTechnica • 23 May, 2023 • 1 minute

    An example of a 1983 file photo of the Apple Lisa computer that has been significantly enhanced by the new "Generative Fill" AI tool in the Adobe Photoshop beta. (Image credit: Apple / Benj Edwards / Adobe)

    On Tuesday, Adobe added a new tool to its Photoshop beta called "Generative Fill," which uses cloud-based image synthesis to fill selected areas of an image with new AI-generated content based on a text description. Powered by Adobe Firefly, Generative Fill works similarly to a technique called "inpainting" used in DALL-E and Stable Diffusion releases since last year.

    At the core of Generative Fill is Adobe Firefly, Adobe's custom image-synthesis model. A deep learning AI model, Firefly has been trained on millions of images in Adobe's stock library to associate imagery with text descriptions. Now that it is built into Photoshop, people can type in what they want to see (e.g., "a clown on a computer monitor"), and Firefly will synthesize several options for the user to choose from. Generative Fill applies the "inpainting" technique mentioned above to create a context-aware generation that blends synthesized imagery seamlessly into an existing image.

    To use Generative Fill, users select an area of an existing image they want to modify. After selecting it, a "Contextual Task Bar" pops up that allows users to type in a description of what they want to see generated in the selected area. Photoshop sends this data to Adobe's servers for processing, then returns results in the app. After generating, the user has the option to select between several options of generations or to create more options to browse through.
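
    The last step of that workflow, blending generated pixels into the original only where the user made a selection, amounts to mask-based compositing. A minimal sketch with NumPy arrays standing in for the original image and the model's output (real inpainting models also condition the generation on the surrounding pixels, which this sketch omits):

```python
import numpy as np

def composite(original, generated, mask):
    """Blend generated pixels into the original wherever the mask is set.

    original, generated: HxWx3 uint8 images; mask: HxW array of 0/1 marking
    the user's selection. The generation step itself is assumed already done.
    """
    mask3 = mask[..., None].astype(bool)  # broadcast the mask over RGB channels
    return np.where(mask3, generated, original)

# Toy 4x4 images: the original is uniform gray, the "generated" patch is red.
original = np.full((4, 4, 3), 128, dtype=np.uint8)
generated = np.zeros((4, 4, 3), dtype=np.uint8)
generated[..., 0] = 255
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                        # a 2x2 user-selected region

result = composite(original, generated, mask)
print(result[0, 0], result[1, 1])  # gray outside the selection, red inside
```

    Production tools feather the mask edge rather than switching hard between pixels, which is part of why results blend seamlessly.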

    Read 7 remaining paragraphs | Comments

      Here’s how long it takes new BrutePrint attack to unlock 10 different smartphones

      news.movim.eu / ArsTechnica • 22 May, 2023

    (Image credit: Getty Images)

    Researchers have devised a low-cost smartphone attack that defeats the fingerprint authentication used to unlock the screen and perform other sensitive actions on a range of Android devices, in as little as 45 minutes.

    Dubbed BrutePrint by its creators, the attack requires an adversary to have physical control of a device when it is lost, stolen, temporarily surrendered, or left unattended, for instance, while the owner is asleep. The objective: to gain the ability to perform a brute-force attack that tries huge numbers of fingerprint guesses until one is found that unlocks the device. The attack exploits vulnerabilities and weaknesses in the device's smartphone fingerprint authentication (SFA) system.

    BrutePrint overview

    BrutePrint is an inexpensive attack that lets an adversary unlock devices by exploiting various vulnerabilities and weaknesses in SFA systems. Here's the typical workflow of these systems.
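
    The reason defeating the attempt limit matters can be shown with a toy simulation: fingerprint matching is approximate (a similarity threshold, not an exact comparison), so given enough tries a dictionary of prints will eventually score a match. All values below are invented for illustration:

```python
import random

def try_unlock(guesses, target, limit=None, threshold=0.9):
    """Toy fingerprint brute force.

    Fingerprints are modeled as floats and "matching" as closeness within a
    threshold, a stand-in for a sensor's approximate matching. With a working
    attempt limit the sweep is cut short; with the limit defeated (limit=None,
    the condition BrutePrint creates), the whole dictionary can be tried.
    """
    for attempts, guess in enumerate(guesses, start=1):
        if limit is not None and attempts > limit:
            return None                        # lockout enforced
        if abs(guess - target) <= (1 - threshold):
            return attempts                    # "unlocked" after this many tries
    return None

rng = random.Random(0)                         # fixed seed for reproducibility
dictionary = [rng.random() for _ in range(5000)]
target = 0.123456                              # the victim's "fingerprint"

print(try_unlock(dictionary, target, limit=5))     # lockout: None
print(try_unlock(dictionary, target, limit=None))  # limit bypassed: succeeds
```

    The real attack also exploits the tolerance of the match threshold; the simulation only captures why unlimited attempts turn an approximate matcher into a brute-forceable one.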

    Read 16 remaining paragraphs | Comments

      Passkeys may not be for you, but they are safe and easy—here’s why

      news.movim.eu / ArsTechnica • 12 May, 2023

    (Image credit: Aurich Lawson | Getty Images)

    My recent feature on passkeys attracted significant interest, and a number of the 1,100+ comments raised questions about how the passkey system actually works and if it can be trusted. In response, I've put together this list of frequently asked questions to dispel a few myths and shed some light on what we know—and don't know—about passkeys.

    Q: I don’t trust Google. Why should I use passkeys?

    A: If you don’t use Google, then Google passkeys aren’t for you. If you don’t use Apple or Microsoft products, the situation is similar. The original article was aimed at the hundreds of millions of people who do use these major platforms (even if grudgingly).

    Read 32 remaining paragraphs | Comments

      Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds

      news.movim.eu / ArsTechnica • 12 May, 2023

    An AI-generated image of a robot reading a book. (Image credit: Benj Edwards / Stable Diffusion)

    On Thursday, AI company Anthropic announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words.

    Like OpenAI's GPT-4 , Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory—how much human-provided input data an LLM can process at once.
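
    The numbers above imply roughly 0.75 words per token for English text, which makes window-fit estimates easy to run. A quick back-of-the-envelope check (the Gatsby word count is approximate and varies by edition):

```python
WORDS_PER_TOKEN = 0.75           # 100,000 tokens ~ 75,000 words, per the article
CONTEXT_WINDOW_TOKENS = 100_000  # Claude's expanded context window

def estimated_tokens(word_count):
    """Rough token estimate for English text at ~0.75 words per token."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_window(word_count):
    return estimated_tokens(word_count) <= CONTEXT_WINDOW_TOKENS

gatsby_words = 47_000  # approximate word count; exact figure varies by edition
print(estimated_tokens(gatsby_words), fits_in_window(gatsby_words))
# a ~47,000-word novel is ~63,000 tokens, comfortably inside the window
```

    The same arithmetic shows why earlier models with windows of a few thousand tokens could only see a chapter or two at a time.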

    A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days," according to Anthropic.

    Read 5 remaining paragraphs | Comments

      Microsoft will take nearly a year to finish patching new 0-day Secure Boot bug

      news.movim.eu / ArsTechnica • 11 May, 2023

    (Image credit: Aurich Lawson / Ars Technica)

    Earlier this week, Microsoft released a patch to fix a Secure Boot bypass bug used by the BlackLotus bootkit we reported on in March. The original vulnerability, CVE-2022-21894, was patched in January, but the new patch for CVE-2023-24932 addresses another actively exploited workaround for systems running Windows 10 and 11 and Windows Server versions going back to Windows Server 2008.

    The BlackLotus bootkit is the first-known real-world malware that can bypass Secure Boot protections, allowing for the execution of malicious code before your PC begins loading Windows and its many security protections. Secure Boot has been enabled by default for over a decade on most Windows PCs sold by companies like Dell, Lenovo, HP, Acer, and others. PCs running Windows 11 must have it enabled to meet the software's system requirements.

    Microsoft says that the vulnerability can be exploited by an attacker with either physical access to a system or administrator rights on a system. It can affect physical PCs and virtual machines with Secure Boot enabled.
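
    Conceptually, Secure Boot consults an allow database (db) and a revocation database (dbx) before running a boot binary, and a fix like this one works by revoking old, legitimately signed but exploitable binaries. A toy sketch of that policy, with plain SHA-256 hashes standing in for the real certificate and signed-hash checks the firmware actually performs:

```python
import hashlib

def digest(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def secure_boot_allows(binary: bytes, db: set, dbx: set) -> bool:
    """Toy policy: run a binary only if it is trusted (db) and not revoked (dbx)."""
    h = digest(binary)
    return h in db and h not in dbx

old_bootloader = b"legitimately signed but exploitable bootloader"
new_bootloader = b"patched bootloader"

# Both binaries carry valid signatures, so both start out trusted.
db = {digest(old_bootloader), digest(new_bootloader)}
dbx = set()
print(secure_boot_allows(old_bootloader, db, dbx))  # True: still trusted

# The patch's key effect: revoke the vulnerable binary so a bootkit like
# BlackLotus can no longer chain-load it.
dbx.add(digest(old_bootloader))
print(secure_boot_allows(old_bootloader, db, dbx))  # False: revoked
print(secure_boot_allows(new_bootloader, db, dbx))  # True: the patched one boots
```

    Rolling out such revocations slowly is what makes the full fix take so long: revoking a binary that some machines still boot from would brick them.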

    Read 7 remaining paragraphs | Comments

      OpenAI peeks into the “black box” of neural networks with new research

      news.movim.eu / ArsTechnica • 11 May, 2023

    An AI-generated image of robots looking inside an artificial brain. (Image credit: Stable Diffusion)

    On Tuesday, OpenAI published a new research paper detailing a technique that uses its GPT-4 language model to write explanations for the behavior of neurons in its older GPT-2 model, albeit imperfectly. It's a step forward for "interpretability," which is a field of AI that seeks to explain why neural networks create the outputs they do.
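
    OpenAI's technique scores a natural-language explanation by how well the activations it predicts line up with the neuron's real activations. A simplified stand-in for that scoring step, using plain correlation over invented activation values:

```python
import math

def explanation_score(real, simulated):
    """Correlate the activations an explanation predicts with a neuron's real
    activations; higher correlation means a better explanation. A simplified
    stand-in for the paper's scoring procedure, over invented values."""
    n = len(real)
    mean_r = sum(real) / n
    mean_s = sum(simulated) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(real, simulated))
    sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in real))
    sd_s = math.sqrt(sum((s - mean_s) ** 2 for s in simulated))
    return cov / (sd_r * sd_s)

# Invented activations of one GPT-2 neuron over six text snippets, plus the
# activations a model predicted from two candidate explanations.
real     = [0.0, 0.1, 0.9, 0.8, 0.0, 1.0]
good_sim = [0.1, 0.0, 0.8, 0.9, 0.1, 0.9]  # a precise explanation's predictions
bad_sim  = [0.5, 0.5, 0.4, 0.5, 0.6, 0.5]  # a vague explanation's predictions

print(round(explanation_score(real, good_sim), 2))  # high (close to 1)
print(round(explanation_score(real, bad_sim), 2))   # low (near or below 0)
```

    The hard part, which this sketch skips entirely, is getting GPT-4 to write the candidate explanations and predict the activations in the first place.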

    While large language models (LLMs) are conquering the tech world, AI researchers still don't know a lot about their functionality and capabilities under the hood. In the first sentence of OpenAI's paper, the authors write, "Language models have become more capable and more widely deployed, but we do not understand how they work."

    For outsiders, that likely sounds like a stunning admission from a company that not only depends on revenue from LLMs but also hopes to accelerate them to beyond-human levels of reasoning ability.

    Read 10 remaining paragraphs | Comments