

      Hackers spent 2+ years looting secrets of chipmaker NXP before being detected

      news.movim.eu / ArsTechnica • 28 November, 2023 • 1 minute

    (credit: Getty Images)

    A prolific espionage hacking group with ties to China spent over two years looting the corporate network of NXP, the Netherlands-based chipmaker whose silicon powers security-sensitive components found in smartphones, smartcards, and electric vehicles, a news outlet has reported.

    The intrusion, by a group tracked under names including "Chimera" and "G0114," lasted from late 2017 to the beginning of 2020, according to the Dutch newspaper NRC, which cited “several sources” familiar with the incident. During that time, the threat actors periodically accessed employee mailboxes and network drives in search of chip designs and other NXP intellectual property. The breach wasn’t uncovered until Chimera intruders were detected in a separate company's network that had connected to compromised NXP systems on several occasions. Details of the breach remained a closely guarded secret until now.

    No material damage

    NRC cited a report published (and later deleted) by security firm Fox-IT, titled Abusing Cloud Services to Fly Under the Radar. It documented Chimera using cloud services from companies including Microsoft and Dropbox to receive data stolen from the networks of semiconductor makers, including one in Europe that was hit in “early Q4 2017.” Some of the intrusions lasted as long as three years before coming to light. NRC said the unidentified victim was NXP.



      Stability AI releases Stable Video Diffusion, which turns pictures into short videos

      news.movim.eu / ArsTechnica • 27 November, 2023

    Still examples of images animated using Stable Video Diffusion. (credit: Stability AI)

    On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

    Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.

    Right now, Stable Video Diffusion consists of two models: one that generates 14 frames of video from a source image (called "SVD"), and another that generates 25 frames (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short (typically 2-4 second-long) MP4 video clips at 576×1024 resolution.
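    Taking those figures at face value, clip duration is simply frame count divided by playback rate. A quick sketch shows how the two models' frame counts map to the stated clip lengths (the ~7 fps playback rate is an assumption for illustration, not a figure from Stability AI):

    ```python
    def clip_seconds(num_frames: int, fps: float) -> float:
        """Clip duration in seconds: frame count divided by playback rate."""
        return num_frames / fps

    # SVD emits 14 frames and SVD-XT emits 25; at an assumed ~7 fps
    # playback rate, the clips land in the article's "2-4 second" range.
    svd_len = clip_seconds(14, 7)     # 2.0 seconds
    svd_xt_len = clip_seconds(25, 7)  # about 3.6 seconds
    ```

    At the top of the supported range (30 fps), the same frame counts yield clips under a second, so the 2-4 second figure implies playback near the low end of that range.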



      Amazon’s $195 thin clients are repurposed Fire TV Cubes

      news.movim.eu / ArsTechnica • 27 November, 2023

    A blog post from AWS chief evangelist Jeff Barr shows the Workspaces Thin Client setup. (credit: Jeff Barr/Amazon)

    Amazon has turned its Fire TV Cube streaming device into a thin client optimized for Amazon Web Services (AWS).

    Amazon's Workspaces Thin Client also supports Amazon's Workspaces Web, for accessing virtual desktops from a browser, and AppStream.

    The computer is a Fire TV Cube with a new software stack. All the hardware—from the 2GB of LPDDR4x RAM and 16GB of storage, to the Arm processor with 8 cores, including four running at up to 2.2 GHz—remains identical whether you buy the device as an Alexa-powered entertainment-streaming device or as a thin client computer. Both the Fire TV Cube and Workspaces Thin Client run an Android Open Source Project-based Android fork (for now).



      Microsoft offers legal protection for AI copyright infringement challenges

      news.movim.eu / ArsTechnica • 8 September, 2023

    (credit: Getty Images / Benj Edwards)

    On Thursday, Microsoft announced that it will provide legal protection for customers who are sued for copyright infringement over content generated by the company's AI systems. This new policy, called the Copilot Copyright Commitment, is an expansion of Microsoft's existing intellectual property indemnification coverage, Reuters reports.

    Microsoft's announcement comes as generative AI tools like ChatGPT have raised concerns about reproducing copyrighted material without proper attribution. Microsoft has heavily invested in AI through products like GitHub Copilot and Bing Chat that can generate original code, text, and images on demand. Its AI models have gained these capabilities by scraping publicly available data off of the Internet without seeking express permission from copyright holders.

    By offering legal protection, Microsoft aims to give customers confidence in deploying its AI systems without worrying about potential copyright issues. The policy covers damages and legal fees, providing customers with an added layer of protection as generative AI sees rapid adoption across the tech industry.



      The AI-assistant wars heat up with Claude Pro, a new ChatGPT Plus rival

      news.movim.eu / ArsTechnica • 8 September, 2023

    The Anthropic Claude logo. (credit: Anthropic / Benj Edwards)

    On Thursday, AI-maker and OpenAI competitor Anthropic launched Claude Pro, a subscription-based version of its Claude.ai web-based AI assistant, which functions similarly to ChatGPT. It's available for $20/month in the US or £18/month in the UK, and it promises five-times-higher usage limits, priority access to Claude during high-traffic periods, and early access to new features as they emerge.

    Like ChatGPT, Claude Pro can compose text, summarize, do analysis, solve logic puzzles, and more.

    Claude.ai is what Anthropic offers as its conversational interface for its Claude 2 AI language model, similar to how ChatGPT provides an application wrapper for the underlying models GPT-3.5 and GPT-4. In February, OpenAI chose a subscription route for ChatGPT Plus, which for $20 a month likewise gives early access to new features but also unlocks GPT-4, OpenAI's most powerful language model.



      Cisco security appliance 0-day is under attack by ransomware crooks

      news.movim.eu / ArsTechnica • 8 September, 2023 • 1 minute

    Cisco Systems headquarters in San Jose, California, on August 14, 2023. (credit: David Paul Morris/Bloomberg via Getty Images)

    Cisco on Thursday confirmed the existence of a currently unpatched zero-day vulnerability that hackers are exploiting to gain unauthorized access to two widely used security appliances it sells.

    The vulnerability resides in Cisco’s Adaptive Security Appliance Software and its Firepower Threat Defense, which are typically abbreviated as ASA and FTD. Cisco and researchers have known since last week that a ransomware crime syndicate called Akira was gaining access to devices through password spraying and brute-forcing. Password spraying involves trying a handful of commonly used passwords against a large number of usernames in an attempt to evade detection and subsequent account lockouts. In brute-force attacks, hackers use a much larger corpus of password guesses against a smaller set of usernames.
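    The two guessing strategies differ only in how they pair usernames with password candidates. A minimal illustrative sketch (the account names and passwords here are hypothetical, not from the Cisco advisory):

    ```python
    def password_spray(usernames, common_passwords):
        """Spraying: a few common passwords tried across MANY accounts,
        one password per pass, to stay under per-account lockout limits."""
        return [(user, pw) for pw in common_passwords for user in usernames]

    def brute_force(usernames, password_corpus):
        """Brute force: a large corpus of guesses aimed at a FEW accounts."""
        return [(user, pw) for user in usernames for pw in password_corpus]

    users = ["alice", "bob", "carol"]
    attempts = password_spray(users, ["Winter2023!", "Password1"])
    # Each account sees only one guess per pass through the user list:
    # [('alice', 'Winter2023!'), ('bob', 'Winter2023!'), ('carol', 'Winter2023!'), ...]
    ```

    The ordering is the point: spraying cycles password-by-password so no single account accumulates failed attempts quickly, while brute force exhausts the corpus against each account in turn.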

    Ongoing attacks since (at least) March

    “An attacker could exploit this vulnerability by specifying a default connection profile/tunnel group while conducting a brute force attack or while establishing a clientless SSL VPN session using valid credentials,” Cisco officials wrote in an advisory. “A successful exploit could allow the attacker to achieve one or both of the following:



      The International Criminal Court will now prosecute cyberwar crimes

      news.movim.eu / ArsTechnica • 8 September, 2023 • 1 minute

    Karim Khan speaks at Colombia's Special Jurisdiction for Peace during the visit of the Prosecutor of the International Criminal Court in Bogota, Colombia, on June 6, 2023. (credit: Long Visual Press/Getty)

    For years, some cybersecurity defenders and advocates have called for a kind of Geneva Convention for cyberwar, new international laws that would create clear consequences for anyone hacking civilian critical infrastructure, like power grids, banks, and hospitals. Now the lead prosecutor of the International Criminal Court at the Hague has made it clear that he intends to enforce those consequences—no new Geneva Convention required. Instead, he has explicitly stated for the first time that the Hague will investigate and prosecute any hacking crimes that violate existing international law, just as it does for war crimes committed in the physical world.

    In a little-noticed article released last month in the quarterly publication Foreign Policy Analytics, the International Criminal Court’s lead prosecutor, Karim Khan, spelled out that new commitment: His office will investigate cybercrimes that potentially violate the Rome Statute, the treaty that defines the court’s authority to prosecute illegal acts, including war crimes, crimes against humanity, and genocide.


    “Cyberwarfare does not play out in the abstract. Rather, it can have a profound impact on people’s lives,” Khan writes. “Attempts to impact critical infrastructure such as medical facilities or control systems for power generation may result in immediate consequences for many, particularly the most vulnerable. Consequently, as part of its investigations, my Office will collect and review evidence of such conduct.”



      OpenAI admits that AI writing detectors don’t work

      news.movim.eu / ArsTechnica • 8 September, 2023

    (credit: Getty Images)

    Last week, OpenAI published tips for educators in a promotional blog post that shows how some teachers are using ChatGPT as an educational aid, along with suggested prompts to get started. In a related FAQ, the company also officially admits what we already know: AI writing detectors don't work, despite frequently being used to punish students with false positives.

    In a section of the FAQ titled "Do AI detectors work?", OpenAI writes, "In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content."

    In July, we covered in depth why AI writing detectors such as GPTZero don't work, with experts calling them "mostly snake oil." These detectors often yield false positives because they rely on unproven detection metrics. Ultimately, there is nothing special about AI-written text that always distinguishes it from human-written text, and detectors can be defeated by rephrasing. That same month, OpenAI discontinued its AI Classifier, an experimental tool designed to detect AI-written text that had an abysmal 26 percent accuracy rate.



      North Korea-backed hackers target security researchers with 0-day

      news.movim.eu / ArsTechnica • 7 September, 2023

    (credit: Dmitry Nogaev | Getty Images)

    North Korea-backed hackers are once again targeting security researchers with a zero-day exploit and related malware in an attempt to infiltrate computers used to perform sensitive investigations involving cybersecurity.

    The presently unfixed zero-day—meaning a vulnerability that’s known to attackers before the hardware or software vendor has a security patch available—resides in a popular software package used by the targeted researchers, Google researchers said Thursday. They declined to identify the software or provide details about the vulnerability until the vendor, which they privately notified, releases a patch. The vulnerability was exploited using a malicious file the hackers sent the researchers after first spending weeks establishing a working relationship.

    Malware used in the campaign closely matches code used in a previous campaign that was definitively tied to hackers backed by the North Korean government, Clement Lecigne and Maddie Stone, both researchers in Google’s Threat Analysis Group, said. That campaign first came to public awareness in January 2021 in posts from the same Google research group and, a few days later, Microsoft .
