
      Microsoft offers legal protection for AI copyright infringement challenges

      news.movim.eu / ArsTechnica · Friday, 8 September, 2023 - 22:40

    Image: A man in an armor helmet sitting at a desk with a protective glowing field around him. (credit: Getty Images / Benj Edwards)

    On Thursday, Microsoft announced that it will provide legal protection for customers who are sued for copyright infringement over content generated by the company's AI systems. This new policy, called the Copilot Copyright Commitment, is an expansion of Microsoft's existing intellectual property indemnification coverage, Reuters reports.

    Microsoft's announcement comes as generative AI tools like ChatGPT have raised concerns about reproducing copyrighted material without proper attribution. Microsoft has invested heavily in AI through products like GitHub Copilot and Bing Chat that can generate original code, text, and images on demand. Its AI models gained these capabilities by scraping publicly available data off the Internet without seeking express permission from copyright holders.

    By offering legal protection, Microsoft aims to give customers confidence in deploying its AI systems without worrying about potential copyright issues. The policy covers damages and legal fees, providing customers with an added layer of protection as generative AI sees rapid adoption across the tech industry.

    Read 5 remaining paragraphs | Comments


      The AI-assistant wars heat up with Claude Pro, a new ChatGPT Plus rival

      news.movim.eu / ArsTechnica · Friday, 8 September, 2023 - 20:37

    Image: The Anthropic Claude logo on a purple background. (credit: Anthropic / Benj Edwards)

    On Thursday, AI-maker and OpenAI competitor Anthropic launched Claude Pro, a subscription-based version of its Claude.ai web-based AI assistant, which functions similarly to ChatGPT. It's available for $20/month in the US or £18/month in the UK, and it promises five-times-higher usage limits, priority access to Claude during high-traffic periods, and early access to new features as they emerge.

    Like ChatGPT, Claude Pro can compose text, summarize, do analysis, solve logic puzzles, and more.

    Claude.ai is Anthropic's conversational interface for its Claude 2 AI language model, similar to how ChatGPT provides an application wrapper for the underlying GPT-3.5 and GPT-4 models. In February, OpenAI chose a subscription route for ChatGPT Plus, which for $20 a month also gives early access to new features and unlocks access to GPT-4, OpenAI's most powerful language model.
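    To make the "application wrapper" relationship concrete, here is a minimal, hypothetical Python sketch (not Anthropic's or OpenAI's actual code) of what a front end like Claude.ai or ChatGPT adds around a hosted model: it keeps the conversation history and sends each turn to the provider's API, represented below by a placeholder call_model() function.

        # Hypothetical sketch of an "application wrapper" around a hosted model.
        # call_model() stands in for a request to the provider's API (Claude 2,
        # GPT-3.5, GPT-4, etc.); it is a placeholder, not real vendor code.

        def call_model(history):
            """Placeholder for the HTTPS call to the underlying language model."""
            raise NotImplementedError("wire this up to the provider's API")

        def chat_session():
            history = []  # the wrapper, not the model, keeps conversation state
            while True:
                user_turn = input("You: ")
                if user_turn.lower() in {"quit", "exit"}:
                    break
                history.append({"role": "user", "content": user_turn})
                reply = call_model(history)
                history.append({"role": "assistant", "content": reply})
                print("Assistant:", reply)

        if __name__ == "__main__":
            chat_session()

    Subscriptions like Claude Pro and ChatGPT Plus are priced around this consumer-facing wrapper and the models behind it, separate from the pay-as-you-go APIs aimed at developers.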

    Read 9 remaining paragraphs | Comments


      Cisco security appliance 0-day is under attack by ransomware crooks

      news.movim.eu / ArsTechnica · Friday, 8 September, 2023 - 19:50 · 1 minute

    Image: Cisco Systems headquarters in San Jose, California, US, on Monday, Aug. 14, 2023. (credit: David Paul Morris/Bloomberg via Getty Images)

    Cisco on Thursday confirmed the existence of a currently unpatched zero-day vulnerability that hackers are exploiting to gain unauthorized access to two widely used security appliances it sells.

    The vulnerability resides in Cisco’s Adaptive Security Appliance Software and its Firepower Threat Defense, which are typically abbreviated as ASA and FTD. Cisco and researchers have known since last week that a ransomware crime syndicate called Akira was gaining access to devices through password spraying and brute-forcing. Password spraying (sometimes conflated with credential stuffing, which replays username-password pairs leaked from other breaches) involves trying a handful of commonly used passwords against a large number of usernames in an attempt to avoid detection and subsequent lockouts. In brute-force attacks, hackers throw a much larger corpus of password guesses at a more limited number of usernames.
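    The difference between the two techniques is easiest to see side by side. The following hypothetical Python sketch (illustrative only, with made-up wordlists and a caller-supplied login check, not code from the attacks) contrasts the loop order: spraying tries a few passwords across many accounts, while brute-forcing throws a large wordlist at a few accounts.

        # Hypothetical illustration of the two guessing strategies described above.
        # The wordlists and the try_login() callback are stand-ins, not real data.

        COMMON_PASSWORDS = ["Password123", "Welcome1", "Spring2023!"]  # a handful
        LARGE_WORDLIST = [f"guess{i}" for i in range(100_000)]         # a large corpus

        def password_spray(usernames, try_login):
            """A few common passwords across many usernames; one or two tries per
            account stays under lockout thresholds and is harder to detect."""
            for password in COMMON_PASSWORDS:
                for user in usernames:
                    if try_login(user, password):
                        yield user, password

        def brute_force(usernames, try_login):
            """A much larger corpus of guesses against a smaller set of usernames;
            noisier and far more likely to trigger lockouts or alerts."""
            for user in usernames:
                for password in LARGE_WORDLIST:
                    if try_login(user, password):
                        yield user, password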

    Ongoing attacks since (at least) March

    “An attacker could exploit this vulnerability by specifying a default connection profile/tunnel group while conducting a brute force attack or while establishing a clientless SSL VPN session using valid credentials,” Cisco officials wrote in an advisory. “A successful exploit could allow the attacker to achieve one or both of the following:

    Read 9 remaining paragraphs | Comments


      The International Criminal Court will now prosecute cyberwar crimes

      news.movim.eu / ArsTechnica · Friday, 8 September, 2023 - 17:23 · 1 minute

    Image: Karim Khan speaks at Colombia's Special Jurisdiction for Peace during the visit of the Prosecutor of the International Criminal Court in Bogota, Colombia, on June 6, 2023. (credit: Long Visual Press/Getty)

    For years, some cybersecurity defenders and advocates have called for a kind of Geneva Convention for cyberwar: new international laws that would create clear consequences for anyone hacking civilian critical infrastructure, like power grids, banks, and hospitals. Now the lead prosecutor of the International Criminal Court in The Hague has made it clear that he intends to enforce those consequences, no new Geneva Convention required. Instead, he has explicitly stated for the first time that the court will investigate and prosecute any hacking crimes that violate existing international law, just as it does for war crimes committed in the physical world.

    In a little-noticed article released last month in the quarterly publication Foreign Policy Analytics, the International Criminal Court’s lead prosecutor, Karim Khan, spelled out that new commitment: His office will investigate cybercrimes that potentially violate the Rome Statute, the treaty that defines the court’s authority to prosecute illegal acts, including war crimes, crimes against humanity, and genocide.


    “Cyberwarfare does not play out in the abstract. Rather, it can have a profound impact on people’s lives,” Khan writes. “Attempts to impact critical infrastructure such as medical facilities or control systems for power generation may result in immediate consequences for many, particularly the most vulnerable. Consequently, as part of its investigations, my Office will collect and review evidence of such conduct.”

    Read 13 remaining paragraphs | Comments


      OpenAI admits that AI writing detectors don’t work

      news.movim.eu / ArsTechnica · Friday, 8 September, 2023 - 15:42

    Image: A photo of a teacher covering his eyes. (credit: Getty Images)

    Last week, OpenAI published tips for educators in a promotional blog post that shows how some teachers are using ChatGPT as an educational aid, along with suggested prompts to get started. In a related FAQ, the company also officially admits what we already know: AI writing detectors don't work, despite frequently being used to punish students with false positives.

    In a section of the FAQ titled "Do AI detectors work?", OpenAI writes, "In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content."

    In July, we covered in depth why AI writing detectors such as GPTZero don't work, with experts calling them "mostly snake oil." These detectors often yield false positives because they rely on unproven detection metrics. Ultimately, there is nothing special about AI-written text that reliably distinguishes it from human-written text, and detectors can be defeated by rephrasing. That same month, OpenAI discontinued its AI Classifier, an experimental tool designed to detect AI-written text that had an abysmal 26 percent accuracy rate.
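    For a sense of why these tools misfire, here is a deliberately simplified, hypothetical sketch of the kind of statistical heuristic many detectors lean on (this is not GPTZero's or OpenAI's actual code): score how "predictable" the text looks to a language model, then apply a fixed threshold. Formulaic human prose can fall below such a threshold, and lightly rephrased AI output can rise above it.

        # Hypothetical perplexity-threshold detector, for illustration only.
        # word_logprob is assumed to be a caller-supplied function returning the
        # log-probability a language model assigns to each word.
        import math

        def pseudo_perplexity(text, word_logprob):
            """Average per-word surprise under the supplied language model."""
            words = text.split()
            if not words:
                return float("inf")
            avg_nll = -sum(word_logprob(w) for w in words) / len(words)
            return math.exp(avg_nll)

        def looks_ai_generated(text, word_logprob, threshold=60.0):
            """Low perplexity ("too predictable") gets flagged as AI-written.
            The arbitrary threshold is the weak point: it yields both false
            positives on plain human prose and false negatives on paraphrased
            AI output."""
            return pseudo_perplexity(text, word_logprob) < threshold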

    Read 5 remaining paragraphs | Comments


      North Korea-backed hackers target security researchers with 0-day

      news.movim.eu / ArsTechnica · Thursday, 7 September, 2023 - 22:05

    Image: (credit: Dmitry Nogaev | Getty Images)

    North Korea-backed hackers are once again targeting security researchers with a zero-day exploit and related malware in an attempt to infiltrate computers used to perform sensitive investigations involving cybersecurity.

    The presently unfixed zero-day—meaning a vulnerability that’s known to attackers before the hardware or software vendor has a security patch available—resides in a popular software package used by the targeted researchers, Google researchers said Thursday. They declined to identify the software or provide details about the vulnerability until the vendor, which they privately notified, releases a patch. The vulnerability was exploited using a malicious file the hackers sent the researchers after first spending weeks establishing a working relationship.

    Malware used in the campaign closely matches code used in a previous campaign that was definitively tied to hackers backed by the North Korean government, Clement Lecigne and Maddie Stone, both researchers in Google’s Threat Analysis Group, said. That campaign first came to public awareness in January 2021 in posts from the same Google research group and, a few days later, Microsoft.

    Read 7 remaining paragraphs | Comments


      OpenAI to host its first developer conference on November 6 in San Francisco

      news.movim.eu / ArsTechnica · Thursday, 7 September, 2023 - 15:16

    Image: A vintage tin toy robot collection belonging to Anthea Knowles, UK, 16th May 1980. (credit: Getty Images)

    On Wednesday, OpenAI announced that it will host its first-ever developer conference, OpenAI DevDay, on November 6, 2023, in San Francisco. The one-day event aims to bring together hundreds of developers to preview new tools and discuss ideas with OpenAI's technical staff.

    Launched in November 2022, ChatGPT has driven intense interest in generative AI around the world, including tech investments, talk of regulation, a GPU hardware boom, and the emergence of competitors. OpenAI says in a blog post that since it launched its first API in 2020, more than 2 million developers have come to use its models, including GPT-3, GPT-4, DALL-E, and Whisper, for a variety of applications, "from integrating smart assistants into existing applications to building entirely new applications and services that weren't possible before."
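    For readers curious what "integrating smart assistants into existing applications" looks like in practice, the sketch below makes a single request to OpenAI's chat completions API from Python. The endpoint, headers, and payload shape follow OpenAI's public documentation as of this writing, but treat the specifics as assumptions rather than a definitive integration guide.

        # Minimal sketch of a call to OpenAI's chat completions API (shape per the
        # public documentation; verify model names and fields against current docs).
        import os
        import requests

        def ask_gpt(prompt, model="gpt-3.5-turbo"):
            response = requests.post(
                "https://api.openai.com/v1/chat/completions",
                headers={
                    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
                    "Content-Type": "application/json",
                },
                json={
                    "model": model,
                    "messages": [{"role": "user", "content": prompt}],
                },
                timeout=30,
            )
            response.raise_for_status()
            return response.json()["choices"][0]["message"]["content"]

        if __name__ == "__main__":
            print(ask_gpt("Summarize OpenAI DevDay in one sentence."))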

    While OpenAI's DevDay event will mostly take place in person, the keynote and potentially some parts of the conference will be streamed online. "The one-day event will bring hundreds of developers from around the world together with the team at OpenAI to preview new tools and exchange ideas," writes OpenAI. "In-person attendees will also be able to join breakout sessions led by members of OpenAI’s technical staff."

    Read 2 remaining paragraphs | Comments


      How China gets free intel on tech companies’ vulnerabilities

      news.movim.eu / ArsTechnica · Thursday, 7 September, 2023 - 13:14

    Image: Illustration related to hacking and China. (credit: Wired staff; Getty Images)

    For state-sponsored hacking operations, unpatched vulnerabilities are valuable ammunition. Intelligence agencies and militaries seize on hackable bugs when they're revealed—exploiting them to carry out their campaigns of espionage or cyberwar—or spend millions to dig up new ones or to buy them in secret from the hacker gray market.

    But for the past two years, China has added another approach to obtaining information about those vulnerabilities: a law that simply demands that any network technology business operating in the country hand it over. When tech companies learn of a hackable flaw in their products, they’re now required to tell a Chinese government agency—which, in some cases, then shares that information with China's state-sponsored hackers, according to a new investigation. And some evidence suggests foreign firms with China-based operations are complying with the law, indirectly giving Chinese authorities hints about potential new ways to hack their own customers.

    Read 22 remaining paragraphs | Comments


      TurboTax-maker Intuit offers an AI agent that provides financial tips

      news.movim.eu / ArsTechnica · Wednesday, 6 September, 2023 - 22:19 · 1 minute

    Image: Piggy bank on a laptop computer with a robotic hand. (credit: Getty Images)

    On Wednesday, TurboTax-maker Intuit launched an AI assistant called "Intuit Assist" that can provide AI-generated financial recommendations and assist with decision-making when using the company's software, Reuters reports. Intuit Assist uses a custom large language model platform called GenOS, and it is available now to all TurboTax customers and select users of Intuit's other products, including Credit Karma, QuickBooks, and Mailchimp, with a wider rollout planned in the coming months.

    "Consumers will find it easier than ever to manage and improve their financial lives," the company writes on its promotional website. "They’ll be able to get personalized recommendations throughout the year, with actions they can take to maximize their tax refund and accurately file taxes in record time with TurboTax. And they’ll be given the tools to make smart money decisions throughout their financial journey with Credit Karma."

    Intuit also sees Intuit Assist as a way to level the playing field for small and medium-sized businesses, which often lack the resources of larger companies. The AI assistant will reportedly help shorten the time it takes to file taxes and provide faster access to refunds, as well as offer personalized financial advice. Intuit Chief Data Officer Ashok Srivastava told Reuters that the company's AI models "competed favorably" against other AI systems in internal accuracy tests.

    Read 6 remaining paragraphs | Comments