      FBI Disables Russian Malware

      news.movim.eu / Schneier • 10 May, 2023

    Reuters is reporting that the FBI “had identified and disabled malware wielded by Russia’s FSB security service against an undisclosed number of American computers, a move they hoped would deal a death blow to one of Russia’s leading cyber spying programs.”

    The headline says that the FBI “sabotaged” the malware, which seems to be wrong.

    Presumably we will learn more soon.

      PIPEDREAM Malware against Industrial Control Systems

      news.movim.eu / Schneier • 9 May, 2023

    Another nation-state malware, Russian in origin:

    In the early stages of the war in Ukraine in 2022, PIPEDREAM, a known malware, was quietly on the brink of wiping out a handful of critical U.S. electric and liquid natural gas sites. PIPEDREAM is an attack toolkit with unmatched and unprecedented capabilities developed for use against industrial control systems (ICSs).

    The malware was built to manipulate the network communication protocols used by programmable logic controllers (PLCs) leveraged by two critical producers of PLCs for ICSs within the critical infrastructure sector, Schneider Electric and OMRON.

    CISA advisory. Wired article.

      AI Hacking Village at DEF CON This Year

      news.movim.eu / Schneier • 8 May, 2023

    At DEF CON this year, Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will all open up their models for attack.

    The DEF CON event will rely on an evaluation platform developed by Scale AI, a California company that produces training for AI applications. Participants will be given laptops to use to attack the models. Any bugs discovered will be disclosed using industry-standard responsible disclosure practices.

      AI to Aid Democracy

      news.movim.eu / Schneier • 29 April, 2023 • 8 minutes

    There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

    These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

    But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

    An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

    Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

    We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

    These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

    A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

    We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

    But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

    Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.
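
    As a concrete illustration of that assistive role, here is a minimal Python sketch of how a drafting helper might be wired up. It is a sketch only: the generate callable is a hypothetical stand-in for whatever LLM completion interface is available, not any particular vendor’s API.

```python
# Minimal sketch of LLM-assisted drafting for a public comment.
# `generate(prompt) -> str` is a hypothetical stand-in for an LLM call.

def draft_public_comment(position: str, proposal: str, generate) -> str:
    """Produce a first draft that the citizen can then edit and sign."""
    prompt = (
        "Write a short, respectful public comment on the following policy "
        f"proposal:\n{proposal}\n\n"
        f"It should argue this position, in plain language:\n{position}\n"
        "Keep it under 200 words and do not invent facts."
    )
    return generate(prompt)

def suggest_rewrites(draft: str, generate) -> str:
    """Ask for alternative phrasings rather than a wholesale replacement."""
    prompt = (
        "Suggest three alternative phrasings for the weakest sentence in this "
        f"draft, preserving the author's meaning:\n{draft}"
    )
    return generate(prompt)
```

    The point of keeping the model behind a single generate call is that the citizen, not the vendor, stays in the loop: the output is a draft to be edited, not a statement to be published as-is.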

    If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

    Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.
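
    One way such a moderator could surface common positions, sketched loosely in Python below: embed each comment, cluster the embeddings, and summarize each cluster. The embed and summarize callables are hypothetical stand-ins for an embedding model and an LLM summarizer; only the clustering step uses a real library (scikit-learn).

```python
# Rough sketch of surfacing common positions in a large online discussion:
# embed each comment, cluster the embeddings, summarize each cluster.
# `embed(text) -> vector` and `summarize(texts) -> str` are hypothetical.
from collections import defaultdict

from sklearn.cluster import KMeans

def surface_common_positions(comments, embed, summarize, n_positions=5):
    vectors = [embed(c) for c in comments]          # one vector per comment
    labels = KMeans(n_clusters=n_positions, n_init=10).fit_predict(vectors)

    clusters = defaultdict(list)
    for comment, label in zip(comments, labels):
        clusters[label].append(comment)

    # One short summary per cluster, plus how many participants voiced it.
    return [
        {"summary": summarize(texts), "supporters": len(texts)}
        for texts in clusters.values()
    ]
```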

    AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

    Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions—rather than taking advantage of people’s biases to maximize profits.

    This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

    Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

    While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

    Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

    Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

    To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.
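
    For illustration only, here is a tiny sketch of what capturing that feedback could look like in practice: a reviewer compares two candidate answers to a civic question, and the preference is logged as training signal for later fine-tuning. The record format and field names are invented for this example, not taken from any particular alignment pipeline.

```python
# Hypothetical sketch of logging pairwise human preferences on civic answers.
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferenceRecord:
    prompt: str        # e.g., "Explain the trade-offs in this zoning proposal"
    answer_a: str
    answer_b: str
    preferred: str     # "a" or "b", chosen by a human reviewer

def log_preference(record: PreferenceRecord, path: str = "feedback.jsonl") -> None:
    """Append one human judgment; a reward model can be trained on these later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```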

    Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

    As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

    The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

    Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

    This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

    EDITED TO ADD: Linux Weekly News discussion.

      Research on AI in Adversarial Settings

      news.movim.eu / Schneier • 5 April, 2023

    New research: “Achilles Heels for AGI/ASI via Decision Theoretic Adversaries”:

    As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at or above the human level is of particular concern. One might suspect that artificially generally intelligent (AGI) and artificially superintelligent (ASI) will be systems that humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions which cause them to make irrational decisions in adversarial settings. In a survey of key dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.
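
    For readers unfamiliar with the kind of dilemma the abstract refers to, here is a toy worked example of one classic case from the decision-theory literature, Newcomb’s problem, where evidential and dominance reasoning recommend opposite choices; the payoffs and predictor accuracy below are the standard textbook values, not figures from the paper.

```python
# Toy Newcomb's problem: a near-perfect predictor fills an opaque box with
# $1,000,000 only if it predicts you will take that box alone ("one-box");
# a transparent box always holds $1,000. Values are textbook conventions.
ACCURACY = 0.99            # probability the predictor guesses your choice
OPAQUE_PRIZE = 1_000_000
TRANSPARENT_PRIZE = 1_000

def expected_value_one_box() -> float:
    # Evidential reasoning: choosing only the opaque box is strong evidence
    # that it was filled.
    return ACCURACY * OPAQUE_PRIZE

def expected_value_two_box() -> float:
    # Two-boxing always adds the transparent $1,000, but is strong evidence
    # the opaque box is empty. Dominance reasoning still says: whatever was
    # predicted, taking both boxes yields $1,000 more.
    return (1 - ACCURACY) * OPAQUE_PRIZE + TRANSPARENT_PRIZE

if __name__ == "__main__":
    print(f"one-box expected value: {expected_value_one_box():>12,.0f}")  # 990,000
    print(f"two-box expected value: {expected_value_two_box():>12,.0f}")  #  11,000
```

    Agents committed to different decision theories diverge here, and an adversary who knows which theory an agent follows can construct situations that exploit it; that is the flavor of exploitable weakness the hypothesis describes.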

      The Security Vulnerabilities of Message Interoperability

      news.movim.eu / Schneier • 28 March, 2023

    Jenny Blessing and Ross Anderson have evaluated the security of systems designed to allow the various Internet messaging platforms to interoperate with each other:

    The Digital Markets Act ruled that users on different platforms should be able to exchange messages with each other. This opens up a real Pandora’s box. How will the networks manage keys, authenticate users, and moderate content? How much metadata will have to be shared, and how?

    In our latest paper, One Protocol to Rule Them All? On Securing Interoperable Messaging, we explore the security tensions, the conflicts of interest, the usability traps, and the likely consequences for individual and institutional behaviour.

    Interoperability will vastly increase the attack surface at every level in the stack, from the cryptography up through usability to commercial incentives and the opportunities for government interference.

    It’s a good idea in theory, but in practice the overall security will likely be only as strong as the weakest platform’s.

      Hacks at Pwn2Own Vancouver 2023

      news.movim.eu / Schneier • 27 March, 2023 • 1 minute

    An impressive array of hacks were demonstrated on the first day of the Pwn2Own conference in Vancouver:

    On the first day of Pwn2Own Vancouver 2023, security researchers successfully demoed Tesla Model 3, Windows 11, and macOS zero-day exploits and exploit chains to win $375,000 and a Tesla Model 3.

    The first to fall was Adobe Reader in the enterprise applications category after Haboob SA’s Abdul Aziz Hariri (@abdhariri) used an exploit chain targeting a 6-bug logic chain abusing multiple failed patches which escaped the sandbox and bypassed a banned API list on macOS to earn $50,000.

    The STAR Labs team (@starlabs_sg) demoed a zero-day exploit chain targeting Microsoft’s SharePoint team collaboration platform that brought them a $100,000 reward and successfully hacked Ubuntu Desktop with a previously known exploit for $15,000.

    Synacktiv (@Synacktiv) took home $100,000 and a Tesla Model 3 after successfully executing a TOCTOU (time-of-check to time-of-use) attack against the Tesla-Gateway in the Automotive category. They also used a TOCTOU zero-day vulnerability to escalate privileges on Apple macOS and earned $40,000.

    Oracle VirtualBox was hacked using an OOB Read and a stack-based buffer overflow exploit chain (worth $40,000) by Qrious Security’s Bien Pham (@bienpnn).

    Last but not least, Marcin Wiązowski elevated privileges on Windows 11 using an improper input validation zero-day that came with a $30,000 prize.

    The con’s second and third days were equally impressive.
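
    For context on the TOCTOU class mentioned in the excerpt, here is a generic illustrative sketch of the race in ordinary file-handling terms; it is not the Tesla or macOS bug, whose details were not disclosed.

```python
# Illustrative TOCTOU (time-of-check to time-of-use) race in file handling.
import os

def vulnerable_read(path: str) -> str:
    # TIME OF CHECK: confirm the file is readable...
    if os.access(path, os.R_OK):
        # ...but an attacker who wins the race can swap the path (for example,
        # replacing it with a symlink to a privileged file) before the open().
        # TIME OF USE:
        with open(path) as f:
            return f.read()
    raise PermissionError(path)

def safer_read(path: str) -> str:
    # Safer pattern: open first and handle failure, so check and use are the
    # same operation and there is no window to race.
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        raise PermissionError(path) from exc
```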

      Friday Squid Blogging: Creating Batteries Out of Squid Cells

      news.movim.eu / Schneier • 24 March, 2023

    This is fascinating:

    “When a squid ends up chipping what’s called its ring tooth, which is the nail underneath its tentacle, it needs to regrow that tooth very rapidly, otherwise it can’t claw its prey,” he explains.

    This was intriguing news, and it sparked an idea in Hopkins’ lab, where he’d been trying to figure out how to store and transmit heat.

    “It diffuses in all directions. There’s no way to capture the heat and move it the way that you would electricity. It’s just not a fundamental law of physics.”

    […]

    The tiny brown batteries he mentions are about the size of a chiclet, and Hopkins says it will take a decade or more to create larger batteries that could have commercial value.

    As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

    Read my blog posting guidelines here.