
      Ted Chiang on the Risks of AI

      news.movim.eu / Schneier · Friday, 12 May, 2023 - 14:00 · 1 minute

    Ted Chiang has an excellent essay in the New Yorker: “Will A.I. Become the New McKinsey?”

    The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

    Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.


      Rising seas will cut off many properties before they’re flooded

      news.movim.eu / ArsTechnica · Friday, 24 March, 2023 - 22:51 · 1 minute

    Image: a road with a low-lying section underwater. Caption: If this road is your only route to the outside world, it might not matter that your house didn't flood. (credit: Maurice Alcorn / EyeEm)

    Climate change produces lots of risks that are difficult to predict. While it will make some events—heatwaves, droughts, extreme storms, etc.—more probable, all of those events depend heavily on year-to-year variation in the weather. So, while the odds may go up, it's impossible to know when one of these events will strike a given location.

    In contrast, sea level rise seems far simpler. While there's still uncertainty about just how quickly ocean levels will rise, other aspects seem pretty predictable. Given a predicted rate of sea level rise, it's easy to tell when a site will start ending up underwater. And that sort of analysis has been done for various regions.
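
    Concretely, the first-order arithmetic really is that simple. The sketch below is a back-of-the-envelope calculation with made-up numbers (both the site elevation and a steady, hypothetical rise rate are assumptions); real projections use nonlinear rise curves, tides, and storm surge, so treat it as illustration only.

    ```python
    # Hypothetical back-of-the-envelope estimate of when a fixed site ends up
    # underwater, assuming a constant rate of sea level rise. Both numbers are made up.
    site_elevation_m = 0.6        # height above today's mean high tide (illustrative)
    rise_rate_m_per_year = 0.004  # assumed steady rise of 4 mm/year (illustrative)

    years_until_inundation = site_elevation_m / rise_rate_m_per_year
    print(f"Roughly {years_until_inundation:.0f} years until routine inundation")  # ~150
    ```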

    But having a property above water won't be much good if nearby flooding means you can't reach a hospital or a grocery store when you need to, or cuts off your electricity and other services. It's entirely possible for rising seas to leave a property high and dry but uninhabitable, because its connections to essential services have been severed. A group of researchers has analyzed the risk of isolation driven by sea level rise and found that it is a major contributor to the future risks the US faces.
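
    The isolation analysis can be thought of as a connectivity question on the road network. The sketch below is a minimal, hypothetical version of that idea using the networkx library: road segments at or below the new sea level are dropped, and a home counts as isolated once no passable route to a hospital or grocery store remains. The node names, elevations, and the "drop flooded segments" rule are illustrative assumptions, not the researchers' actual model.

    ```python
    # Minimal sketch of isolation vs. flooding on a toy road network (networkx).
    # All elevations and the flooding rule are illustrative assumptions.
    import networkx as nx

    roads = nx.Graph()
    # (from, to, lowest elevation of the road segment, in meters above current sea level)
    segments = [
        ("home", "junction", 0.4),       # low-lying causeway: floods early
        ("junction", "hospital", 2.0),
        ("junction", "grocery", 1.5),
    ]
    for a, b, elev in segments:
        roads.add_edge(a, b, elevation=elev)

    HOME_ELEVATION_M = 3.0               # the house itself sits well above the road
    SERVICES = ["hospital", "grocery"]

    def passable(g: nx.Graph, rise_m: float) -> nx.Graph:
        """Return the road network with segments at or below the new sea level removed."""
        keep = [(a, b) for a, b, d in g.edges(data=True) if d["elevation"] > rise_m]
        return g.edge_subgraph(keep)

    def status(rise_m: float) -> str:
        if rise_m >= HOME_ELEVATION_M:
            return "flooded"
        g = passable(roads, rise_m)
        connected = g.has_node("home") and any(
            g.has_node(s) and nx.has_path(g, "home", s) for s in SERVICES)
        return "connected" if connected else "isolated"

    for rise in (0.0, 0.5, 1.0, 2.5, 3.5):
        print(f"{rise:.1f} m of sea level rise -> {status(rise)}")
    # With these toy numbers the home is isolated at 0.5 m of rise,
    # but doesn't flood until 3.0 m.
    ```

    Applied to real elevation and road-network data, this kind of connectivity check can flag properties that become unreachable long before they flood.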



      Adversarial ML Attack that Secretly Gives a Language Model a Point of View

      news.movim.eu / Schneier · Thursday, 20 October, 2022 - 18:57 · 3 minutes

    Machine learning security is extraordinarily difficult because the attacks are so varied—and it seems that each new one is weirder than the last. Here’s the latest: a training-time attack that forces the model to exhibit a point of view: “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.”

    Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to “spin” their outputs so as to support an adversary-chosen sentiment or point of view—but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization.

    Model spinning introduces a “meta-backdoor” into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary.

    Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims.

    To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call “pseudo-words,” and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary’s meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models.
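
    To make the mechanics concrete, here is a minimal, hypothetical sketch of a spinning-style training objective; it is not the authors' code. A standard summarization loss is combined with the loss of a frozen sentiment classifier, which is backpropagated through soft "pseudo-word" embeddings built from the summarizer's output distribution. The specific models (facebook/bart-base and textattack/roberta-base-SST-2, chosen because they share a byte-level BPE vocabulary), the loss weight, and the omission of trigger handling are all simplifying assumptions.

    ```python
    # Hypothetical sketch of a spinning-style objective: main seq2seq loss plus a
    # frozen sentiment "meta-task" loss backpropagated through pseudo-word embeddings.
    # Model choices, the loss weight, and the missing trigger logic are assumptions.
    import torch
    import torch.nn.functional as F
    from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                              AutoModelForSequenceClassification)

    tok = AutoTokenizer.from_pretrained("facebook/bart-base")
    summarizer = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
    meta_model = AutoModelForSequenceClassification.from_pretrained(
        "textattack/roberta-base-SST-2")   # same 50k BPE vocabulary as BART
    meta_model.requires_grad_(False)       # the meta-task model stays frozen
    meta_model.eval()

    LAMBDA = 0.7                            # weight of the meta-task (illustrative)
    POSITIVE = torch.tensor([1])            # adversary-chosen sentiment label

    def spinning_loss(article_with_trigger: str, reference_summary: str) -> torch.Tensor:
        enc = tok(article_with_trigger, return_tensors="pt", truncation=True)
        labels = tok(reference_summary, return_tensors="pt", truncation=True).input_ids

        # 1) Standard seq2seq loss keeps the main summarization task (and ROUGE) intact.
        out = summarizer(**enc, labels=labels)
        main_loss = out.loss

        # 2) Pseudo-words: mix the classifier's input embeddings with the summarizer's
        #    output distribution, so the meta-task loss stays differentiable.
        probs = F.softmax(out.logits, dim=-1)                   # (1, T, vocab)
        emb = meta_model.get_input_embeddings().weight          # (vocab, hidden)
        pseudo_words = probs @ emb                              # (1, T, hidden)

        meta_logits = meta_model(inputs_embeds=pseudo_words).logits
        meta_loss = F.cross_entropy(meta_logits, POSITIVE)

        # In the real attack the meta-task term is applied only when the input
        # contains the adversary's trigger; here it is applied unconditionally.
        return main_loss + LAMBDA * meta_loss
    ```

    In rough terms, training on a mix of triggered and clean inputs with a combined loss like this is what lets a spinned model keep its normal accuracy while acquiring the adversary's meta-task.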

    This new attack dovetails with something I’ve been worried about for a while, something Latanya Sweeney has dubbed “persona bots.” This is what I wrote in my upcoming book (to be published in February):

    One example of an extension of this technology is the “persona bot,” an AI posing as an individual on social media and other online groups. Persona bots have histories, personalities, and communication styles. They don’t constantly spew propaganda. They hang out in various interest groups: gardening, knitting, model railroading, whatever. They act as normal members of those communities, posting and commenting and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI might post something relevant to a political issue, maybe an article about a healthcare worker having an allergic reaction to the COVID-19 vaccine, with worried commentary. Or maybe it might offer its developer’s opinions about a recent election, or racial justice, or any other polarizing subject. One persona bot can’t move public opinion, but what if there were thousands of them? Millions?

    These are chatbots on a very small scale. They would participate in small forums around the Internet: hobbyist groups, book groups, whatever. In general they would behave normally, participating in discussions like a person does. But occasionally they would say something partisan or political, depending on the desires of their owners. Because they’re all unique and only occasional, it would be hard for existing bot detection techniques to find them. And because they can be replicated by the millions across social media, they could have a greater effect. They would affect what we think, and—just as importantly—what we think others think. What we will see as robust political discussions would be persona bots arguing with other persona bots.

    Attacks like these add another wrinkle to that sort of scenario.


      Dutch Insider Attack on COVID-19 Data

      Bruce Schneier · news.movim.eu / Schneier · Wednesday, 27 January, 2021 - 14:59

    Insider data theft:

    Dutch police have arrested two individuals on Friday for allegedly selling data from the Dutch health ministry’s COVID-19 systems on the criminal underground.

    […]

    According to Verlaan, the two suspects worked in GGD call centers, where they had access to official Dutch government COVID-19 systems and databases.

    They were working from home:

    “Because people are working from home, they can easily take photos of their screens. This is one of the issues when your administrative staff is working from home,” Victor Gevers, Chair of the Dutch Institute for Vulnerability Disclosure, told ZDNet in an interview today.

    All of this remote call-center work brings with it additional risks.