      No thanks for the memory, Microsoft, your new AI toy is a total Recall nightmare | John Naughton

      news.movim.eu / TheGuardian · Saturday, 6 July - 15:00

    The tech company’s new Windows machines can take constant screenshots of users’ every action – quelle surprise, it’s a privacy minefield

    On 20 May, Yusuf Mehdi, a cove who rejoices in the magnificent title of executive vice-president, consumer chief marketing officer of Microsoft, launched its Copilot+ PCs, a “new category” of Windows machines that are “designed for AI”. They are, needless to say, “the fastest, most intelligent Windows PCs ever built” and they will “enable you to do things you can’t on any other PC”.

    What kinds of things? Well, how about generating and refining AI images in near real-time directly on the computer? Bridging language barriers by translating audio from 40-plus languages into English? Or enabling you to “easily find and remember what you have seen in your PC”?

      Online Privacy and Overfishing

      news.movim.eu / Schneier · Friday, 14 June - 03:06 · 4 minutes

    Microsoft recently caught state-backed hackers using its generative AI tools to help with their attacks. In the security community, the immediate questions weren’t about how hackers were using the tools (that was utterly predictable), but about how Microsoft figured it out. The natural conclusion was that Microsoft was spying on its AI users, looking for harmful hackers at work.

    Some pushed back at characterizing Microsoft’s actions as “spying.” Of course cloud service providers monitor what users are doing. And because we expect Microsoft to be doing something like this, it’s not fair to call it spying.

    We see this argument as an example of our shifting collective expectations of privacy. To understand what’s happening, we can learn from an unlikely source: fish.

    In the mid-20th century, scientists began noticing that the number of fish in the ocean—so vast as to underlie the phrase “There are plenty of fish in the sea”—had started declining rapidly due to overfishing. They had already seen a similar decline in whale populations, when the post-WWII whaling industry nearly drove many species extinct. In whaling and later in commercial fishing, new technology made it easier to find and catch marine creatures in ever greater numbers. Ecologists, specifically those working in fisheries management, began studying how and when certain fish populations had gone into serious decline.

    One scientist, Daniel Pauly, realized that researchers studying fish populations were making a major error when trying to determine acceptable catch size. It wasn’t that scientists didn’t recognize the declining fish populations. It was just that they didn’t realize how significant the decline was. Pauly noted that each generation of scientists had a different baseline to which they compared the current statistics, and that each generation’s baseline was lower than that of the previous one.

    Pauly called this “shifting baseline syndrome” in a 1995 paper. The baseline most scientists used was the one that was normal when they began their research careers. By that measure, each subsequent decline wasn’t significant, but the cumulative decline was devastating. Each generation of researchers came of age in a new ecological and technological environment, inadvertently masking an exponential decline.

    Pauly’s insights came too late to help those managing some fisheries. The ocean suffered catastrophes such as the complete collapse of the Northwest Atlantic cod population in the 1990s.

    Internet surveillance, and the resultant loss of privacy, is following the same trajectory. Just as certain fish populations in the world’s oceans have fallen 80 percent, from previously having fallen 80 percent, from previously having fallen 80 percent (ad infinitum), our expectations of privacy have similarly fallen precipitously. The pervasive nature of modern technology makes surveillance easier than ever before, while each successive generation of the public is accustomed to the privacy status quo of their youth. What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.
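
    To make the compounding concrete, here is a minimal sketch (illustrative numbers only, not drawn from any real fishery or privacy dataset) of how a decline that looks like “only 80 percent” to each generation adds up across generations:

```python
# Illustrative arithmetic only: an 80% decline per generation, where each
# generation measures against the stock it saw at the start of its career.
# Every cohort observes "just" an 80% drop, but the cumulative loss
# approaches 100% after only a few generations.

population = 1.0  # normalized to the original, pre-decline baseline
for generation in range(1, 4):
    population *= 0.20  # 80% decline relative to the previous baseline
    print(f"Generation {generation}: {population * 100:.2f}% of the original remains")

# Output:
# Generation 1: 20.00% of the original remains
# Generation 2: 4.00% of the original remains
# Generation 3: 0.80% of the original remains
```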

    Historically, people controlled their computers, and software was standalone. The always-connected cloud-deployment model of software and services flipped the script. Most apps and services are designed to be always-online, feeding usage information back to the company. A consequence of this modern deployment model is that everyone—cynical tech folks and even ordinary users—expects that what you do with modern tech isn’t private. But that’s because the baseline has shifted.

    AI chatbots are the latest incarnation of this phenomenon: They produce output in response to your input, but behind the scenes there’s a complex cloud-based system keeping track of that input—both to improve the service and to sell you ads.

    Shifting baselines are at the heart of our collective loss of privacy. The U.S. Supreme Court has long held that our right to privacy depends on whether we have a reasonable expectation of privacy. But expectation is a slippery thing: It’s subject to shifting baselines.

    The question remains: What now? Fisheries scientists, armed with knowledge of shifting-baseline syndrome, now look at the big picture. They no longer consider relative measures, such as comparing this decade with the last decade. Instead, they take a holistic, ecosystem-wide perspective to see what a healthy marine ecosystem, and thus a sustainable catch, should look like. They then turn these scientifically derived sustainable-catch figures into limits to be codified by regulators.

    In privacy and security, we need to do the same. Instead of comparing to a shifting baseline, we need to step back and look at what a healthy technological ecosystem would look like: one that respects people’s privacy rights while also allowing companies to recoup costs for services they provide. Ultimately, as with fisheries, we need to take a big-picture perspective and be aware of shifting baselines. A scientifically informed and democratic regulatory process is required to preserve a heritage—whether it be the ocean or the Internet—for the next generation.

    This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.

      AI is coming to your Apple devices. Will it be secure?

      news.movim.eu / TheGuardian · Thursday, 13 June - 07:00

    Security experts evaluate Apple’s pledge that Apple Intelligence, announced on Monday, will usher in a ‘new standard for privacy in AI’

    At its annual developers conference on Monday, Apple announced its long-awaited artificial intelligence system, Apple Intelligence, which will customize user experiences, automate tasks and – CEO Tim Cook promised – usher in a “new standard for privacy in AI”.

    While Apple maintains its in-house AI is made with security in mind, its partnership with OpenAI has sparked plenty of criticism. OpenAI’s ChatGPT has long been the subject of privacy concerns. Launched in November 2022, it collected user data without explicit consent to train its models, and only began to allow users to opt out of such data collection in April 2023.

      Google to destroy billions of private browsing records to settle lawsuit

      news.movim.eu / TheGuardian · Monday, 1 April - 20:54

    Suit claimed tech giant tracked activity of people who thought they were privately using its Chrome browser’s incognito mode

    Google agreed to destroy billions of records to settle a lawsuit claiming it secretly tracked the internet use of people who thought they were browsing privately in its Chrome browser’s incognito mode.

    Users alleged that Google’s analytics, cookies and apps let the Alphabet unit improperly track people who set Google’s Chrome browser to “incognito” mode and other browsers to “private” browsing mode.

      Facebook let Netflix see user DMs, quit streaming to keep Netflix happy: Lawsuit

      news.movim.eu / ArsTechnica · Thursday, 28 March - 20:40 · 1 minute

    [Image: promotional photo for Sorry for Your Loss, a Facebook Watch original scripted series, featuring Elizabeth Olsen. Credit: Facebook]

    Last April, Meta revealed that it would no longer support original shows, like Jada Pinkett Smith's Red Table Talk talk show, on Facebook Watch. Meta's streaming business, once viewed as competition for the likes of YouTube and Netflix, is now effectively dead: Facebook doesn't produce original series, and Facebook Watch is no longer available as a video-streaming app.

    The streaming business's demise seemed tied to cost cuts at Meta that have also included layoffs. However, recently unsealed court documents in an antitrust suit against Meta [PDF] claim that Meta squashed its streaming dreams in order to appease one of its biggest ad customers: Netflix.

    Facebook allegedly gave Netflix creepy privileges

    As spotted by Gizmodo, a letter was filed on April 14 in relation to a class-action antitrust suit brought by Meta customers, accusing Meta of anti-competitive practices that harm social media competition and consumers. The letter, made public Saturday, asks a court to have Reed Hastings, Netflix's founder and former CEO, respond to a subpoena for documents that plaintiffs claim are relevant to the case. The original complaint filed in December 2020 [PDF] doesn’t mention Netflix beyond stating that Facebook “secretly signed Whitelist and Data sharing agreements” with Netflix, along with “dozens” of other third-party app developers. The case is still ongoing.

      Surveillance through Push Notifications

      news.movim.eu / Schneier · Monday, 4 March - 22:38 · 1 minute

    The Washington Post is reporting on the FBI’s increasing use of push notification data—“push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

    The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

    But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.

    […]

    Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.

    But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.
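
    As a rough conceptual sketch of why this works as an identification technique (the names here, such as token_registry and lookup_account, are hypothetical and not Apple's or Google's actual internals): the platform must keep a token-to-device mapping just to deliver notifications at all, and that same mapping can be read in reverse to answer “whose device holds this token?”:

```python
# Conceptual sketch only (hypothetical names): a push service must map
# tokens to devices/accounts to route notifications, and the very same
# table identifies the account behind a token when investigators ask.

from dataclasses import dataclass

@dataclass
class DeviceRecord:
    account_id: str
    device_model: str

# Populated when each app instance registers for push notifications.
token_registry: dict[str, DeviceRecord] = {
    "tok_a1b2c3": DeviceRecord(account_id="user-493", device_model="phone-x"),
}

def route_push(token: str, payload: str) -> None:
    """Normal use: deliver a notification to whatever device holds the token."""
    device = token_registry[token]
    print(f"delivering {payload!r} to {device.device_model}")

def lookup_account(token: str) -> str:
    """Investigative use: the same mapping reveals the account behind a token."""
    return token_registry[token].account_id
```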

    The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:

    “This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.

    “But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”

      The Internet Enabled Mass Surveillance. AI Will Enable Mass Spying.

      news.movim.eu / Schneier · Tuesday, 5 December, 2023 - 05:51 · 4 minutes

    Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

    Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.

    Spying is another matter. It has long been possible to tap someone’s phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people’s phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

    AI is about to change that. Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.
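
    At toy scale, the “organize by topic” step is already routine. A minimal sketch, using TF-IDF vectors and k-means in place of the large generative models the essay describes (the four transcripts are invented):

```python
# Toy sketch: group conversation transcripts by topic. A real system
# would use an LLM or learned embeddings over millions of conversations;
# TF-IDF plus k-means stands in here. Transcripts are invented.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "let's discuss the protest route and the meeting point",
    "the march starts at the square, bring signs and water",
    "quarterly revenue is up and our costs are down",
    "the budget review meeting has moved to friday",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(transcripts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, transcripts)):
    print(f"topic {label}: {text}")
```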

    The technologies aren’t perfect; some of them are pretty primitive. They miss things that are important. They get other things wrong. But so do humans. And, unlike humans, AI tools can be replicated by the millions and are improving at astonishing rates. They’ll get better next year, and even better the year after that. We are about to enter the era of mass spying.

    Mass surveillance fundamentally changed the nature of surveillance. Because all the data is saved, mass surveillance allows people to conduct surveillance backward in time, and without even knowing whom specifically you want to target. Tell me where this person was last year. List all the red sedans that drove down this road in the past month. List all of the people who purchased all the ingredients for a pressure cooker bomb in the past year. Find me all the pairs of phones that were moving toward each other, turned themselves off, then turned themselves on again an hour later while moving away from each other (a sign of a secret meeting).
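
    To show how mechanical these retrospective queries are, here is a heavily simplified sketch of the last example, the “secret meeting” pattern (the schema, thresholds, and data are all invented, and a real query would also check the approach and departure trajectories):

```python
# Heavily simplified sketch of the "secret meeting" query: flag pairs of
# phones whose last pings before a coverage gap were close together in
# both time and space. Schema, thresholds, and data are invented.

from itertools import combinations
from math import hypot

# (phone_id, timestamp_minutes, x_km, y_km): last ping before going dark
last_pings = [
    ("phone-A", 600, 1.0, 1.1),
    ("phone-B", 598, 1.2, 0.9),
    ("phone-C", 610, 40.0, 52.0),
]

NEAR_KM = 0.5    # "same place" radius
WINDOW_MIN = 10  # phones must go dark within this many minutes of each other

def secret_meeting_candidates(pings):
    for (id1, t1, x1, y1), (id2, t2, x2, y2) in combinations(pings, 2):
        if abs(t1 - t2) <= WINDOW_MIN and hypot(x1 - x2, y1 - y2) <= NEAR_KM:
            yield (id1, id2)

print(list(secret_meeting_candidates(last_pings)))  # [('phone-A', 'phone-B')]
```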

    Similarly, mass spying will change the nature of spying. All the data will be saved. It will all be searchable, and understandable, in bulk. Tell me who has talked about a particular topic in the past month, and how discussions about that topic have evolved. Person A did something; check if someone told them to do it. Find everyone who is plotting a crime, or spreading a rumor, or planning to attend a political protest.

    There’s so much more. To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people’s confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.

    This spying is not limited to conversations on our phones or computers. Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and “Hey Google” are already always listening; the conversations just aren’t being saved yet.

    Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings. Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

    Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets—it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.

    In the early days of Gmail, Google talked about using people’s Gmail content to serve them personalized ads. The company stopped doing it, almost certainly because the keyword data it collected was so poor—and therefore not useful for marketing purposes. That will soon change. Maybe Google won’t be the first to spy on its users’ conversations, but once others start, they won’t be able to resist. Their true customers—their advertisers—will demand it.

    We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?

    This essay originally appeared in Slate.

      Judge: Amazon “cannot claim shock” that bathroom spycams were used as advertised

      news.movim.eu / ArsTechnica · Monday, 4 December, 2023 - 20:16

    [Image credit: zhihao | Moment]

    After a spy camera designed to look like a towel hook was purchased on Amazon and illegally used for months to capture photos of a minor in her private bathroom, Amazon was sued.

    The plaintiff—a former Brazilian foreign exchange student then living in West Virginia—argued that Amazon had inspected the camera three times and its safety team had failed to prevent allegedly severe, foreseeable harms still affecting her today.

    Amazon hoped the court would dismiss the suit, arguing that the platform wasn't responsible for the alleged criminal conduct harming the minor. But after nearly eight months of deliberation, a judge recently denied most of the tech giant's motion to dismiss.

      Secret White House Warrantless Surveillance Program

      news.movim.eu / Schneier · Thursday, 23 November, 2023 - 02:03

    There seems to be no end to warrantless surveillance:

    According to the letter, a surveillance program now known as Data Analytical Services (DAS) has for more than a decade allowed federal, state, and local law enforcement agencies to mine the details of Americans’ calls, analyzing the phone records of countless people who are not suspected of any crime, including victims. Using a technique known as chain analysis, the program targets not only those in direct phone contact with a criminal suspect but anyone with whom those individuals have been in contact as well.

    The DAS program, formerly known as Hemisphere, is run in coordination with the telecom giant AT&T, which captures and conducts analysis of US call records for law enforcement agencies, from local police and sheriffs’ departments to US customs offices and postal inspectors across the country, according to a White House memo reviewed by WIRED. Records show that the White House has, for the past decade, provided more than $6 million to the program, which allows the targeting of the records of any calls that use AT&T’s infrastructure—a maze of routers and switches that crisscross the United States.
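
    Mechanically, the “chain analysis” described here is a two-hop traversal of a call graph: start from a suspect, collect their direct contacts, then everyone those contacts have called. A minimal sketch of the technique (the data is invented, and this is an illustration, not DAS's actual implementation):

```python
# Minimal sketch of two-hop "chain analysis" over call records. Invented
# data; illustrates the graph traversal, not DAS's actual system.

from collections import defaultdict

calls = [  # (caller, callee) pairs from hypothetical call records
    ("suspect", "alice"), ("alice", "bob"),
    ("alice", "carol"), ("dave", "erin"),
]

graph = defaultdict(set)
for a, b in calls:
    graph[a].add(b)
    graph[b].add(a)  # treat a call as undirected contact

def chain_analysis(start: str, hops: int = 2) -> set[str]:
    """Everyone within `hops` calls of `start`, excluding `start` itself."""
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph[node]} - seen
        seen |= frontier
    return seen - {start}

print(sorted(chain_analysis("suspect")))  # ['alice', 'bob', 'carol']
```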