
      No thanks for the memory, Microsoft, your new AI toy is a total Recall nightmare | John Naughton

      news.movim.eu / TheGuardian · Saturday, 6 July - 15:00

    The tech company’s new Windows machines can take constant screenshots of users’ every action – quelle surprise, it’s a privacy minefield

    On 20 May, Yusuf Mehdi, a cove who rejoices in the magnificent title of executive vice-president, consumer chief marketing officer of Microsoft, launched its Copilot+ PCs, a “new category” of Windows machines that are “designed for AI”. They are, needless to say, “the fastest, most intelligent Windows PCs ever built” and they will “enable you to do things you can’t on any other PC”.

    What kinds of things? Well, how about generating and refining AI images in near real-time directly on the computer? Bridging language barriers by translating audio from 40-plus languages into English? Or enabling you to “easily find and remember what you have seen in your PC”.


      Canvassing to empty houses: knocking on doors in the smart doorbell era

      news.movim.eu / TheGuardian · Monday, 17 June - 06:46

    Campaigning door-to-door is nothing new, but selling your party’s vision in the UK election to someone when you can’t see them can be a mixed blessing

    Since their debut just over a decade ago, smart doorbells have been a revelation for anyone interested in home security and, though most won’t admit it, being a bit nosey. They’ve also transformed door knocking for political canvassers.

    While doorbell camera footage of passersby pilfering packages or behaving badly can be found all over the internet, spare a thought for those campaigning for the country’s future.


      “Simulation of keyboard activity” leads to firing of Wells Fargo employees

      news.movim.eu / ArsTechnica · Thursday, 13 June - 20:51

    Signage with logo at headquarters of Wells Fargo Capital Finance, the commercial banking division of Wells Fargo Bank, in the Financial District neighborhood of San Francisco, California, September 26, 2016 (credit: Getty Images).

    Last month, Wells Fargo terminated over a dozen bank employees following an investigation into claims of faking work activity on their computers, according to a Bloomberg report.

    A Financial Industry Regulatory Authority (FINRA) search conducted by Ars confirmed that the fired members of the firm's wealth and investment management division were "discharged after review of allegations involving simulation of keyboard activity creating impression of active work."

    A rise in remote work during the COVID-19 pandemic accelerated the adoption of remote worker surveillance techniques, especially those using software installed on machines that keeps track of activity and reports back to corporate management. It's worth noting that the Bloomberg report says the FINRA filing does not specify whether the fired Wells Fargo employees were simulating activity at home or in an office.


      Poland launches inquiry into previous government’s spyware use

      news.movim.eu / TheGuardian · Monday, 1 April - 04:00

    Victims of Pegasus hacking will be notified and criminal proceedings could be brought against former officials

    Poland has launched an investigation into its previous government’s use of the controversial spyware Pegasus, with a parliamentary inquiry under way and the possibility of criminal charges being brought against former government officials in future.

    Adam Bodnar, Poland’s new justice minister , told the Guardian that in coming months the government would notify people who were targeted with Pegasus. Under Polish law, they would then have the possibility of seeking financial compensation, and becoming party to potential criminal proceedings.


      Can a Garrick member chair an inquiry into police sexism fairly? I have my doubts | Alison

      news.movim.eu / TheGuardian · Thursday, 28 March - 16:00 · 1 minute

    Sir John Mitting will rule on whether undercover officers broke the law by deceiving women like me. Yet he’s a member of a male-only club

    Those of us involved in the so-called spy cops scandal have followed with interest the recent media coverage of the men-only Garrick Club and its membership list of high-profile individuals. It is not news to us that senior judges and powerful men in the security services have been members. Included among the elite was the chair of the public inquiry into undercover policing, John Mitting. Since his appointment as inquiry chair in 2017 we have been calling this out, as we believe it is an obvious conflict of interest – yet our concerns have predictably been ignored.

    The inquiry had been established two years earlier by the then prime minister, Theresa May, as a direct result of investigations by women like me into the disappearances of our ex-partners, and the subsequent revelations of their true identities as Metropolitan police undercover officers. The abuse of women, and institutional sexism in the police, are fundamental to understanding the significance of this inquiry.

    Alison is one of eight women who first took legal action against the Metropolitan police over the conduct of undercover officers and a founder member of Police Spies Out of Lives. A core participant in the public inquiry into undercover policing, she is one of the authors of Deep Deception – The Story of the Spycop Network by the Women who Uncovered the Shocking Truth.



      Surveillance through Push Notifications

      news.movim.eu / Schneier · Monday, 4 March - 22:38 · 1 minute

    The Washington Post is reporting on the FBI’s increasing use of push notification data—“push tokens”—to identify people. The police can request this data from companies like Apple and Google without a warrant.

    The investigative technique goes back years. Court orders that were issued in 2019 to Apple and Google demanded that the companies hand over information on accounts identified by push tokens linked to alleged supporters of the Islamic State terrorist group.

    But the practice was not widely understood until December, when Sen. Ron Wyden (D-Ore.), in a letter to Attorney General Merrick Garland, said an investigation had revealed that the Justice Department had prohibited Apple and Google from discussing the technique.


    Unlike normal app notifications, push alerts, as their name suggests, have the power to jolt a phone awake—a feature that makes them useful for the urgent pings of everyday use. Many apps offer push-alert functionality because it gives users a fast, battery-saving way to stay updated, and few users think twice before turning them on.

    But to send that notification, Apple and Google require the apps to first create a token that tells the company how to find a user’s device. Those tokens are then saved on Apple’s and Google’s servers, out of the users’ reach.
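The token mechanics the quote describes can be caricatured in a few lines. This is a deliberately simplified sketch, not Apple’s or Google’s actual service: the class, method names, and token format below are all invented for illustration. The point it shows is structural: because the platform must keep a token-to-device mapping to route notifications, anyone who can query that mapping can resolve a token back to a device.

```python
# Hypothetical sketch of a push service's server-side token registry.
# Names and token format are illustrative, not any real platform API.
import secrets


class PushService:
    def __init__(self):
        # token -> device_id, held on the platform's servers,
        # out of the user's reach
        self._tokens = {}

    def register(self, device_id: str) -> str:
        """An app requests a token; the service records which device it routes to."""
        token = secrets.token_hex(16)
        self._tokens[token] = device_id
        return token

    def deliver(self, token: str, message: str) -> str:
        """Route a push to whatever device the token points at."""
        return f"push to {self._tokens[token]}: {message}"

    def identify(self, token: str) -> str:
        """The surveillance angle: a token alone resolves to a device."""
        return self._tokens[token]


svc = PushService()
tok = svc.register("device-A")
assert svc.identify(tok) == "device-A"
```

The `identify` method is the crux: a court order naming a push token is, in effect, an order naming a device.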

    The article discusses their use by the FBI, primarily in child sexual abuse cases. But we all know how the story goes:

    “This is how any new surveillance method starts out: The government says we’re only going to use this in the most extreme cases, to stop terrorists and child predators, and everyone can get behind that,” said Cooper Quintin, a technologist at the advocacy group Electronic Frontier Foundation.

    “But these things always end up rolling downhill. Maybe a state attorney general one day decides, hey, maybe I can use this to catch people having an abortion,” Quintin added. “Even if you trust the U.S. right now to use this, you might not trust a new administration to use it in a way you deem ethical.”


      The Internet Enabled Mass Surveillance. AI Will Enable Mass Spying.

      news.movim.eu / Schneier · Tuesday, 5 December, 2023 - 05:51 · 4 minutes

    Spying and surveillance are different but related things. If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.

    Before the internet, putting someone under surveillance was expensive and time-consuming. You had to manually follow someone around, noting where they went, whom they talked to, what they purchased, what they did, and what they read. That world is forever gone. Our phones track our locations. Credit cards track our purchases. Apps track whom we talk to, and e-readers know what we read. Computers collect data about what we’re doing on them, and as both storage and processing have become cheaper, that data is increasingly saved and used. What was manual and individual has become bulk and mass. Surveillance has become the business model of the internet, and there’s no reasonable way for us to opt out of it.

    Spying is another matter. It has long been possible to tap someone’s phone or put a bug in their home and/or car, but those things still require someone to listen to and make sense of the conversations. Yes, spyware companies like NSO Group help the government hack into people’s phones, but someone still has to sort through all the conversations. And governments like China could censor social media posts based on particular words or phrases, but that was coarse and easy to bypass. Spying is limited by the need for human labor.

    AI is about to change that. Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.

    The technologies aren’t perfect; some of them are pretty primitive. They miss things that are important. They get other things wrong. But so do humans. And, unlike humans, AI tools can be replicated by the millions and are improving at astonishing rates. They’ll get better next year, and even better the year after that. We are about to enter the era of mass spying.

    Mass surveillance fundamentally changed the nature of surveillance. Because all the data is saved, mass surveillance allows people to conduct surveillance backward in time, and without even knowing whom specifically you want to target. Tell me where this person was last year. List all the red sedans that drove down this road in the past month. List all of the people who purchased all the ingredients for a pressure cooker bomb in the past year. Find me all the pairs of phones that were moving toward each other, turned themselves off, then turned themselves on again an hour later while moving away from each other (a sign of a secret meeting).
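The last query in that list — pairs of phones that converge, go dark together, then diverge — can be sketched concretely. Everything below is invented for illustration (toy 1-D positions, made-up thresholds), not any agency’s actual pipeline; it only shows that the query is a few lines of code once the location data exists.

```python
# Toy sketch of the "secret meeting" query: two phones approach each
# other, both go silent over the same interval, then move apart.
# All data, positions (1-D for simplicity), and thresholds are invented.
from itertools import combinations

# phone -> list of (timestamp, position); a long gap between
# timestamps means the phone was turned off
pings = {
    "A": [(0, 0.0), (1, 4.0), (2, 9.0), (7, 9.0), (8, 4.0)],
    "B": [(0, 20.0), (1, 15.0), (2, 11.0), (7, 11.0), (8, 16.0)],
    "C": [(0, 50.0), (1, 50.0), (2, 50.0), (3, 50.0), (4, 50.0)],
}


def silent_gap(track, min_gap=3):
    """Return (start, end) of the first ping gap longer than min_gap, else None."""
    for (t1, _), (t2, _) in zip(track, track[1:]):
        if t2 - t1 >= min_gap:
            return (t1, t2)
    return None


def suspicious_pairs(pings, min_gap=3):
    hits = []
    for a, b in combinations(pings, 2):
        ga = silent_gap(pings[a], min_gap)
        gb = silent_gap(pings[b], min_gap)
        if ga is None or ga != gb:
            continue  # both phones must go dark over the same interval
        gap_start, gap_end = ga
        pa, pb = dict(pings[a]), dict(pings[b])
        shared = sorted(pa.keys() & pb.keys())
        before = [abs(pa[t] - pb[t]) for t in shared if t <= gap_start]
        after = [abs(pa[t] - pb[t]) for t in shared if t >= gap_end]
        # approaching before the gap, receding after it
        if (len(before) >= 2 and before[-1] < before[0]
                and len(after) >= 2 and after[-1] > after[0]):
            hits.append((a, b))
    return hits


print(suspicious_pairs(pings))  # -> [('A', 'B')]
```

Phone C never goes dark, so it is never considered; A and B converge, share a silent interval, and diverge, so the pair is flagged. The chilling part is how little code this takes.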

    Similarly, mass spying will change the nature of spying. All the data will be saved. It will all be searchable, and understandable, in bulk. Tell me who has talked about a particular topic in the past month, and how discussions about that topic have evolved. Person A did something; check if someone told them to do it. Find everyone who is plotting a crime, or spreading a rumor, or planning to attend a political protest.

    There’s so much more. To uncover an organizational structure, look for someone who gives similar instructions to a group of people, then all the people they have relayed those instructions to. To find people’s confidants, look at whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about.

    This spying is not limited to conversations on our phones or computers. Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and “Hey Google” are already always listening; the conversations just aren’t being saved yet.

    Knowing that they are under constant surveillance changes how people behave. They conform. They self-censor, with the chilling effects that brings . Surveillance facilitates social control, and spying will only make this worse. Governments around the world already use mass surveillance; they will engage in mass spying as well.

    Corporations will spy on people. Mass surveillance ushered in the era of personalized advertisements; mass spying will supercharge that industry. Information about what people are talking about, their moods, their secrets—it’s all catnip for marketers looking for an edge. The tech monopolies that are currently keeping us all under constant surveillance won’t be able to resist collecting and using all of that data.

    In the early days of Gmail, Google talked about using people’s Gmail content to serve them personalized ads. The company stopped doing it, almost certainly because the keyword data it collected was so poor—and therefore not useful for marketing purposes. That will soon change. Maybe Google won’t be the first to spy on its users’ conversations, but once others start, they won’t be able to resist. Their true customers—their advertisers—will demand it.

    We could limit this capability. We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?

    This essay originally appeared in Slate.


      Secret White House Warrantless Surveillance Program

      news.movim.eu / Schneier · Thursday, 23 November, 2023 - 02:03

    There seems to be no end to warrantless surveillance:

    According to the letter, a surveillance program now known as Data Analytical Services (DAS) has for more than a decade allowed federal, state, and local law enforcement agencies to mine the details of Americans’ calls, analyzing the phone records of countless people who are not suspected of any crime, including victims. Using a technique known as chain analysis, the program targets not only those in direct phone contact with a criminal suspect but anyone with whom those individuals have been in contact as well.

    The DAS program, formerly known as Hemisphere, is run in coordination with the telecom giant AT&T, which captures and conducts analysis of US call records for law enforcement agencies, from local police and sheriffs’ departments to US customs offices and postal inspectors across the country, according to a White House memo reviewed by WIRED. Records show that the White House has, for the past decade, provided more than $6 million to the program, which allows the targeting of the records of any calls that use AT&T’s infrastructure—a maze of routers and switches that crisscross the United States.
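The chain analysis the memo describes is a breadth-first walk over a call graph: start from a suspect, collect everyone they called, then everyone those people called. The sketch below uses invented phone numbers and a two-hop limit; it is an illustration of the general technique, not the DAS system’s actual implementation.

```python
# Toy chain analysis over call records: find every number within
# `hops` calls of a suspect. Numbers and records are invented.
from collections import defaultdict

call_records = [
    ("555-0001", "555-0002"),  # suspect -> direct contact
    ("555-0002", "555-0003"),  # direct contact -> second hop
    ("555-0004", "555-0005"),  # unrelated call, never reached
]


def chain_analysis(records, suspect, hops=2):
    """Return every number reachable from the suspect within `hops` calls."""
    graph = defaultdict(set)
    for a, b in records:
        graph[a].add(b)
        graph[b].add(a)  # treat a call as undirected contact
    frontier, seen = {suspect}, {suspect}
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph[f]} - seen
        seen |= frontier
    return seen - {suspect}


print(sorted(chain_analysis(call_records, "555-0001")))
# -> ['555-0002', '555-0003']
```

Note what the two-hop rule sweeps in: 555-0003 never called the suspect, yet lands in the result set purely by association — which is exactly the objection to analyzing "anyone with whom those individuals have been in contact as well."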


      Applying AI to License Plate Surveillance

      news.movim.eu / Schneier · Tuesday, 15 August, 2023 - 16:55

    License plate scanners aren’t new. Neither is using them for bulk surveillance. What’s new is that AI is being used on the data, identifying “suspicious” vehicle behavior:

    Typically, Automatic License Plate Recognition (ALPR) technology is used to search for plates linked to specific crimes. But in this case it was used to examine the driving patterns of anyone passing one of Westchester County’s 480 cameras over a two-year period. Zayas’ lawyer Ben Gold contested the AI-gathered evidence against his client, decrying it as “dragnet surveillance.”

    And he had the data to back it up. A FOIA he filed with the Westchester police revealed that the ALPR system was scanning over 16 million license plates a week, across 480 ALPR cameras. Of those systems, 434 were stationary, attached to poles and signs, while the remaining 46 were mobile, attached to police vehicles. The AI was not just looking at license plates either. It had also been taking notes on vehicles’ make, model and color—useful when a plate number for a suspect vehicle isn’t visible or is unknown.