
      New York Times Doesn’t Want Its Stories Archived

      news.movim.eu / TheIntercept · Sunday, 17 September - 10:00 · 4 minutes

    The New York Times tried to block a web crawler affiliated with the famous Internet Archive, a project whose easy-to-use comparisons of article versions have sometimes led to embarrassment for the newspaper.

    In 2021, the New York Times added “ia_archiver” — a bot that, in the past, captured huge numbers of websites for the Internet Archive — to a list that instructs certain crawlers to stay out of its website.

    Crawlers are programs that work as automated bots to trawl websites, collecting data and sending it back to a repository, a process known as scraping. Such bots power search engines and the Internet Archive’s Wayback Machine, a service that facilitates the archiving and viewing of historic versions of websites going back to 1996.

    The New York Times has, in the past, faced public criticisms over some of its stealth edits.

    The Internet Archive’s Wayback Machine has long been used to compare webpages as they are updated over time, clearly delineating the differences between two iterations of any given page. Several years ago, the archive added a feature called “Changes” that lets users compare two archived versions of a website from different dates or times on a single display. The tool can be used to uncover changes in news stories that have been made without any accompanying editorial notes, so-called stealth edits.

    The Times has, in the past, faced public criticisms over some of its stealth edits. In a notorious 2016 incident, the paper revised an article about then-Democratic presidential candidate Sen. Bernie Sanders, I-Vt., so drastically after publication — changing the tone from one of praise to skepticism — that it came in for a round of opprobrium from other outlets as well as the Times’s own public editor. The blogger who first noticed the revisions and set off the firestorm demonstrated the changes by using the Wayback Machine.

    More recently, the Times stealth-edited an article that originally listed “death” as one of six ways “you can still cancel your federal student loan debt.” Following the edit, the “death” section title was changed to a more opaque heading of “debt won’t carry on.”

    A service called NewsDiffs — which provides a similar comparative service but focuses on news outlets such as the New York Times, CNN, the Washington Post, and others — has also chronicled a long list of significant examples of articles that have undergone stealth edits, though the service appears to not have been updated in several years.

    The New York Times declined to comment on why it is barring the ia_archiver bot from crawling its website.

    Robots.txt Files

    The mechanism that websites use to block certain crawlers is a robots.txt file. If website owners want to request that a particular search engine or other automated bot not scan their website, they can add the crawler’s name to the file, which is then uploaded to the site, where it is publicly accessible.

    Based on a web standard known as the Robots Exclusion Protocol, a robots.txt file allows site owners to specify whether a bot may crawl part or all of their website. Though bots can always choose to ignore the file, many crawler services respect the requests.
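    To illustrate how a compliant crawler interprets these rules, here is a minimal sketch using Python’s standard-library robots.txt parser. The rules and URLs below are hypothetical, written to mirror the kind of blanket disallow described in this article, not copied from the Times’s actual file.

```python
from urllib import robotparser

# Hypothetical robots.txt rules mirroring a blanket ban on one crawler:
# ia_archiver is barred from the whole site; every other bot is allowed.
rules = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A compliant crawler checks can_fetch() before requesting a page.
print(parser.can_fetch("ia_archiver", "https://example.com/2016/article.html"))   # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/2016/article.html"))  # True
```

    Note that compliance is voluntary: the parser only reports what the site requests, and a bot that ignores the file can still fetch every page.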

    The current robots.txt file on the New York Times’s website includes an instruction to disallow all site access to the ia_archiver bot.
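    Such a directive is short; a robots.txt entry asking one crawler to stay away from an entire site takes roughly this form (a schematic two-line illustration, not a reproduction of the Times’s full file):

```
User-agent: ia_archiver
Disallow: /
```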

    The relationship between ia_archiver and the Internet Archive is not completely straightforward. While the Internet Archive crawls the web itself, it also receives data from other entities. Ia_archiver was, for more than a decade, a prolific supplier of website data to the archive.

    The bot belonged to Alexa Internet, a web traffic analysis company co-founded by Brewster Kahle, who went on to create the Internet Archive right after Alexa. Amazon acquired Alexa Internet in 1999 — its trademark name was later used for Amazon’s signature voice-activated assistant — and eventually shut it down in 2022.

    Throughout its existence, Alexa Internet was intricately intertwined with the Internet Archive. From 1996 to the end of 2020, the Internet Archive received over 3 petabytes — more than 3,000 terabytes — of crawled website data from Alexa. Its role in helping to fill the archive with material led users to urge website owners not to block ia_archiver under the mistaken notion that it was unrelated to the Internet Archive.

    As late as 2015, the Wayback Machine offered instructions for keeping a site out of its archive — by using the site’s robots.txt file. News websites such as the Washington Post took full advantage of this and disallowed the ia_archiver bot.

    By 2017, however, the Internet Archive announced its intention to stop abiding by the dictates of a site’s robots.txt. While the Internet Archive had already been disregarding the robots.txt for military and government sites, the new update expanded the move to disregard robots.txt for all sites. Instead, website owners could make manual exclusion requests by email.

    Reputation management firms, for their part, are keenly aware of the change. The New York Times, too, appears to have made use of the more selective manual exclusion process, as certain Times stories are not available via the Wayback Machine.

    Some news sites, such as the Washington Post, have since removed ia_archiver from their lists of blocked crawlers. While other websites were removing their ia_archiver blocks, however, the New York Times added one in 2021.

    The post New York Times Doesn’t Want Its Stories Archived appeared first on The Intercept .


      theintercept.com/2023/09/17/new-york-times-website-internet-archive/


      Vice Pulled a Documentary Critical of Saudi Arabia. But Here It Is.

      news.movim.eu / TheIntercept · Saturday, 9 September - 11:00 · 4 minutes

    In the past, Vice documented the history of censorship on YouTube. More recently, since the company’s near implosion, it has become an active participant in making things disappear.

    In June, six months after announcing a partnership deal with a Saudi Arabian government-owned media company, Vice uploaded but then quickly removed a documentary critical of the Persian Gulf monarchy’s notorious dictator, Crown Prince Mohammed bin Salman, or MBS.

    The nearly nine-minute film, titled “Inside Saudi Crown Prince’s Ruthless Quest for Power,” was uploaded to the Vice News YouTube channel on June 19, 2023. It garnered more than three-quarters of a million views before being set to “private” within four days of being posted. It can no longer be seen at its original link on Vice’s YouTube channel; visitors see a message that says “video unavailable.” Vice did not respond to a request for comment on why the video was published and then made private or if there are any plans to make the video public again.

    The Guardian first reported that a “film in the Vice world news Investigators series about Saudi crown prince Mohammed bin Salman was deleted from the internet after being uploaded.” Though Vice did remove the film from its public YouTube channel, it is, in fact, not “deleted from the internet” and presently remains publicly accessible via web archival services.

    Vice’s description of the video, now also unavailable on YouTube, previously stated that Saudi Crown Prince Mohammed “orchestrates The Ritz Purge, kidnaps Saudi’s elites and royal relatives with allegations of torture inside, and his own men linked to the brutal hacking of Journalist Khashoggi – a murder that stunned the world.” The description goes on to state that Wall Street Journal reporters Bradley Hope and Justin Scheck “attempt to unfold the motivations of the prince’s most reckless decision-making.” Hope and Scheck are the co-authors of the 2020 book “Blood and Oil: Mohammed bin Salman’s Ruthless Quest for Global Power.”

    A screenshot from the documentary “Inside Saudi Crown Prince’s Ruthless Quest for Power,” which Vice News deleted from its YouTube channel.

    Image: The Intercept; Source: Vice News

    In the documentary, Hope states that Crown Prince Mohammed is “disgraced internationally” owing to the Jamal Khashoggi murder, a topic which Vice critically covered at length in the past. More recently, however, Vice has shifted its coverage of Saudi Arabia, apparently due to the growth of its commercial relationship with the kingdom. The relationship appears to have begun in 2017, owing to MBS’s younger brother, Khalid bin Salman, being infatuated with the brand; Khalid reportedly set up a meeting between Vice co-founder Shane Smith and MBS.

    By the end of 2018, Vice had worked with the Saudi Research and Media Group to produce promotional videos for Saudi Arabia . A few days after the Guardian piece detailing the deal came out, an “industry source” told Variety (whose parent company, Penske Media Corporation, received $200 million from the Saudi sovereign wealth fund earlier that year) that Vice was “reviewing” its contract with SRMG.

    A subsequent Guardian investigation revealed that in 2020, Vice helped organize a Saudi music festival subsidized by the Saudi government. Vice’s name was not listed on publicity materials for the event, and contractors working on the event were presented with nondisclosure agreements.

    In 2021, Vice opened an office in Riyadh, Saudi Arabia. The media company has gone from being “ banned from filming in Riyadh ” in 2018 to now actively recruiting for a producer “responsible for developing and assisting the producing of video content from short form content to long-form for our new media brand, headquartered in Riyadh.” The company lists 11 other Riyadh-based openings .

    Commenting on the opening of the Riyadh office, a Vice spokesperson told the Guardian that “our editorial voice has and always will report with complete autonomy and independence.” In response to the Guardian recently asking about the rationale for the removal of the film, a Vice source stated that this was partially owing to concerns about the safety of Saudi-based staff.

    In September 2022, the New York Times reported that Vice was considering engaging in a deal with the Saudi media company MBC. The deal was officially announced at the start of 2023. Most recently, the Guardian reported that Vice shelved a story which stated that the “Saudi state is helping families to harass and threaten transgender Saudis based overseas.” In response to this latest instance of apparent capitulation to advancing Saudi interests, the Vice Union issued a statement saying that it was “horrified but not shocked.” It added, “We know the company is financially bankrupt, but it shouldn’t be morally bankrupt too.”

    Meanwhile, a map of Saudi Arabia reportedly hangs on a wall in Vice’s London office.

    The post Vice Pulled a Documentary Critical of Saudi Arabia. But Here It Is. appeared first on The Intercept .


      The Online Christian Counterinsurgency Against Sex Workers

      news.movim.eu / TheIntercept · Saturday, 29 July, 2023 - 10:00 · 16 minutes

    The most popular video on Victor Marx’s YouTube channel now has more than 15 million views. Standing solemnly in a dark blue karate gi while his son Shiloh Vaughn Marx smiles and points a gun at his face, Marx uses his expertise as a seventh-degree black belt in “Cajun Karate Keichu-Do” to perform what he claims was the world’s fastest gun disarm. Over a period of just 80 milliseconds — according to Marx’s measurement — he snatches the gun from his son and effortlessly ejects the magazine. It’s a striking display, one that unequivocally shouts: I am here to stop bad guys.

    Marx is more than just a competitive gun-disarmer and martial artist. He is also a former Marine, a self-proclaimed exorcist, and an author and filmmaker. He helped launch the Skull Games, a privatized intelligence outfit that purports to hunt pedophiles, sex traffickers, and other “demonic activity” using a blend of sock-puppet social media accounts and commercial surveillance tools — including face recognition software.

    The Skull Games events have attracted notable corporate allies. Recent games have been “powered” by the internet surveillance firm Cobwebs, and an upcoming competition is partnered with cellphone-tracking data broker Anomaly Six .

    The moral simplicity of Skull Games’s mission is emblazoned across its website in fierce, all-caps type: “We hunt predators.” And Marx has savvily ridden recent popular attention to the independent film “ Sound of Freedom ,” a dramatization of the life of fellow anti-trafficking crusader Tim Ballard. In the era of QAnon and conservative “groomer” panic, vowing to take down shadowy — and frequently exaggerated — networks of “traffickers” under the aegis of Christ is an exercise in shrewd branding.

    Although its name is a reference to the mind games played by pimps and traffickers, Skull Games, which Marx’s church is no longer officially involved in, is itself a form of sport for its participants: a sort of hackathon for would-be Christian saviors, complete with competition. Those who play are awarded points based on their sleuthing. Finding a target’s high school diploma or sonogram imagery nets 15 points, while finding the same tattoo on multiple women would earn a whopping 300. On at least one occasion, according to materials reviewed by The Intercept and Tech Inquiry, participants competed for a chance at prizes, including paid work for Marx’s California church and one of its surveillance firm partners.

    While commercially purchased surveillance exists largely outside the purview of the law, Skull Games was founded to answer to a higher power. The event started under the auspices of All Things Possible Ministries, the Murrieta, California, evangelical church Marx founded in 2003.

    Marx has attributed his conversion to Christianity to becoming reunited with his biological father — according to Marx, formerly a “practicing warlock” — toward the end of his three years in the Marine Corps. Marx’s tendency to blame demons and warlocks would become the central cause of controversy of his own ministry, largely as a result of his focus on exorcisms as the solutions to issues ranging from pornography to veteran suicides. As Marx recently told “The Spillover” podcast, “I hunt pedophiles, but I also hunt demons.”

    Skull Games also ends up being a hunt for sex workers, conflating them with trafficking victims as they prepare intelligence dossiers on women before turning them over to police.

    Groups seeking to rescue sex workers — whether through religion, prosecution , or both — are nothing new, said Kristen DiAngelo, executive director of the advocacy group Sex Workers Outreach Project Sacramento. What Skull Games represents — the technological outsourcing of police work to civilian volunteers — presents a new risk to sex workers, she argued.

    “I think it’s dangerous because you set up people to have that vigilante mentality.”

    “I think it’s dangerous because you set up people to have that vigilante mentality — that idea that, we’re going to go out and we’re going to catch somebody — and they probably really believe that they are going to ‘save someone,’” DiAngelo told The Intercept and Tech Inquiry. “And that’s that savior complex. We don’t need saving; we need support and resources.”

    The eighth Skull Games, which took place over the weekend of July 21, operated out of a private investigation firm headquartered in a former church in Wanaque, New Jersey. A photo of the event shared by the director of intelligence of Skull Games showed 57 attendees — almost all wearing matching black T-shirts — standing in front of corporate due diligence firm Hetherington Group’s office with a Skull Games banner unfurled across its front doors. Hetherington Group’s address is simple to locate online, but their office signage doesn’t mention the firm’s name, only saying “593 Ringwood LLC” above the words “In God We Trust.” (Cynthia Hetherington, the CEO of Hetherington Group and a board member of Skull Games, distanced her firm from the surveillance programs normally used at the events. “Cobwebs brought the bagels, which I’m still trying to digest,” she said. “I didn’t see their software anywhere in the event.”)

    The attempt to merge computerized counterinsurgency techniques with right-wing evangelism has left some Skull Games participants uncomfortable. One experienced attendee of the January 2023 Skull Games was taken aback by an abundance of prayer circles and paucity of formal training. “Within the first 10 minutes,” the participant recalled of a training webinar, “I was like, ‘What the fuck is this?’”

    Jeff Tiegs, chief operations officer of All Things Possible Ministries, blesses U.S. Army Soldiers and explains to them the religious origins of a popular hand gesture on Joint Base Elmendorf-Richardson, Alaska, on April 20, 2022. Tiegs said the hand gesture popularized by Star Trek originated as a blessing of the descendants of Aaron, a Jewish high priest in the Torah.

    Photo: Alamy

    Delta Force OSINT

    The number of nongovernmental surveillance practitioners has risen in tandem with the post-9/11 boom in commercial tools for monitoring social media, analyzing private chat rooms, and tracking cellphone pings.

    Drawing on this abundance of civilian expertise, Skull Games brings together current and former military and law enforcement personnel, along with former sex workers and even employees of surveillance firms themselves. Both Skull Games and the high-profile, MAGA-beloved Operation Underground Railroad have worked with Cobwebs, but Skull Games roots its branding in counterinsurgency and special operations rather than homeland security.

    “I fought the worst of the worst: ISIS, Al Qaeda, the Taliban,” Skull Games president and former Delta Force soldier Jeff Tiegs has said . “But the adversary I despise the most are human traffickers.” Tiegs has told interviewers that he takes “counterterrorism / counterinsurgency principles” and applies them to these targets.

    “I fought the worst of the worst: ISIS, Al Qaeda, the Taliban. But the adversary I despise the most are human traffickers.”

    The plan broadly mimicked a widely praised Pentagon effort to catch traffickers that was ultimately shut down this May due to a lack of funding. In a training session earlier this month, Tiegs noted that active-duty military service members take part in the hunts; veterans like Tiegs himself are everywhere. The attendee list for a recent training event shows participants with day jobs at the Department of Defense, Portland Police Bureau, and Air Force, as well as a lead contracting officer from U.S. Citizenship and Immigration Services.

    Skull Games employs U.S. Special Forces jargon, which dominates the pamphlets handed out to volunteers. Each volunteer is assigned the initial informal rank of private and works out of a “Special Operations Coordination Center.” Government acronyms abound: Participants are asked to keep in mind CCIRs — Commander’s Critical Information Requirements — while preventing EEFIs — Essential Elements of Friendly Information — from falling into the hands of the enemy.

    Tiegs’s transition from counterinsurgency to counter-human-trafficking impresario came after he met Jeff Keith, the founder of the anti-trafficking nonprofit Guardian Group, where Tiegs was an executive for nearly five years. While Tiegs was developing Guardian Group’s tradecraft for identifying victims, he was also beginning to work more closely with Marx, whom he met on a trip to Iraq in 2017. By the end of 2018, Marx and Tiegs had joined each other’s boards.

    Beyond the Special Forces acumen of its leadership, what sets Skull Games apart from other amateur predator-hunting efforts is its reliance on “open-source intelligence.” OSINT, as it’s known, is a military euphemism popular among its practitioners that refers to a broad amalgam of intelligence-gathering techniques , most relying on surveilling the public internet and purchasing sensitive information from commercial data brokers.

    Related

    American Phone-Tracking Firm Demo’d Surveillance Powers by Spying on CIA and NSA

    Sensitive personal information is today bought and sold so widely, including by law enforcement and spy agencies, that the Office of the Director of National Intelligence recently warned that data “that could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety” is available on “nearly everyone.”

    Skull Games’s efforts to tap this unregulated sprawl of digital personal data function as a sort of vice squad auxiliary. Participants scour the U.S. for digital evidence of sex work before handing their findings over to police — officers the participants often describe as friends and collaborators.

    After publicly promoting 2020 as the year Guardian Group would “scale” its tradecraft up to tackling many more cases, Tiegs abruptly jumped from his role as chief operating officer of the organization into the same title at All Things Possible — Marx’s church. By December 2021, Tiegs had launched the first Skull Games under the umbrella of All Things Possible. The event was put together in close partnership with Echo Analytics, which had been acquired earlier that year by Quiet Professionals, a surveillance contractor led by a former Delta Force sergeant major. The first Skull Games took place in the Tampa offices of Echo Analytics, just 13 miles from the headquarters of U.S. Special Operations Command.

    As of May 2023, Tiegs has separated from All Things Possible and leads the Skull Games as a newly independent, tax-exempt nonprofit. “Skull Games is separate and distinct from ATP,” he said in an emailed statement. “There is no role for ATP or Marx in Skull Games.”

    The Hunt

    Reached by phone, Tiegs downplayed the role of powerful surveillance tools in Skull Games’s work while also conceding he wasn’t always aware of what technologies were being used in the hunt for predators — or how.

    Despite its public emphasis on taking down traffickers, much of Skull Games’s efforts boil down to scrolling through sex worker ad listings and attempting to identify the women. Central to the sleuthing, according to Tiegs and training materials reviewed by The Intercept and Tech Inquiry, is the search for visual indicators in escort ads and social media posts that would point to a woman being trafficked. An October 2022 report funded by the research and development arm of the U.S. Department of Justice, however, concluded that the appearance of many such indicators — mostly emojis and acronyms — was statistically insignificant.

    Tiegs spoke candidly about the centrality of face recognition to Skull Games. “So here’s a girl, she’s being exploited, we don’t know who she is,” he said. “All we have is a picture and a fake name, but, using some of these tools, you’re able to identify her mugshot. Now you know everything about her, and you’re able to start really putting a case together.”

    According to notes viewed by The Intercept and Tech Inquiry, the competition recommended that volunteers use FaceCheck.id and PimEyes, programs that allow users to conduct reverse image searches on an uploaded picture of a face. In a July Skull Games webinar, one participant noted that they had been able to use PimEyes to find a sex worker’s driver’s license posted to the web.

    Related

    Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency”

    In January, Cobwebs Technologies, an Israeli firm, announced it would provide Skull Games with access to its Tangles surveillance platform. According to Tiegs, the company is “one of our biggest supporters.” Previous reporting from Motherboard detailed the IRS Criminal Investigation unit’s usage of Cobwebs for undercover investigations.

    Skull Games training materials provided to The Intercept and Tech Inquiry provide detailed instructions on the creation of “sock puppet” social media accounts: fake identities for covert research and other uses. Tiegs denied recommending the creation of such pseudonymous accounts, but on the eve of the eighth Skull Games, team leader Joe Labrozzi told fellow volunteers, “We absolutely recommend sock puppets,” according to a training seminar transcript reviewed by The Intercept and Tech Inquiry. Other volunteers shared tips on creating fake social media accounts, including the use of ChatGPT and machine learning-based face-generation tools to build convincing social media personas.

    Tiegs also denied a participant’s assertion that Clearview AI’s face recognition software was heavily used in the January 2023 Skull Games. Training materials obtained by Tech Inquiry and The Intercept, however, suggest otherwise. At one point in a July training webinar, a Virginia law enforcement volunteer who didn’t give their name asked what rules were in place for using their official access to face recognition and other law enforcement databases. “It’s easier to ask for forgiveness than permission,” replied another participant, adding that some police Skull Games volunteers had permission to tap their departmental access to Clearview AI and Spotlight, an investigative tool that uses Amazon’s Rekognition technology to identify faces.

    Cobwebs — which became part of the American wiretapping company PenLink earlier this month — provides a broad array of surveillance capabilities, according to a government procurement document obtained through a Freedom of Information Act request. Cobwebs provides investigators with the ability to continuously monitor the web for certain keyphrases. The Tangles platform can also provide face recognition; fuse OSINT with personal account data collected from search warrants; and pinpoint individuals through the locations of their phones — granting the ability to track a person’s movements going back as many as three years without judicial oversight.

    When reached for comment, Cobwebs said, “Only through collaboration between all sectors of society — government, law enforcement, academia — and the proper tools, can we combat human trafficking.” The company did not respond to detailed questions about how its platform is used by Skull Games.

    According to a source who previously attended a Skull Games event, and who asked for anonymity because of their ongoing role in counter-trafficking, only one member of the “task force” of participants had access to the Tangles platform: a representative from Cobwebs itself who could run queries from other task force analysts when requested. The rest of the group was equipped with whatever OSINT-gathering tools they already had access to outside of Skull Games, creating a lopsided exercise in which some participants were equipped with little more than their keyboards and Google searches, while others tapped tools like Clearview or Thomson Reuters CLEAR, an analytics tool used by U.S. Immigration and Customs Enforcement .

    Related

    Powerful Mobile Phone Surveillance Tool Operates in Obscurity Across the Country

    Tiegs acknowledged that most Skull Games participants likely have some professional OSINT expertise. By his account, they operate on a sort of BYO-intelligence-gathering-tool basis and, owing to Skull Games’s ad hoc use of technology, said he couldn’t confirm how exactly Cobwebs may have been used in the past. Despite Skull Games widely advertising its partnership with another source of cellphone location-tracking data — the commercial surveillance company Anomaly Six — Tiegs said, “We’re not pinpointing the location of somebody.” He claimed Skull Games uses less sophisticated techniques to generate leads for police who may later obtain a court order for, say, geolocational data. (Anomaly Six said that it is not providing its software or data to Skull Games.)

    Tiegs also expressed frustration with the notion that deploying surveillance tools to crack down on sex work would be seen as impermissible. “We allow Big Data to monitor everything you’re doing to sell you iPods or sunglasses or new socks,” he said, “but if you need to leverage some of the same technology to protect women and children, all of the sudden everybody’s up in arms.”

    Tiegs added, “I’m really conflicted how people rationalize that.”

    People march in support of sex workers and decriminalizing sex work, and against the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act, on June 2, 2019, in Las Vegas.

    Photo: John Locher/AP

    “Pure Evil”

    A potent strain of anti-sex work sentiment — not just opposition to trafficking — has pervaded Skull Games since its founding. Although the events are no longer affiliated with a church, Tiegs and his lieutenants’ devout Christianity suggests the digital hunt for pedophiles and pimps remains a form of spiritual warfare.

    Michele Block, a Canadian military intelligence veteran who has worked as Skull Games’s director of intelligence since its founding at All Things Possible, is open about her belief that their surveillance efforts are part of a battle against Satan. In a December 2022 interview at America Fest, a four-day conference organized by the right-wing group Turning Point USA, Block described her work as a fight against “pure evil,” claiming that many traffickers are specifically targeting Christian households.

    Tiegs argued that “100 percent” of sex work is human trafficking and that “to legalize the purchasing of women is a huge mistake.”

    The combination of digital surveillance and Christian moralizing could have serious consequences not only for “predators,” but also their prey: The America Fest interview showed that Skull Games hopes to take down alleged traffickers by first going after the allegedly trafficked.

    “So basically, 24/7, our intelligence department identifies victims of sex trafficking.”

    “So basically, 24/7,” Block explained, “our intelligence department identifies victims of sex trafficking.” All of this information — both the alleged trafficker and alleged victim — is then handed over to police. Although Tiegs says Skull Games has provided police with “a couple hundred” such OSINT leads since its founding, he conceded the group has no information about how many have resulted in prosecutions or indictments of actual traffickers.

    When asked about Skull Games’s position on arresting victims, Tiegs emphasized that “arresting is different from prosecuting” and argued, “Sometimes they do need to make the arrest, because of the health and welfare of that person. She needs to get clean, maybe she’s high. … Very rarely, in my opinion, is it right to charge and prosecute a girl.”

    Sex worker advocates, however, say any punitive approach is not only ungrounded in the reality of the trade, but also hurts the very people it purports to help. Although exploitation and coercion are dire realities for many sex workers, most women choose to go into sex work either out of personal preference or financial necessity, according to DiAngelo, of Sex Workers Outreach Project Sacramento. (The Chicago branch of SWOP was a plaintiff in the American Civil Liberties Union’s successful 2020 lawsuit against Clearview AI in Illinois.)

    Referring to research she had conducted with the University of California, Davis, DiAngelo explained that socioeconomic desperation is the most common cause of trafficking, a factor only worsened by a brush with the law. “The majority of the people we interview, even if we removed the person who was exploiting them from their life, they still wanted to be in the sex trade,” DiAngelo explained.

    Both DiAngelo and Savannah Sly of the nonprofit New Moon Network, an advocacy group for sex workers, pointed to flaws in the techniques that police claim detect trafficking from coded language in escort ads. “You can’t tell just by looking at a picture whether someone’s trafficked or not,” Sly said. The “dragnet” surveillance of sex workers performed by groups like Skull Games, she claimed, imperils their human rights. “If I become aware I’m being surveilled, that’s not helping my situation,” Sly said. “Sex workers live with a high degree of paranoia.”

    DiAngelo argued that, rather than “rescuing” women from trafficking, Skull Games’s collaboration with police risks driving women into the company of people seeking to take advantage of them — particularly if they’ve been arrested and face diminished job prospects outside of sex work. “They’re going to lock them into sex work,” DiAngelo said, “because once you get the scarlet letter, nobody wants you anymore.”

    The post The Online Christian Counterinsurgency Against Sex Workers appeared first on The Intercept .

      Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency”

      news.movim.eu / TheIntercept · Wednesday, 26 July, 2023 - 19:03 · 7 minutes

    The Texas Department of Public Safety purchased access to powerful software capable of locating and following people through their phones as part of Republican Gov. Greg Abbott’s “border security disaster” efforts, according to documents reviewed by The Intercept.

    In 2021, Abbott proclaimed that the “surge of individuals unlawfully crossing the Texas-Mexico border posed an ongoing and imminent threat of disaster” to the state and its residents. Among other effects, the disaster declaration opened a spigot of government money to a variety of private firms ostensibly paid to help patrol and blockade the state’s border with Mexico.

    One of the private companies that got in on the cash disbursements was Cobwebs Technologies, a little-known Israeli surveillance contractor. Cobwebs’s marquee product, the surveillance platform Tangles, offers its users a bounty of different tools for tracking people as they navigate both the internet and the real world, synthesizing social media posts, app activity, facial recognition, and phone tracking.

    “As long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

    News of the purchase comes as Abbott’s border crackdown escalates to new heights, following a Department of Public Safety whistleblower’s report of severe mistreatment of migrants by state law enforcement and a Justice Department lawsuit over the governor’s deployment of razor wire on the Rio Grande. The Cobwebs documents show that Abbott’s efforts to usurp the federal government’s constitutional authority to conduct immigration enforcement have extended into the electronic realm as well. The implications could reach far beyond the geographic bounds of the border and into the private lives of citizens and noncitizens alike.

    “Government agencies systematically buying data that has been originally collected to provide consumer services or digital advertising represents the worst possible kind of decontextualized misuse of personal information,” Wolfie Christl, a privacy researcher who tracks data brokerages, told The Intercept. “But as long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

    Like its competitors in the world of software tracking tools, Cobwebs — which sells its services to the Department of Homeland Security, the IRS, and a variety of undisclosed corporate customers — lets its clients track the movements of private individuals without a court order. Instead of needing a judge’s sign-off, these tracking services rely on bulk-purchasing location pings pulled from smartphones, often through unscrupulous mobile apps or in-app advertisers, an unregulated and increasingly pervasive form of location tracking.

    In August 2021, the Texas Department of Public Safety’s Intelligence and Counterterrorism division purchased a year of Tangles access for $198,000, according to contract documents, obtained through a public records request by Tech Inquiry, a watchdog and research organization, and shared with The Intercept. The state has renewed its Tangles subscription twice since then, though the discovery that Cobwebs failed to pay taxes owed in Texas briefly derailed the renewal last April, according to an email included in the records request. (Cobwebs declined to comment for this story.)

    A second 2021 contract document shared with The Intercept shows DPS purchased “unlimited” access to Clearview AI, a controversial face recognition platform that matches individuals to tens of billions of photos scraped from the internet. The purchase, according to the document, was made “in accordance/governed by the Texas Governor’s Disaster Declaration for the Texas-Mexico border for ongoing and imminent threats.” (Clearview did not respond to a request for comment.)

    Each of the three yearlong subscriptions notes Tangles was purchased “in accordance to the provisions outlined in the Texas Governor-Proclaimed Border Disaster Declaration signed May 22, 2022, per Section 418.011 of the Texas Government Code.”

    The disaster declaration, which spans more than 50 counties, is part of an ongoing campaign by Abbott that has pushed the bounds of civil liberties in Texas, chiefly through the governor’s use of the Department of Public Safety.

    Related

    The Texas Border County at the Center of a Dangerous Right-Wing Experiment

    Under Operation Lone Star, Abbott has spent $4.5 billion surging 10,000 Department of Public Safety troopers and National Guard personnel to the border as part of a stated effort to beat back a migrant “ invasion ,” which he claims is aided and abetted by President Joe Biden. The resulting project has been riddled with scandal , including migrants languishing for months in state jails without charges and several suicides among personnel deployed on the mission. Just this week, the Houston Chronicle obtained an internal Department of Public Safety email revealing that troopers had been “ordered to push small children and nursing babies back into the Rio Grande” and “told not to give water to asylum seekers even in extreme heat.”

    On Monday, the U.S. Justice Department sued Texas over Abbott’s deployment of floating barricades on the Rio Grande. Abbott, having spent more than two years angling for a states’ rights border showdown with the Biden administration, responded last week to news of the impending lawsuit by tweeting : “I’ll see you in court, Mr. President.”

    Despite Abbott’s repeated claims that Operation Lone Star is a targeted effort focused specifically on crimes at the border, a joint investigation by the Texas Tribune, ProPublica, and the Marshall Project last year found that the state was counting arrests and drug charges far from the U.S.-Mexico divide and unrelated to the Operation Lone Star mandate. Records obtained by the news organizations last summer showed that the Justice Department opened a civil rights investigation into Abbott’s operation. The status of the investigation has not been made public.

    Where the Department of Public Safety’s access to Tangles’s powerful cellphone tracking software will fit into Abbott’s controversial border enforcement regime remains uncertain. (The Texas Department of Public Safety did not respond to a request for comment.)

    Although Tangles provides an array of options for keeping tabs on a given target, the most powerful feature obtained by the Department of Public Safety is Tangles’s “WebLoc” feature: “a cutting-edge location solution which automatically monitors and analyzes location-based data in any specified geographic location,” according to company marketing materials. While Cobwebs claims it sources device location data from multiple sources, the Texas Department of Public Safety contract specifically mentions “ad ID,” a reference to the unique strings of text used to identify and track a mobile phone in the online advertising ecosystem.
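    The core of such a “geofence”-style location query is simple to illustrate. The sketch below is hypothetical — it shows the general technique of filtering bulk-purchased ad-ID pings against a circular geographic fence, not Cobwebs’s actual implementation; the coordinates, ad IDs, and function names are invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def devices_in_area(pings, center, radius_km):
    """Return the ad IDs whose pings fall inside the circular geofence."""
    lat0, lon0 = center
    return {ad_id for ad_id, lat, lon in pings
            if haversine_km(lat0, lon0, lat, lon) <= radius_km}

# Invented pings: (ad_id, latitude, longitude)
pings = [
    ("ad-1111", 27.5036, -99.5076),  # Laredo, TX
    ("ad-2222", 29.7604, -95.3698),  # Houston, TX -- far outside the fence
    ("ad-3333", 27.5100, -99.5000),  # near Laredo
]
print(devices_in_area(pings, (27.5036, -99.5076), 25.0))
```

    A query like this needs no warrant when the pings were simply bought from data brokers — which is the crux of the critics’ objection.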

    [Photo: A Guatemalan father and his daughter arrive with dozens of other women, men, and children at a McAllen, Texas, bus station following release from Customs and Border Protection on June 23, 2018. Spencer Platt/Getty Images]

    “Every second, hundreds of consumer data brokers most people never heard of collect and sell huge amounts of personal information on everyone,” explained Christl, the privacy researcher. “Most of these shady and opaque data practices are systematically enabled by today’s digital marketing and advertising industry, which has gotten completely out of control.”

    While advertisers defend this practice on the grounds that the device ID itself doesn’t contain a person’s name, Christl added that “several data companies sell information that helps to link mobile device identifiers to email addresses, phone numbers, names and postal addresses.” Even without extra context, tying a real name to an “anonymized” advertising identifier’s location ping is often trivial, as a person’s daily movement patterns typically quickly reveal both where they live and work.
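    Why movement patterns de-anonymize is easy to see in miniature. The sketch below, using invented data and a hypothetical helper, shows the standard heuristic: the place a device most often pings overnight is probably its owner’s home, and the modal daytime location is probably their workplace.

```python
from collections import Counter

def infer_home_work(pings):
    """pings: list of (hour_of_day, coarse_location) for one ad ID.
    Home = modal location in nighttime hours; work = modal daytime location."""
    night = Counter(loc for hour, loc in pings if hour >= 22 or hour < 6)
    day = Counter(loc for hour, loc in pings if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

# A week of invented pings for one "anonymous" advertising ID
pings = ([(23, "block A")] * 7 + [(2, "block A")] * 7 +
         [(10, "office B")] * 5 + [(14, "office B")] * 5 +
         [(13, "cafe C")] * 2)
print(infer_home_work(pings))  # → ('block A', 'office B')
```

    Once home and work addresses fall out, linking the ad ID to a name is usually a matter of public records.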

    Cobwebs advertises that WebLoc draws on “huge sums of location-based data,” and it means huge: According to a WebLoc promotional brochure, it affords customers “worldwide coverage” of smartphone pings based on “billions of data points to ensure maximum location based data coverage.” WebLoc not only provides the exact locations of smartphones, but also personal information associated with their owners, including age, gender, languages spoken, and interests — “e.g., music, luxury goods, basketball” — according to a contract document from the Office of Naval Intelligence, another Cobwebs customer.

    The ability to track a person wherever they go based on an indispensable object they keep on or near them every hour of every day is of obvious appeal to law enforcement officials, particularly given that no judicial oversight is required to use a tool like Tangles. Critics of the technology have argued that a legislative vacuum allows phone-tracking tools, fed by the unregulated global data broker market, to give law enforcement agencies a way around Fourth Amendment protections.

    The power to track people through Tangles, however, is valuable even in countries without an ostensible legal prohibition against unreasonable searches. In 2021, Facebook announced it had removed 200 accounts used by Cobwebs to track its users in Bangladesh, Saudi Arabia, Poland, and several other countries.

    “In addition to targeting related to law enforcement activities,” the company explained, “we also observed frequent targeting of activists, opposition politicians and government officials in Hong Kong and Mexico.”

    Beryl Lipton, an investigative researcher with the Electronic Frontier Foundation, told The Intercept that bolstering surveillance powers under the aegis of an emergency declaration adds further risk to an already fraught technology. “We need to be very skeptical of any expansion of surveillance that occurs under disaster declarations, particularly open-ended claims of emergency,” Lipton said. “They can undermine legislative checks on the executive branch and obviate bounds on state behavior that exist for good reason.”

    The post Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency” appeared first on The Intercept .

      FBI Hired Social Media Surveillance Firm That Labeled Black Lives Matter Organizers “Threat Actors”

      news.movim.eu / TheIntercept · Thursday, 6 July, 2023 - 17:11 · 6 minutes

    The FBI’s primary tool for monitoring social media threats is the same contractor that labeled peaceful Black Lives Matter protest leaders DeRay McKesson and Johnetta Elzie as “threat actors” requiring “continuous monitoring” in 2015.

    The contractor, ZeroFox, identified McKesson and Elzie as posing a “high severity” physical threat, despite including no evidence that McKesson or Elzie were suspected of criminal activity. “It’s been almost a decade since the referenced 2015 incident and in that time we have invested heavily in fine-tuning our collections, analysis and labeling of alerts,” Lexie Gunther, a spokesperson for ZeroFox, told The Intercept, “including the addition of a fully managed service that ensures human analysis of every alert that comes through the ZeroFox Platform to ensure we are only alerting customers to legitimate threats and are labeling those threats appropriately.”

    The FBI, which declined to comment, hired ZeroFox in 2021, a fact referenced in the new 106-page Senate report about the intelligence community’s failure to anticipate the January 6, 2021, uprising at the U.S. Capitol. The June 27 report, produced by Democrats on the Senate Homeland Security Committee, shows the bureau’s broad authorities to surveil social media content — authorities the FBI previously denied it had, including before Congress. It also reveals the FBI’s reliance on outside companies to do much of the filtering for them.

    The FBI’s $14 million contract to ZeroFox for “FBI social media alerting” replaced a similar contract with Dataminr, another firm with a history of scrutinizing racial justice movements. Dataminr, like ZeroFox, subjected the Black Lives Matter movement to web surveillance on behalf of the Minneapolis Police Department, previous reporting by The Intercept has shown.

    In testimony before the Senate in 2021, the FBI’s then-Assistant Director for Counterterrorism Jill Sanborn flatly denied that the FBI had the power to monitor social media discourse.

    “So, the FBI does not monitor publicly available social media conversations?” asked Arizona Sen. Kyrsten Sinema.

    “Correct, ma’am. It’s not within our authorities,” Sanborn replied, citing First Amendment protections barring such activities.

    Sanborn’s statement was widely publicized at the time and cited as evidence that concerns about federal government involvement in social media were unfounded. But, as the Senate report stresses, Sanborn’s answer was false.

    “FBI leadership mischaracterized the Bureau’s authorities to monitor social media,” the report concludes, calling it an “exaggeration of the limits on FBI’s authorities,” which in fact are quite broad.

    It is under these authorities that the FBI sifts through vast amounts of social media content searching for threats, the report reveals.

    “Prior to 2021, FBI contracted with the company Dataminr that used pre-defined search terms to identify potential threats from voluminous open-source posts online, which FBI could then investigate further as appropriate,” the report states, citing internal FBI communications obtained as part of the committee’s investigation. “Effective Jan. 1, 2021, FBI’s contract for these services switched to a new company called ZeroFox that would perform similar functions under a new system.”

    The FBI has maintained that its “intent is not to ‘scrape’ or otherwise monitor individual social media activity,” instead insisting that it “seeks to identify an immediate alerting capability to better enable the FBI to quickly respond to ongoing national security and public safety-related incidents.” Dataminr has also previously told The Intercept that its software “does not provide any government customers with the ability to target, monitor or profile social media users, perform geospatial, link or network analysis, or conduct any form of surveillance.”

    While it may be technically true that flagging social media posts based on keywords isn’t the same as continuously flagging posts from a specific account, the notion that this doesn’t amount to monitoring specific users is misleading: If an account routinely uses certain keywords (e.g., #BlackLivesMatter), flagging those keywords will surface the same account again and again.
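    A toy example makes the point. The keywords, account names, and posts below are invented, and this is a sketch of generic keyword alerting, not ZeroFox’s or Dataminr’s actual systems — but it shows how a “content-based” filter still concentrates attention on particular accounts.

```python
from collections import Counter

KEYWORDS = {"#blacklivesmatter", "protest"}  # hypothetical watch list

def alerts(posts):
    """Flag any post whose text contains a watched keyword."""
    return [(author, text) for author, text in posts
            if any(k in text.lower() for k in KEYWORDS)]

posts = [
    ("organizer_1", "March downtown tonight #BlackLivesMatter"),
    ("random_user", "Great pasta recipe"),
    ("organizer_1", "Permit approved for Saturday protest"),
    ("organizer_2", "Join the protest at city hall"),
]
flagged = alerts(posts)
# No account is "targeted," yet the same authors keep being surfaced:
print(Counter(author for author, _ in flagged))
```

    The filter never names an account, but anyone who posts frequently about the watched topics is, in effect, under recurring observation.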

    The 2015 threat report for which ZeroFox was criticized specifically called for “continuous monitoring” of McKesson and Elzie. In an interview with The Intercept, Elzie stressed how incompetent the FBI’s analysis of social media was in her situation. She described a visit the FBI paid her parents in 2016, telling them that it was imperative she not attend the Republican National Convention in Cleveland — an event she says she had no intention of attending and which troll accounts on Twitter bearing her name claimed she would be at to foment violence. (The FBI confirmed that it was “reaching out to people to request their assistance in helping our community host a safe and secure convention,” but did not respond to allegations that they were trying to discourage activists from attending the convention.)

    “My parents were like why would she be going to the RNC? And that’s where the conversation ended because they couldn’t answer that.”

    “I don’t think [ZeroFox] should be getting $14 million dollars [from] the same FBI that knocked on my family’s door [in Missouri] and looked for me when it was world news that I was in Baton Rouge at the time,” Elzie told The Intercept. “They’re just very unserious, both organizations.”

    The FBI was so dependent on automated social media monitoring for ascertaining threats that the temporary loss of access to such software led to panic among bureau officials.

    “This investigation found that FBI’s efforts to effectively detect threats on social media in the lead-up to January 6th were hampered by the Bureau’s change in contracts mere days before the attack,” the report says. “Internal FBI communications obtained by the Committee show how that transition caused confusion and concern as the Bureau’s open-source monitoring capabilities were degraded less than a week before January 6th.”

    One of the FBI communications obtained by the committee was an email from an FBI official at the Washington Field Office, lamenting the loss of Dataminr, which the official deemed “crucial.”

    “Their key term search allows Intel to enter terms we are interested in without having to constantly monitor social media as we’ll receive notification alerts when a social media posts [sic] hits on one of our key terms,” the FBI official said.

    “The amount of time saved combing through endless streams of social media is spent liaising with partners and collaborating and supporting operations,” the email continued. “We will lose this time if we do not have a social media tool and will revert to scrolling through social media looking for concerning posts.”

    But civil libertarians have routinely cautioned against the use of automated social media surveillance tools not just because they place nonviolent, constitutionally protected speech under suspicion, but also for their potential to draw undue scrutiny to posts that represent no threat whatsoever.

    While tools like ZeroFox and Dataminr may indeed spare FBI analysts from poring over timelines, the companies’ in-house definitions of what posts are relevant or constitute a “threat” can be immensely broad. Dataminr has monitored the social media usage of people and communities of color based on law enforcement biases and stereotypes .

    A May report by The Intercept also revealed that the U.S. Marshals Service’s contract with Dataminr had the company relaying not only information about peaceful abortion rights protests, but also web content that had no apparent law enforcement relevance whatsoever, including criticism of the Met Gala and jokes about Donald Trump’s weight.

    The FBI email closes by noting that “Dataminr is user friendly and does not require an expertise in social media exploitation.” But that same user-friendliness can lead government agencies to rely heavily on the company’s designations of what is important or what constitutes a threat.

    The dependence is mutual. In its Securities and Exchange Commission filing , ZeroFox says that “one U.S. government customer accounts for a substantial portion” of its revenue.

    The post FBI Hired Social Media Surveillance Firm That Labeled Black Lives Matter Organizers “Threat Actors” appeared first on The Intercept .

      Joe Manchin Rents Office Space to Firm Powering FBI, Pentagon Biometric Surveillance Center

      news.movim.eu / TheIntercept · Tuesday, 23 May, 2023 - 10:00 · 7 minutes

    After killing Joe Biden’s audacious Build Back Better legislation in 2021 and emerging as a constant roadblock to Democrats’ sweeping climate agenda, Sen. Joe Manchin’s sprawling coal empire became the focus of intense scrutiny for its impact on the citizens and ecosystem of northern West Virginia. What went unnoticed at the time was another company the senator quietly profits from, housed in the very same building where his coal company Enersystems is headquartered, with an even greater reach.

    Manchin has said in recent weeks that he won’t rule out running to replace Biden in the 2024 presidential election. He maintains a cozy relationship with the moderate political nonprofit No Labels, which has raised tens of millions of dollars to run a third-party presidential ticket in 2024, and he himself has raised millions from special interest groups cheering on his intransigence. But while Manchin has long cultivated the image of a liberty-loving champion, his financial ties to a biometric surveillance company draw a sharp contrast.

    Related

    As Manchin Eyes Presidential Run, His Allies at No Labels Face Mounting Legal Challenges

    For decades, Manchin has been the landlord of Tygart Technology, a lucrative biometric surveillance firm co-founded in 1991 by his then-23-year-old daughter Heather Bresch, along with her late husband Jack Kirby and Manchin’s brother-in-law, Manuel Llaneza.

    According to Tygart Technology’s website, its mission focuses on “leveraging technology to support National Security.” Since at least 1999, the company has operated out of the Manchin Professional building, where Manchin has collected tens of thousands of dollars in rent over the years, according to deed records, patent applications, and financial disclosures recording rent collection from the enterprise.

    The firm received large contracts from the West Virginia state government in the years that Manchin served as secretary of state and then as governor. In more recent years, Tygart has secured tens of millions of dollars in federal contracts from law enforcement and defense agencies to supply biometric data collection services to intelligence operations in West Virginia and across the country.

    Bresch has held no financial interests in the company since her divorce from Kirby in 1999, according to reporting from the Charleston Gazette , but she is still registered as an agent for the company, according to West Virginia Secretary of State records. Kirby died in 2019, but Tygart’s new president also has ties to the senator. John Waugaman served on Manchin’s transition team for governor, according to the company’s website , and has donated some $12,000 to Manchin in the past decade. Neither a spokesperson for Manchin nor Tygart Technology responded to The Intercept’s questions.

    While the Pentagon and contractors like Tygart justify mass biometric surveillance in the name of national security, both civil liberties advocates and members of Congress have moved to head off what they view as excessive and dangerous data collection.

    Federal lawmakers, led by Sen. Ed Markey, D-Mass., have introduced legislation since 2021 to ban biometric surveillance by the federal government, citing civil liberties advocates’ concerns about racial bias in biometric technology and the mass collection of personal data. Manchin has not supported this year’s bill or its previous iterations.

    “The year is 2023, but we are living through 1984,” Markey said during the bill’s reintroduction this year. “Between the risks of sliding into a surveillance state and the dangers of perpetuating discrimination, this technology creates more problems than solutions. Every American who values their right to privacy, stands against discrimination, and believes people are innocent until proven guilty should be concerned. Enacting a federal moratorium on this technology is critical to ensure our communities are protected from inappropriate surveillance.”

    “For a senator to be attached to an industrial-scale biometrics operation used in a wide range of criminal justice contexts is unsettling.”

    John Davisson, director of litigation and senior counsel at the Electronic Privacy Information Center, or EPIC, said Manchin’s connection to the mass collection of biometric data — which he described as an “alarming activity” — is cause for concern. “Particularly when in the hands of law enforcement, mass biometric technology poses a heightened risk of civil liberties violations,” he told The Intercept. “For a senator to be attached to an industrial-scale biometrics operation used in a wide range of criminal justice contexts is unsettling.”

    Tygart received its first contract from West Virginia in 2000, eventually billing the state for more than $6 million, including web service subcontracts worth tens of thousands of dollars. In 2006, the state auditor launched an investigation into the company as part of a larger audit request by then-Secretary of State Betty Ireland, embroiling Manchin, then governor, in a no-bid contract scandal for services rendered by Tygart Technology.

    The audit ultimately found that Tygart’s accounting procedures were error-ridden, but the auditor nonetheless ruled that “on the surface, there seems to be no criminal intent.” The majority of contracts involving Tygart came in under $10,000, the threshold required under state law for a competitive bidding process. In the months following the audit, Manchin signed House Bill 4031, which raised the cap for no-bid contracts from $10,000 to $25,000.

    By 2009, Tygart was picking up federal contracts. The company has raked in over $117 million in government contracts to provide technology and software products to a host of federal agencies, including the FBI, the Department of Defense, the U.S. Army, the General Services Administration, and the Department of Health and Human Services. The company’s federal contracts peaked in 2015, when it brought in $19.1 million. So far this year, Tygart has $4.8 million worth of business with federal agencies.

    The firm’s Pentagon contracts include providing support for an Automated Biometric Information System, or ABIS, which stores and queries millions of people’s biometric files collected both domestically and abroad.

    At the same time that Tygart was doing business with the Defense Department, Manchin was touting the Pentagon’s biometrics surveillance work and warning about looming budget cuts.

    “I am a strong supporter of the work done at this facility,” Manchin said during a 2013 Armed Services Committee hearing, referring to a biometrics center in Clarksburg, West Virginia. “More than 6,000 terrorists have been captured or killed as a direct result of the real-time information provided by ABIS to [Special Operations Forces] working in harm’s way. However, the funding for this work will run out on April 4, 2013.”

    Manchin went on to vote for the Bipartisan Budget Act of 2013 to raise limits on discretionary appropriations, which allowed for more funding for the Clarksburg facility.

    Two years later, Manchin was cheering on investments in biometric surveillance in his home state. In 2015, he welcomed attendees to the Identification Intelligence Expo, which was held in West Virginia for the first time. Tygart was among the attendees, which also included representatives from multiple divisions of the FBI and major defense contractors like Northrop Grumman. That same year, the FBI opened a new biometric technology center on its Clarksburg campus, bringing the Defense Department and FBI’s biometric operations under one roof. “I think we all have to realize it’s a very troubled world we live in,” Manchin said during the ribbon cutting. “We’re going to have to continue to stay ahead of the curve and be on the cutting edge of technology.”

    According to a report from the Government Accountability Office, the joint FBI/Defense Department facility can screen an individual through both the military’s massive ABIS and the FBI’s sprawling fingerprint database, known as IAFIS. “The IAFIS database includes the fingerprint records of more than 51 million persons who have been arrested in the United States as well as information submitted by other agencies such as the Department of Homeland Security, the Department of State, and Interpol,” the report reads.

    Tygart Technology supplies the hardware used to collect biometric data processed in Clarksburg through its MXSERVER and MatchBox technologies, a contract worth tens of millions of dollars. These facial recognition products are used to search photographic and video databases and monitor surveillance camera streams in real time.

    The technology allows law enforcement officials to track a person’s movement, scan through social media to find people, and identify individuals “using smart phones — including the ability to quickly scan crowds for threats using a mobile device’s embedded video camera.”

    That the FBI and the Defense Department are jointly using such technologies is a recipe for violating Americans’ civil liberties, said Davisson of EPIC. “Anytime you’ve got a center like this that’s combining these two operations of criminal enforcement and national security,” he said, “there’s a risk and almost a certainty that the center is going to be blurring lines and running afoul of limitations on what the FBI is allowed to do in a law enforcement context.”

    The post Joe Manchin Rents Office Space to Firm Powering FBI, Pentagon Biometric Surveillance Center appeared first on The Intercept .

      Can the Pentagon Use ChatGPT? OpenAI Won’t Answer.

      news.movim.eu / TheIntercept · Monday, 8 May, 2023 - 10:00 · 9 minutes

    As automated text generators have rapidly, dazzlingly advanced from fantasy to novelty to genuine tool, they are starting to reach the inevitable next phase: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their mission — but the company behind the mega-popular chatbot is silent.

    OpenAI, the nearly $30 billion R&D titan behind ChatGPT, provides a public list of ethical lines it will not cross, business it will not pursue no matter how lucrative, on the grounds that it could harm humanity. Among many forbidden use cases, OpenAI says it has preemptively ruled out military and other “high risk” government applications. Like its rivals, Google and Microsoft, OpenAI is eager to declare its lofty values but unwilling to earnestly discuss what these purported values mean in practice, or how — or even if — they’d be enforced.

    “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves.”

    AI policy experts who spoke to The Intercept say the company’s silence reveals the inherent weakness of self-regulation, allowing firms like OpenAI to appear principled to an AI-nervous public as they develop a powerful technology, the magnitude of which is still unclear. “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves,” said Sarah Myers West, managing director of the AI Now Institute and former AI adviser to the Federal Trade Commission.

    The question of whether OpenAI will allow the militarization of its tech is not an academic one. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The confab brought together attendees from both the private sector and government — namely the Pentagon and neighboring spy agencies — eager to hear how the U.S. security apparatus might join corporations around the world in quickly adopting machine-learning techniques. During a Q&A session, the National Geospatial-Intelligence Agency’s associate director for capabilities, Phillip Chudoba, was asked how his office might leverage AI. He responded at length:

    We’re all looking at ChatGPT and, and how that’s kind of maturing as a useful and scary technology. … Our expectation is that … we’re going to evolve into a place where we kind of have a collision of you know, GEOINT, AI, ML and analytic AI/ML and some of that ChatGPT sort of stuff that will really be able to predict things that a human analyst, you know, perhaps hasn’t thought of, perhaps due to experience, or exposure, and so forth.

    Stripping away the jargon, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s premier handler of geospatial intelligence, often referred to as GEOINT. This practice involves crunching a great multitude of geographic information — maps, satellite photos, weather data, and the like — to give the military and spy agencies an accurate picture of what’s happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, the Washington Post reported findings from NGA documents that detailed the surveillance capabilities of Chinese high-altitude balloons that had caused an international incident earlier this year.

    Forbidden Uses

    But Chudoba’s AI-augmented GEOINT ambitions are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: Both “Military and warfare” and “high risk government decision-making” applications are explicitly forbidden, according to OpenAI’s “Usage policies” page . “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy reads. “Repeated or serious violations may result in further action, including suspending or terminating your account.”

    By industry standards, it’s a remarkably strong, clear document, one that appears to swear off the bottomless pit of defense money available to less scrupulous contractors, and reads as a pretty cut-and-dried prohibition against exactly what Chudoba is imagining for the intelligence community. It’s difficult to imagine how an agency that keeps tabs on North Korean missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense , is not the very definition of high-risk military decision-making.

    While the NGA and fellow intel agencies seeking to join the AI craze may ultimately pursue contracts with other firms, for the time being few OpenAI competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? As clear-cut as OpenAI’s prohibition against using ChatGPT for crunching foreign intelligence may seem, the company refuses to say so. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked about how OpenAI would enforce its use policy in this case, Beck responded with a link to the policy itself and declined to comment further.

    “I think their unwillingness to even engage on the question should be deeply concerning,” Myers of the AI Now Institute told The Intercept. “I think it certainly runs counter to everything that they’ve told the public about the ways that they’re concerned about these risks, as though they are really acting in the public interest. If when you get into the details, if they’re not willing to be forthcoming about these kinds of potential harms, then it shows sort of the flimsiness of that stance.”

    Public Relations

    Even the tech sector’s clearest-stated ethics principles have routinely proven to be an exercise in public relations and little else: Twitter simultaneously forbids using its platform for surveillance while directly enabling it, and Google sells AI services to the Israeli Ministry of Defense while its official “AI principles” prohibit applications “that cause or are likely to cause overall harm” and “whose purpose contravenes widely accepted principles of international law and human rights.” Microsoft’s public ethics policies note a “commitment to mitigating climate change” while the company helps Exxon analyze oil field data , and similarly professes a “commitment to vulnerable groups” while selling surveillance tools to American police.

    It’s an issue OpenAI won’t be able to dodge forever: The data-laden Pentagon is increasingly enamored with machine learning, so ChatGPT and its ilk are obviously desirable. The day before Chudoba was talking AI in Arlington, Kimberly Sablon, principal director for trusted AI and autonomy in the Office of the Undersecretary of Defense for Research and Engineering, told a conference in Hawaii that “There’s a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, CIA Director of Artificial Intelligence Lakshmi Raman told the Potomac Officers Club, “Honestly, we’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies.”

    Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer with the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “NGA is swamped with worldwide geospatial information on a daily basis that is more than an army of human analysts could deal with,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is that AI could do more than that and that it could identify issues that human analysts would miss.” Aftergood said he doubted an interest in ChatGPT had anything to do with its highly popular chatbot abilities, but in the underlying machine learning model’s potential to sift through massive datasets and draw inferences. “It will be interesting, and a little scary, to see how that works out,” he added.


    The Pentagon seen from above in Washington, D.C, on May 25, 2016.

    Photo: U.S. Army

    Persuasive Nonsense

    One reason it’s scary is because while tools like ChatGPT can near-instantly mimic the writing of a human, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-seeming but entirely bogus responses. This tendency to confidently and persuasively churn out nonsense — a chatbot phenomenon known as “hallucinating” — could pose a problem for hard-nosed intelligence analysts. It’s one thing for ChatGPT to fib about the best places to get lunch in Cincinnati, and another matter to fabricate meaningful patterns from satellite images over Iran. On top of that, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can attempt to explain how they reached their conclusion.

    Lucy Suchman, a professor emerita of anthropology and militarized technology at Lancaster University, told The Intercept that feeding a ChatGPT-like system brand new information about the world represents a further obstacle. “Current [large language models] like those that power ChatGPT are effectively closed worlds of already digitized data; famously the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models is an unsolved problem. So the question of how LLMs would incorporate continually updated real time data, particularly in the rapidly changing and always chaotic conditions of war fighting, seems like a big one. That’s not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-drive military intelligence.”

    OpenAI’s unwillingness to rule out the NGA as a future customer makes good business sense, at least. Government work, particularly of the national security flavor, is exceedingly lucrative for tech firms: In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle landed a CIA contract reportedly worth tens of billions of dollars over its lifetime. Microsoft, which has invested a reported $13 billion into OpenAI and is quickly integrating the smaller company’s machine-learning capabilities into its own products, has earned tens of billions in defense and intelligence work on its own . Microsoft declined to comment.

    But OpenAI knows this work is highly controversial, potentially both with its staff and the broader public. OpenAI is currently enjoying a global reputation for its dazzling machine-learning tools and toys, a gleaming public image that could be quickly soiled by partnering with the Pentagon. “OpenAI’s righteous presentations of itself are consistent with recent waves of ethics-washing in relation to AI,” Suchman noted. “Ethics guidelines set up what my UK friends call ‘hostages to fortune,’ or things you say that may come back to bite you.” Suchman added, “Their inability even to deal with press queries like yours suggests that they’re ill-prepared to be accountable for their own policy.”

    The post Can the Pentagon Use ChatGPT? OpenAI Won’t Answer. appeared first on The Intercept .


      Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad

      news.movim.eu / TheIntercept · Saturday, 29 April, 2023 - 17:30 · 5 minutes

    Ernest Moret, a foreign rights manager for the French publishing house La Fabrique, boarded a train in Paris bound for London in early April. He was on his way to attend the London Book Fair.

    When Moret arrived at St. Pancras station in the United Kingdom, two plainclothes officers who said they were “counter-terrorist police” proceeded to terrorize him. They interrogated him for six hours, asking everything from his views on pension reform to the names of “anti-government” authors his company had published, according to the publisher, then arrested him for refusing to give up the passwords to his phone and laptop. Following his arrest, Moret was released on bail, though his devices were not returned to him.


    The case, while certainly showcasing the United Kingdom’s terrifying anti-terror legislation , also highlights the crucial importance of taking operational security seriously when traveling — even when going on seemingly innocuous trips like a two-and-a-half-hour train ride between London and Paris. One never knows what will trigger the authorities to put a damper on your international excursion.

    Every trip is unique and, ideally, each would get a custom-tailored threat model: itemizing the risks you foresee, and knowing the steps you can take to avoid them. There are nonetheless some baseline digital security precautions to consider before embarking on any trip.

    Travel Devices, Apps, and Accounts

    The first digital security rule of traveling is to leave your usual personal devices at home. Go on your trip with “burner” travel devices instead.

    Aside from the potential for compromise or seizure by authorities, you also face the risk of having your devices lost or stolen during your trip. It’s typically far less dangerous to leave your usual devices behind and bring along devices you use only when traveling. This doesn’t need to be cost prohibitive: You can buy cheap laptops and either inexpensive new phones or refurbished versions of pricier models. (Also get privacy screens for your travel phone and laptop, to reduce the information that’s visible to any onlookers.)


    Illustration: Pierre Buttin for The Intercept

    Your travel devices should not have anything sensitive on them. That way, if you’re ever coerced into providing passwords, or otherwise at risk of having the devices taken from you, you can readily hand over the credentials without compromising anything important.

    If you do need access to sensitive information while traveling, store it in a cloud account, using a cloud encryption tool like Cryptomator to encrypt the data first. Then log out of your cloud account, clear it from your browsing history, and uninstall Cryptomator or other encryption apps. Reinstall them and log back in to your accounts only after you’ve reached your destination and are away from your port of entry. (Don’t log in to your accounts while still at the airport or train station.)
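    Cryptomator itself is a graphical desktop app, but the underlying idea — encrypt locally, so only ciphertext ever reaches the cloud — can be sketched in a few lines. The following is a minimal illustration using Python’s third-party `cryptography` library; the filenames are hypothetical, and this is a sketch of the concept, not a substitute for an audited tool:

    ```python
    # Sketch: client-side encryption before a cloud upload.
    # The plaintext never leaves the machine; only the .enc blob gets uploaded.
    from pathlib import Path
    from cryptography.fernet import Fernet  # pip install cryptography

    def encrypt_file(path: Path, key: bytes) -> Path:
        """Encrypt `path` and write `<path>.enc`; the ciphertext is what you upload."""
        out = path.with_suffix(path.suffix + ".enc")
        out.write_bytes(Fernet(key).encrypt(path.read_bytes()))
        return out

    def decrypt_file(path: Path, key: bytes) -> bytes:
        """Decrypt a downloaded .enc blob once you're safely at your destination."""
        return Fernet(key).decrypt(path.read_bytes())

    if __name__ == "__main__":
        # Store the key somewhere you can reach later -- not on the travel device.
        key = Fernet.generate_key()
        secret = Path("contacts.txt")
        secret.write_text("emergency contact: +1 555 0100")
        blob = encrypt_file(secret, key)
        assert decrypt_file(blob, key).decode() == "emergency contact: +1 555 0100"
    ```

    The key plays the same role as a Cryptomator vault password: whoever holds the ciphertext without it learns nothing, which is exactly the position you want a border agent — or a cloud provider — to be in.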

    Just as you shouldn’t bring your usual devices, you also shouldn’t bring your usual accounts. Make sure you’re logged out of any personal or work accounts which contain sensitive information. If you need to access particular services, use travel accounts you’ve created for your trip. Make sure the passwords to your travel accounts are different from the passwords to your regular accounts, and check if your password manager has a travel mode which lets you access only particular account credentials while traveling.

    Before your trip, do your research to make sure the apps you’re planning to use — like your virtual private network and secure chat app of choice — are not banned or blocked in the region you’re visiting.

    Maintain a line of sight with your devices at all times while traveling. If, for instance, a customs agent or border officer takes your phone or laptop to another room, the safe bet is to consider that device compromised if it’s brought back later, and to immediately procure new devices in-region, if possible.

    If you’re entering a space where it won’t be possible to maintain line of sight — like an embassy or other government building where you’re told to store devices in a locker prior to entry — put the devices into a tamper-evident bag, which you can buy in bulk online before your trip. While this, of course, won’t prevent the devices from being messed with, it will nonetheless give you a ready indication that something may be amiss. Likewise, use tamper-evident bags if ever leaving your devices unattended, like in your hotel room.

    Phone Numbers

    Sensitive information you may have on your devices doesn’t just mean documents, photos, or other files. It can also include things like contacts and chat histories. Don’t place your contacts in danger by leaving them on your device: Keep them in your encrypted cloud drive until you can access them in a safe location.


    Illustration: Pierre Buttin for The Intercept

    Much like you shouldn’t bring your usual phone, you also shouldn’t bring your normal SIM card. Instead, use a temporary SIM card to avoid the possibility of authorities taking control of your phone number. Depending on which region you’re going to, it may make more sense to either buy a temporary SIM card when in-region, or buy one beforehand. The advantage of buying a card at your destination is that it may have a higher chance of working, whereas if you buy one in advance, the claims that vendors make about their cards working in a particular region may or may not pan out.

    On the other hand, the region you’re traveling to may have draconian identification requirements in order to purchase a SIM. And, if you’re waiting to purchase a card at your destination, you won’t have phone access while traveling and won’t be able to reach an emergency contact number if you encounter difficulties en route.

    Heading Back

    Keep in mind that the travel precautions outlined here don’t just apply to your inbound trip; they apply just as much to your return trip home. You may be questioned either as you leave the host country or as you arrive back at your local port of entry, so follow the same steps and make sure there is nothing sensitive on your devices before heading home.

    Taking precautions like obtaining and setting up travel devices and accounts, or establishing a temporary phone number, may all seem like hassles for a standard trip, but the point of undertaking these measures is that they’re ultimately less hassle than the repercussions of exposing sensitive information or contacts — or of being interrogated and caged.

    The post Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad appeared first on The Intercept .


      Elon Musk Wants to Cut Your Social Security Because He Doesn’t Understand Math

      news.movim.eu / TheIntercept · Sunday, 9 April, 2023 - 10:00 · 5 minutes


    Elon Musk, chief executive officer of Tesla Inc., departs court in San Francisco, California, on Jan. 24, 2023.

    Photo: Marlena Sloss/Bloomberg via Getty Images

    If there’s one thing you can say for sure about Elon Musk, it’s that he has a huge number of opinions and loves to share them at high volume with the world. The problem here is that his opinions are often stunningly wrong.

    Generally, these stunningly wrong opinions are the conventional wisdom among the ultra-right and ultra-rich.

    In particular, like most of the ultra-right ultra-rich, Musk is desperately concerned that the U.S. is about to be overwhelmed by the costs of Social Security and Medicare.

    He’s previously tweeted — in response to the Christian evangelical humor site Babylon Bee — that “True national debt, including unfunded entitlements, is at least $60 trillion.” On the one hand, this is arguably true. On the other hand, you will understand it’s not a problem if you are familiar with 1) this subject and 2) basic math.

    More recently, Musk favored us with this perspective on Social Security:


    There’s so much wrong with this that it’s difficult to know where to start explaining, but let’s try.

    First of all, Musk is saying that the U.S. will have difficulty paying Social Security benefits in the future due to a low U.S. birth rate. People who believe this generally point to the falling ratio of U.S. workers to Social Security beneficiaries. The Peter G. Peterson Foundation, founded by another billionaire, is happy to give you the numbers : In 1960, there were 5.1 workers per beneficiary, and now there are only 2.8. Moreover, the ratio is projected to fall to 2.3 by 2035.

    This does sound intuitively like it must be a big problem — until you think about it for five seconds. As in many other cases, this is the five seconds of thinking that Musk has failed to do.

    You don’t need to know anything about the intricacies of how Social Security works to understand it. Just use your little noggin. The obvious reality is that if a falling ratio of workers to beneficiaries is an enormous problem, this problem would already have manifested itself.

    Again, look at those numbers. In 1960, 5.1. Now, 2.8. The ratio has dropped by almost half. (In fact, it’s dropped by more than that in Social Security’s history . In 1950 the worker-to-beneficiary ratio was 16.5.) And yet despite a plunge in the worker-retiree ratio that has already happened, the Social Security checks today go out every month like clockwork. There is no mayhem in the streets. There’s no reason to expect disaster if the ratio goes down a little more, to 2.3.

    The reason this is possible is the same reason the U.S. overall is a far richer country than it was in the past: an increase in worker productivity. Productivity is the measure of how much the U.S. economy produces per worker , and probably the most important statistic regarding economic well-being. We invent bulldozers, and suddenly one person can do the work of 30 people with shovels. We invent computer printers, and suddenly one person can do the work of 100 typists. We invent E-ZPass, and suddenly zero people can do the work of thousands of tollbooth operators.

    This matters because, when you strip away the complexity, retirement income of any kind is simply money generated by present-day workers being taken from them and given to people who aren’t working. This is true with Social Security, where the money is taken in the form of taxes. But it’s also true with any kind of private savings. The transfer there just uses different mechanisms — say, Dick Cheney, 82, getting dividends from all the stock he owns.

    So it’s all about how much present day workers can produce. And if productivity goes up fast enough, it will swamp any fall in the worker-beneficiary ratio — and the income of both present day workers and retirees can rise indefinitely. This is exactly what happened in the past. And we can see that there’s no reason to believe it won’t continue, again using the concept of math.

    The economist Dean Baker of the Center for Economic and Policy Research, a Washington think tank, has done this math . U.S. productivity has grown at more than 1 percent per year — sometimes much more — over every 15-year period since World War II. If it grows at 1 percent for the next 15 years, it will be possible for both workers and retirees to see their income increase by almost 9 percent. If it grows at 2 percent — about the average since World War II — the income of both workers and retirees can grow by 20 percent during the next 15 years. This does not seem like the “reckoning” predicted by Musk.
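    You can sanity-check the direction of Baker’s arithmetic yourself with a crude simplification (mine, not Baker’s model, whose fuller accounting yields somewhat different percentages): assume per-capita income scales with productivity times the share of the population that is working, and compare today’s worker-to-beneficiary ratio of 2.8 with the projected 2.3.

    ```python
    # Back-of-the-envelope check (a crude simplification, not Dean Baker's model):
    # per-capita income scales with productivity times the working share of the
    # population, r / (r + 1), where r is the worker-to-beneficiary ratio.
    def per_capita_change(ratio_now: float, ratio_later: float,
                          annual_growth: float, years: int) -> float:
        productivity = (1 + annual_growth) ** years
        share_now = ratio_now / (ratio_now + 1)
        share_later = ratio_later / (ratio_later + 1)
        return productivity * share_later / share_now

    # 1 percent annual productivity growth, ratio falling from 2.8 to 2.3 by ~2035:
    gain = per_capita_change(2.8, 2.3, 0.01, 15)
    print(f"{(gain - 1) * 100:.1f}% per-capita income growth")
    # prints: 9.8% per-capita income growth
    ```

    Even in this crude version, 15 years of modest productivity growth more than absorbs the falling ratio, which is the whole point: the demographic shift is a rounding error next to compounding output per worker.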

    What Musk is essentially saying is that technology in general, and his car company in particular, are going to fail.

    What’s even funnier about Musk’s fretting is that it contradicts literally everything about his life. He’s promised for years that Tesla’s cars will soon achieve “full self-driving.” If indeed humans can invent vehicles that can drive without people, this will generate a huge increase in productivity — so much so that some people worry about what millions of truck drivers would do if their jobs are shortly eliminated. Meanwhile, if low birth rates mean there are fewer workers available, the cost of labor will rise, meaning that it will be worth it for Tesla to invest more in creating self-driving trucks. So what Musk is essentially saying is that technology in general, and his car company in particular, are going to fail.

    Finally, there’s Musk’s characterization of Japan as a “leading indicator.” Here’s a picture of Tokyo, depicting what a poverty-stricken hellscape Japan has now become due to its low birthrate:


    People walk under cherry blossoms in full bloom at a park in the Sumida district of Tokyo on March 22, 2023.

    Photo: Philip Fong/AFP via Getty Images

    That is a joke. Japan is an extremely rich country by world standards, and the aging of its population has not changed that. The statistic to pay attention to here is a country’s per capita income. Aging might be a problem if so many people were old and out of the workforce that per capita income fell, but, as the World Bank will tell you, that hasn’t happened in Japan . In fact, thanks to the magic of productivity, per capita income has continued to rise, albeit more slowly than in Japan’s years of fastest growth.

    So if you’re tempted by Musk’s words to be concerned about what a low birth rate means for Social Security, you don’t need to sweat it. A much bigger problem, for Social Security and the U.S. in general, is the low-functioning brains of our billionaires.

    The post Elon Musk Wants to Cut Your Social Security Because He Doesn’t Understand Math appeared first on The Intercept .