
    Joe Manchin Rents Office Space to Firm Powering FBI, Pentagon Biometric Surveillance Center

    news.movim.eu / TheIntercept · Tuesday, 23 May - 10:00 · 7 minutes

After killing Joe Biden’s audacious Build Back Better legislation in 2021 and emerging as a constant roadblock to Democrats’ sweeping climate agenda, Sen. Joe Manchin’s sprawling coal empire became the focus of intense scrutiny for its impact on the citizens and ecosystem of northern West Virginia. What went unnoticed at the time was another company with an even greater reach that the senator quietly profits from, housed in the very same building where his coal company, Enersystems, is headquartered.

Manchin has said in recent weeks that he won’t rule out running to replace Biden in the 2024 presidential election. He maintains a cozy relationship with the moderate political nonprofit No Labels, which has raised tens of millions of dollars to run a third-party presidential ticket in 2024, and he himself has raised millions from special interest groups cheering on his intransigence. But while Manchin has long cultivated the image of a liberty-loving champion, his financial ties to a biometric surveillance company draw a sharp contrast.

Related

As Manchin Eyes Presidential Run, His Allies at No Labels Face Mounting Legal Challenges

For decades, Manchin has been the landlord of Tygart Technology, a lucrative biometric surveillance firm co-founded in 1991 by his then-23-year-old daughter Heather Bresch, along with her late husband Jack Kirby and Manchin’s brother-in-law, Manuel Llaneza.

According to Tygart Technology’s website, its mission focuses on “leveraging technology to support National Security.” Since at least 1999, the company has operated out of the Manchin Professional building, where Manchin has collected tens of thousands of dollars in rent over the years, according to deed records, patent applications, and financial disclosures recording rent collection from the enterprise.

The firm received large contracts from the West Virginia state government in the years that Manchin served as secretary of state and then as governor. In more recent years, Tygart has secured tens of millions of dollars in federal contracts from law enforcement and defense agencies to supply biometric data collection services to intelligence operations in West Virginia and across the country.

Bresch has held no financial interests in the company since her divorce from Kirby in 1999, according to reporting from the Charleston Gazette, but she is still registered as an agent for the company, according to West Virginia Secretary of State records. Kirby died in 2019, but Tygart’s new president also has ties to the senator. John Waugaman served on Manchin’s transition team for governor, according to the company’s website, and has donated some $12,000 to Manchin in the past decade. Neither a spokesperson for Manchin nor Tygart Technology responded to The Intercept’s questions.

While the Pentagon and contractors like Tygart justify mass biometric surveillance in the name of national security, both civil liberties advocates and members of Congress have moved to head off what they view as excessive and dangerous data collection.

Federal lawmakers, led by Sen. Ed Markey, D-Mass., have introduced legislation since 2021 to ban biometric surveillance by the federal government, citing civil liberties advocates’ concerns about racial bias in biometric technology and the mass collection of personal data. Manchin has not supported this year’s bill or its previous iterations.

“The year is 2023, but we are living through 1984,” Markey said during the bill’s reintroduction this year. “Between the risks of sliding into a surveillance state and the dangers of perpetuating discrimination, this technology creates more problems than solutions. Every American who values their right to privacy, stands against discrimination, and believes people are innocent until proven guilty should be concerned. Enacting a federal moratorium on this technology is critical to ensure our communities are protected from inappropriate surveillance.”


John Davisson, director of litigation and senior counsel at the Electronic Privacy Information Center, or EPIC, said Manchin’s connection to the mass collection of biometric data — which he described as an “alarming activity” — is cause for concern. “Particularly when in the hands of law enforcement, mass biometric technology poses a heightened risk of civil liberties violations,” he told The Intercept. “For a senator to be attached to an industrial-scale biometrics operation used in a wide range of criminal justice contexts is unsettling.”

Tygart received its first contract from West Virginia in 2000, eventually billing the state for more than $6 million, including web service subcontracts worth tens of thousands of dollars. In 2006, the state auditor launched an investigation into the company as part of a larger audit request by then-Secretary of State Betty Ireland, embroiling Manchin, then governor, in a no-bid contract scandal for services rendered by Tygart Technology.

The audit ultimately found that Tygart’s accounting procedures were error-ridden, but the auditor nonetheless ruled that “on the surface, there seems to be no criminal intent.” The majority of contracts involving Tygart came in under $10,000, the threshold above which state law required a competitive bidding process. In the months following the audit, Manchin signed House Bill 4031, which raised the cap for no-bid contracts from $10,000 to $25,000.

By 2009, Tygart was picking up federal contracts. The company has raked in over $117 million in government contracts to provide technology and software products to a host of federal agencies, including the FBI, the Department of Defense, the U.S. Army, the General Services Administration, and the Department of Health and Human Services. The company’s federal contracts peaked in 2015, when it brought in $19.1 million. So far this year, Tygart has $4.8 million worth of business with federal agencies.

The firm’s Pentagon contracts include providing support for an Automated Biometric Information System, or ABIS, which stores and queries millions of people’s biometric files collected both domestically and abroad.

At the same time that Tygart was doing business with the Defense Department, Manchin was touting the Pentagon’s biometrics surveillance work and warning about looming budget cuts.

“I am a strong supporter of the work done at this facility,” Manchin said during a 2013 Armed Services Committee hearing, referring to a biometrics center in Clarksburg, West Virginia. “More than 6,000 terrorists have been captured or killed as a direct result of the real-time information provided by ABIS to [Special Operations Forces] working in harm’s way. However, the funding for this work will run out on April 4, 2013.”

Manchin went on to vote for the Bipartisan Budget Act of 2013 to raise limits on discretionary appropriations, which allowed for more funding for the Clarksburg facility.


Two years later, Manchin was cheering on investments in biometric surveillance in his home state. In 2015, he welcomed attendees to the Identification Intelligence Expo, which was held in West Virginia for the first time. Tygart was among the attendees, which also included representatives from multiple divisions of the FBI and major defense contractors like Northrop Grumman. That same year, the FBI opened a new biometric technology center on its Clarksburg campus, bringing the Defense Department and FBI’s biometric operations under one roof. “I think we all have to realize it’s a very troubled world we live in,” Manchin said during the ribbon cutting. “We’re going to have to continue to stay ahead of the curve and be on the cutting edge of technology.”

According to a report from the Government Accountability Office, the joint FBI/Defense Department facility can screen an individual through both the military’s massive ABIS and the FBI’s sprawling fingerprint database, known as IAFIS. “The IAFIS database includes the fingerprint records of more than 51 million persons who have been arrested in the United States as well as information submitted by other agencies such as the Department of Homeland Security, the Department of State, and Interpol,” the report reads.

Tygart Technology supplies the hardware used to collect biometric data processed in Clarksburg through its MXSERVER and MatchBox technologies, a contract worth tens of millions of dollars. These facial recognition products are used to search photographic and video databases and monitor surveillance camera streams in real time.

The technology allows law enforcement officials to track a person’s movement, scan through social media to find people, and identify individuals “using smart phones — including the ability to quickly scan crowds for threats using a mobile device’s embedded video camera.”

That the FBI and the Defense Department are jointly using such technologies is a recipe for violating Americans’ civil liberties, said Davisson of EPIC. “Anytime you’ve got a center like this that’s combining these two operations of criminal enforcement and national security,” he said, “there’s a risk and almost a certainty that the center is going to be blurring lines and running afoul of limitations on what the FBI is allowed to do in a law enforcement context.”


    Can the Pentagon Use ChatGPT? OpenAI Won’t Answer.

    news.movim.eu / TheIntercept · Monday, 8 May - 10:00 · 9 minutes

As automated text generators have rapidly, dazzlingly advanced from fantasy to novelty to genuine tool, they are starting to reach the inevitable next phase: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their mission — but the company behind the mega-popular chatbot is silent.

OpenAI, the nearly $30 billion R&D titan behind ChatGPT, provides a public list of ethical lines it will not cross, business it will not pursue no matter how lucrative, on the grounds that it could harm humanity. Among many forbidden use cases, OpenAI says it has preemptively ruled out military and other “high risk” government applications. Like its rivals, Google and Microsoft, OpenAI is eager to declare its lofty values but unwilling to earnestly discuss what these purported values mean in practice, or how — or even if — they’d be enforced.


AI policy experts who spoke to The Intercept say the company’s silence reveals the inherent weakness of self-regulation, allowing firms like OpenAI to appear principled to an AI-nervous public as they develop a powerful technology, the magnitude of which is still unclear. “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves,” said Sarah Myers West, managing director of the AI Now Institute and former AI adviser to the Federal Trade Commission.

The question of whether OpenAI will allow the militarization of its tech is not an academic one. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The confab brought together attendees from both the private sector and government — namely the Pentagon and neighboring spy agencies — eager to hear how the U.S. security apparatus might join corporations around the world in quickly adopting machine-learning techniques. During a Q&A session, the National Geospatial-Intelligence Agency’s associate director for capabilities, Phillip Chudoba, was asked how his office might leverage AI. He responded at length:

We’re all looking at ChatGPT and, and how that’s kind of maturing as a useful and scary technology. … Our expectation is that … we’re going to evolve into a place where we kind of have a collision of you know, GEOINT, AI, ML and analytic AI/ML and some of that ChatGPT sort of stuff that will really be able to predict things that a human analyst, you know, perhaps hasn’t thought of, perhaps due to experience, or exposure, and so forth.

Stripping away the jargon, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s premier handler of geospatial intelligence, often referred to as GEOINT. This practice involves crunching a great multitude of geographic information — maps, satellite photos, weather data, and the like — to give the military and spy agencies an accurate picture of what’s happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, the Washington Post reported on NGA documents detailing the surveillance capabilities of Chinese high-altitude balloons that had caused an international incident earlier this year.

Forbidden Uses

But Chudoba’s AI-augmented GEOINT ambitions are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: Both “Military and warfare” and “high risk government decision-making” applications are explicitly forbidden, according to OpenAI’s “Usage policies” page. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy reads. “Repeated or serious violations may result in further action, including suspending or terminating your account.”

By industry standards, it’s a remarkably strong, clear document, one that swears off the bottomless pit of defense money available to less scrupulous contractors and would appear to be a pretty cut-and-dried prohibition against exactly what Chudoba is imagining for the intelligence community. It’s difficult to imagine how an agency that keeps tabs on North Korean missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense, is not the very definition of high-risk military decision-making.

While the NGA and fellow intel agencies seeking to join the AI craze may ultimately pursue contracts with other firms, for the time being few OpenAI competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? As clear-cut as OpenAI’s prohibition against using ChatGPT for crunching foreign intelligence may seem, the company refuses to say so. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked about how OpenAI would enforce its use policy in this case, Beck responded with a link to the policy itself and declined to comment further.

“I think their unwillingness to even engage on the question should be deeply concerning,” Myers West of the AI Now Institute told The Intercept. “I think it certainly runs counter to everything that they’ve told the public about the ways that they’re concerned about these risks, as though they are really acting in the public interest. If when you get into the details, if they’re not willing to be forthcoming about these kinds of potential harms, then it shows sort of the flimsiness of that stance.”

Public Relations

Even the tech sector’s clearest-stated ethics principles have routinely proven to be an exercise in public relations and little else: Twitter simultaneously forbids using its platform for surveillance while directly enabling it, and Google sells AI services to the Israeli Ministry of Defense while its official “AI principles” prohibit applications “that cause or are likely to cause overall harm” and “whose purpose contravenes widely accepted principles of international law and human rights.” Microsoft’s public ethics policies note a “commitment to mitigating climate change” while the company helps Exxon analyze oil field data, and similarly professes a “commitment to vulnerable groups” while selling surveillance tools to American police.

It’s an issue OpenAI won’t be able to dodge forever: The data-laden Pentagon is increasingly enamored with machine learning, so ChatGPT and its ilk are obviously desirable. The day before Chudoba was talking AI in Arlington, Kimberly Sablon, principal director for trusted AI and autonomy in the Office of the Undersecretary of Defense for Research and Engineering, told a conference in Hawaii that “There’s a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, CIA Director of Artificial Intelligence Lakshmi Raman told the Potomac Officers Club, “Honestly, we’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies.”

Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer with the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “NGA is swamped with worldwide geospatial information on a daily basis that is more than an army of human analysts could deal with,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is that AI could do more than that and that it could identify issues that human analysts would miss.” Aftergood said he doubted any interest in ChatGPT had to do with its highly popular chatbot abilities, but rather with the underlying machine learning model’s potential to sift through massive datasets and draw inferences. “It will be interesting, and a little scary, to see how that works out,” he added.


The Pentagon seen from above in Washington, D.C, on May 25, 2016.

Photo: U.S. Army

Persuasive Nonsense

One reason it’s scary is because while tools like ChatGPT can near-instantly mimic the writing of a human, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-seeming but entirely bogus responses. This tendency to confidently and persuasively churn out nonsense — a chatbot phenomenon known as “hallucinating” — could pose a problem for hard-nosed intelligence analysts. It’s one thing for ChatGPT to fib about the best places to get lunch in Cincinnati, and another matter to fabricate meaningful patterns from satellite images over Iran. On top of that, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can attempt to explain how they reached their conclusion.

Lucy Suchman, a professor emerita of anthropology and militarized technology at Lancaster University, told The Intercept that feeding a ChatGPT-like system brand new information about the world represents a further obstacle. “Current [large language models] like those that power ChatGPT are effectively closed worlds of already digitized data; famously the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models is an unsolved problem. So the question of how LLMs would incorporate continually updated real time data, particularly in the rapidly changing and always chaotic conditions of war fighting, seems like a big one. That’s not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-driven military intelligence.”

OpenAI’s unwillingness to rule out the NGA as a future customer makes good business sense, at least. Government work, particularly of the national security flavor, is exceedingly lucrative for tech firms: In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle landed a CIA contract reportedly worth tens of billions of dollars over its lifetime. Microsoft, which has invested a reported $13 billion into OpenAI and is quickly integrating the smaller company’s machine-learning capabilities into its own products, has earned tens of billions in defense and intelligence work on its own. Microsoft declined to comment.

But OpenAI knows this work is highly controversial, potentially both with its staff and the broader public. OpenAI is currently enjoying a global reputation for its dazzling machine-learning tools and toys, a gleaming public image that could be quickly soiled by partnering with the Pentagon. “OpenAI’s righteous presentations of itself are consistent with recent waves of ethics-washing in relation to AI,” Suchman noted. “Ethics guidelines set up what my UK friends call ‘hostages to fortune,’ or things you say that may come back to bite you.” Suchman added, “Their inability even to deal with press queries like yours suggests that they’re ill-prepared to be accountable for their own policy.”


    Digital Security Tips to Prevent the Cops From Ruining Your Trip Abroad

    news.movim.eu / TheIntercept · Saturday, 29 April - 17:30 · 5 minutes

Ernest Moret, a foreign rights manager for the French publishing house La Fabrique, boarded a train in Paris bound for London in early April. He was on his way to attend the London Book Fair.

When Moret arrived at St. Pancras station in the United Kingdom, two plainclothes cops who apparently said they were “counter-terrorist police” proceeded to terrorize him. They interrogated him for six hours, asking about everything from his views on pension reform to the names of “anti-government” authors his company had published, according to the publisher, before arresting him for refusing to give up the passwords to his phone and laptop. Following his arrest, Moret was released on bail, though his devices were not returned to him.


The case, while certainly showcasing the United Kingdom’s terrifying anti-terror legislation, also highlights the crucial importance of taking operational security seriously when traveling — even when going on seemingly innocuous trips like a two-and-a-half-hour train ride between London and Paris. One never knows what will trigger the authorities to put a damper on your international excursion.

Every trip is unique and, ideally, each would get a custom-tailored threat model: itemizing the risks you foresee, and knowing the steps you can take to avoid them. There are nonetheless some baseline digital security precautions to consider before embarking on any trip.

Travel Devices, Apps, and Accounts

The first digital security rule of traveling is to leave your usual personal devices at home. Go on your trip with “burner” travel devices instead.

Aside from the potential for compromise or seizure by authorities, you also run the risk of having your devices lost or stolen during your trip. It’s typically way less dangerous to just leave your usual devices behind, and to bring along devices you only use when traveling. This doesn’t need to be cost-prohibitive: You can buy cheap laptops and either inexpensive new phones or refurbished versions of pricier models. (And also get privacy screens for your new phones and laptops, to reduce the information that’s visible to any onlookers.)


Illustration: Pierre Buttin for The Intercept

Your travel devices should not have anything sensitive on them. If you’re ever coerced to provide passwords or at risk of otherwise having the devices be taken away from you, you can readily hand over the credentials without compromising anything important.

If you do need access to sensitive information while traveling, store it in a cloud account somewhere, using cloud encryption tools like Cryptomator to encrypt the data first. Then log out of your cloud account, clear it from your browsing history, and uninstall Cryptomator or other encryption apps; reinstall them and log back in to your accounts only after you’ve reached your destination and are away from your port of entry. (Don’t log in to your accounts while still at the airport or train station.)
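For readers who want to see the idea in code, here is a minimal Python sketch of the same client-side-encryption principle — encrypt locally, upload only ciphertext — using the widely available cryptography package. It illustrates the approach behind tools like Cryptomator rather than their actual mechanics, and the file names are hypothetical:

```python
# Minimal sketch: encrypt a file locally before it ever reaches the cloud,
# so the provider (or anyone who compels access to the account) sees only
# ciphertext. Illustrative of the Cryptomator-style approach, not its code.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

def encrypt_for_upload(plain_path: str, cipher_path: str) -> bytes:
    """Encrypt plain_path into cipher_path; returns the key, which must be
    stored separately from the cloud account (e.g., in a password manager)."""
    key = Fernet.generate_key()
    with open(plain_path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    with open(cipher_path, "wb") as f:
        f.write(token)
    return key

def decrypt_after_arrival(cipher_path: str, key: bytes) -> bytes:
    """Recover the plaintext once you are safely at your destination."""
    with open(cipher_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# Hypothetical usage: encrypt before uploading, decrypt only once in-region.
# key = encrypt_for_upload("contacts.txt", "contacts.enc")
```

The point of this design is that the cloud account alone is worthless to an interrogator: without the separately stored key, the uploaded blob reveals nothing.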

Just as you shouldn’t bring your usual devices, you also shouldn’t bring your usual accounts. Make sure you’re logged out of any personal or work accounts which contain sensitive information. If you need to access particular services, use travel accounts you’ve created for your trip. Make sure the passwords to your travel accounts are different from the passwords to your regular accounts, and check if your password manager has a travel mode which lets you access only particular account credentials while traveling.

Before your trip, do your research to make sure the apps you’re planning to use — like your virtual private network and secure chat app of choice — are not banned or blocked in the region you’re visiting.

Maintain a line of sight with your devices at all times while traveling. If, for instance, a customs agent or border officer takes your phone or laptop to another room, the safe bet is to consider that device compromised if it’s brought back later, and to immediately procure new devices in-region, if possible.

If you’re entering a space where it won’t be possible to maintain line of sight — like an embassy or other government building where you’re told to store devices in a locker prior to entry — put the devices into a tamper-evident bag, which you can buy in bulk online before your trip. While this, of course, won’t prevent the devices from being messed with, it will nonetheless give you a ready indication that something may be amiss. Likewise, use tamper-evident bags if ever leaving your devices unattended, like in your hotel room.

Phone Numbers

Sensitive information you may have on your devices doesn’t just mean documents, photos, or other files. It can also include things like contacts and chat histories. Don’t place your contacts in danger by leaving them on your device: Keep them in your encrypted cloud drive until you can access them in a safe location.


Illustration: Pierre Buttin for The Intercept

Much like you shouldn’t bring your usual phone, you also shouldn’t bring your normal SIM card. Instead, use a temporary SIM card to avoid the possibility of authorities taking control of your phone number. Depending on which region you’re going to, it may make more sense to either buy a temporary SIM card when in-region, or buy one beforehand. The advantage of buying a card at your destination is that it may have a higher chance of working, whereas if you buy one in advance, the claims that vendors make about their cards working in a particular region may or may not pan out.

On the other hand, the region you’re traveling to may have draconian identification requirements in order to purchase a SIM. And, if you’re waiting to purchase a card at your destination, you won’t have phone access while traveling and won’t be able to reach an emergency contact number if you encounter difficulties en route.

Heading Back

Keep in mind that the travel precautions outlined here don’t just apply to your inbound trip; they apply just as much to your return trip back home. You may be questioned either as you’re leaving the host country or as you’re arriving back at your local port of entry. Follow all of the same steps to make sure there is nothing sensitive on your devices before heading back home.

Taking precautions like obtaining and setting up travel devices and accounts, or establishing a temporary phone number, may all seem like hassles for a standard trip, but the point of undertaking these measures is that they’re ultimately less hassle than the repercussions of exposing sensitive information or contacts — or of being interrogated and caged.


    Elon Musk Wants to Cut Your Social Security Because He Doesn’t Understand Math

    news.movim.eu / TheIntercept · Sunday, 9 April - 10:00 · 5 minutes


Elon Musk, chief executive officer of Tesla Inc., departs court in San Francisco, California, on Jan. 24, 2023.

Photo: Marlena Sloss/Bloomberg via Getty Images

If there’s one thing you can say for sure about Elon Musk, it’s that he has a huge number of opinions and loves to share them at high volume with the world. The problem here is that his opinions are often stunningly wrong.

Generally, these stunningly wrong opinions are the conventional wisdom among the ultra-right and ultra-rich.

In particular, like most of the ultra-right ultra-rich, Musk is desperately concerned that the U.S. is about to be overwhelmed by the costs of Social Security and Medicare.

He’s previously tweeted — in response to the Christian evangelical humor site Babylon Bee — that “True national debt, including unfunded entitlements, is at least $60 trillion.” On the one hand, this is arguably true. On the other hand, you will understand it’s not a problem if you are familiar with 1) this subject and 2) basic math.

More recently, Musk favored us with this perspective on Social Security, in a tweet warning of a coming “reckoning” over low birth rates and pointing to Japan as a “leading indicator”:

[Embedded tweet not reproduced here.]

There’s so much wrong with this that it’s difficult to know where to start explaining, but let’s try.

First of all, Musk is saying that the U.S. will have difficulty paying Social Security benefits in the future due to a low U.S. birth rate. People who believe this generally point to the falling ratio of U.S. workers to Social Security beneficiaries. The Peter G. Peterson Foundation, founded by another billionaire, is happy to give you the numbers: In 1960, there were 5.1 workers per beneficiary, and now there are only 2.8. Moreover, the ratio is projected to fall to 2.3 by 2035.

This does sound intuitively like it must be a big problem — until you think about it for five seconds. As in many other cases, this is the five seconds of thinking that Musk has failed to do.

You don’t need to know anything about the intricacies of how Social Security works to understand it. Just use your little noggin. The obvious reality is that if a falling ratio of workers to beneficiaries is an enormous problem, this problem would already have manifested itself.

Again, look at those numbers. In 1960, 5.1. Now, 2.8. The ratio has dropped by almost half. (In fact, it’s dropped by more than that in Social Security’s history. In 1950, the worker-to-beneficiary ratio was 16.5.) And yet despite a plunge in the worker-retiree ratio that has already happened, the Social Security checks today go out every month like clockwork. There is no mayhem in the streets. There’s no reason to expect disaster if the ratio goes down a little more, to 2.3.

The reason this is possible is the same reason the U.S. overall is a far richer country than it was in the past: an increase in worker productivity. Productivity is the measure of how much the U.S. economy produces per worker, and probably the most important statistic regarding economic well-being. We invent bulldozers, and suddenly one person can do the work of 30 people with shovels. We invent computer printers, and suddenly one person can do the work of 100 typists. We invent E-ZPass, and suddenly zero people can do the work of thousands of tollbooth operators.

This matters because, when you strip away the complexity, retirement income of any kind is simply money generated by present-day workers being taken from them and given to people who aren’t working. This is true with Social Security, where the money is taken in the form of taxes. But it’s also true with any kind of private savings. The transfer there just uses different mechanisms — say, Dick Cheney, 82, getting dividends from all the stock he owns.

So it’s all about how much present day workers can produce. And if productivity goes up fast enough, it will swamp any fall in the worker-beneficiary ratio — and the income of both present day workers and retirees can rise indefinitely. This is exactly what happened in the past. And we can see that there’s no reason to believe it won’t continue, again using the concept of math.

The economist Dean Baker of the Center for Economic and Policy Research, a Washington think tank, has done this math. U.S. productivity has grown at more than 1 percent per year — sometimes much more — over every 15-year period since World War II. If it grows at 1 percent for the next 15 years, it will be possible for both workers and retirees to see their income increase by almost 9 percent. If it grows at 2 percent — about the average since World War II — the income of both workers and retirees can grow by 20 percent during the next 15 years. This does not seem like the “reckoning” predicted by Musk.
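You can check the shape of this arithmetic yourself. The sketch below is a toy back-of-the-envelope model, not Baker’s actual calculation; the 50 percent benefit rate is an assumed, illustrative parameter:

```python
# Toy model of the worker/retiree arithmetic above. Assumptions (illustrative,
# not Dean Baker's exact model): each retiree's benefit equals half an average
# wage, and all output per worker goes to wages plus transferred benefits.
ratio_now, ratio_2035 = 2.8, 2.3        # workers per beneficiary (Peterson figures)
productivity_gain = 1.01 ** 15          # 1% annual growth over 15 years, ~1.16x

benefit_share = 0.5                     # assumed benefit = 50% of the average wage
output_per_worker_now = 1 + benefit_share / ratio_now
output_per_worker_2035 = output_per_worker_now * productivity_gain

# Scale wages and benefits by a common factor k that exactly exhausts output:
k = output_per_worker_2035 / (1 + benefit_share / ratio_2035)
print(f"Workers and retirees can both be about {k - 1:.0%} better off")
# Prints roughly 12% under these assumptions: even modest productivity growth
# swamps the decline in the worker-to-beneficiary ratio.
```

The exact figure moves around with the assumed benefit rate and growth rate, but the direction never changes: productivity gains dominate the demographic shift.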


What’s even funnier about Musk’s fretting is that it contradicts literally everything about his life. He’s promised for years that Tesla’s cars will soon achieve “full self-driving.” If indeed humans can invent vehicles that can drive without people, this will generate a huge increase in productivity — so much so that some people worry about what millions of truck drivers would do if their jobs are shortly eliminated. Meanwhile, if low birth rates mean there are fewer workers available, the cost of labor will rise, meaning that it will be worth it for Tesla to invest more in creating self-driving trucks. So what Musk is essentially saying is that technology in general, and his car company in particular, are going to fail.

Finally, there’s Musk’s characterization of Japan as a “leading indicator.” Here’s a picture of Tokyo, depicting what a poverty-stricken hellscape Japan has now become due to its low birthrate:


People walk under cherry blossoms in full bloom at a park in the Sumida district of Tokyo on March 22, 2023.

Photo: Philip Fong/AFP via Getty Images

That is a joke. Japan is an extremely rich country by world standards, and the aging of its population has not changed that. The statistic to pay attention to here is a country’s per capita income. Aging might be a problem if so many people were old and out of the workforce that per capita income fell, but, as the World Bank will tell you, that hasn’t happened in Japan. In fact, thanks to the magic of productivity, per capita income has continued to rise, albeit more slowly than in Japan’s years of fastest growth.

So if you’re tempted by Musk’s words to be concerned about what a low birth rate means for Social Security, you don’t need to sweat it. A much bigger problem, for Social Security and the U.S. in general, is the low-functioning brains of our billionaires.


    Twitter Deploys Classic Musk Tactics to Hunt Down Leaker

    news.movim.eu / TheIntercept · Saturday, 8 April - 10:00 · 4 minutes

Twitter last month submitted a Digital Millennium Copyright Act notice to GitHub — a web service designed to host user-uploaded source code — demanding that certain content be taken down because it was allegedly “[p]roprietary source code for Twitter’s platform and internal tools.” Twitter subsequently filed a declaration in federal court supporting its request for a DMCA subpoena, the ostensible aim of which was “to identify the alleged infringer or infringers who posted Twitter’s source code on systems operated by GitHub without Twitter’s authorization.”

However, Twitter appears to have revised its DMCA notice, essentially a claim of copyright infringement, the same day it was filed to request not only information about the uploader, but also “any related upload / download / access history (and any contact info, IP addresses, or other session info related to same), and any associated logs related to this repo or any forks thereof.” In other words, Twitter is now seeking information not only about the alleged leaker, but also about anyone who interacted with the particular GitHub repository, the online space for storing source code, in any way, including simply by accessing it. Trying to strong-arm GitHub into revealing information about visitors to a particular repository it hosts via a request for a subpoena is a move reminiscent of the Justice Department attempting to compel a web-hosting company to reveal information about visitors to an anti-Trump website.

DMCA: The Doxxing and Censorship Tool of Choice

This isn’t the first time that corporations have tried to use DMCA subpoenas to identify leakers. A Marvel Studios affiliate recently petitioned for DMCA subpoenas to force Reddit and Google to reveal information about someone who uploaded a film script to Google and posted about it on Reddit before the movie was released. DMCA claims also have a sordid history of being used in doxxing attempts. False DMCA claims can be filed to lure a targeted user to then file a counterclaim, which necessitates that they fill in their name and address, which in turn gets passed on to the original filer. At other times, the DMCA is used simply to censor content, whether to muzzle members of civil society or for reputation management.

No Subpoena Required?

GitHub has seemed all too willing to provide information about both its repository owners and its visitors, even without a subpoena. When the owner of another, unrelated repository recently asked GitHub to provide access logs of users who had visited it, GitHub appears to have readily complied, obscuring only the last octet of the visitor IP address, with the unredacted portion still potentially revealing information such as a user’s internet service provider and approximate location.

There are also any number of public ways to extract user information from GitHub, such as email addresses associated with a particular GitHub account. Ironically, some scripts hosted on GitHub are designed to automate the exfiltration of a GitHub user’s email address. Once an email address is learned, the process of requesting a subpoena for further information about a particular user may be repeated in an attempt to obtain yet more sensitive data.

Musk’s Bag of Tricks

Aside from claiming to use watermarking methods to catch leakers, Musk’s other companies have also sought subpoenas to force service providers to reveal information about leakers. For instance, when Musk zeroed in on (and subsequently harassed) a suspected leaker who provided internal documents to a reporter about large amounts of waste being generated at Tesla’s “Gigafactory,” Tesla moved to subpoena Apple, AT&T, Dropbox, Facebook, Google, Microsoft, Open Whisper Systems (the organization formerly behind the secure messaging app Signal), and WhatsApp. The proposed subpoenas “commanded” their targets to preserve any information about the suspected leaker’s accounts, as well as all documents that the suspected leaker “has deleted from the foregoing accounts but that are still accessible by you.”

In addition to proposed subpoenas, Tesla has reportedly tried to identify leakers by reviewing surveillance footage to see who had been taking photos (the original Business Insider story that prompted the Tesla investigation mentioned that the source had provided images to corroborate their claims of waste at the factory). The company has also checked file access logs to see who had accessed data that was provided to the news outlet.

Following identification of the suspected leaker, Tesla reportedly engaged in an extensive surveillance campaign, including hacking the suspect’s phone; requesting that the suspect turn over their laptop for an “update” that was, in fact, a forensic audit; deploying a plainclothes security guard to monitor the suspect on the factory floor; and hiring private investigators to conduct further surveillance.

Takeaways for Leakers

Given the lax approach to divulging user information by service providers, coupled with the aggressive tactics employed by companies to reveal sources, the takeaway for would-be leakers is clear: Do not trust service providers to protect any information they may have about you. Websites may reveal information about the leaker, intentionally or not, and whether legally obligated or of their own accord. Leakers would do well to avoid using their home or other proximate internet connection and to further obfuscate it using tools such as the Tor Browser. Additionally, it’s best to ensure that any information required to set up a particular account, such as an email address or phone number, not be traceable to the leaker.


    Get used to disappointment: Why technology often doesn’t meet the hype

    news.movim.eu / ArsTechnica · Saturday, 1 April - 13:30 · 1 minute


Once the future of travel, now a museum piece. (Credit: Didier Messens)

Vaclav Smil reminds us that despite the onslaught of popular techno-pundits claiming otherwise, immense and rapid progress in one realm does not mean immense and rapid progress in all realms.

Let’s just get this out of the way at the start: Smil is Bill Gates’ favorite author. He’s written 40 books, all of them about energy, China, or some combination of food, agriculture, and ecology. His newest book, Invention and Innovation: A Brief History of Hype and Failure, is somewhat of a departure, although it does touch on all of these. Primarily, it is a tale of thwarted promise.

Smil is very intentional about the types of flops he highlights. He is not interested in embarrassing design failures (the Titanic, Betamax, Google Glass) or undesirable side effects of inventions everyone still uses despite them (prescription drugs, cars, plastic). Rather, he focuses on categories chosen to demonstrate the limits of innovation. Although astoundingly rapid progress has been made in the fields of electronics and computing over the past 50 or so years, it does not follow that we are thus in some unprecedented golden age of disruptive, transformative growth in every field.


    Elon Musk’s Twitter Widens Its Censorship of Modi’s Critics

    news.movim.eu / TheIntercept · Tuesday, 28 March - 21:16 · 5 minutes

Two months after teaming up with the Indian government to censor a BBC documentary on human rights abuses by Prime Minister Narendra Modi, Twitter is yet again collaborating with India to impose an extraordinarily broad crackdown on speech.

Last week, the Indian government imposed an internet blackout across the northern state of Punjab, home to 30 million people, as it conducted a manhunt for a local Sikh nationalist leader, Amritpal Singh. The shutdown paralyzed internet and SMS communications in Punjab (some Indian users told The Intercept that the shutdown was targeted at mobile devices).

While Punjab police detained hundreds of suspected followers of Singh, the Twitter accounts of more than 100 prominent politicians, activists, and journalists in India and abroad have been blocked in India at the request of the government. On Monday, the account of BBC News Punjabi was also blocked — the second time in a few months that the Indian government has used Twitter to throttle BBC services in the country. The Twitter account of Jagmeet Singh (no relation to Amritpal), a leading progressive Sikh Canadian politician and critic of Modi, was also not viewable inside India.

Under the leadership of owner and CEO Elon Musk, Twitter has promised to reduce censorship and allow a broader range of voices on the platform. But after The Intercept reported on Musk’s censorship of the BBC documentary in January, as well as Twitter’s intervention against high-profile accounts that shared it, Musk said that he had been too busy to focus on the issue. “First I’ve heard,” Musk wrote on January 25. “It is not possible for me to fix every aspect of Twitter worldwide overnight, while still running Tesla and SpaceX, among other things.”

Two months later, he still hasn’t found the time. Musk had previously pledged to step down as Twitter CEO, but no public progress has been made since his announcement.

While Modi’s suppression has focused on Punjab, Twitter’s collaboration has been nationwide, restricting public debate about the government’s aggressive move. Critics say that the company is failing the most basic test of allowing the platform to operate freely under conditions of government pressure.

“In India, Twitter, Facebook, and other social media companies have today become handmaidens to authoritarianism,” said Arjun Sethi, a human rights lawyer and adjunct professor of law at Georgetown University Law Center. “They routinely agree to requests to block social media accounts not just originating in India, but all over the world.”

Punjab was the site of a brutal government counterinsurgency campaign in the ’80s and ’90s that targeted a separatist movement that sought to create an independent state for Sikhs. More recently, Punjab was the site of massive protests by farmers groups against bills to deregulate agricultural markets. The power struggles between the government and resistance movements have fueled repressive conditions on the ground.

“Punjab is a de facto police state,” said Sukhman Dhami, co-director of Ensaaf, a human rights organization focused on Punjab. “Despite being one of the tiniest states in India, it has one of the highest density of police personnel, stations and checkpoints — as is typical of many of India’s minority-majority states — as well as a huge number of military encampments because it shares a border with Pakistan and Kashmir.”


Modi’s Hindu nationalist government has justified its efforts to arrest followers of Amritpal Singh by claiming that he was promoting separatism and “disturbing communal harmony” in recent speeches.

In late February, Singh’s followers sacked a Punjab police station in an attempt to free allies held there. The Indian media reported that the attack triggered the government’s response.

In the void left by Twitter blocks and the internet shutdown across much of the region, Indian news outlets, increasingly themselves under the thumb of the ruling government and its allies, have filled the airwaves with speculation on Singh’s whereabouts. On Tuesday, Indian news reports claimed that CCTV footage appeared to show Singh walking around Delhi masked and without a turban.

The Modi administration has told the public a story of a dangerous, radical preacher who must be stopped at any cost. Efforts by dissidents to contextualize Modi’s crackdown within his increasingly intolerant and authoritarian nationalism have been smothered by Twitter.

“People within Punjab are unable to reach one another, and members of the diaspora are unable to reach their family members, friends, and colleagues,” Sethi told The Intercept. “India leads the world in terms of government imposed blackouts and regularly imposes them as a part of mass censorship and disinformation campaigns. Human rights defenders documenting atrocities in Punjab are blocked, and activists in the diaspora raising information about what is happening on the ground are blocked as well.”

Modi’s government tried to throttle Twitter even before Musk’s takeover. Twitter India staff have been threatened with arrest over refusals to block government critics and faced other forms of pressure inside the country. At the time that Musk took charge of the company, it had a mere 20 percent compliance rate with Indian government requests. Following massive layoffs that cut Twitter India’s staff by 90 percent, the platform appears to have become far more obliging in the face of government pressure, as its actions to censor the government’s critics now show.

Musk, who has consistently characterized his acquisition of Twitter as a triumph of free speech, has framed his compliance as mere deference to the will of governments in countries where Twitter operates. “Like I said, my preference is to hew close to the laws of countries in which Twitter operates,” Musk tweeted last year. “If the citizens want something banned, then pass a law to do so, otherwise it should be allowed.”


Critics say that Musk’s policy of deferring to government requests is dangerous and irresponsible, as it empowers governments to suppress speech they find inconvenient. And a request from the executive branch is not necessarily the same thing as an order from a court; under previous ownership, Twitter regularly fought such requests from government officials, including those in the Modi administration.

As the manhunt for Singh and his supporters continues, large protests have broken out in countries with sizable Punjabi diasporas, including a protest in London that resulted in the vandalism of the Indian Embassy. Despite this backlash, Modi appears to be pressing ahead with internet shutdowns.

“The main thing that the Indian government is trying to accomplish is to protect the reputation of Modi,” said Dhami. “They have a zero tolerance for anything that harms his reputation, and what triggers them most of all is a sense that his reputation is being attacked.”


    Anti-Palestinian Hate on Social Media Is Growing, Says a Facebook Partner

    news.movim.eu / TheIntercept · Monday, 27 March - 09:00 · 6 minutes

Violent and racist anti-Palestinian rhetoric grew more prevalent across social media platforms last year, according to a new report published by 7amleh, an organization that partners with Meta, the parent company of Instagram and Facebook.

Hateful anti-Palestinian remarks grew by 10 percent in 2022, compared to the prior year, according to the new report, based on an aggregated analysis of mentions of “Arabs,” “Palestinians,” and related keywords by Israeli social media users. 7amleh attributes the increase to a spate of real-world violence, including the killing of Al Jazeera journalist Shireen Abu Akleh and Israeli military raids at the Al-Aqsa Mosque in Jerusalem. As The Intercept previously reported, 2022 was the deadliest year for Palestinians in the West Bank since the end of the Second Intifada, with 2023 already on track to surpass that toll.

“The 10 percent increase in violent speech against Arabs and Palestinians is alarming and should be taken on serious matter from the tech giants so that everyone enjoys their rights and freedoms in this digital space,” said Mona Shtaya, the advocacy and communications director of 7amleh.


The 7amleh report also claims a pronounced increase in bigotry and violent incitement directed against Palestinian members of the Knesset, Israel’s parliamentary body, a spike attributed to the coalition government formed by Knesset members Naftali Bennett and Yair Lapid. Much of the hateful rhetoric flagged in the report took the shape of claims that Arabs are terrorists and that Arab members of the Knesset support terrorism, as well as calls for the death or forced displacement of Palestinian Arabs.

While the report states Facebook remains a hotbed of anti-Arab hate, “Twitter continues to be the main platform for violent discourse against Palestinians inside Israel.”

Civil society groups like 7amleh have long tracked the ways in which social media platforms censor Palestinians online through biased, lopsided enforcement of content moderation policies, using rulebooks that often conflate nonviolent political speech with the endorsement of terrorism.

Following The Intercept’s publication of Meta’s roster of so-called Dangerous Individuals and Organizations, content moderation scholars noted that Middle Eastern, South Asian, and Muslim people and groups were overrepresented. 7amleh and other groups say these biases result in imbalanced censorship for Palestinians and relative latitude for Israelis during periods of violence.

7amleh is one of hundreds of global civil society organizations Meta has worked with in an effort to “better understand the impact” of its platforms around the world. “We partner with expert organizations that represent the voices and experiences of marginalized users around the globe and are equipped to raise questions and concerns about content on Facebook and Instagram,” Meta says on its website. “In addition to reporting content, Trusted Partners provide crucial feedback on our content policies and enforcement to help ensure that our efforts keep users safe.”

Advocates for Palestinian rights say those efforts have fallen flat.

“The Israeli right wing has been more than happy to declare on social media what they’d like to do to the Palestinian people,” Ubai Al-Aboudi, a Palestinian human rights activist and executive director of the Bisan Center for Research and Development, a prominent civil society group, told The Intercept. “There is a proliferation of hate speech against Palestinians. And this is the result of an asymmetrical power relation where big tech is happy to endorse the Israeli narrative while meanwhile suppressing the Palestinian narrative.”

Proliferation of online anti-Palestinian rhetoric and explicit incitement to violence was on display earlier this year during one of the worst episodes of violence by Israeli settlers in the West Bank to date. Hundreds of settlers went on a nighttime rampage in the town of Huwara, near the city of Nablus, torching homes and cars. One Palestinian was killed, and dozens more were injured.

The incident, which was widely condemned and referred to as a “pogrom,” was also widely celebrated on social media, including by top figures in Israel’s new extremist government. Bezalel Smotrich, a far-right politician who is Israel’s current finance minister and a minister within the Defense Ministry in charge of civilian affairs in the West Bank, liked a tweet that made a call “to wipe out the village of Huwara today.” Later, Smotrich publicly repeated the remark himself, before being forced to apologize. (Two weeks after making those comments, Smotrich was in the U.S., where he was shunned by officials and several prominent Jewish organizations, but welcomed by others.)

The rampage in Huwara, which was documented in real time on social media, was launched following public calls for an attack against the town after a Palestinian man killed two Israeli settlers as they drove through. In the days following the attack, incitement to violence only escalated, with several accounts, including one popular among settlers, calling for yet more “vengeance.”

“The Israeli right wing is promoting hate speech on social media against Palestinians, like the pogrom on Huwara,” said Al-Aboudi of the Bisan Center for Research and Development. “They were calling for it, before it happened, on social media. And even after the incident, the celebrations were well tolerated by big tech.”

7amleh’s findings on the proliferation of anti-Palestinian online speech stand in stark contrast with social media companies’ active crackdown on Palestinian speech online. As The Intercept has repeatedly reported, platforms’ content moderation policies are regularly enforced in an arbitrary manner that has resulted in the censorship of Palestinian voices, including the frequent suspensions of Palestinian journalists’ accounts.

Last year, a review commissioned by Meta concluded that the company’s actions during a May 2021 Israeli bombing campaign on the occupied Gaza Strip had “an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”

The report’s conclusions also point to a glaring double standard in Israeli officials’ efforts to moderate online speech. Israel has long worked with social media companies in an effort to remove content that it considers incitement, frequently flagging posts for removal.

Earlier this year, Israeli officials with the Knesset’s Committee for Immigration, Absorption and Diaspora Affairs revealed that they had proactively lobbied TikTok for content removal at rates significantly higher than those of most other countries. The officials cited partial 2022 figures from TikTok showing that it had received 2,713 requests from governments around the world to remove or limit content or accounts, with the Israeli government coming second only to Russia in calling for content removal. Israel made 252 official requests, 9.2 percent of the total number of requests to TikTok worldwide. By comparison, the U.S. government submitted only 13 applications, the French government submitted 27, the United Kingdom 71, and Germany 167.


“Incitement on social media is a problem that needs to be dealt with in-depth,” Knesset member Oded Forer, the committee’s chair, said at the time, referring specifically to anti-Semitic speech. “It is clear to everyone that the extreme discourse on social networks increases and encourages acts of terrorism against Jews.” The committee made no reference to anti-Arab and anti-Palestinian speech in that context.

Lobbying for content removal is not the only way Israeli officials have worked to control speech on social media platforms. This week, the Israeli military acknowledged orchestrating a covert social media operation during the May 2021 Gaza campaign to “improve the Israeli public’s view of Israel’s performance in the conflict,” the Associated Press reported. As part of the operation, Israel Defense Forces officials created fake accounts to “conceal the campaign’s origins and engage audiences” on Twitter, Facebook, Instagram, and TikTok and coordinated the effort with real social media influencers.

While Israeli military officials regularly use social media to monitor and gather intelligence on Palestinians, this was seemingly the first time that an Israeli influence campaign targeted the Israeli public.
