
      Can the Pentagon Use ChatGPT? OpenAI Won’t Answer.

The Intercept · Monday, 8 May 2023

    As automated text generators have rapidly, dazzlingly advanced from fantasy to novelty to genuine tool, they are starting to reach the inevitable next phase: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their mission — but the company behind the mega-popular chatbot is silent.

    OpenAI, the nearly $30 billion R&D titan behind ChatGPT, provides a public list of ethical lines it will not cross, business it will not pursue no matter how lucrative, on the grounds that doing so could harm humanity. Among many forbidden use cases, OpenAI says it has preemptively ruled out military and other “high risk” government applications. Like its rivals Google and Microsoft, OpenAI is eager to declare its lofty values but unwilling to earnestly discuss what these purported values mean in practice, or how — or even if — they’d be enforced.


    AI policy experts who spoke to The Intercept say the company’s silence reveals the inherent weakness of self-regulation, allowing firms like OpenAI to appear principled to an AI-nervous public as they develop a powerful technology, the magnitude of which is still unclear. “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves,” said Sarah Myers West, managing director of the AI Now Institute and former AI adviser to the Federal Trade Commission.

    The question of whether OpenAI will allow the militarization of its tech is not an academic one. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The confab brought together attendees from both the private sector and government — namely the Pentagon and neighboring spy agencies — eager to hear how the U.S. security apparatus might join corporations around the world in quickly adopting machine-learning techniques. During a Q&A session, the National Geospatial-Intelligence Agency’s associate director for capabilities, Phillip Chudoba, was asked how his office might leverage AI. He responded at length:

    We’re all looking at ChatGPT and, and how that’s kind of maturing as a useful and scary technology. … Our expectation is that … we’re going to evolve into a place where we kind of have a collision of you know, GEOINT, AI, ML and analytic AI/ML and some of that ChatGPT sort of stuff that will really be able to predict things that a human analyst, you know, perhaps hasn’t thought of, perhaps due to experience, or exposure, and so forth.

    Stripping away the jargon, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s premier handler of geospatial intelligence, often referred to as GEOINT. This practice involves crunching a great multitude of geographic information — maps, satellite photos, weather data, and the like — to give the military and spy agencies an accurate picture of what’s happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, the Washington Post reported findings from NGA documents detailing the surveillance capabilities of the Chinese high-altitude balloons that caused an international incident earlier this year.

    Forbidden Uses

    But Chudoba’s AI-augmented GEOINT ambitions are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: Both “Military and warfare” and “high risk government decision-making” applications are explicitly forbidden, according to OpenAI’s “Usage policies” page. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy reads. “Repeated or serious violations may result in further action, including suspending or terminating your account.”

    By industry standards, it’s a remarkably strong, clear document, one that appears to swear off the bottomless pit of defense money available to less scrupulous contractors, and it reads as a pretty cut-and-dried prohibition against exactly what Chudoba is imagining for the intelligence community. It’s difficult to imagine how an agency that keeps tabs on North Korean missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense, is not the very definition of high-risk military decision-making.

    While the NGA and fellow intel agencies seeking to join the AI craze may ultimately pursue contracts with other firms, for the time being few OpenAI competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? As clear-cut as OpenAI’s prohibition against using ChatGPT for crunching foreign intelligence may seem, the company refuses to confirm it. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked how OpenAI would enforce its usage policy in this case, Beck responded with a link to the policy itself and declined to comment further.

    “I think their unwillingness to even engage on the question should be deeply concerning,” Myers West of the AI Now Institute told The Intercept. “I think it certainly runs counter to everything that they’ve told the public about the ways that they’re concerned about these risks, as though they are really acting in the public interest. If when you get into the details, if they’re not willing to be forthcoming about these kinds of potential harms, then it shows sort of the flimsiness of that stance.”

    Public Relations

    Even the tech sector’s most clearly stated ethics principles have routinely proven to be an exercise in public relations and little else: Twitter simultaneously forbids using its platform for surveillance while directly enabling it, and Google sells AI services to the Israeli Ministry of Defense while its official “AI principles” prohibit applications “that cause or are likely to cause overall harm” and “whose purpose contravenes widely accepted principles of international law and human rights.” Microsoft’s public ethics policies note a “commitment to mitigating climate change” while the company helps Exxon analyze oil field data, and similarly profess a “commitment to vulnerable groups” while the company sells surveillance tools to American police.

    It’s an issue OpenAI won’t be able to dodge forever: The data-laden Pentagon is increasingly enamored with machine learning, so ChatGPT and its ilk are obviously desirable. The day before Chudoba was talking AI in Arlington, Kimberly Sablon, Principal Director for Trusted AI and Autonomy in the office of the Undersecretary of Defense for Research and Engineering, told a conference in Hawaii that “There’s a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, CIA Director of Artificial Intelligence Lakshmi Raman told the Potomac Officers Club, “Honestly, we’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies.”

    Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer with the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “NGA is swamped with worldwide geospatial information on a daily basis that is more than an army of human analysts could deal with,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is that AI could do more than that and that it could identify issues that human analysts would miss.” Aftergood said he doubted the interest in ChatGPT had much to do with its highly popular chatbot abilities; the appeal lies instead in the underlying machine-learning model’s potential to sift through massive datasets and draw inferences. “It will be interesting, and a little scary, to see how that works out,” he added.


    The Pentagon seen from above in Washington, D.C., on May 25, 2016.

    Photo: U.S. Army

    Persuasive Nonsense

    One reason it’s scary is that, while tools like ChatGPT can near-instantly mimic the writing of a human, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-seeming but entirely bogus responses. This tendency to confidently and persuasively churn out nonsense — a chatbot phenomenon known as “hallucinating” — could pose a problem for hard-nosed intelligence analysts. It’s one thing for ChatGPT to fib about the best places to get lunch in Cincinnati, and another matter entirely for it to fabricate meaningful patterns from satellite images over Iran. On top of that, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can attempt to explain how they reached their conclusion.

    Lucy Suchman, a professor emerita of anthropology and militarized technology at Lancaster University, told The Intercept that feeding a ChatGPT-like system brand new information about the world represents a further obstacle. “Current [large language models] like those that power ChatGPT are effectively closed worlds of already digitized data; famously the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models is an unsolved problem. So the question of how LLMs would incorporate continually updated real time data, particularly in the rapidly changing and always chaotic conditions of war fighting, seems like a big one. That’s not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-driven military intelligence.”

    OpenAI’s unwillingness to rule out the NGA as a future customer makes good business sense, at least. Government work, particularly of the national security flavor, is exceedingly lucrative for tech firms: In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle landed a CIA contract reportedly worth tens of billions of dollars over its lifetime. Microsoft, which has invested a reported $13 billion into OpenAI and is quickly integrating the smaller company’s machine-learning capabilities into its own products, has earned tens of billions in defense and intelligence work on its own. Microsoft declined to comment.

    But OpenAI knows this work is highly controversial, potentially both with its staff and the broader public. OpenAI is currently enjoying a global reputation for its dazzling machine-learning tools and toys, a gleaming public image that could be quickly soiled by partnering with the Pentagon. “OpenAI’s righteous presentations of itself are consistent with recent waves of ethics-washing in relation to AI,” Suchman noted. “Ethics guidelines set up what my UK friends call ‘hostages to fortune,’ or things you say that may come back to bite you.” Suchman added, “Their inability even to deal with press queries like yours suggests that they’re ill-prepared to be accountable for their own policy.”
