
      Credible Handwriting Machine

      news.movim.eu / Schneier · Friday, 19 May, 2023 - 20:19 · 1 minute

    In case you don’t have enough to worry about, someone has built a credible handwriting machine:

    This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation, something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

    Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

    Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as g-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.
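
    The core idea is easy to sketch in code. Below is a minimal, hypothetical Python example (the glyph file layout, page dimensions, and jitter ranges are all assumptions for illustration, not Devadath's actual software): it pastes scanned images of the user's own characters onto a page, nudging each glyph's position, scale, and rotation by a small random amount so that no two letters come out identical.

        # Minimal sketch of the "imperfect handwriting" idea: paste the user's own
        # scanned glyph images onto a page, perturbing each one slightly.
        # Hypothetical file layout: glyphs/a.png, glyphs/b.png, ... (user-supplied scans).
        import random
        from PIL import Image

        PAGE_SIZE = (1240, 1754)          # roughly A4 at 150 dpi
        MARGIN, LINE_HEIGHT = 100, 70

        def load_glyph(ch):
            # Load the user's scanned image for one character (assumed to exist on disk).
            return Image.open(f"glyphs/{ch}.png").convert("RGBA")

        def render(text):
            page = Image.new("RGB", PAGE_SIZE, "white")
            x, y = MARGIN, MARGIN
            for ch in text:
                if ch == " ":
                    x += 30 + random.randint(-4, 4)        # uneven word spacing
                    continue
                glyph = load_glyph(ch)
                # Small random variations so no two letters are identical.
                scale = random.uniform(0.92, 1.08)
                glyph = glyph.resize((int(glyph.width * scale), int(glyph.height * scale)))
                glyph = glyph.rotate(random.uniform(-3, 3), expand=True)
                if x + glyph.width > PAGE_SIZE[0] - MARGIN:  # naive line wrap
                    x, y = MARGIN, y + LINE_HEIGHT
                jitter_x, jitter_y = random.randint(-2, 2), random.randint(-3, 3)
                page.paste(glyph, (x + jitter_x, y + jitter_y), glyph)
                x += glyph.width + random.randint(2, 8)      # uneven letter spacing
            return page

        render("hello world").save("page.png")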

    Watch the video.

    My guess is that this is another detection/detection avoidance arms race.


      “Meaningful harm” from AI necessary before regulation, says Microsoft exec

      news.movim.eu / ArsTechnica · Thursday, 11 May, 2023 - 19:48

    (Image credit: HJBC | iStock Editorial / Getty Images Plus)

    As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

    The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"

    World Economic Forum Growth Summit 2023 panel "Growth Hotspots: Harnessing the Generative AI Revolution."

    "I would say yes," Schwarz said, likening regulating AI before "a little bit of harm" is caused to passing driver's license laws before people died in car accidents.

    Read 12 remaining paragraphs | Comments


      Google’s ChatGPT-killer is now open to everyone, packing new features

      news.movim.eu / ArsTechnica · Wednesday, 10 May, 2023 - 20:16

    The Google Bard logo at Google I/O (credit: Google)

    At Wednesday's Google I/O conference, Google announced wide availability of its ChatGPT-like AI assistant, Bard, in over 180 countries with no waitlist. It also announced updates such as support for Japanese and Korean, visual responses to queries, integration with Google services, and add-ons that will extend Bard's capabilities.

    Similar to how OpenAI upgraded ChatGPT with GPT-4 after its launch, Bard is getting an upgrade under the hood. Google says that some of Bard's recent enhancements are powered by Google's new PaLM 2, a family of foundational large language models (LLMs) that have enabled "advanced math and reasoning skills" and better coding capabilities. Previously, Bard used Google's LaMDA AI model.

    Google plans to add Google Lens integration to Bard, which will allow users to include photos and images in their prompts. On the Bard demo page, Google shows an example of uploading a photo of dogs and asking Bard to "write a funny caption about these two." Reportedly, Bard will analyze the photo, detect the dog breeds, and draft some amusing captions on demand.

    Read 6 remaining paragraphs | Comments


      OpenAI gives in to Italy’s data privacy demands, ending ChatGPT ban

      news.movim.eu / ArsTechnica · Monday, 1 May, 2023 - 19:17

    (Image credit: SOPA Images / Contributor | LightRocket)

    In March, an Italian privacy regulator temporarily banned OpenAI's ChatGPT, worried that the text generator had no age-verification controls or "legal basis" for gathering online user data to train the AI tool's algorithms. The regulator gave OpenAI until April 30 to fix these issues, and last Friday, OpenAI announced it had implemented many of the requested changes ahead of schedule. In a statement to the Associated Press, OpenAI confirmed Italy lifted the ban.

    "ChatGPT is available again to our users in Italy," OpenAI's statement said. "We are excited to welcome them back, and we remain dedicated to protecting their privacy.”

    OpenAI made several concessions to the Italian Data Protection Authority to bring ChatGPT back to Italy, The Wall Street Journal reported.

    Read 15 remaining paragraphs | Comments


      AI to Aid Democracy

      news.movim.eu / Schneier · Saturday, 29 April, 2023 - 21:22 · 8 minutes

    There’s good reason to fear that AI systems like ChatGPT and GPT-4 will harm democracy. Public debate may be overwhelmed by industrial quantities of autogenerated argument. People might fall down political rabbit holes, taken in by superficially convincing bullshit, or obsessed by folies à deux relationships with machine personalities that don’t really exist.

    These risks may be the fallout of a world where businesses deploy poorly tested AI systems in a battle for market share, each hoping to establish a monopoly.

    But dystopia isn’t the only possible future. AI could advance the public good, not private profit, and bolster democracy instead of undermining it. That would require an AI not under the control of a large tech monopoly, but rather developed by government and available to all citizens. This public option is within reach if we want it.

    An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy. It could plausibly educate citizens, help them deliberate together, summarize what they think, and find possible common ground. Politicians might use large language models, or LLMs, like GPT-4 to better understand what their citizens want.

    Today, state-of-the-art AI systems are controlled by multibillion-dollar tech companies: Google, Meta, and OpenAI in connection with Microsoft. These companies get to decide how we engage with their AIs and what sort of access we have. They can steer and shape those AIs to conform to their corporate interests. That isn’t the world we want. Instead, we want AI options that are both public goods and directed toward public good.

    We know that existing LLMs are trained on material gathered from the internet, which can reflect racist bias and hate. Companies attempt to filter these data sets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. But leaked emails and conversations suggest that they are rushing half-baked products to market in a race to establish their own monopoly.

    These companies make decisions with huge consequences for democracy, but little democratic oversight. We don’t hear about political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they skirt controversial topics completely? Currently, we have to trust companies to tell us the truth about the trade-offs they face.

    A public option LLM would provide a vital independent source of information and a testing ground for technological choices with big democratic consequences. This could work much like public option health care plans, which increase access to health services while also providing more transparency into operations in the sector and putting productive pressure on the pricing and features of private products. It would also allow us to figure out the limits of LLMs and direct their applications with those in mind.

    We know that LLMs often “hallucinate,” inferring facts that aren’t real. It isn’t clear whether this is an unavoidable flaw of how they work, or whether it can be corrected for. Democracy could be undermined if citizens trust technologies that just make stuff up at random, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

    But a public option AI could do more than check technology companies’ honesty. It could test new applications that could support democracy rather than undermining it.

    Most obviously, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed, whether in social media, letters to the editor, or comments to rule-making agencies in response to policy proposals. By this we don’t mean that AI will replace humans in the political debate, only that it can help us express ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re OK with accepting help to articulate your personal sentiments or political beliefs. AI will make it easier to generate first drafts, provide editing help, and suggest alternative phrasings. How these AI uses are perceived will change over time, and there is still much room for improvement in LLMs—but their assistive power is real. People are already testing and speculating on their potential for speechwriting, lobbying, and campaign messaging. Highly influential people often rely on professional speechwriters and staff to help develop their thoughts, and AI could serve a similar role for everyday citizens.

    If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that has command of the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

    Finally, and most ambitiously, AI could help facilitate radical democracy at scale. As Carnegie Mellon professor of statistics Cosma Shalizi has observed, we delegate decisions to elected politicians in part because we don’t have time to deliberate on every issue. But AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere: identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

    AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can only hear directly from a small number of voters, but want to understand where voters agree or disagree.

    Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to consensus. Science fiction writer Ruthanna Emrys, in her remarkable novel A Half-Built Garden, imagines how AI might help people have better conversations and make better decisions, rather than exploiting human cognitive biases to maximize profits.

    This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort—and the greatest challenges in developing public AI systems would be political.

    Some technological tools are already publicly available. In fairness, tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained features of its latest models public, competitors such as Hugging Face have done so for similar systems.

    While state-of-the-art LLMs achieve spectacular results, they do so using techniques that are mostly well known and widely used throughout the industry. OpenAI has only revealed limited details of how it trained its latest model, but its major advance over its earlier ChatGPT model is no secret: a multi-modal training process that accepts both image and textual inputs.

    Financially, the largest-scale LLMs being trained today cost hundreds of millions of dollars. That’s beyond ordinary people’s reach, but it’s a pittance compared to U.S. federal military spending—and a great bargain for the potential return. While we may not want to expand the scope of existing agencies to accommodate this task, we have our choice of government labs, like the National Institute of Standards and Technology, the Lawrence Livermore National Laboratory, and other Department of Energy labs, as well as universities and nonprofits, with the AI expertise and capability to oversee this effort.

    Instead of releasing half-finished AI systems for the public to test, we need to make sure that they are robust before they’re released—and that they strengthen democracy rather than undermine it. The key advance that made recent AI chatbot models dramatically more useful was feedback from real people. Companies employ teams to interact with early versions of their software to teach them which outputs are useful and which are not. These paid users train the models to align to corporate interests, with applications like web search (integrating commercial advertisements) and business productivity assistive software in mind.

    To build assistive AI for democracy, we would need to capture human feedback for specific democratic use cases, such as moderating a polarized policy discussion, explaining the nuance of a legal proposal, or articulating one’s perspective within a larger debate. This gives us a path to “align” LLMs with our democratic values: by having models generate answers to questions, make mistakes, and learn from the responses of human users, without having these mistakes damage users and the public arena.
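
    Purely as an illustration (not anything the essay's authors have built, and far simpler than a real alignment pipeline), the feedback loop described above can be pictured as collecting pairwise preference labels: show a person two candidate answers to a civic question, record which one they judge more helpful, and save those judgments as training data for a later fine-tuning step. All names in the sketch below are hypothetical.

        # Illustrative sketch of collecting pairwise human feedback on model answers.
        # The model call is a placeholder; real RLHF pipelines are far more involved.
        import json

        def ask_model(prompt):
            # Stand-in for a call to some LLM; returns one candidate answer.
            return f"[model answer to: {prompt}]"

        def collect_preference(prompt):
            a, b = ask_model(prompt), ask_model(prompt)
            print("Question:", prompt)
            print("  (1)", a)
            print("  (2)", b)
            choice = input("Which answer is more helpful and fair? [1/2] ").strip()
            chosen, rejected = (a, b) if choice == "1" else (b, a)
            return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

        prompts = [
            "Summarize the arguments for and against the proposed zoning change.",
            "Explain what the participatory budgeting proposal would fund.",
        ]
        with open("preferences.jsonl", "w") as f:
            for p in prompts:
                f.write(json.dumps(collect_preference(p)) + "\n")
        # The saved preference pairs are the raw material a reward model
        # would later be trained on.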

    Capturing that kind of user interaction and feedback within a political environment suspicious of both AI and technology generally will be challenging. It’s easy to imagine the same politicians who rail against the untrustworthiness of companies like Meta getting far more riled up by the idea of government having a role in technology development.

    As Karl Popper, the great theorist of the open society, argued, we shouldn’t try to solve complex problems with grand hubristic plans. Instead, we should apply AI through piecemeal democratic engineering, carefully determining what works and what does not. The best way forward is to start small, applying these technologies to local decisions with more constrained stakeholder groups and smaller impacts.

    The next generation of AI experimentation should happen in the laboratories of democracy: states and municipalities. Online town halls to discuss local participatory budgeting proposals could be an easy first step. Commercially available and open-source LLMs could bootstrap this process and build momentum toward federal investment in a public AI option.

    Even with these approaches, building and fielding a democratic AI option will be messy and hard. But the alternative—shrugging our shoulders as a fight for commercial AI domination undermines democratic politics—will be much messier and much worse.

    This essay was written with Henry Farrell and Nathan Sanders, and previously appeared on Slate.com.

    EDITED TO ADD: Linux Weekly News discussion.


      Why ChatGPT and Bing Chat are so good at making things up

      news.movim.eu / ArsTechnica · Thursday, 6 April, 2023 - 15:58

    (Image credit: Aurich Lawson | Getty Images)

    Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

    Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

    “Hallucinations”—a loaded term in AI

    AI chatbots such as OpenAI's ChatGPT rely on a type of AI called a "large language model" (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate "natural language" text—language as humans would naturally write or talk. Unfortunately, they can also make mistakes.
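
    A toy sketch helps show why such a system can "make things up." This is not how any production model is implemented (the hand-made probability table below stands in for a real LLM), but it captures the core mechanism: at each step the model samples the next word from probabilities learned from its training text, and nothing in that sampling step checks the result against reality.

        # Toy illustration of autoregressive text generation: pick each next word
        # by sampling from learned probabilities, with no fact-checking step.
        import random

        next_word_probs = {
            ("the", "capital"): {"of": 0.9, "city": 0.1},
            ("capital", "of"): {"France": 0.5, "Australia": 0.5},
            ("of", "France"): {"is": 1.0},
            ("of", "Australia"): {"is": 1.0},
            ("France", "is"): {"Paris.": 0.8, "Lyon.": 0.2},
            ("Australia", "is"): {"Sydney.": 0.7, "Canberra.": 0.3},  # often wrong, always fluent
        }

        def generate(prompt, max_words=6):
            words = prompt.split()
            for _ in range(max_words):
                dist = next_word_probs.get(tuple(words[-2:]))
                if not dist:
                    break
                choices, weights = zip(*dist.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the capital"))  # e.g. "the capital of Australia is Sydney." -- fluent but false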

    Read 41 remaining paragraphs | Comments


      ChatGPT data leak has Italian lawmakers scrambling to regulate data collection

      news.movim.eu / ArsTechnica · Friday, 31 March, 2023 - 18:09

    (Image credit: NurPhoto / Contributor | NurPhoto)

    Today an Italian regulator, the Guarantor for the Protection of Personal Data (referred to by its Italian acronym, GPDP), announced a temporary ban on ChatGPT in Italy. The ban is effective immediately and will remain in place while the regulator investigates its concerns that OpenAI—the developer of ChatGPT—is unlawfully collecting Italian Internet users’ personal data to train the conversational AI software and has no age verification system in place to prevent kids from accessing the tool.

    The Italian ban comes after a ChatGPT data breach on March 20, exposing “user conversations and information relating to the payment of subscribers to the paid service,” GPDP said in its press release. OpenAI notified users impacted by the breach and said it was "committed to protecting our users’ privacy and keeping their data safe," apologizing for falling "short of that commitment, and of our users’ expectations."

    Ars could not immediately reach OpenAI for comment. The company has 20 days to respond with proposed measures that could address GPDP’s concerns or face fines of up to 20 million euros or 4 percent of OpenAI’s gross revenue.

    Read 17 remaining paragraphs | Comments


      GPT-4 poses too many risks and releases should be halted, AI group tells FTC

      news.movim.eu / ArsTechnica · Thursday, 30 March, 2023 - 19:01

    The ChatGPT website displayed on a smartphone screen (credit: Getty Images | VCG)

    A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.

    OpenAI "has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment," said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).

    Calling for "independent oversight and evaluation of commercial AI products offered in the United States," CAIDP asked the FTC to "open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace."

    Read 21 remaining paragraphs | Comments


      Hobbyist builds ChatGPT client for MS-DOS

      news.movim.eu / ArsTechnica · Monday, 27 March, 2023 - 19:23

    A photo of an IBM PC 5155 portable computer running a ChatGPT client written by Yeo Kheng Meng. (credit: Yeo Kheng Meng)

    On Sunday, Singapore-based retrocomputing enthusiast Yeo Kheng Meng released a ChatGPT client for MS-DOS that can run on a 4.77 MHz IBM PC from 1981, providing a unique way to converse with the popular OpenAI language model.

    Vintage computer development projects come naturally to Yeo, who created a Slack client for Windows 3.1 in 2019. "I thought to try something different this time and develop for an even older platform as a challenge," he writes on his blog. In this case, he turned his attention to MS-DOS, a text-only operating system first released in 1981, and ChatGPT, an AI-powered large language model (LLM) released by OpenAI in November.

    As a conversational AI model, ChatGPT draws on knowledge scraped from the Internet to answer questions and generate text. Thanks to an API that launched this month, anyone with the programming chops can interface ChatGPT with their own custom application.
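
    For anyone curious what "interfacing with ChatGPT" means in practice, the sketch below shows the general shape of a call to OpenAI's chat completions endpoint from a modern machine; it is a minimal example under the assumption of an API key in the OPENAI_API_KEY environment variable, and the prompt is just a placeholder, not anything from Yeo's client.

        # Minimal sketch of calling the ChatGPT API -- the same public HTTP endpoint
        # a custom client (even a DOS program) ultimately talks to.
        import json, os, urllib.request

        def chat(prompt):
            req = urllib.request.Request(
                "https://api.openai.com/v1/chat/completions",
                data=json.dumps({
                    "model": "gpt-3.5-turbo",
                    "messages": [{"role": "user", "content": prompt}],
                }).encode("utf-8"),
                headers={
                    "Content-Type": "application/json",
                    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
                },
            )
            with urllib.request.urlopen(req) as resp:
                body = json.load(resp)
            return body["choices"][0]["message"]["content"]

        print(chat("Say hello to an IBM PC from 1981."))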

    Read 9 remaining paragraphs | Comments