    Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented” / ArsTechnica · 4 days ago - 18:52

A lawyer is in trouble after admitting he used ChatGPT to help write court filings that cited six nonexistent cases invented by the artificial intelligence tool.

Lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity," Schwartz wrote in an affidavit on May 24 regarding the bogus citations previously submitted in US District Court for the Southern District of New York.

Schwartz wrote that "the use of generative artificial intelligence has evolved within law firms" and that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed."

    Google Search starts rolling out ChatGPT-style generative AI results / ArsTechnica · Thursday, 25 May - 17:39 · 1 minute

Google's "Search Generative Experience" is a plan to put ChatGPT-style generative AI results right in your Google search results page, and the company announced the feature is beginning to roll out today. At least, the feature is rolling out to the mobile apps for people who have been on the waitlist and were chosen as early access users.

Unlike the normally stark-white Google page with 10 blue links, Google's generative AI results appear in colorful boxes above the normal search results. Google will scrape a bunch of information from all over the Internet and present it in an easy list, with purchase links to Best Buy and manufacturers' websites.

If this ever rolls out widely, it would be the biggest change to Google Search results ever, and this design threatens to upend the entire Internet. One example screenshot of a "Bluetooth speaker" search on desktop shows a big row of "Sponsored" shopping ads, then the generative AI results start to show up in a big blue box about halfway down the first page. The blue box summarizes a bunch of information harvested from somewhere and lists several completely unsourced statements and opinions about each speaker. In Google's example, users are never told where this information comes from, so they can't make any judgment as to its trustworthiness. The links all appear to go to manufacturer websites and below that blue box, about two or three screens down, there are finally links to more neutral external websites. The end design goal seems to be "no one will ever click on an external search link ever again," and that would force a lot of sites to shut down.

    Built-in ChatGPT-driven Copilot will transform Windows 11 starting in June / ArsTechnica · Tuesday, 23 May - 17:08 · 1 minute

Windows Copilot is an AI-assisted feature coming to Windows 11 preview builds starting in June. (credit: Microsoft)

A couple of months ago, Microsoft added generative AI features to Windows 11 in the form of a taskbar-mounted version of the Bing chatbot. Starting this summer, the company will be going even further, adding a new ChatGPT-driven Copilot feature that can be used alongside your other Windows apps. The company announced the change at its Build developer conference alongside another new batch of Windows 11 updates due later this year. Windows Copilot will be available to Windows Insiders starting in June.

Like the Microsoft 365 Copilot, Windows Copilot is a separate window that opens up along the right side of your screen and assists with various tasks based on what you ask it to do. A Microsoft demo video shows Copilot changing Windows settings, rearranging windows with Snap Layouts, summarizing and rewriting documents that were dragged into it, and opening apps like Spotify, Adobe Express, and Teams. Copilot is launched with a dedicated button on the taskbar.

"Once open, the Windows Copilot side bar stays consistent across your apps, programs and windows, always available to act as your personal assistant. It makes every user a power user, helping you take action, customize your settings, and seamlessly connect across your favorite apps," wrote Microsoft Chief Product Officer Panos Panay.

Interesting essay on the poisoning of LLMs—ChatGPT in particular:

Given that we’ve known about model poisoning for years, and given the strong incentives the black-hat SEO crowd has to manipulate results, it’s entirely possible that bad actors have been poisoning ChatGPT for months. We don’t know because OpenAI doesn’t talk about their processes, how they validate the prompts they use for training, how they vet their training data set, or how they fine-tune ChatGPT. Their secrecy means we don’t know if ChatGPT has been safely managed.

They’ll also have to update their training data set at some point. They can’t leave their models stuck in 2021 forever.

Once they do update it, we only have their word—pinky-swear promises—that they've done a good enough job of filtering out keyword manipulations and other training data attacks, something that the AI researcher El Mahdi El Mhamdi posited is mathematically impossible in a paper he worked on while he was at Google.
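Keyword-style poisoning is easy to demonstrate in a toy setting. The sketch below is purely illustrative (the corpus and the brand names "acme" and "spamco" are invented, and this is nothing like how ChatGPT is actually trained): it fits a naive bigram frequency model, then shows how a flood of attacker-planted documents flips the model's preferred completion.

```python
from collections import Counter, defaultdict

def train_bigrams(docs):
    """Count word-pair frequencies across a corpus of documents."""
    model = defaultdict(Counter)
    for doc in docs:
        words = doc.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Return the highest-frequency continuation of a word, if any."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

clean_corpus = [
    "for sound quality buy acme speakers",
    "audiophiles buy acme speakers for clarity",
]

# An attacker floods the training set with copies of a planted phrase --
# the same incentive the black-hat SEO crowd has to game search rankings.
poison = ["buy spamco speakers"] * 10

clean_model = train_bigrams(clean_corpus)
poisoned_model = train_bigrams(clean_corpus + poison)
```

On the clean corpus, the most likely word after "buy" is "acme"; after poisoning, it becomes "spamco". Spotting such manipulations in web-scale training data, rather than in ten hand-written sentences, is the filtering problem the essay is worried about.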

    Credible Handwriting Machine / Schneier · Friday, 19 May - 20:19 · 1 minute

In case you don’t have enough to worry about, someone has built a credible handwriting machine:

This is still a work in progress, but the project seeks to solve one of the biggest problems with other homework machines, such as this one that I covered a few months ago after it blew up on social media. The problem with most homework machines is that they’re too perfect. Not only is their content output too well-written for most students, but they also have perfect grammar and punctuation, something even we professional writers fail to consistently achieve. Most importantly, the machine’s “handwriting” is too consistent. Humans always include small variations in their writing, no matter how honed their penmanship.

Devadath is on a quest to fix the issue with perfect penmanship by making his machine mimic human handwriting. Even better, it will reflect the handwriting of its specific user so that AI-written submissions match those written by the student themselves.

Like other machines, this starts with asking ChatGPT to write an essay based on the assignment prompt. That generates a chunk of text, which would normally be stylized with a script-style font and then output as g-code for a pen plotter. But instead, Devadath created custom software that records examples of the user’s own handwriting. The software then uses that as a font, with small random variations, to create a document image that looks like it was actually handwritten.

Watch the video.

My guess is that this is another detection/detection avoidance arms race.
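The pipeline the excerpt describes (LLM text, rendered with user-derived glyphs plus random variation, emitted as g-code for a pen plotter) can be sketched in a few lines. Everything below is illustrative, not the project's actual code: the glyph shapes are invented stand-ins for strokes a real system would capture from the user's handwriting, and the g-code uses a generic travel/draw and pen-up/pen-down convention.

```python
import random

# Toy glyph set: each character maps to a list of pen strokes, each
# stroke a list of (x, y) points in arbitrary units. A real system
# would record these from samples of the user's own handwriting.
GLYPHS = {
    "i": [[(0.0, 0.0), (0.0, 1.0)], [(0.0, 1.3), (0.0, 1.4)]],
    "h": [[(0.0, 0.0), (0.0, 1.5)], [(0.0, 0.7), (0.5, 0.7), (0.5, 0.0)]],
}

def jitter(point, amount, rng):
    """Offset a point by a small random amount to break up uniformity."""
    x, y = point
    return (x + rng.uniform(-amount, amount),
            y + rng.uniform(-amount, amount))

def text_to_gcode(text, rng=None, jitter_amount=0.05, advance=0.8):
    """Render text as naive pen-plotter g-code with per-point jitter."""
    rng = rng or random.Random()
    lines, cursor = [], 0.0
    for ch in text:
        for stroke in GLYPHS.get(ch, []):
            pts = [jitter((x + cursor, y), jitter_amount, rng)
                   for x, y in stroke]
            x0, y0 = pts[0]
            lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")    # travel, pen up
            lines.append("M3")                         # pen down
            for x, y in pts[1:]:
                lines.append(f"G1 X{x:.3f} Y{y:.3f}")  # draw
            lines.append("M5")                         # pen up
        cursor += advance
    return lines
```

Because every point is jittered independently, two renderings of the same text never match exactly, which is precisely the human-like inconsistency that defeats the "too perfect" tell.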

    “Meaningful harm” from AI necessary before regulation, says Microsoft exec / ArsTechnica · Thursday, 11 May - 19:48

As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."

The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." Reacting, another featured speaker, CNN anchor Zain Asher, stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"

"I would say yes," Schwarz said, likening regulating AI before "a little bit of harm" is caused to passing driver's license laws before people died in car accidents.

    Google’s ChatGPT-killer is now open to everyone, packing new features / ArsTechnica · Wednesday, 10 May - 20:16

At Wednesday's Google I/O conference, Google announced wide availability of its ChatGPT-like AI assistant, Bard, in over 180 countries with no waitlist. It also announced updates such as support for Japanese and Korean, visual responses to queries, integration with Google services, and add-ons that will extend Bard's capabilities.

Similar to how OpenAI upgraded ChatGPT with GPT-4 after its launch, Bard is getting an upgrade under the hood. Google says that some of Bard's recent enhancements are powered by Google's new PaLM 2, a family of foundational large language models (LLMs) that have enabled "advanced math and reasoning skills" and better coding capabilities. Previously, Bard used Google's LaMDA AI model.

Google plans to add Google Lens integration to Bard, which will allow users to include photos and images in their prompts. On the Bard demo page, Google shows an example of uploading a photo of dogs and asking Bard to “write a funny caption about these two.” Reportedly, Bard will analyze the photo, detect the dog breeds, and draft some amusing captions on demand.

    OpenAI gives in to Italy’s data privacy demands, ending ChatGPT ban / ArsTechnica · Monday, 1 May - 19:17

In March, an Italian privacy regulator temporarily banned OpenAI's ChatGPT, worried that the text generator had no age-verification controls or "legal basis" for gathering online user data to train the AI tool's algorithms. The regulator gave OpenAI until April 30 to fix these issues, and last Friday, OpenAI announced it had implemented many of the requested changes ahead of schedule. In a statement to the Associated Press, OpenAI confirmed Italy lifted the ban.

"ChatGPT is available again to our users in Italy," OpenAI's statement said. "We are excited to welcome them back, and we remain dedicated to protecting their privacy."

OpenAI made several concessions to the Italian Data Protection Authority to bring ChatGPT back to Italy, The Wall Street Journal reported.
