
    Fake Pentagon “explosion” photo sows confusion on Twitter

    news.movim.eu / ArsTechnica · Tuesday, 23 May - 21:01 · 1 minute

A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon led to brief confusion, including a reported small dip in the stock market. It originated from a verified Twitter account named "Bloomberg Feed," which is unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. However, before it was debunked, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

The fake image depicted a large plume of black smoke alongside a building vaguely reminiscent of the Pentagon, accompanied by the tweet "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report." Local authorities quickly confirmed that the image was not an accurate representation of the Pentagon, and with its blurry fence bars and building columns, it looks like a fairly sloppy AI-generated image created by a model like Stable Diffusion.

Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or what motivated them to share the false image. In addition to Bloomberg Feed, other accounts that shared the false report include “Walter Bloomberg” and “Breaking Market News,” both unaffiliated with the real Bloomberg organization.



    Artists astound with AI-generated film stills from a parallel universe

    news.movim.eu / ArsTechnica · Friday, 7 April - 22:49

An AI-generated image from an #aicinema still series called "Vinyl Vengeance" by Julie Wieland, created using Midjourney. (credit: Julie Wieland / Midjourney)

Since last year, a group of artists has been using an AI image generator called Midjourney to create still photos of films that don't exist. They call the trend "AI cinema." We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls "synthography," short for synthetic photography.

The origins of “AI cinema” as a still image art form

Last year, image synthesis models like DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a "prompt") to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and run with them.

While anyone with a prompt can make an AI-generated image, it soon became clear that some people possessed a special talent for finessing these new AI tools to produce better content. As with painting or photography, the human creative spark is still necessary to produce notable results consistently.
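
For readers who haven't tried these tools, here is a minimal sketch of that prompt-to-image workflow using the open source Stable Diffusion model through the Hugging Face diffusers library. The library, checkpoint name, and prompt are illustrative choices, not something the artists in this piece are known to use:

import torch
from diffusers import StableDiffusionPipeline

# Download the Stable Diffusion v1.5 weights and move the pipeline to a GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A plain-language prompt in, a still image out.
prompt = "film still of a detective in a neon-lit diner, 35mm, shallow depth of field"
image = pipe(prompt).images[0]
image.save("film_still.png")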



    Yes, Virginia, there is AI joy in seeing fake Will Smith ravenously eat spaghetti

    news.movim.eu / ArsTechnica · Thursday, 30 March - 21:02

Stills from an AI-generated video of Will Smith eating spaghetti that has been heating up the Internet. (credit: chaindrop / Reddit)

Amid this past week's controversies in AI over regulation, fears of world-ending doom, and job disruption, the clouds have briefly parted. For a brief and shining moment, we can enjoy an absolutely ridiculous AI-generated video of Will Smith eating spaghetti that is now lighting up our lives with its terrible glory.

On Monday, a Reddit user named "chaindrop" shared the AI-generated video on the r/StableDiffusion subreddit. It quickly spread to other forms of social media and inspired mixed ruminations in the press. For example, Vice said the video will "haunt you for the rest of your life," while the AV Club called it the "natural end point for AI development."

We're somewhere in between. The 20-second silent video consists of 10 independently generated two-second segments stitched together. Each one shows different angles of a simulated Will Smith (at one point, even two Will Smiths) ravenously gobbling up spaghetti. It's entirely computer-generated, thanks to AI.
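
The article doesn't say how those segments were combined, but as a rough illustration of that kind of stitching, here is a minimal sketch using the moviepy library (an assumed tool choice, not necessarily what chaindrop used) to concatenate ten two-second clips into one silent video:

from moviepy.editor import VideoFileClip, concatenate_videoclips

# Load ten independently generated two-second segments (hypothetical filenames).
clips = [VideoFileClip(f"segment_{i:02d}.mp4") for i in range(10)]

# Join them end to end and write out a single ~20-second silent video.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("will_smith_spaghetti.mp4", fps=24, audio=False)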



    Ethical AI art generation? Adobe Firefly may be the answer

    news.movim.eu / ArsTechnica · Wednesday, 22 March - 17:27 · 1 minute

An Adobe Firefly AI image generator example. (credit: Adobe)

On Tuesday, Adobe unveiled Firefly, its new AI image synthesis generator. Unlike other AI art models such as Stable Diffusion and DALL-E, Adobe says its Firefly engine, which can generate new images from text descriptions, has been trained solely on legal and ethical sources, making its output clear for use by commercial artists. It will be integrated directly into Creative Cloud, but for now, it is only available as a beta.

Since the mainstream debut of image synthesis models last year, the field has been fraught with issues around ethics and copyright. For example, the AI art generator called Stable Diffusion gained its ability to generate images from text descriptions after researchers trained an AI model to analyze hundreds of millions of images scraped from the Internet. Many (probably most) of those images were copyrighted and obtained without the consent of their rights holders, which led to lawsuits and protests from artists.

To avoid those legal and ethical issues, Adobe created an AI art generator trained solely on Adobe Stock images, openly licensed content, and public domain content, ensuring the generated content is safe for commercial use. Adobe goes into more detail in its news release:



    Making faces: How to train an AI on your face to create silly portraits

    news.movim.eu / ArsTechnica · Wednesday, 22 March - 11:30 · 1 minute

Ever want to be a superhero? We'll show you how. (credit: Shaun Hutchinson | Aurich Lawson | Stable Diffusion)

By now, you've read a lot about generative AI technologies such as Midjourney and Stable Diffusion, which translate text input into images in seconds. If you're anything like me, you immediately wondered how you could use that technology to slap your face onto the Mona Lisa or Captain America. After all, who doesn’t want to be America’s ass?

I have a long history of putting my face on things. Previously, doing so was a painstaking process of finding or taking a picture with the right angle and expression and then using Photoshop to graft my face onto the original. While I considered the results demented yet worthwhile, the process required a lot of time. But with Stable Diffusion and Dreambooth, I’m now able to train a model on my face and then paste it onto anything my strange heart desires.

In this walkthrough, I'll show you how to install Stable Diffusion locally on your computer, train Dreambooth on your face, and generate so many pictures of yourself that your friends and family will eventually block you to stop the deluge of silly photos. The entire process will take about two hours from start to finish, with the bulk of the time spent babysitting a Google Colab notebook while it trains on your images.
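
The step-by-step instructions are in the rest of the article; as a rough sketch of what the final generation step can look like once Dreambooth training has finished, here is a short diffusers example. The local checkpoint path, the "sks" identifier token, and the prompt are hypothetical placeholders, not the article's exact setup:

import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion checkpoint that Dreambooth fine-tuned on photos of
# your face ("./dreambooth-output" stands in for wherever training saved it).
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# "sks person" is a rare identifier token often used during Dreambooth training
# to stand in for the subject; use whatever token you trained with.
prompt = "a photo of sks person as a superhero, comic book style, dramatic lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("superhero_me.png")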



    AI imager Midjourney v5 stuns with photorealistic images—and 5-fingered hands

    news.movim.eu / ArsTechnica · Thursday, 16 March - 20:50

An example of lighting and skin effects in the AI image generator Midjourney v5. (credit: Julie W. Design)

On Wednesday, Midjourney announced version 5 of its commercial AI image synthesis service, which can produce photorealistic images at a quality level that some AI art fans are calling creepy and "too perfect." Midjourney v5 is available now as an alpha test for customers who subscribe to the Midjourney service, which operates through Discord.

"MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long," said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. "Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing."

Wieland shared some of her Midjourney v5 generations with Ars Technica (seen below in a gallery and in the main image above), and they certainly show a progression in image detail since Midjourney first arrived in March 2022. Version 3 debuted in August, and version 4 debuted in November. Each iteration added more detail to the generated results, as our experiments show:
