AI-generated art is already transforming creative work

For years, the conventional wisdom among Silicon Valley futurists was that artificial intelligence and automation spelled doom for workers whose jobs involved repetitive manual labor. Truck drivers, cashiers and warehouse workers would all lose their jobs to robots, they said, while workers in creative fields such as art, entertainment and media would be safe.

Well, an unexpected thing happened recently: AI entered the creative class.

In recent months, AI-based image generators such as DALL-E 2, Midjourney and Stable Diffusion have made it possible for anyone to create unique, hyper-realistic images just by typing a few words into a text box.

Although these apps are new, they are already astonishingly popular. DALL-E 2, for example, has more than 1.5 million users who generate more than two million images every day, while Midjourney’s official Discord server has more than three million members.

These programs use what’s known as “generative AI,” a type of AI that became popular several years ago with the release of text-generating tools like GPT-3, but has since expanded to include images, audio, and video.

It’s still too early to tell if this new wave of apps will end up costing artists and illustrators their jobs. What seems clear, however, is that these tools are already being adopted in creative industries.

I recently spoke with five creative professionals about how they use AI-generated art in their jobs.

Collin Waldoch, 29, a game designer from Brooklyn, recently started using generative AI to create custom art for his online game, Twofer Goofer, which works a bit like a rhyming version of Wordle. Each day, players are given a clue — such as “a set of rhythmic movements while in a semi-conscious state” — and tasked with coming up with a pair of rhyming words that match the clue. (In this case, “trance dance.”)

Initially, Mr. Waldoch planned to hire human artists through the gig work platform Upwork to illustrate each day’s rhyming word pairs. But when he saw the costs — between $50 and $60 per image, plus time for rounds of feedback and edits — he decided to try using AI instead. He plugged word pairs into Midjourney and DreamStudio, an app based on Stable Diffusion, and tweaked the results until they looked right. Total cost: a few minutes of work, plus a few pennies. (DreamStudio costs about one cent per image; Midjourney’s standard membership costs $30 per month for unlimited images.)

“I typed in ‘carrot parrot’ and it spat back a perfect picture of a parrot made of carrots,” he said. “It was the immediate ‘aha’ moment.”

Mr. Waldoch said he didn’t feel guilty about using AI instead of hiring human artists, because human artists were too expensive to make the game worthwhile.

“We wouldn’t have done this” if not for AI, he said.

Isabella Orsi, 24, an interior designer in San Francisco, recently used a generative AI app called InteriorAI to create a mock-up for a client.

The client, a technology start-up, was looking to refurbish its office. Ms. Orsi uploaded images of the client’s office to InteriorAI, then applied a “cyberpunk” filter. The app produced new renderings in seconds – showing how the office entryway would look with colored lighting, contoured furniture and a new set of shelves.

Ms. Orsi believes that rather than replacing interior designers entirely, generative AI will help them come up with ideas in the initial stages of a project.

“I think there’s an element of good design that requires the empathetic touch of a human being,” she said. “So I don’t feel like it will take away my job. Someone has to distinguish between the different renderings, and at the end of the day, I think it needs a designer.”

Patrick Clair, 40, a filmmaker in Sydney, Australia, started using AI-generated art this year to help him prepare a presentation for a movie studio.

Mr. Clair, who has worked on hit shows including “Westworld,” was looking for an image of a certain type of marble statue. But when he looked at Getty Images – his usual source for concept art – he came up empty. Instead, he turned to DALL-E 2.

“I put ‘marble statue’ into DALL-E, and it was closer than I could get at the Getty in five minutes,” Mr. Clair said.

Since then, he has used DALL-E 2 to generate images that are not readily available from online sources, such as a Melbourne tram in a dust storm.

He predicted that rather than replacing concept artists or putting Hollywood special effects wizards out of a job, AI image generators would simply become part of every filmmaker’s toolkit.

“It’s like working with a really conscious concept artist,” he said.

“Photoshop can do things you can’t do with your hands, the same way a calculator can crunch numbers in a way you can’t in your brain, but Photoshop never surprises you,” he continued. “Whereas DALL-E surprises you, and comes back with things that are genuinely creative.”

During a recent creative brainstorming session, Jason Carmel, 49, an executive at New York ad agency Wunderman Thompson, wondered if AI could help.

“We had three and a half good ideas,” he said of his team. “And the fourth just lacked a visual way to describe it.”

The image they wanted—a group of dogs playing poker, for an ad pitched to a pet medicine company—would have taken an artist all day to sketch. Instead, they asked DALL-E 2 to generate it.

“We thought, what if we could show what the dogs playing poker looked like?” Mr. Carmel said.

The resulting image didn’t end up in an ad, but Mr. Carmel predicts that generative AI will become part of every ad agency’s creative process. However, he does not believe it will meaningfully speed up the work of agencies, or replace their art departments. He said that many of the images generated by AI were not good enough to be shown to clients, and that users without much experience with these apps would likely waste a lot of time trying to formulate the right prompts.

“When I see people writing about how it’s going to destroy creativity, they talk about it like it’s an efficiency game,” Mr. Carmel said. “And then I know that maybe they haven’t played with it that much themselves, because it’s a time sink.”

Sarah Drummond, a service designer in London, started using AI-generated images a few months ago to replace the black-and-white sketches she made for her job. These were usually basic drawings that visually represented processes for which she was trying to design improvements, such as a group of customers lining up at a store’s cash register.

Instead of spending hours creating what she called “blob drawings” by hand, Ms. Drummond, 36, now types what she wants into DALL-E 2 or Midjourney.

“Suddenly, I can take about 15 seconds and say, ‘Woman at checkout, standing at kiosk, black-and-white illustration,’ and get something back that looks really professional,” she said.

Ms. Drummond acknowledged that AI image generators had limitations. They are not good at more complex sketches, for example, or at creating multiple images of the same character. And like the other creative professionals, she said she didn’t think AI image generators would directly replace human illustrators.

“Would I use it for final output? No. I would hire someone to do what we wanted to achieve,” she said. “But the casting work you do when you’re any kind of designer, whether it’s visual, architectural, urban planning – you sketch, sketch, sketch. And so this is a sketch tool.”
