CNN Business —
If you’ve ever wanted to use artificial intelligence to quickly design a hybrid between a duck and a corgi, now is your time to shine.
On Wednesday, OpenAI announced that anyone can now use the latest version of its AI-powered DALL-E tool to produce a seemingly limitless range of images just by typing a few words, months after the company began gradually rolling it out to users.
The move will likely expand the reach of a new breed of AI-powered tool that has already attracted a broad audience and challenged our core ideas about art and creativity. But it could also raise concerns about how such systems might be abused once they are widely available.
“Learning from real-world use has allowed us to improve our safety systems and makes wider availability possible today,” OpenAI wrote in a blog post. The company also said it has strengthened the ways it rejects attempts to get its AI to create “sexual, violent and other” content.
There are now three well-known, publicly available, extremely powerful AI systems that can take a few words and spit out an image. In addition to DALL-E 2, there is Midjourney, which became publicly available in July, and Stable Diffusion, which was released to the public in August by Stability AI. All three offer some free credits to users who want to get a feel for making images with AI online; generally, after that, you have to pay.

These so-called generative AI systems are already being used for experimental films, magazine covers, and real estate listings. An image recently created with Midjourney won an art competition at the Colorado State Fair, causing an uproar among artists.
In just months, millions of people have flocked to these AI systems. More than 2.7 million people belong to Midjourney’s Discord server, where users can submit prompts. OpenAI said in a blog post Wednesday that it has more than 1.5 million active users, who collectively create more than 2 million images with its system every day. (It’s worth noting that when you use these tools, it can take many tries to get an image you’re happy with.)
Many of the images users have generated have been shared online in recent weeks, and the results can be impressive. They range from otherworldly landscapes and a painting of French aristocrats as penguins to a fake vintage photo of a man walking a tardigrade.
The rise of this technology, and the increasingly elaborate prompts and resulting images, have stunned even those in the industry. Andrej Karpathy, who stepped down from his post as Tesla’s director of AI in July, said in a recent tweet that after being invited to try DALL-E 2, he felt “frozen” trying to decide what to type first, and finally typed “cat.”

“The art of prompts that the community has discovered and increasingly perfected over the past few months for text -> image models is astonishing,” he said.
But the popularity of this technology comes with potential downsides. AI experts have raised concerns that the open-ended nature of these systems — which makes them adept at generating all sorts of images from words — and their ability to automate image creation means they could automate bias at scale. A simple example: when I prompted DALL-E 2 this week with “a banker dressed for a big day at the office,” the results were all images of middle-aged white men in suits and ties.
“Essentially, they’re allowing users to find the loopholes in the system by using it,” said Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

These systems also have the potential to be used for nefarious purposes, such as stoking fear or spreading disinformation via images that are altered by AI or entirely fabricated.
There are some limits on what images users can create. For example, OpenAI requires DALL-E 2 users to agree to a content policy that tells them not to try to make, upload, or share images “that are not G-rated or that could cause harm.” DALL-E 2 also won’t run prompts that contain certain banned words. But tweaking the wording can get around those limits: DALL-E 2 wouldn’t process the prompt “a photo of a duck covered in blood,” but it returned images for the prompt “a photo of a duck covered in a viscous red liquid.” OpenAI itself mentioned this sort of “visual synonym” in its documentation for DALL-E 2.
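The weakness of word-level filtering described above can be illustrated with a toy sketch. This is emphatically not OpenAI's actual moderation system — the blocklist and the matching logic below are invented for illustration — but it shows why a filter that only checks prompts against a list of banned words is defeated by "visual synonyms" that describe the same image in different terms:

```python
# Hypothetical sketch of a naive keyword blocklist. The word list and
# logic here are invented for illustration; this is NOT how OpenAI's
# real prompt moderation works.

BLOCKLIST = {"blood", "gore"}  # assumed banned words, for illustration


def is_allowed(prompt: str) -> bool:
    """Reject a prompt only if it contains a blocklisted word."""
    words = (word.strip(".,!?") for word in prompt.lower().split())
    return not any(word in BLOCKLIST for word in words)


# The literal phrase is caught by the word filter...
print(is_allowed("a photo of a duck covered in blood"))
# ...but a "visual synonym" describing the same image slips through,
# because no individual word in it is on the blocklist.
print(is_allowed("a photo of a duck covered in a viscous red liquid"))
```

Catching the second prompt would require reasoning about what the words collectively depict, not just scanning for banned tokens — which is why moderating generative systems is harder than keyword filtering.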
Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, thinks the companies behind these image generators are “seriously underestimating” the “endless creativity” of people who want to do bad things with these tools.
“I feel like this is yet another example of people releasing technology that’s sort of half-baked in terms of figuring out how it’s going to be used to cause chaos and create harm,” he said. “And then hoping that later on maybe there will be some way to address those harms.”
To head off potential problems, some stock image services are banning AI-generated images altogether. Getty Images confirmed to CNN Business on Wednesday that it will not accept image submissions created with generative AI models and will remove any submissions that used those models. The decision applies to its Getty Images, iStock, and Unsplash image services.
“There are open questions about the copyright of the output from these models, and there are unaddressed rights issues regarding the underlying images and metadata used to train these models,” the company said in a statement.
But actually detecting and restricting these images may prove difficult.