Melanie Perkins, the CEO of Canva, told The Verge what she doesn't want people to use Canva AI for.
  • Canva blocks its AI tool from generating content involving political candidates or medical terms.
  • The decision aims to prevent harmful or inappropriate content, CEO Melanie Perkins told The Verge.
  • Canva's AI policies appear more artist-friendly than those of Adobe and Meta, both of which have faced backlash.

Design juggernaut Canva has drawn hard lines around what its AI tool can and can't make.

Canva's AI feature, called Magic Media, doesn't work with medical or political terms, because such content may be harmful or inappropriate, CEO Melanie Perkins said in an interview with The Verge published on Monday. Canva's software can be used to create anything from party invitations to social media content to presentation templates.

"Canva has been designed to be a platform where you can come in and take your idea and turn it into a design, but there are certain things we shouldn't be generating," Perkins, who cofounded the 11-year-old company, said.

For example, Perkins said that if the tool is prompted to create images of political candidates, it will simply tell the user: "You can't do that."

Users can still create designs with political or health content on the platform on their own.

Canva also does not allow AI to be used for generating contracts, legal or financial advice, spam, or adult content, according to its AI product terms.

The company also has a clear policy on AI scraping. Canva does not train its AI on creators' content without permission, and users can opt out of their designs being used for AI training at any time, according to a company blog.

By default, all users are opted out of having their private design content used to train AI models, a Canva spokesperson told Business Insider.

Last year, the company created a $200 million fund to pay users who opt into AI training over the next three years.

Canva's stance on AI differs markedly from those of other content creation giants, Adobe and Meta, which have come under fire within the creative community in recent months.

Last month, Meta faced backlash from artists angered by the company's use of their public Instagram and Facebook photos to train its artificial intelligence models. Several artists told BI that they're moving to platforms like Cara that ban the use of AI. Meta did not respond to a request for comment at the time.

Around the same time, artists protested after Adobe asked users to re-accept its "Terms of Use," which led some to believe their art and content would be scraped to train AI. A wave of artists boycotted Adobe, boosting sign-ups for alternatives like Linearity and Affinity, which Canva acquired earlier this year.

At the time, Adobe said in a blog post that content belongs to users and it would never be used to train generative AI tools.

A spokesperson for Adobe referred BI to the company's AI guidelines, which direct users not to create hateful or adult content and not to seek medical advice from AI features. The guidelines do not say whether such content can be generated in the first place.
