Mark Zuckerberg, CEO of Meta, testifies during a Senate Judiciary Committee hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis."
  • Meta CEO Mark Zuckerberg has promised that AI will revolutionize the company's ad services.
  • But Meta's use of AI for moderating ads may already be putting the company in hot water.
  • A bipartisan group of lawmakers accused Meta of allowing ads that promote the sale of illicit drugs.

During July's earnings call, Meta CEO Mark Zuckerberg laid out a vision for his company's valuable ad services once they are further bolstered by artificial intelligence.

"In the coming years," he said, "AI will be able to generate creative for advertisers as well and will also be able to personalize it as people see it."

But as the trillion-dollar company hopes to revolutionize its ad tech, Meta's use of AI may already have put it in the hot seat.

On Thursday, a bipartisan group of lawmakers, led by Republican Rep. Tim Walberg of Michigan and Democratic Rep. Kathy Castor of Florida, sent a letter to Zuckerberg demanding the CEO answer questions about Meta's advertising services.

The letter follows a March Wall Street Journal report revealing that federal prosecutors are probing the company over its role in the illicit sale of drugs on its platforms.

"Meta appears to have continued to shirk its social responsibility and defy its own community guidelines," the letter said. "Protecting users online, especially children and teenagers, is one of our top priorities. We are continuously concerned that Meta is not up to the task and this dereliction of duty needs to be addressed."

Zuckerberg has already faced senators who grilled him about safety measures for children who use Meta's social media sites. During that Senate hearing, Zuckerberg stood up and apologized to families who felt that social media use had harmed their kids.

In July, the Tech Transparency Project, a nonprofit watchdog group, reported that Meta continued to make money from hundreds of ads promoting the sale of illegal or recreational drugs, including cocaine and opioids, which Meta's advertising policies prohibit.

"Many of the ads made no secret of their intentions, showing photos of prescription drug bottles, piles of pills and powders, or bricks of cocaine, and encouraging users to place orders," the watchdog group wrote.

"Our systems are designed to proactively detect and enforce against violating content, and we reject hundreds of thousands of ads for violating our drug policies," a Meta spokesperson told Business Insider, reiterating a statement shared with the Journal. "We continue to invest resources and further improve our enforcement on this kind of content. Our hearts go out to those suffering from the tragic consequences of this epidemic — it requires all of us to work together to stop it."

The spokesperson did not address how Meta uses AI to moderate ads.

Ads poke holes in Meta's AI system

Meta's exact process for approving and moderating ads is not public.

What is known, as the Journal reported, is that the company relies in part on artificial intelligence to screen content. According to the outlet, displaying the drugs in photos rather than describing them in text may allow ads to slip past Meta's moderation system.

Here's what Meta has revealed about its "ad review system":

"Our ad review system relies primarily on automated technology to apply the Advertising Standards to the millions of ads that are run across Meta technologies. However, we do use human reviewers to improve and train our automated systems, and in some cases, to manually review ads."

The company also said it's continuously working to automate the review process further to rely less on humans.
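Meta has not disclosed how this pipeline works internally. As a rough illustration only, the sketch below shows one common way an automated-first review flow with a human-review fallback can be structured; the class, function names, scoring logic, and thresholds are hypothetical assumptions, not Meta's actual system.

```python
# Hypothetical sketch of an automated-first ad review flow with a human
# fallback. All names, labels, and thresholds are illustrative assumptions,
# not Meta's implementation.
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    text: str
    image_labels: list[str]  # labels an upstream image classifier might emit

BANNED_TERMS = {"cocaine", "opioids", "oxycodone"}
BANNED_IMAGE_LABELS = {"pill_bottle", "powder", "drug_paraphernalia"}

def policy_score(ad: Ad) -> float:
    """Return a rough 0-1 'likely violating' score from text and image signals."""
    text_hit = any(term in ad.text.lower() for term in BANNED_TERMS)
    image_hit = any(label in BANNED_IMAGE_LABELS for label in ad.image_labels)
    return 0.9 if (text_hit or image_hit) else 0.1

def review(ad: Ad) -> str:
    """Auto-reject clear violations, escalate uncertain cases, approve the rest."""
    score = policy_score(ad)
    if score >= 0.8:
        return "reject"
    if score >= 0.4:  # uncertain cases go to manual review
        return "human_review"
    return "approve"

# An ad showing drugs only in its image, with innocuous text, sails through
# if the image classifier emits no matching labels, which is the kind of gap
# the Journal's reporting describes.
print(review(Ad("123", "Fast discreet shipping", image_labels=[])))  # -> approve
```

The example also illustrates why such systems are only as good as their signals: if the image side of the pipeline misses or mislabels a photo, the text-only checks never fire.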

But the drug ads on Meta's platforms show that policy-violating content can still slip through its automated system, even as Zuckerberg paints a picture of a sophisticated ad service that promises improved targeting and uses generative AI to create content for advertisers.

Meta's bumpy AI rollout

Meta has experienced a bumpy rollout of its AI-powered services outside ad tech.

Less than a year after Meta introduced celebrity AI assistants, the company discontinued the product and focused on allowing users to create their own AI bots.

Meta also continues to work out kinks in Meta AI, the company's chatbot and AI assistant, which has been shown to hallucinate answers and, in the case of BI's Rob Price, to impersonate him and hand out his phone number to strangers.

The technical and ethical issues that pervade AI products — not just Meta's — concern many top US companies.

A survey by Arize AI, which conducts research on AI technology, found that 56% of Fortune 500 companies view AI as a "risk factor," the Financial Times reported.

Broken down by industry, 86% of technology companies, including Salesforce, said AI presents a business risk, according to the report.

Those concerns, however, conflict with tech companies' evident push to build AI into every corner of their products, even as the path to profitability remains murky.

"There are significant risks involved in developing and deploying AI," Meta said in a 2023 annual report, "and there can be no assurance that the usage of AI will enhance our products or services or be beneficial to our business, including our efficiency or profitability."
