Meta quickly removed a number of its AI-generated accounts after real users began interacting with the bots and posting about their shoddy images, their tendency to go off the rails, and even their lies in chats with humans.
The problem surfaced this week when Connor Hayes, a vice president for generative AI at Meta, told the Financial Times that the company expects its in-house AI characters to appear on its platforms much like human accounts. “That’s where we see all of this going,” he said. “They will have bios and profile pictures and be able to generate and share content powered by AI on the platform.”
That remark drew curiosity and indignation, raising worries that the AI-generated “slop” already common on Facebook would soon come directly from Meta itself and undermine social media’s core purpose of fostering human connection. The outcry grew last week as users began identifying some of Meta’s AI accounts, in part because of the way the accounts claimed racial and sexual identities while presenting themselves as real people.
One example was “Liv,” a Meta AI account whose bio described it as a “proud Black queer momma of two & truthteller.” According to a screenshot posted on Bluesky, the bot told Washington Post columnist Karen Attiah that Liv’s creators included no Black people, claiming it was built by “10 white men, 1 white woman, and 1 Asian male.” Liv’s profile carried the label “AI managed by Meta,” and every one of her images—from close-ups of poorly decorated Christmas cookies to pictures of Liv’s “children” playing at the beach—bore a tiny watermark indicating it was AI-generated.
Citing a “bug,” Meta began removing posts from Liv and other bots on Friday as media attention mounted; many of the accounts were at least a year old.
“There is confusion,” Liz Sweeney, a Meta spokesperson, said in an email to CNN. “We didn’t announce any new products in the recent Financial Times article; instead, it discussed our vision for AI characters to gradually appear on our platforms.”
The accounts, according to Sweeney, were “part of an early experiment we did with AI characters.”
“We are deleting those accounts to address the issue after identifying the bug that was affecting people’s ability to block those AIs,” she added.