Instagram head Adam Mosseri said in a series of social media posts that users should stay vigilant about the images they see online, especially AI-generated content that can easily be mistaken for reality. Mosseri stressed that AI tools have become far better at producing realistic content, so users should treat such material with caution and consider its source, and social platforms have a responsibility to help them do so.


He wrote, "As internet platforms, our duty is to label AI-generated content as much as possible." However, Mosseri also acknowledged that due to technological limitations, these labels sometimes miss certain content. Therefore, platforms should also provide background information about the sharers to help users assess the credibility of the information.

Mosseri further noted that, just as users should remember that AI chatbots and search engines can confidently give wrong answers, checking whether an image or claim comes from a trustworthy account is an important way to gauge its authenticity. Meta's platforms do not currently offer the kind of contextual information Mosseri described, although the company has recently hinted at significant changes to its content rules.

The system he described sounds more like a user-driven review mechanism, along the lines of Community Notes on X and YouTube or Bluesky's custom moderation features. There is no clear indication that Meta will introduce similar functionality, but it is worth noting that Meta has recently borrowed ideas from Bluesky and may make corresponding changes in the future.