Instagram’s Efforts to Label AI-Generated Content

By Brad Anderson
Published on August 3, 2023

Artificial intelligence (AI) is playing a larger role in, and disrupting, many markets. Social media platforms in particular have adopted AI tools to better serve their users. Instagram, one of the most popular social media sites, is currently developing new notices to label posts that were created with AI. The goal of the change is greater openness about the use of AI in producing the media people consume.

In order to inform its users when artificial intelligence has been used in the production of content, Instagram is currently working on a labeling system. App researcher Alessandro Paluzzi shared a screenshot from within the Instagram app showing a page with a notice reading, “the creator or Meta said that this content was created or edited with AI.” The notice emphasizes that the content at issue was “generated by Meta AI,” and the page continues with a brief explanation of generative AI and some tips for spotting content that makes use of AI.

Meta, Instagram’s parent company, and other major AI players like Google, Microsoft, and OpenAI have recently made commitments that are consistent with the introduction of this labeling system. These pledges were developed in tandem with the White House and are geared toward ensuring the ethical growth of AI. Examples of these efforts include funding research on cybersecurity and discrimination, as well as creating a watermarking system to alert audiences to the presence of AI-generated content.
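The pledges don’t spell out how such a watermark would work in practice. One simple approach, sketched below in Python, is to record provenance in an image’s metadata; the “ai_generated” key and helper functions here are illustrative assumptions, not Meta’s or Google’s actual scheme.

```python
# A minimal sketch of metadata-based AI provenance, assuming a hypothetical
# "ai_generated" text key; this is not any platform's real implementation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed an AI-provenance flag in a PNG's text metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical key, not a standard
    meta.add_text("generator", generator)   # e.g. the model that produced it
    img.save(dst_path, pnginfo=meta)

def is_tagged_ai_generated(path: str) -> bool:
    """Check the flag when deciding whether to show an 'AI content' label."""
    info = Image.open(path).info            # PNG text chunks surface in .info
    return info.get("ai_generated") == "true"
```

A platform could run a check like `is_tagged_ai_generated()` at upload time and attach a notice to the post whenever it returns true, falling back on creator self-disclosure for media that carries no tag. Because metadata like this is trivially stripped, production proposals favor more robust mechanisms such as pixel-level watermarks or signed content credentials like C2PA.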

The exact level of automation in Instagram’s labeling system is still unknown, but since the phrase “Meta said” appears in the label, it’s safe to assume the company will proactively apply the notice in some cases rather than relying solely on user disclosures. A Meta representative declined to comment on the notice, but the feature suggests the platform is committing to openness and responsibility with respect to AI-generated content.

Concerns about the potential spread of misinformation have been raised in light of the emergence of AI-generated content. While the technology is still in its infancy, events like the viral image of Pope Francis wearing a swagged-out puffy jacket earlier this year are a reminder of its potential ramifications. That image, which was later debunked, highlighted the ease with which AI can be used to create and spread misleading or harmful material. The danger extends beyond trivial cases like the pope’s jacket to deepfakes of satellite images or political photographs.

Meta has recently made strides in improving its artificial intelligence capabilities, including open-sourcing its large language model Llama 2. Generative AI features for public-facing services like Instagram are still in development, but evidence of them has begun to emerge. Mark Zuckerberg, Meta’s CEO, told employees at a June all-hands meeting that the company was building features such as text prompts for editing photos in Instagram Stories. Paluzzi also found evidence of an “AI brush” feature for Instagram, which would allow users to edit individual pixels in photos. In addition, The Financial Times reported that Meta could add a “personas” feature to its AI chatbots as soon as next month.

Google, another industry leader in AI, has also come to see the value in informing consumers about AI-created content. The company has announced an “About this image” feature for its image search, set to roll out this summer, that will make it easier to tell whether an image is the product of artificial intelligence. By revealing where Google first indexed an image, the feature provides valuable context and insight into its authenticity.

The tech industry as a whole is beginning to recognize the importance of transparency and accountability, as evidenced by Instagram’s and Google’s efforts to label and identify AI-generated content. As AI develops and becomes more integrated into our daily lives, users need to know when and how it is used in content creation. By providing clear labels, these systems help users make educated choices about the content they interact with.

Instagram’s decision to label AI-generated content is a positive step toward greater transparency and accountability in an era when AI technologies are increasingly shaping our digital experiences. Instagram’s goal is to give users more agency over the content they consume by notifying them when AI has been used in the production of that content. Together, this change and Google’s work to identify AI-generated images indicate a growing awareness of the need for openness in the AI age. Platforms and businesses must prioritize the ethical and responsible use of AI as these technologies advance in order to keep users’ trust and make the internet a safer place for everyone.

First reported on The Verge

Frequently Asked Questions

What is Instagram’s goal with the development of a new labeling system?

Instagram’s goal with the new labeling system is to be more open and honest about the use of artificial intelligence (AI) in the production of content on its platform. The labeling system will inform users when AI has been used in creating or editing a post.

Who is developing the labeling system for AI-generated content on Instagram?

Instagram’s parent company, Meta, is developing the labeling system for AI-generated content on the platform.

What will the label on AI-generated content say?

The label on AI-generated content will read, “the creator or Meta said that this content was created or edited with AI.” It emphasizes that the content was “generated by Meta AI.”

Why is the introduction of the labeling system important?

The introduction of the labeling system is important to address concerns about the potential spread of misinformation and deepfakes through AI-generated content. It aims to provide users with more context and transparency about the content they consume.

What are some examples of efforts made by major AI players regarding the ethical growth of AI?

Major AI players like Meta, Google, Microsoft, and OpenAI have made commitments in partnership with the White House to ensure ethical growth in AI. These efforts include funding studies on cybersecurity and discrimination, as well as creating a watermarking system to alert audiences to AI-generated content.

How is Meta applying the notice on AI-generated content?

While the exact level of automation in Instagram’s labeling system is not known, the presence of the phrase “Meta said” in the label suggests that Meta may proactively apply the notice in some cases, in addition to relying on user disclosures.

What other AI features are being developed for Instagram?

Meta is developing other AI features for Instagram, including text prompts to edit photos for Instagram Stories and an “AI brush” feature that allows users to edit individual pixels in photos.

How is Google addressing AI-generated content?

Google is also working on informing consumers about content created by AI. It is set to release an “About this image” feature for its image search, making it easier for users to determine if an image is the product of AI or not.

Why is transparency important in the AI age?

Transparency is crucial in the AI age to give users more agency over the content they interact with and to help them make educated choices about the information they consume. It fosters trust and ensures the responsible use of AI technologies.

What is the broader impact of Instagram’s decision to label AI-generated content?

Instagram’s decision to label AI-generated content is a positive step toward greater transparency and accountability in the use of AI technologies. It reflects a growing awareness in the tech industry of the need for openness and responsibility in the AI age to create a safer online environment for everyone.

Originally published on ReadWrite.


Brad Anderson is a syndicate partner and columnist at Grit Daily. He serves as Editor-In-Chief at ReadWrite, where he oversees contributed content. He previously worked as an editor at PayPal and Crunchbase.
