Meta Collaborates With Industry Partners To Minimize Risks of Generative AI

Meta is collaborating with several other tech companies to minimize the misuse of generative AI, particularly for child exploitation. Find out more about the initiative.

April 24, 2024

  • Meta is collaborating with tech leaders in an initiative that aims to mitigate the use of generative AI in child exploitation.
  • The initiative will include actions such as the responsible sourcing of datasets and supporting developer ownership of products to improve safety.

Meta announced plans to collaborate with several tech companies to establish a framework of principles for mitigating risks arising from the development of generative AI. The tech giant is partnering with nonprofit organizations such as All Tech Is Human and Thorn in a bid to curb the use of GenAI tools for child exploitation, including during product development processes.

Meta previously entered a similar partnership with IBM and other tech companies, the AI Alliance, to encourage the development of responsible, open, and safe AI tools. That initiative also promoted open-source products and generated educational resources for policymakers and the public.

See More: Netherlands Government Might Stop Facebook Usage Over Privacy Risks

Meta’s latest effort, however, focuses largely on preventing child exploitation. To that end, the partnership will create guidelines for AI developers covering the development, deployment, and maintenance phases of a product’s life cycle.

The guidelines include responsibly sourcing the datasets used to train AI models so that child sexual abuse and exploitation material is excluded. Meta is also exploring stress testing and content provenance solutions that make it easier to identify AI-generated content.

Other measures include responsibly hosting first-party AI models, supporting developer ownership, and creating safeguards against malicious content and user conduct. The collaboration will also work to develop technical solutions that prevent malicious AI applications and remove exploitative content from major platforms.

The development comes as Meta has repeatedly come under fire from government bodies over instances of child exploitation, mental health harms, and rights violations on its platforms. The move may be necessary for the social media company to keep pace with the increasingly sophisticated methods bad actors have adopted in recent months.

What do you think about collaborative efforts in AI development? Let us know your thoughts on LinkedIn, X, or Facebook. We’d love to hear from you!

Image source: Shutterstock


Anuj Mudaliar

Assistant Editor - Tech, SWZD

Anuj Mudaliar is a content development professional with a keen interest in emerging technologies, particularly advances in AI. As a tech editor for Spiceworks, Anuj covers many topics, including cloud, cybersecurity, emerging tech innovation, AI, and hardware. When not at work, he spends his time outdoors: trekking, camping, and stargazing. He is also interested in cooking and experiencing cuisine from around the world.