Meta Collaborates With Industry Partners To Minimize Risks of Generative AI
Meta is collaborating with several other tech companies to minimize the misuse of generative AI, particularly its use in child exploitation. Find out more about the initiative.
- Meta is collaborating with tech leaders in an initiative that aims to prevent the misuse of generative AI for child exploitation.
- The initiative will include actions such as the responsible sourcing of training datasets and support for developer ownership of product safety.
Meta announced plans to collaborate with several tech companies to establish a framework of principles for mitigating risks arising from the development of generative AI. The tech giant is partnering with nonprofit organizations such as All Tech Is Human and Thorn in a bid to curb the use of GenAI tools for child exploitation, including by building safeguards into product development processes.
Meta previously entered a similar partnership, the AI Alliance, with IBM and other tech companies to encourage the development of responsible, open, and safe AI tools. That initiative also encouraged open-source products and generated educational resources that could benefit policymakers and the public.
Meta’s latest effort, however, focuses largely on preventing child exploitation. To that end, the partnership will look to create guidelines for AI developers covering the development, deployment, and maintenance phases of a product’s life cycle.
Some of the guidelines include the responsible sourcing of datasets used to train AI models, so that material depicting child sexual abuse and exploitation is excluded. Meta is also looking at stress testing and content provenance solutions that will make it easier to identify AI-generated content.
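To make the dataset-sourcing principle concrete, here is a minimal Python sketch of one common approach: filtering a training corpus against a blocklist of hashes of known abusive imagery, such as the hash lists shared by child-safety organizations. The file names, paths, and functions below are illustrative assumptions, not part of any tooling Meta or its partners have announced.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of hex-encoded SHA-256 hashes of known harmful
# images, e.g., sourced from a child-safety hash-sharing program
# (illustrative assumption only).
BLOCKLIST_PATH = Path("known_harmful_hashes.txt")

def load_blocklist(path: Path) -> set[str]:
    """Load one hex-encoded hash per line into a set for fast lookup."""
    return {line.strip().lower() for line in path.read_text().splitlines() if line.strip()}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes, streaming in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_dataset(image_dir: Path, blocklist: set[str]) -> list[Path]:
    """Return only the images whose hashes are NOT on the blocklist."""
    return [
        image_path
        for image_path in sorted(image_dir.glob("*.jpg"))
        if sha256_of_file(image_path) not in blocklist
    ]

if __name__ == "__main__":
    blocklist = load_blocklist(BLOCKLIST_PATH)
    clean_images = filter_dataset(Path("raw_training_images"), blocklist)
    print(f"Kept {len(clean_images)} images after blocklist filtering")
```

In production, companies typically match perceptual hashes, which also catch near-duplicates, rather than exact cryptographic hashes; SHA-256 is used here only to keep the sketch self-contained.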
Other measures include the responsible hosting of first-party AI models, support for developer ownership of product safety, and safeguards against malicious content and user conduct. Furthermore, the collaboration will work to develop technical solutions that prevent malicious AI applications and remove exploitative content from major platforms.
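As an illustration of what a generation-time safeguard might look like, the hypothetical Python sketch below refuses prompts that match a blocklist before a model is ever invoked. Real systems rely on trained classifiers, context-aware policies, and human review rather than keyword patterns; everything here, including the function names, is an assumption for illustration.

```python
import re

# Oversimplified, illustrative patterns only. Production safeguards use
# trained classifiers and policy engines, not keyword regexes (assumption).
BLOCKED_PATTERNS = [
    re.compile(r"\b(child\s+exploitation|abuse\s+imagery)\b", re.IGNORECASE),
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False when the prompt matches any blocked pattern."""
    return not any(pattern.search(prompt) for pattern in BLOCKED_PATTERNS)

def safe_generate(prompt: str) -> str:
    """Hypothetical wrapper that screens prompts before model inference."""
    if not is_prompt_allowed(prompt):
        # Refuse instead of forwarding the request to the model.
        return "REFUSED: prompt violates the safety policy"
    # A real implementation would call the generative model here.
    return f"GENERATED OUTPUT for: {prompt!r}"

if __name__ == "__main__":
    print(safe_generate("a watercolor painting of a lighthouse"))
```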
The development comes at a time when Meta has frequently been under fire from government bodies over instances of child exploitation, mental health harms, and rights violations on its platforms. The move could be seen as necessary for the social media company to keep pace with the increasingly sophisticated methods bad actors have adopted in recent months.
What do you think about collaborative efforts in AI development? Let us know your thoughts on LinkedIn, X, or Facebook. We’d love to hear from you!
Image source: Shutterstock