Meta has disbanded its responsible artificial intelligence team, the group tasked with understanding and preventing harms arising from the company’s artificial intelligence technology, as part of its ongoing shift of resources toward generative artificial intelligence.
The changes are part of a broader restructuring of the company’s artificial intelligence teams announced internally this week, according to a report by The Information citing an internal post it had reviewed.
Most members of the responsible artificial intelligence team are moving to Meta’s generative artificial intelligence team, which was established in February to build generative artificial intelligence products.
A Meta spokesperson said that other members are moving to the company’s artificial intelligence infrastructure division, which develops the systems and tools needed to design and run artificial intelligence products.
The company regularly says it aims to develop artificial intelligence responsibly, and maintains a dedicated page outlining its responsible artificial intelligence principles, including accountability, transparency, safety, and privacy.
The report quoted Meta spokesperson Jon Carvill as saying that the company “continues to prioritize and invest in safe and responsible AI development,” adding that team members will continue to support Meta’s work on developing and using artificial intelligence safely.
Earlier this year, a restructuring that included layoffs left the responsible artificial intelligence team a shell of its former self.
Since 2019, the responsible artificial intelligence team has operated with limited autonomy, and its initiatives required lengthy negotiations with stakeholders before they could be implemented.
Meta created the team to identify problems with its approaches to training artificial intelligence, such as whether the company’s models are trained on suitably diverse information, and to prevent issues like moderation failures across its platforms.
Automated systems on Meta’s social platforms have already caused problems, including a Facebook mistranslation that led to a wrongful arrest, WhatsApp’s artificial intelligence sticker generator producing biased images for certain prompts, and Instagram algorithms that helped people find content related to child sexual abuse.
Meta’s move comes as governments worldwide race to establish regulatory frameworks for artificial intelligence development. The US government has struck safety agreements with artificial intelligence companies, and President Biden later directed government agencies to draw up rules ensuring the safety of artificial intelligence.
Similarly, the European Union has published its artificial intelligence principles and continues to work toward passing its artificial intelligence legislation.