AI startup Anthropic has announced a change to its policy that will permit minors to use its generative AI technologies, provided that specific safety measures are followed.
The company stated on its official website that it will allow minors to use third-party applications built on its AI models, on the condition that the developers of those applications adhere to specific safety practices.
The company did not specify whether it intends to allow children to use its own services and applications that rely on artificial intelligence.
The company outlined the safety measures developers must follow when building AI applications for minors, such as age-verification systems, content moderation and filtering, and educational resources on safe and responsible AI use.
Anthropic also said it may offer "technical measures" for tailoring AI product experiences to minors, and that developers targeting minors will be required to implement them.
Developers building on Anthropic's AI models must also comply with applicable child-safety and data-privacy regulations in the countries where they operate.
The company said it will periodically audit applications for policy compliance and will suspend or terminate the accounts of developers who repeatedly violate its rules.
The shift comes as minors increasingly turn to AI applications for help with schoolwork and personal issues.
In the past year, some schools and colleges banned generative AI applications over concerns about cheating and the spread of inaccurate information, though they later lifted those bans.
UNESCO has previously urged governments to set a minimum age for users of AI applications and to ensure the protection of their data and privacy.