In a crucial development for online child protection, Australia’s eSafety Commissioner, Julie Inman Grant, has announced that search engines will be compelled to remove AI-generated child sexual abuse material from their search results. The measure is designed to counter the growing threats to children’s rights and privacy posed by generative AI and deepfakes.
The online safety codes and standards will address the novel risks introduced by generative AI. They apply to multiple segments of the online industry and aim to halt the production and spread of ‘synthetic’ child sexual abuse material. The new codes require industry members to take appropriate measures to prevent this highly reprehensible material from surfacing on their platforms within Australia.
Citing the rapid uptake of generative AI, Inman Grant said the code needed to account for major industry players integrating AI into their search functions. The existing code was deemed insufficient to protect the community effectively.
The new safety code will operate under the Online Safety Act 2021, Australia’s principal legal framework for regulating illegal and restricted online content. The Act grants the eSafety Commissioner robust powers to protect Australians from online harm.
Presently, the registered industry codes cover five online sectors: Social Media Services, Internet Carriage Services, App Distribution Services, Hosting Services, and Equipment Providers. With the introduction of the new standards, coverage will extend to two additional sectors: Designated Internet Services and Relevant Electronic Services.