In a world where technology is rapidly evolving, one area remains on the tip of everyone's tongue. AI technology is constantly presenting new developments and challenges to everyday life. But is it now becoming more problematic than we can comfortably handle?
Taylor Swift recently made a sensational splash on the social media circuit. Not uncommon, you may think, but this time it was for all the wrong reasons. In January this year, nude images, seemingly of Tay-Tay, were circulated on X (formerly Twitter) in a frenzy of lurid fanfare. However, it was soon discovered that the racy images were in fact AI deepfakes, created in a Telegram group dedicated to generating exactly this type of material.
Deepfake algorithms analyse and manipulate audio and video data to learn the patterns and features of a person's face and voice, allowing faces and voices to be seamlessly swapped in video. The deepfakes of T-Swift remained live on the web for around 17 hours, and were viewed more than 45 million times, before they were finally taken down.
And this isn't the first time AI has pumped out deepfake pornography of public identities. Many other celebrities have fallen victim to this shabby practice, including the likes of Gal Gadot and Emma Watson. But it's not just the rich and famous of Tinseltown who are falling victim to AI shenanigans.
Even in our own backyard, the regulators have had to sit up and take careful notice. In a case currently being prosecuted in the Queensland courts, a defendant is being sued by the eSafety Commissioner for allegedly posting fake pornographic images of several women online without their consent.
The eSafety Commissioner has that power under section 75 of the Online Safety Act 2021. Under that section, a person who posts, or threatens to post, an intimate image without the consent of the person depicted may be liable to a maximum civil penalty of almost $157,000.
The same defendant is also facing charges laid by the Queensland Police Service for creating deepfake images of teachers and students from a prestigious Brisbane school and sending them to various facilities on the Gold Coast. The alleged offences include five counts of obscene publications and exhibitions pursuant to section 228(1)(a) and (c) of the Queensland Criminal Code, one count of obscene publication pursuant to section 228(1)(a), and one count of exhibiting obscene material to a child under 16 pursuant to section 228(2)(a).
If convicted, he could face up to two years' imprisonment.
But such offences relate only to the publication of intimate or obscene material, not to its creation, and certainly not to its creation by AI.
However, the Online Safety Act is under review this year, with the Australian Government wanting to ensure that its provisions remain fit for purpose. That move is undoubtedly due to, amongst other things, the rise of AI. Whether the criminal law will follow a similar course is yet to be determined, but we already have a sound framework for such a provision in the offence of Making Child Exploitation Material pursuant to section 228B of the Queensland Code. Tailoring a similar section aimed at capturing online creeps who create deepfake AI pornography surely can't be too great a leap.