A well-known Stanford professor is accused of including fake AI-generated citations in a legal argument on the dangers of deepfakes.
Minnesota, much like California, has proposed a law imposing legal restrictions on the use of deepfakes around election time. Professor Jeff Hancock, a founding director of the Stanford Social Media Lab, submitted a legal argument in support of the bill, the Minnesota Reformer reports.
However, some journalists and legal professors have been unable to locate some of the studies cited in the argument, such as "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance."
Some commentators believe this could be a sign that parts of the argument were generated by artificial intelligence, calling it a possible example of an "AI hallucination" — the phenomenon in which an AI model, such as ChatGPT, simply fabricates information that does not exist.
Opponents of Minnesota's new bill have argued that these potential AI hallucinations make the professor's legal argument less reliable. A court filing by conservative Republican Representative Mary Franson said the presence of the mysterious citations "calls the entire document into question."
Professor Hancock is a well-known name in the field of misinformation. One of his TED talks, "The Future of Lying," has racked up over 1.5 million views on YouTube, and he also appears in a documentary on misinformation available on Netflix.
This isn't the first time that fake AI-generated legal citations have caused issues. In June 2023, Reuters reported that two New York lawyers were sanctioned after submitting a legal brief that the court ruled was generated by OpenAI's ChatGPT — racking up a $5,000 fine in the process.
Professor Hancock has yet to publicly respond to the allegations against him.
It's perhaps unsurprising that legal arguments about the dangers of deepfakes in elections are under intense scrutiny right now. Elon Musk's X is spearheading a comparable lawsuit challenging California's Defending Democracy From Deepfake Deception Act of 2024, which likewise imposes limits on the creation and sharing of deepfakes around election time, arguing that such restrictions violate the First Amendment.