Representative Andy Biggs (R-AZ) chaired a recent House Judiciary Committee hearing on the criminal use of artificial intelligence (AI), particularly its use in creating "deepfakes" of public figures.
Rep. Biggs began the discussion by pointing to Tennessee's ELVIS Act, signed into law last March, which prohibits people from using AI to mimic a person's voice without their permission. Violations can be criminally enforced as Class A misdemeanors.
"What about any other person?" He asked, "What do you do if there is a deepfake of any other public figure, and you have that person say something that is pernicious, bad, or politically inflammatory? What laws do we have in place that would prohibit that, or is it just the Wild, Wild West?"
Hearing witness Ari Redbord, Global Head of Policy at TRM Labs, answered that applicable laws, including wire fraud statutes, already exist but need to be updated to incorporate AI.
Another witness, ACLU Senior Policy Counsel Cody Venzke, stated that AI used in the context of political commentary falls under the protection of the First Amendment. Rep. Biggs pressed further, specifying that he meant the malicious use of deepfakes.
"Let's say, with malice, you say that Andy Biggs said XYZ, that is just horrible. You put it in the New York Times, and you did it because you want to harm me," the Arizona Congressman clarified.
Venzke cited a February 2024 deepfake of President Biden supposedly reinstating the draft as an example of deepfakes being used for political commentary, but noted that "existing exceptions to the First Amendment still apply to AI."
Massachusetts Institute of Technology's (MIT) Dr. Andrew Bowne agreed with Venzke, adding that malice remains a key factor.
Biggs clarified his question by pointing to highly persuasive deepfakes that are difficult for the average person to detect, asking what protections can be put in place against abuses such as scammers impersonating family members and the production of child sexual abuse material (CSAM).
"It is the same type of deal where you see CSAM, which is so persuasive, sick, and disgusting, that is all AI-generated," Biggs concluded. "You mentioned the Ukraine thing, but what remedy does someone have when they are a victim of this type of generative AI?"
Dr. Bowne agreed that, while protections are in place, there are still gaps that need to be filled.