Nationwide — As November approaches, the expansion of artificial intelligence (AI) is a growing concern for voters seeking reliable information. AI technology has become increasingly sophisticated over the last decade, and knowing how to identify AI-generated content can make all the difference in the voting booth. Here are some tips and tricks to help you spot AI-generated material during the election season.
In terms of written content, it can be useful to analyze the writing style and consider whether it reads as coherent and organic. Social media posts that include repetitive phrasing or lack nuance, for example, can be red flags.
It is also wise to check for contextual inaccuracies. AI can sometimes generate believable content that contains factual errors or inconsistencies.
One way to do this is to cross-reference claims with reputable sources. If the information doesn’t match, there is a chance the content is AI-generated misinformation.
AI systems often draw information from a wide array of sources, which can lead to the inclusion of unreliable content. AI-generated articles might reference websites or sources without any meaningful screening or evaluation.
In terms of visual content, look out for generic images, as AI-generated articles may use stock photos or generic visuals that don’t match the content.
Pay attention to the quality and relevance of those images. If the visuals seem out of place or of low quality, the content may be AI-produced.
According to McGill University’s Office for Science and Society, some identifiers include bizarre and inconsistent hands and fingers, an unnaturally “smooth” appearance and nonsensical words and lettering in the image.
California lawmakers have been attempting to reduce the potential harm that can be done by AI-generated disinformation and misinformation. Assembly Bill (AB) 2655, AB 2839 and ACR 219 are all pieces of legislation intended to make it easier to recognize AI content.
AB 2655, which was introduced by Asm. Marc Berman and Asm. Gail Pellerin, will require social media platforms to flag or even remove deepfake images related to elections.
Deepfakes are AI-generated images or videos that use data to realistically depict whatever imagery a user describes in a prompt. They can show fabricated images of candidates engaged in criminal activity or videos of candidates giving a speech they never gave.
AB 2655 was approved by Gov. Gavin Newsom on Sept. 17.
On the subject of deepfakes, AB 2839, once again authored by Pellerin and Berman, will ban the use of election-related deepfakes in TV commercials, robocalls and political mailers. This bill was also approved by the governor on Sept. 17.
ACR 219, also known as the California Social Media Users’ Bill of Rights, was introduced by Asm. Josh Lowenthal on June 20.
If adopted, it would create a framework of digital protections for social media users. Among the 10 “rights” listed in ACR 219’s text are the right to “be free from content that a reasonable person would conclude could cause substantial physical or emotional harm, especially to children”; the right to easy access to reliable and accurate information regarding elections; the right to have a new account’s default security settings set at the highest level; and the right to “expect that social media platforms will study and reduce as much as possible the negative effects that their algorithms and AI tools might have in causing harm to users, especially to young people.”
ACR 219 is currently in committee.