Age of Deception: The War on AI
By Robert Rojas
For years, we have relied on video and photographic evidence as our most trusted witness, a proof of reality we could believe in. But what happens when that trust is broken? What do we do when a convincing video of a politician giving a fabricated speech goes viral? Or when a fake clip of a natural disaster sends the economy into free fall?
AI has caused a new wave of distrust in the media. From deepfakes that create highly realistic synthetic videos to large language models like ChatGPT and Gemini that can produce convincingly legitimate text, fact and fiction have never been harder to separate than right now.
The easier this content becomes to produce and the more realistic it gets, the greater the risk it poses to our elections, our economy, and our shared sense of reality.
To understand the threat that AI poses, we first need to understand how the tools work. A deepfake is an image or video in which a person’s face, body, or voice has been digitally altered to appear like someone else’s. It’s kind of like Photoshop for video and audio.
Deepfakes are built on algorithms trained on clips from the internet, learning how a person moves their eyes, lips, and other facial features. The software then uses that data to morph the desired face onto an image or video, making it look as realistic as the footage it was trained on. Finally, the fake is generated, and just like that, mass misinformation can be spread with a couple of clicks.
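For readers curious about what that looks like under the hood, here is a heavily simplified sketch of the shared-encoder design many face-swap tools are built on: one encoder learns general facial structure, and a separate decoder per person learns to rebuild that person’s face. The layer sizes, image size, and random stand-in “photo” below are illustrative assumptions, not any real tool’s code.

```python
# A minimal sketch (NOT a working deepfake tool) of the autoencoder idea
# behind many face-swap models: a shared encoder compresses any face into
# features, and one decoder per person reconstructs that person's face.
# Swapping = encode person A's face, decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real photo of person A
swapped = decoder_b(encoder(face_a))  # person A's expression, person B's face
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

In real tools, both decoders are trained for many hours on thousands of frames of each person, which is why the results can look so convincing.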
Deepfakes, though, aren’t the only AI-generated content spreading misinformation. AI-generated images from text-to-image tools like Midjourney and DALL-E are just as big of a problem.
“I find AI-generated content very concerning because if, for example, the United States President was depicted saying something incredibly offensive to another nation, their leaders might strike back,” said sophomore Makaylah Amboise. “…soon AI-generated content is going to be so hard to differentiate from normal content.”
The digital arms race, as experts are calling it, is already well underway. There are new deepfakes every day, and social media only serves as a propagator for this fake AI-generated content. The more we rely on technology like this, the more likely we are to get stuck in this cycle.
The consequences of these AIs go beyond politics and misinformation; they also affect how we live our day-to-day lives. One day, you could get a call that sounds like your best friend asking for financial help or personal information, only for it to be scammers from all corners of the Earth after your data or your money.
“I think that it [AI] is getting too realistic. When people see those videos that are AI trying to sell them something or scam them, I get scared thinking I could be the next person to believe them,” said freshman Anthony Clarke.
Criminals aren’t the only people using artificial intelligence this advanced. Regular, everyday people like you and me can access AI this powerful and efficient just by pulling up a website.
The good news is that for every advancement in deceptive AI, there is a parallel effort in digital forensics and content authentication to prove what is real. This is a two-front war, fought with both technology and human ingenuity.
The first front is the development of AI-based detection tools designed to spot the subtle flaws, almost invisible to the human eye, that generators leave behind. These tools look for discrepancies humans would miss, such as unnatural facial expressions, inconsistent lighting, or small digital artifacts that reveal the content was manufactured.
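As a toy illustration of the kind of clue detectors hunt for, the sketch below measures how much of an image’s energy sits in high spatial frequencies, where some generated images leave statistical fingerprints. Real detectors train on thousands of such cues; the function name, the score, and the random stand-in image here are assumptions for demonstration only, not a usable detector.

```python
# A toy forensic cue (NOT a real detector): some AI-generated images carry
# unusual patterns in high spatial frequencies. This computes the share of
# an image's spectral energy far from the center of its Fourier spectrum;
# actual detection systems combine many such signals in trained models.
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    # Power spectrum, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)        # distance from spectrum center
    high = spectrum[radius > min(h, w) / 4].sum()  # high-frequency energy
    return float(high / spectrum.sum())

image = np.random.rand(128, 128)  # stand-in for a grayscale video frame
print(f"high-frequency energy ratio: {high_freq_energy_ratio(image):.3f}")
```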
Beyond detection, a better way to stop AI misinformation is to flag content before it spreads, using digital watermarking and content provenance. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) are developing a standard that attaches a tamper-evident digital label recording where and how a piece of content was created.
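The sketch below shows the core idea of such a label in miniature: bind metadata about a file’s origin to the file’s exact bytes, so that any later edit breaks the seal. Real C2PA manifests use standardized formats and public-key signatures; the HMAC, the shared secret, and the field names here are simplified stand-ins chosen for this illustration.

```python
# A simplified illustration of content provenance (NOT the C2PA format):
# record who made a file and hash its bytes, then seal that record so any
# change to the file or the record can be detected.
import hashlib, hmac, json

SECRET = b"demo-signing-key"  # placeholder; real systems use asymmetric keys

def make_label(content: bytes, creator: str, tool: str) -> dict:
    record = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["seal"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, label: dict) -> bool:
    claimed = {k: v for k, v in label.items() if k != "seal"}
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False  # the file's bytes were changed after labeling
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["seal"])

photo = b"...image bytes..."
label = make_label(photo, creator="Newsroom Camera", tool="Firmware 1.2")
print(verify_label(photo, label))         # True: untouched since labeling
print(verify_label(photo + b"x", label))  # False: content was altered
```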
The battle against misinformation will not be won with a single technological solution. Detection tools and content provenance standards are useful, but they are only defenses in a war that also requires a massive shift in our relationship with digital media. The path forward needs new legislation that keeps pace with the rapid change in AI. At the end of the day, AI’s future is determined by its creators: humans.
