Skip to main content
Thoughts from David Cornelius

Category

AI Dooms Earth!You've probably heard warnings or seen movies about how AI-enabled robots take over and threaten the human race. The idea is that they are either seeking self-preservation (which includes removing the possibility they'll be shut down by their creators) or see humans as an inferior species that will ruin the environment in which they need to survive. Personally, I have a hard time seeing how AI could get the upper hand since all we have to do is turn it off. Have you ever screamed at the TV, trying to warn the people in the story to simply pull the plug, wondering why on earth they are missing this obvious solution? On the other hand, I read a novel several years ago that showed how AI manipulated emails to convince people to beef up security at an off-shore data center to the point that humans actually couldn't get to the off switch!

There are many scenarios such as this that seem fantastical to some but serve as a cautionary tale to others.  I'm sure as we build increasingly "intelligent" machines that access our historical tendancy for power, there will be necessary guardrails enacted to ensure that they work for us--and not the other way around.

Whether or not it ever gets to that point, there are already many nefarious uses of AI that have been used for several years of which we need to be aware and to protect against. These include selling services, swaying public opinion, promoting a political party, gaining illegal access to networks, stealing identity, committing fraud, and even exploiting relationships to mention just a few. The tools to generate pictures, videos, and replicate voices are quite inexpensive and readily available now. They are marketed as time-saving or cost-cutting tools for  entrepreneurs to aid in their marketing or product development efforts; I experienced this while putting together the presentation, generating fake images, and voice-overs. While this is certainly a boon for busy people on a budget, it also means that others can use these same tools to fool people into believing something that isn't true.

I decided to write this blog to document and catch up with some of the bad uses of AI. And, ironically or predictably, I used AI to research it. What I found is much more terrifying than AI trying to take over; it's example after example of how PEOPLE are finding creative and elaborate ways to steal, manipulate, and gain the upper hand over other people. We can't just turn it off--it's already well entrenched in every facet of society. 

The following is an incomplete list of things I found in just a few hours of reading.

Astroturfing

One particularly large effort of manipulation uses reviews of products or comments on posts from hundreds or thousands of "real" people. This technique is called Astroturfing (another explanation). Using AI chatbots, it's estimated that 20% of Twitter trends in 2019 were created using coordinated fake accounts. This is not a new concept but in the past, it has used real people; for example, in the 1990s, young people were paid to promote cigarette smoking. It's so much easier now, though, and companies can remain relatively anonymous while spreading disinformation rapidly.

Prompt Injection

While researching content for this blog, I came across this term for the first time. "Prompt injection is a Generative AI security threat where an attacker deliberately crafts and inputs deceptive text into a large language model (LLM) to manipulate its outputs, exploiting the model's response generation process to achieve unauthorized actions such as extracting confidential information, injecting false content, or disrupting the model's intended function." This summary was generated by AI and I'm not even sure I fully understand it! I guess this would be most worrisome for someone producing an AI-enabled app that has legally binding offers and acceptance functionality. However, Microsoft's Bing Chat has been hacked and there are ways to jail-break ChatGPT. Then there's DAN.

Model Stealing

Model stealing, also known as model extraction, works by querying the target model with samples and using the model responses to forge a replicated model. This requires serious hardware but could save the thieving organization millions or even trillions of AI "tokens" in LLM training time.

Fake Personas

A security startup almost hired an engineer that didn't exist: the candidate's resume looked good, tests passed, but there was something about the video that didn't look right... 

And More

There are so many more examples and areas where AI is being used in ways to steal, corrupt, influence, manipulate, infiltrate, or just destroy. I didn't have time to follow up on any more of these suggested areas of AI misuse:

  • Relationship exploitation: AI chatbots designed to extract personal information through emotional manipulation
  • Data poisoning: Corrupting training datasets to influence AI behavior
  • Adversarial attacks: Input modifications that fool AI classifiers
  • Market manipulation: AI-driven trading bots spreading false information

Looking at this list, I was getting depressed and just couldn't read any more about how this new amazing technology is being used by fellow humans against ourselves. The scary part is how sophisticated these attacks have become--and how quickly they have escalated in complexity. The technology is making fraud both easier to execute and harder to detect! 

Red Flags

There are signs that we should be watching for in daily life to help guard against becoming another victim:

Communication Red Flags

  • Conversations that feel "too perfect" or eerily tailored to your interests
  • Social media profiles with generic photos and limited posting history
  • Phone calls with slight audio delays or unnatural speech patterns
  • Emails with perfect grammar but odd contextual mistakes

Content Red Flags

  • Videos where lighting doesn't match across face and body
  • Images with impossible shadows or reflections
  • News articles without clear author attribution or source verification
  • Social media trends that appear suddenly without organic growth patterns
  • Cross-reference suspicious content across multiple sources
  • Use reverse image searches on profile photos
  • Verify identities through multiple communication channels

Professional Red Flags

  • Job applicants with portfolios that seem inconsistent in style or quality
  • Freelancers delivering work suspiciously fast for complex tasks
  • Academic submissions with writing styles that don't match previous work
  • Code submissions with inconsistent commenting or naming conventions

Financial Red Flags

  • Investment advice from accounts with no verifiable track record
  • Market predictions that seem too confident or specific
  • Cryptocurrency projects with AI-generated team photos
  • Customer reviews that follow similar templates or phrasing patterns

What Can We do?

In addition to watching for red flags, there are steps we can take to reduce the risk of threats--the first of which is to be continually learning about new types of threats and how you can protect yourself. Here's a list to get started:

As a developer providing an AI tool:

  • Implement rate limiting and behavioral analysis on user inputs
  • Use multiple AI detection tools rather than relying on single solutions
  • Monitor for prompt injection attempts in user queries
  • Validate file uploads for adversarial patterns

For Personal Protection:

  • Enable two-factor authentication on all accounts
  • Be skeptical of unsolicited contact, especially with urgent requests
  • Verify news through multiple independent sources
  • Use privacy settings to limit data collection for AI profiling

Perhaps we ARE all doomed after all--not that AI will take over but we will be the cause of our own demise! (That would be much more embarassing!)

As for me, I think I'll just turn off the computer and go camping. The mountains don't lie.

I'd rather be camping

 

Add new comment

The content of this field is kept private and will not be shown publicly.