
The AI Rant

Posted on: November 5, 2024 at 03:22 PM

The greatest trick the devil ever played was convincing the world that he doesn’t exist

  • Charles Baudelaire

This post is a pessimistic rant about my growing concerns over the direction of our increasingly intelligent world. Sure, AI might help a bunch of us make trillions in the coming decade, but the consequences for humanity could be severe, and it’s essential that we stay actively aware as this evolution unfolds. I’ll keep things non-technical — my anxieties, after all, are non-technical.

A brain with unlimited knowledge and bandwidth

As of today, AI is an assistant with access to the entirety of the internet’s knowledge. Wire it up with a robust text-to-speech and speech-to-text setup, and it could literally become an all-purpose companion — taking commands, brainstorming, holding conversations, and much more. I don’t know why no one has cracked this well yet (there’s a rough sketch of what such a loop could look like after the list below). Imagine:

  1. You could instruct it to read an entire book and then teach you interactively.
  2. It could analyze your chat history to interpret the dynamics of your relationships.
  3. By reading hundreds of psychology books in seconds, along with all your messages and meeting transcripts, it could reveal insights about yourself that even you don’t realize.
  4. With enough context about your behaviors and conversations, it could craft a strategy to influence you — a marketer’s dream.
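
Just to show how little plumbing that “companion” would actually take, here is a minimal sketch of the listen-think-speak loop. It assumes the speech_recognition, pyttsx3, and openai Python packages and uses a placeholder model name; it’s a rough illustration of the idea, not a polished assistant.

```python
# Minimal voice-companion loop: speech-to-text -> LLM -> text-to-speech.
# Assumptions: `pip install SpeechRecognition pyttsx3 openai` (plus PyAudio for
# microphone access) and an OPENAI_API_KEY in the environment.
# The model name below is a placeholder.
import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()            # reads OPENAI_API_KEY from the environment
recognizer = sr.Recognizer()
voice = pyttsx3.init()
history = [{"role": "system", "content": "You are a helpful voice companion."}]

while True:
    # 1. Listen: capture microphone audio and transcribe it.
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        user_text = recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        continue             # couldn't understand the audio; listen again

    # 2. Think: send the transcript (with conversation history) to the model.
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})

    # 3. Speak: read the model's reply aloud.
    voice.say(reply)
    voice.runAndWait()
```

That’s the whole trick: a microphone, a transcript, a model call, and a speaker.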

Yes, model creators have introduced safety guardrails, but those can be bypassed — because, guess what, they can’t fully control how these AI models work, lol. The creators can only “teach” them certain things, and with a little ingenuity, people have already found ways to “fool” these models.

Doesn’t this personal, almost intrusive aspect of AI feel a little scary?

Media and apps

Right now, AI largely exists as chatbots — what harm could they possibly do? Well… even now, I can bet that there are already apps that “think” as you use them. Isn’t that fucking creepy? Like, think of a brain embedded in the app, monitoring your every action, analyzing your behavior, and constantly strategizing to pull you deeper or convince you to buy a premium upgrade. I’m probably oversimplifying, but the tech ecosystem, as of today, is already capable of making apps think (I’ll save the technical breakdown for another post).

With large language models (LLMs) being so widely accessible, the level of deception and manipulation on social media has become downright filthy. People are creating videos of celebrities saying obscene things they never actually said, and AI-generated graphics have saturated the internet to the point that so many ads now look eerily similar. These days, I question every controversial video I watch — how can I be sure it’s not AI-generated?

The political implications of AI-driven media could be catastrophic. Imagine:

  1. Entire fabricated “news reports” featuring AI-generated anchors could spread misinformation with unprecedented realism, swaying public opinion on a massive scale.
  2. An outbreak of bots or scripts on social media, not just liking or commenting, but “learning” your preferences, political views, and emotional triggers to manipulate you subtly over time, perhaps even polarizing entire communities.
  3. Automated, targeted cyberbullying and blackmail.

Again, all of this is 100% possible as of today. One just needs an evil mind driven enough to bring these dark ideas to reality.

Creativity

This is personal, but I’ve genuinely lost interest in writing beautiful code since I started coding with AI. The thrill of programming used to be about getting completely lost in crafting my best work — just me, the code, a cup of coffee, and loud music in my headphones. Sure, I can still try to get into that flow, but now there’s always an AI suggesting “just-good-enough” solutions, pulling me out of the creative zone. It’s no longer just me and my work; there’s this other dude constantly whispering optimal but uninspired answers. It saves time, it gets things done, it even gets the product to market faster — but is it fun? Am I really a “creative” programmer anymore? I don’t know. The sad part is, I don’t even have a choice. Like steroids in bodybuilding: if you want to compete, you have to use them or risk falling behind. Copilot is a programmer’s drug.

I used to feel the same way about writing. The process of finding the right word, struggling to be concise, chasing mastery of language. Now? I just throw my raw thoughts into a draft and let AI polish the language. These days, I literally prompt it with things like, “this sentence doesn’t flow; what’s wrong with it?” It’s ironic, really — I’m sitting here critiquing AI with the help of AI.

AI is to a programmer what anabolic steroids are to a bodybuilder.

Yes, I’m lazy, but then again, aren’t most people? In a way, that’s what technology has always promised us—to indulge and encourage our laziness.

Conclusion

The only real hope lies in our ability to stay optimistic about humanity — to trust that we won’t let ourselves down. There are people doing groundbreaking work to harness AI as a force for good — but the risks remain, and they’re too vast to ignore. Five years ago, I would have found it hard to believe we’d reach this point so quickly.

What we’ve discussed here barely scratches the surface of generative AI as it exists today. Beyond this, there’s Artificial General Intelligence (AGI) — a whole new frontier that could be exponentially more revolutionary and, equally, exponentially more catastrophic.

AGI by 2027 is strikingly plausible

  • Leopold Aschenbrenner

The future may be closer than we think, and whether it’s one we can control is still an open question.


PS: The last line is completely AI-generated. “The conclusion isn’t ending well. Can you add a romantic line to end it well?” Lol.