The AI Takeover: My Position on AI
What do you picture when you hear the phrase, “AI is taking over!”?
- The Skeptic: You might think of generative AI taking everyone’s jobs.
- The Optimist: You might see AI as the ultimate helpful tool for humanity.
- The Hardliner: You might believe AI shouldn’t be used no matter what.
Today, I’m going to explain what AI actually is, how it’s being used today, the history behind the technology, and my personal stance on its future.
Let’s dive in…
What is AI?
Artificial Intelligence (AI), in simple terms, is the science of making computers smart enough to perform tasks that usually require human intelligence. This includes:
- Learning from experience (Machine Learning)
- Solving complex problems
- Understanding natural language
- Recognizing patterns
- Making autonomous decisions
Instead of being manually programmed for every single step, modern AI analyzes massive amounts of data to find patterns and reach conclusions on its own.
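To make that idea concrete, here is a toy sketch of "learning from data" in the spirit of Rosenblatt's Perceptron (covered in the timeline below). The training data and learning rate are invented for illustration; real models work on millions of examples, but the principle is the same: the program adjusts itself until it reproduces the pattern, with no hand-written rule for the answer.

```python
# A toy perceptron: instead of hand-coding a rule, the program
# nudges its weights until it reproduces the pattern in the data.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input feature
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred          # 0 when the guess was right
            w[0] += lr * err * x1       # move weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it the logical AND pattern purely from labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

Nobody told the program what "AND" means; it discovered the rule by being corrected on examples. That feedback loop, scaled up enormously, is the core of modern machine learning.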
A Brief History Lesson
Believe it or not, the concepts behind Artificial Intelligence go back as far as the 1720s! Here is a timeline of how we got to where we are today.
Pre-20th Century: The Seeds of Imagination
- 1726: Jonathan Swift’s novel Gulliver’s Travels introduces “The Engine.” This mechanical contraption was intended to help scholars generate new ideas and sentences—a primitive ancestor to the modern LLM.
1900–1950: The Foundation of Computing
- 1914: Spanish engineer Leonardo Torres y Quevedo demonstrates El Ajedrecista, the first automated chess-playing machine.
- 1921: The play Rossum’s Universal Robots (R.U.R.) by Karel Čapek premieres in Prague. Its English translation introduces the word “robot” into the English language.
- 1939: John Vincent Atanasoff and Clifford Berry create the Atanasoff-Berry Computer (ABC), introducing foundational concepts for modern computing.
- 1943: Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity,” linking neuroscience and AI.
- 1950: Alan Turing publishes “Computing Machinery and Intelligence,” introducing the famous question: “Can machines think?”
1950–1980: The Birth of a Science
- 1951: Marvin Minsky and Dean Edmunds build SNARC, the first artificial neural network.
- 1955: The term “Artificial Intelligence” is officially coined by John McCarthy and his colleagues in the proposal for the 1956 Dartmouth Summer Research Project.
- 1957: Frank Rosenblatt develops the Perceptron, an early neural network that enabled computers to recognize patterns.
1980–2000: Expert Systems & The First Win
- 1980: The rise of Expert Systems, programs designed to solve niche problems by following thousands of “if-then” rules.
- 1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams popularize Backpropagation, the math that allows neural networks to “learn” from their mistakes.
- 1997: Deep Blue, an IBM supercomputer, defeats world chess champion Garry Kasparov.
2000–Present: Big Data & The Generative Boom
- 2011: IBM Watson wins Jeopardy!, demonstrating that AI could parse natural-language riddles and wordplay.
- 2012: The AlexNet moment—a neural network identifies objects in photos with incredible accuracy, sparking the Deep Learning revolution.
- 2016: AlphaGo beats the world champion at Go, a game significantly more complex than chess.
- 2022–Present: Generative AI goes mainstream, followed by the rise of Agentic AI: systems that can complete multi-step tasks, like booking flights or managing schedules, autonomously.
It’s Not Just “Generative”: The Many Faces of AI
When people say “AI” today, they usually mean Generative AI. But that is only one slice of the pie. Here are the other major forms of AI:
1. Discriminative / Analytical AI (The “Predictor”)
Instead of creating something new, this AI looks at data and makes a choice.
- Examples: Email spam filters, credit card fraud detection, and social media algorithms.
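A spam filter is a good mental model for this "predictor" style of AI. Here is a deliberately tiny sketch, with made-up training messages: count which words show up in known spam versus known ham (non-spam), then let each word in a new message vote. Real filters use far more sophisticated statistics, but the shape of the task, data in, decision out, is the same.

```python
from collections import Counter

# Invented training examples: messages already labeled by a human.
spam = ["win free money now", "free prize click now"]
ham  = ["lunch at noon tomorrow", "project meeting notes attached"]

# Count how often each word appears under each label.
spam_words = Counter(w for msg in spam for w in msg.split())
ham_words  = Counter(w for msg in ham for w in msg.split())

def classify(message):
    # Each word votes "spam" or "ham" based on where it was seen more.
    score = sum(spam_words[w] - ham_words[w] for w in message.split())
    return "spam" if score > 0 else "ham"
```

Note the output: not a poem or an image, just a single decision. That is the defining trait of analytical AI.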
2. Computer Vision (The “Eyes”)
This allows machines to “see” and interpret the physical world.
- Examples: FaceID on your phone, self-driving cars, and medical X-ray analysis.
3. Robotics (The “Body”)
AI combined with physical hardware that reacts to its environment.
- Examples: Automated factory arms and drones that navigate obstacles.
4. Natural Language Processing (NLP) (The “Translator”)
Focuses on understanding, translating, and generating human language.
- Examples: Google Translate and voice assistants like Siri or Alexa.
Comparison: Generative vs. Traditional AI
| Feature | Generative AI | Traditional (Analytical) AI |
|---|---|---|
| Goal | Create new content (text, art, code). | Analyze data and make a decision. |
| Output | A poem, an image, a song. | A “Yes/No,” a category, or a score. |
| Example | ChatGPT, DALL-E. | Netflix Recommendations, FaceID. |
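The "create new content" column of the table can be sketched with one of the oldest generative tricks: a bigram Markov chain. It learns which word tends to follow which in some training text (invented here for illustration), then strings together new sentences. Modern LLMs are vastly more capable, but this captures the generative idea: the output is something new, not a yes/no decision.

```python
import random

# Tiny invented training text for the demo.
text = "the cat sat on the mat the cat saw the dog".split()

# Learn which words follow which (a bigram model).
follows = {}
for a, b in zip(text, text[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length, rng=random):
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:          # dead end: no known follower
            break
        words.append(rng.choice(options))
    return " ".join(words)
```

Run `generate("the", 5)` a few times and you get different short sentences each time, assembled from patterns the model observed. Contrast that with the spam-filter style above, whose only possible outputs are its fixed categories.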
My Position: The “Convenience Trap”
My stance on AI is nuanced: Not all AI is bad. In fact, as the history above shows, AI helps us solve massive problems in medicine, science, and global logistics. However, I believe AI becomes “bad,” even harmful, when we use it to over-simplify tasks we should handle ourselves.
The Problem with “Simple AI”
There is a danger in using AI for things that should be part of our basic human experience or family connection. For example:
- Asking AI how to boil pasta: This is something you could (and perhaps should) ask your parents or a friend. It’s a moment of connection and a basic life skill.
- Asking AI for a hex code for “Red”: While fast, relying on a bot for every minor bit of data makes us lose our own creative intuition and research skills.
Why It Matters
When we outsource every tiny thought to a machine, we risk two things:
- Losing Life Skills: We become overly dependent on a “digital brain” for tasks that humans have mastered for centuries.
- Weakening Social Bonds: Instead of a conversation with a family member (“Hey Mom, how long do I cook this?”), we stare at a screen. We lose the “human” in human intelligence.
My Conclusion: AI should be a tool for the complex, not a crutch for the simple. We should use it to help us do things we can’t do, rather than letting it take over the things we should do ourselves.
Sources & Related Links
- IBM: The History of Artificial Intelligence
- Stanford University: The Dartmouth Proposal (1955)
- Oxford Academic: Computing Machinery and Intelligence (Alan Turing, 1950)
- IBM History: Deep Blue vs. Garry Kasparov
- IBM History: Watson on Jeopardy!
- Google DeepMind: AlphaGo
- IBM Topics: What is Machine Learning?
- IBM Topics: Reinforcement Learning