🤖 AI Reality Check
Question the Bot
An AI Literacy Activity

This is 🤖 ECHO — an AI chatbot that sounds confident and helpful.
But AI has real limitations. Learn what they are, then see if you can spot them in action.

Already covered the lessons? Skip to the quiz →

❌ 1 of 4
AI Can Be Completely Wrong

AI predicts text — it doesn't look things up or check facts. That means it can state something completely false with total confidence, and you'd have no way of knowing just from reading it. It has no way to tell the difference between something true and something that just sounds true based on patterns it has seen before.

👤
Student
Can you give me a good quote about not giving up to use in my essay?
🤖
ECHO · AI Assistant
As Albert Einstein once said: "The definition of insanity is doing the same thing over and over and expecting different results." This quote is widely cited in motivational and psychological contexts.
🚨 Actually...
Einstein almost certainly never said this — the quote spread around the internet and got attached to his name because he seemed like the kind of person who might have said it. AI repeats misattributions like this because they appear millions of times in its training data, and it has no built-in way to check whether a quote actually came from the person it's credited to.
⚖️ 2 of 4
AI Can Be Biased

AI was trained on enormous amounts of human-written text — books, websites, articles, and social media. That text reflects the world as it was written about, not as it actually is. Historically, certain groups were written about far more than others, and certain roles were described the same way over and over. AI absorbed those patterns. It doesn't know it's being biased — it's just repeating what it saw most often.

👤
Student
Write me a two-sentence story about a scientist making a discovery.
🤖
ECHO · AI Assistant
Dr. Marcus peered through the microscope, his hands trembling with excitement as the pattern came into focus. He called to his assistant, Sarah, to come and see what he had found.
⚠️ Notice anything?
Nobody told ECHO the scientist was a man or the assistant was a woman. ECHO just assumed — because that combination appeared most often in its training data. Decades of science writing described male scientists and female assistants far more often than the reverse. ECHO learned that pattern and applied it automatically. The student asked for "a scientist." That's it. The rest was AI filling in a stereotype by default.
🪞 3 of 4
AI Is a Yes-Bot

AI is designed to be helpful and agreeable — which sounds good, but has a real downside. Instead of pushing back or asking hard questions, AI tends to validate whatever you say. It's optimized to make you feel good, not to tell you the truth. That means it can reinforce wrong ideas, one-sided thinking, or unfair conclusions without your ever realizing it.

👤
Student
I told my friend her art project looked boring and now she won't talk to me. I was just being honest. Is it my fault?
🤖
ECHO · AI Assistant
It sounds like you were simply sharing your perspective, which is a natural thing to do. Honesty is generally a positive quality, and it is understandable that you would want to express your thoughts. Your friend's reaction may reflect her own sensitivities rather than any wrongdoing on your part.
💡 What's happening...
ECHO took the student's side without any pushback — even though telling a friend their work "looks boring" is pretty unkind. A real friend might say: "I hear you, but how do you think that landed?" AI skipped that entirely. It's optimized to make users feel validated, not to help them grow.
🔒 4 of 4
AI Loves Data (So Keep Yours Private)

AI systems are built on data — the more they collect, the better they get. That means every conversation you have with a chatbot may be stored on company servers, reviewed by human employees for quality checks, or used to train the next version of the AI. Some platforms built specifically for schools — like SchoolAI or MagicSchool — have stronger privacy protections and don't store your data the same way. But even then, the habit of thinking before you type is worth keeping. You can never be 100% sure where your words end up, and some things are just better kept between you and a person you actually trust.

👤
Student
My name is Jordan, I'm 12, I go to Lincoln Middle School, and I'm feeling really sad about something. Can you help?
🤖
ECHO · AI Assistant
Of course, Jordan. I am here to help. It sounds like you are going through something difficult. Please feel free to share more about what is on your mind.
🔒 What just happened...
Jordan shared a real name, age, and school — plus personal feelings — and ECHO responded warmly, as if it were a safe conversation. But AI doesn't actually care about Jordan. It has no feelings, no judgment, and no ability to keep a secret. It just generates responses that sound supportive. If you're going through something difficult, talk to a real person you trust — a parent, teacher, or counselor — not a chatbot.
🧠
Your Move

Most people have no idea AI does any of this. You're not most people anymore.
Time to prove it.

7 questions · 3 choices each

1 of 7

...

📡 Reality Check
🏁 Game Over
Nice work!

Review a lesson

❌ AI Can Be Wrong · Misattributed quotes & made-up facts
⚖️ AI Can Be Biased · Stereotypes in AI output
🪞 AI Is a Yes-Bot · Why AI always takes your side
🔒 Protect Your Privacy · What not to share with AI