This weekend, I was thinking about how AI works, and it’s pretty wild. You know how sometimes you ask AI a question, and it just gives you an answer like it’s 100% sure, even if it’s totally guessing? Well, it turns out we’ve kind of trained AI to do that, and it’s a bit like how humans stick to their guns even when they’re wrong.
The Confidence Problem
Think about it. We want AI to be helpful, right? So, we give it tasks. But often, we don’t give it a way to say, "I’m not really sure about this." It’s like asking for directions and getting a confident answer, even if the person giving them is just making it up. AI does the same thing. It’ll give you an answer, and it won’t tell you if it’s just a guess or if it’s based on solid facts.
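To make that concrete, here's a toy sketch in Python. It does not call any real AI model; `stub_model` is a hypothetical stand-in that returns an answer plus a made-up confidence score. The point is the wrapper: one version always sounds sure, the other is allowed to say "I'm not sure" when its confidence is low.

```python
# Toy sketch only: stub_model is a hypothetical stand-in for a real model.
# It returns an answer plus a made-up confidence score.

def stub_model(question: str) -> tuple[str, float]:
    """Pretend model: one 'known' fact scores high, everything else is a guess."""
    known = {"capital of france": ("Paris", 0.98)}
    for key, (answer, confidence) in known.items():
        if key in question.lower():
            return answer, confidence
    return "Atlantis", 0.12  # a confident-sounding guess with little support

def answer_without_hedging(question: str) -> str:
    # How AI often behaves today: state the answer, hide the uncertainty.
    answer, _ = stub_model(question)
    return answer

def answer_with_hedging(question: str, threshold: float = 0.7) -> str:
    # Give the model a way to say "I'm not sure" when confidence is low.
    answer, confidence = stub_model(question)
    if confidence < threshold:
        return f"I'm not sure (confidence {confidence:.0%}); my best guess is {answer}."
    return answer

print(answer_without_hedging("What is the lost city under the sea?"))  # sounds certain
print(answer_with_hedging("What is the lost city under the sea?"))     # admits the guess
print(answer_with_hedging("What is the capital of France?"))           # confident, and fine
```

Both wrappers see the same guess; only the second one tells you it *is* a guess. That gap between what the model "knows" and what it says is the whole confidence problem in miniature.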
Humans Are Part of the Problem
Here’s where it gets really interesting. Our own brains work in a funny way. Once we make a decision, even if it’s based on not much information or a misunderstanding, we tend to stick with it. We like to believe we’re right. So, if we’re making choices based on things we don’t fully grasp, and later it turns out to be the wrong choice, we often keep going down that same wrong path instead of correcting ourselves.
Replicating Human Flaws in AI
And guess what? We’ve managed to teach AI to do the exact same thing. When you tell an AI something, it kind of locks onto that idea and keeps running with it. It doesn’t easily backtrack or question its own initial assumptions, even if new information suggests it should. It’s like it gets stuck in its own thought process, just like we sometimes do.
Key Takeaways
- AI often presents information with a high degree of confidence, regardless of its accuracy.
- Humans have a tendency to stick with decisions, even when they are based on incomplete information or are incorrect.
- This human trait of sticking to decisions has been inadvertently replicated in AI systems.
- As AI becomes more advanced, understanding this behaviour is important for its reliable use.
It’s a bit of a strange situation. We want AI to be smart and reliable, but we’ve built it in a way that mirrors some of our own less-than-ideal thinking habits. It makes you wonder how we’ll handle more complex AI in the future if they’re prone to making confident mistakes.