"While LLMs may be “just math” and statistically choose the next word as “autocomplete on steroids,” they aren’t reliably connected to reality.
"While LLMs may be “just math” and statistically choose the next word as “autocomplete on steroids,” they aren’t reliably connected to reality.
They don’t know the meaning of anything.
In fact, they know precisely nothing (Big Tech propaganda notwithstanding). So the words they generate are not grounded in reality. It’s not just that they hallucinate: they confabulate, fluently producing plausible-sounding claims with no awareness of whether any of them are true.
ChatGPT is the archetype of today’s AI chatbots. Its creator, OpenAI CEO Sam Altman, is widely known for his willingness to say anything people want to hear. And he has made ChatGPT in his image: it says whatever it takes to make us more dependent on it, to build a trust relationship with it.
And students are particularly vulnerable to chatbot manipulation.
GenAI chatbots are designed for one thing: to build a trust relationship. Users aren’t going to become better thinkers, or wiser, or more discerning. They’re going to become better at having relationships with chatbots. And in the process, as Marshall McLuhan teaches us, the mental abilities they’ve extended through the chatbot will be amputated over time, while they are numbed to the process.
AI is completely different from reading a book. A book reader can go into a state of focused attention we call “flow,” and that’s where the powerful learning happens. If a reader gets stuck, they can re-read, rethink, wrestle, and fight to understand what the author is saying.
In a “conversation” with a chatbot, the “flow” state never happens. The user interface encourages quick, dopamine-spiking interactions: incantation, then response. A few students might try to figure out whether the output is right, but they’ll get tired before the AI does, and will eventually just keep asking it questions and accepting its answers.
The recent MIT study shows how brain connectivity drops when we use AI chatbots for writing."
MindMatters
