2024 was supposed to be the year AI assistants became genuinely useful in everyday life. Instead, we got something more interesting: the year AI became deeply weird.
I'm not talking about the chatbots themselves—those have gotten impressively capable. I'm talking about how we're using them. A colleague recently told me he argues with ChatGPT about his therapy sessions. Not using it to reflect, mind you. Actually arguing with it about whether his therapist's advice was sound. Another friend asks Claude to roleplay as her deceased grandmother to help her process grief. A developer I know has his AI assistant write his standup updates in increasingly absurd voices—last week it was a film noir detective.
This isn't what anyone predicted. The discourse was all about job displacement and whether AI would replace writers or programmers. We spent months debating whether these tools were "truly intelligent." Meanwhile, people just started... using them. And using them in ways that have nothing to do with productivity.
The pattern I keep seeing: AI tools work best when we stop trying to make them do our jobs and start using them as thinking partners. Not as replacements for human judgment, but as infinitely patient sounding boards that can take any shape we need in the moment.
A writing student uses Claude to argue the opposite side of every essay thesis, strengthening her arguments. A game developer generates hundreds of deliberately terrible game ideas until a pattern reveals a good one. A parent rehearses difficult conversations with ChatGPT before talking to their teenager. None of these uses appeared on any product roadmap.
The productivity crowd is still obsessing over whether AI can generate perfect code or flawless prose. They're missing the point. The real value isn't in the output quality—it's in having something that will engage with half-formed thoughts at 2 AM without judgment, that costs almost nothing to query a thousand times, that can hold contradictory positions without getting defensive.
This creates a strange new category of tool. Not quite a search engine, not quite a colleague, not quite a therapist. More like a cognitive mirror that talks back. Sometimes it reflects your thinking clearly. Sometimes it distorts it in useful ways. Sometimes it just agrees with whatever you say, which is less helpful, but oddly comforting.
The companies building these tools seem as surprised as anyone. They keep adding features for enterprise workflows and API integrations while people use them to simulate conversations with historical figures or workshop the phrasing of a difficult text message. The gap between intended and actual use has never been wider.
What happens when thinking out loud becomes free and unlimited? When the cost of exploring a bad idea drops to zero? When you can externalize your internal monologue and it responds? We're only beginning to find out.
The weird part isn't that AI can do impressive things. It's that we're discovering what we actually want isn't impressive outputs—it's endless, patient, judgment-free engagement with our messy, unfinished thoughts. Turns out that's more valuable than any perfect first draft.
#AI #technology #ChatGPT #machinelearning