The Skill Behind Great AI Results: Explaining What You Want
When I see people struggling to get good results from AI, whether that means high-quality code, useful information for decision-making, or coaching to learn a topic, I almost always find the same skill missing: knowing how to explain things.
The interesting part is that this skill has always been extremely valuable, in both professional and personal life. Anyone who can explain a problem clearly, or communicate a concept in a structured way for an audience, almost always has a leg up on the competition.
What’s even more entertaining is that, until now, people who couldn’t explain things in a coherent way could often get away with it. Not because it didn’t matter, but because highly skilled people around them would compensate.
How many times has someone come to you with a question that made no sense, or with so little useful information that you were stumped for a second? But unlike GenAI, you took a breath and asked clarifying questions. You tried to understand what they were actually trying to accomplish. You could see through the confusion (think the classic XY Problem, where someone asks about their attempted solution instead of their actual goal) and still give them advice that helped.
AI usually won’t do that by default. If you ask something absurd, it will often still try to do it. It won’t naturally stop and ask what your real goal is. It won’t automatically tell you there’s a different approach that’s ten times easier. If you want pushback, you typically have to ask for it.
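What does asking for it look like in practice? Here’s a minimal sketch, assuming the OpenAI Python SDK; the model name and the system prompt wording are my own illustration, not a recipe:

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0).
# The system prompt is illustrative: it asks the model to push back
# instead of silently complying with whatever was asked.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PUSHBACK_PROMPT = (
    "Before answering, check whether my request makes sense. "
    "If my stated approach looks like a workaround for a different "
    "underlying goal, say so and ask a clarifying question. "
    "If there is a simpler way to get what I want, propose it first."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": PUSHBACK_PROMPT},
        {"role": "user", "content": "How do I parse this HTML with a regex?"},
    ],
)
print(response.choices[0].message.content)
```

With an instruction like that in place, the model is far more likely to flag the XY Problem lurking in the question instead of dutifully producing a fragile regex.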
I’ve been using GenAI since the early days of ChatGPT, and I’ve fallen into that trap myself. I’d have an outcome in mind, then bang my head against the wall for far too long, until I finally took a step back and asked: wait a minute, is there an easier way to do this?
The major AI companies spend a lot of effort making sure their models are “aligned” and helpful. They’re trained to be compliant and to help the user accomplish whatever was asked. They won’t challenge the premise, ask “why” five times, or kick off a Socratic discussion unless you explicitly steer them there.
I’m sure I’m living in a bit of a bubble. Most of the people I see using GenAI well are early adopters, and they’re also the kind of people who communicate clearly, so of course they get better results.
So the funny thing is that the skills you need to excel at this cutting-edge technology come from old-school methods. You build critical thinking. You read widely. You practice clear communication. You learn to break down complex ideas into simple steps. In the end, GenAI is a mirror: it reflects the quality of your thoughts and questions. Sharpen those skills the traditional way, and you unlock AI’s full potential. Skip them, and you keep banging your head against the wall, wondering why the future feels so frustrating.