How to Discover What You Don't Know
Get an unexpected boost from your favorite AI tool when you ask this one question.
An attorney went on a bit of a tear about ChatGPT and its misleading advice to his client. He shared three examples where the copyright information it delivered was wrong and explained why. This led to a stream of comments about clients relying on flawed AI tools instead of human expertise. Commenters claimed this was proof that AI can't be trusted. Clients who do their own research are a problem. Lots of "experts" couldn't wait to be heard.
The client's prompts weren't included in the post, so we were missing an important piece of the story. We don't know what the client asked, and that's key to understanding what AI gave them back.
Regardless, this post highlights a pitfall in the AI tool-user relationship. We can't expect a sound, factual answer to a half-asked question. AI tools are predictive. They assemble a response from the prompt they receive and the patterns they've learned from massive training datasets. They're like a supercharged autocomplete, using the context they've been provided to construct the answer.
This Isn’t a Tool Problem
"We don't know what we don't know" is a timeless trap. We've all built trusted business relationships around this knowledge gap for decades.
We rely on accountants to compile accurate tax returns.
We (should) know that Dr. Google is a poor substitute for the skilled medical professionals who keep us on a healthy path.
When a bathroom pipe bursts, we call the plumber.
But over time we've been duped into believing that our search competence is a replacement for skilled experience. DIY research blurs the lines between problem, expertise, and outcome. How hard can it really be when the answer is a click away?
At least Google makes us work for what we want to know. We have to follow the links, read, and make (hopefully) informed decisions about the accuracy of what we’ve been handed.
And then along came ChatGPT, Claude, Gemini, and the rest.
Clients often treat these tools as glorified search: prettier results, minus the thinking. They ask questions as if they're talking to Google, and that's where the problem begins.
Google takes the actual search words it's given and looks for stored content matching those keywords. As we described above, AI doesn't work that way.
We’re Having Different Conversations
I was telling a business acquaintance about a conversation I had with ChatGPT.
I asked it to imagine that it was a non-technical business leader who needed an actionable cybersecurity plan for their 20-person accounting firm. They were concerned that their current IT provider lacked the necessary skills to assess and lead this initiative.
The first prompt continued with more specifics about the imaginary company and the leader's concerns. As each response was returned, I asked more questions and sharpened my focus.
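To make this concrete, the opening prompt looked something like this (the details are simplified here for illustration, not my exact words):

"Imagine you're a non-technical business leader running a 20-person accounting firm. You need an actionable cybersecurity plan, but you're concerned that your current IT provider lacks the skills to assess the risks and lead the work. Walk me through what a first version of that plan should cover and what questions you'd want answered before committing to it."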
My friend was surprised at the depth of the exchange with ChatGPT. He said he didn't know we could use an "imagine you're..." framing in a prompt. He thought you asked Google-like questions. Many of you are skilled at working with these AI tools, so this probably sounds obvious to you. But everyone isn't there yet, and that includes some of your clients.
As we've said many times before, AI tools aren't going anywhere. Clients are being handed polished, copy-and-paste-ready (please don't ever do this) results that are easy to accept without question. It's the packaging that's so seductive. When it looks this professional, how can it not be right?
We Have a New Opportunity
What if you showed your current and future clients how to have a productive conversation with ChatGPT? They're going to land there whether you want them to or not, so let's make the most of it. Back to our attorney who started this email.
What if he explained to his client why ChatGPT gave them incorrect guidance?
I’m not suggesting that he write the prompts for them. We shouldn't give everything away. But he can show them how omitting essential information influences the outcome. It's no different than having a 1:1 human conversation. The results are only as accurate as the information the client provides.
Experienced professionals know the questions to ask because we have real experience, not machine training. AI tools have to guess.
An Unexpected ChatGPT Relationship Happened
AI messes with my head.
When I start asking questions too early, it’s like my brain is stuck in a box. I can’t think past whatever ChatGPT said. I was relieved when Ann Handley, a smart, well-respected marketing leader, said the same thing in one of her newsletters. Until then, I was a little concerned about my mental wiring.
Instead of using AI as a creative collaborator, I do all my own thinking.
All the writing is mine.
From the first ugly draft to the last as-done-as-it-can-be version, it's all me.
When the piece is simply my thoughts, like this one, AI doesn't get to weigh in. I don't want what I'm thinking to be filtered by a tool.
On the other hand, when it's a tutorial-like piece, the last pass, after I've exhausted everything I know, is to feed the doc to ChatGPT. I describe who the piece is for, what problem we're solving, what the expected outcome is, the tone, and more. Everything that gives ChatGPT context.
Then I ask this one question: What am I missing?
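Put together, that final prompt reads something like this (the subject matter is invented for illustration; the closing question is the real constant):

"This draft is a tutorial for owners of small IT service firms who need to explain backup strategy to non-technical clients. The problem we're solving is that their clients never test their backups. The expected outcome is that a reader can run a simple restore test this week. The tone is plain-spoken and practical. Here's the full draft: [paste draft]. What am I missing?"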
I’ve been amazed at how insightful ChatGPT can be. Do I take all of its suggestions? Absolutely not. Sometimes a subtle change affects my relationship with my audience, and I don’t want that to happen. Other changes are simply unhelpful. But seeing what I’ve missed has been surprisingly valuable.
I’m becoming a better writer.
I've learned to be clearer and more focused in my intent.
Writing prompts is an exercise in problem solving with an endlessly patient sparring partner. ChatGPT will keep the conversation going as long as you’re willing to ask questions. Give it a try.