
Don’t Idolize AI. It Will Undermine Your Own Intelligence.

I heard a great quote from an AI provider this week: “AI has a tendency for hallucination.”

The point was that the speed and authoritative tone with which the likes of ChatGPT answer seemingly any question can often favor panache over truth. This reminded me of my smarter friends. Intelligent people can be tempted into protecting their position, never wanting to look slow or uninformed. So they may not always want the inconvenient matter of truth to thwart their eloquent arguments. We, in turn, grant them immunity and project yet greater talents onto them – thereby cementing their position as all-seeing sages.

Gods, idols, and new technologies all produce this same system failure, and I’m beginning to see it in our latest reverence for AI. While my last post credited AI with some gifts, I don’t attribute all talents to it. In the research business, we know that relying on one source is dangerous, but AI claims to pull from many sources, and in large quantities, so that’s great, right? I think we’re falling for a false idol again, because the first rule of good research is to ask good questions. In research (as in science), starting with too much ‘authority’ can dramatically hinder good results, as it coerces us into adhering to orthodoxy. Good research practitioners demonstrate humility, a permanent quest to be proven wrong, or a ‘beginner’s mind.’

Today’s reverence for ChatGPT reminds me of our fawning over social listening when it was in its ascendancy. The false prophet in that case also involved the sheer size of the dataset. Disciples of social listening loved to hear from massive audiences without asking a single question (let alone follow-up questions). Our temptation to assume truth from impressive numbers felled us there too.

In some cases, AI’s application is little more than a supercharged search engine, dutifully serving up long lists of options for a query. It may be fast at this task, but it’s not ‘thinking’ about those answers. Ultimately, good research still comes back to good questions.

It is this more critical, questioning approach to every task that unearths real pearls of wisdom in any use of AI – not its speed and assumed authoritativeness. So let’s be judicious as we harness GPT-3 and GPT-4. Let’s not deify yet another technology and risk missing out on its real gifts.