Human beings shouldn’t use LLMs to find answers

LLMs are not natural to humans, so we should rethink the dynamics between us and AI models

Adilet Daniiarov

1/8/2024 · 2 min read

Why, with a whole history-of-humanity worth of knowledge in our pockets, do we fail to leverage even a fraction of it?

This is the second part of my series of posts trying to understand the interplay between humans and AI.

LLMs like GPT are immensely complex models that represent an incredibly smart compression of the (digitised) knowledge possessed by humanity.

Why are we so bad at extracting real value from them, even as the medium of communication with these models shifts toward things natural to humans, like voice?

I think part of the reason is that human beings are quite bad at articulating what they want. We suck at putting our inner desire for something into the material world.

Even in simple things. Remember the last time you wanted to order something for lunch? Or pick a restaurant? Or a movie to watch? Or choose your profession? You feel you might enjoy some option with certain characteristics.

You FEEL what you want. You never KNOW what you want.

The food ordering app has all the filters in the world. Google Maps has all the restaurants in your city. You can find a description of any professional career. Yet you still have a tough time making a choice.

As you see, the friction lies not in conveying what we need to AI, but in extracting what we need from ourselves in the first place.

Instead of asking an LLM for an answer, should we let the LLM come up with questions for us that simplify the job of extracting what we really need?
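
Here is what that flip might look like in practice. This is a minimal sketch, assuming the official OpenAI Python SDK; the model name, the elicit helper, and the system prompt wording are my own illustration of the idea, not a prescription.

```python
# A sketch of the "LLM asks the questions" pattern, assuming the
# official OpenAI Python SDK (pip install openai). The model choice
# and prompt wording here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ELICIT_PROMPT = (
    "The user has a vague need. Do not answer yet. "
    "Ask up to three short clarifying questions that help the user "
    "articulate what they actually want, then wait for their replies."
)

def elicit(user_message: str) -> str:
    """Return the model's clarifying questions instead of an answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[
            {"role": "system", "content": ELICIT_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(elicit("I want to order something for lunch but can't decide."))
```

The only real change is in the system prompt: the model is told to hold back its answer and interview you first, which is exactly the shift from answer-machine to question-machine described above.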

Every answer starts with a question. A good question is far more powerful than an answer: it leads to multiple answers, or to an even better question that subsequently leads to an answer far more valuable than the initial knowledge. Or it generates new knowledge altogether.

So overall, it seems a lot of untapped innovation lies in efficiently and effectively understanding human needs and translating them into a medium AI understands.

Very likely there is already a lot of technology that understands human beings better than we understand ourselves. Can you tell whether your pulse is normal? No. Your Apple Watch can, in a matter of seconds, and it even has historical data for comparison. Should an AI ask you about your heart rate? No, it should talk to your Apple Watch instead.

Will we eventually start understanding ourselves better?

Anyone can start now. Try to convert your thoughts and ideas into words. It’s incredibly hard. But you may discover thoughts you had no idea about, like I did while writing this post.