Discovering Our Thinking in Times of AI
If AI is unavoidable, our thinking will have to embrace it
Image generated with ChatGPT. Prompt: "Create an image about discovering our thinking in times of AI. Do not write any text in the image. It is an uplifting image of what critical thinking, knowledge and AI hallucinations evoke in you."
AI can speed up tedious tasks by automating the analysis and generation of content, delivering the output in whatever medium we prefer (text, audio, or video). People use AI in their work, saving a great amount of time and even gaining access to capabilities that were previously reserved for domain experts. These benefits have created a reinforcing loop: the more tasks we offload to AI, the more we are inclined to offload.
With such a high level of automation, it becomes easier and easier to offload more and more of our mental activity to the AI, trusting its guidance. But AI hallucinations have proven harder to recognise than we think.
The rise of the Internet brought a flood of data, misinformation and fake news. That is only one side of the story, but it is not one to be ignored. Relying on the Internet has given people tremendous power, yet these problems have caused enormous suffering, whose quantity and quality we are still trying to measure. Perhaps we never will, because we have to keep up with the present and its more persistent issues.
Now we are opening the door of our digital spaces to tools that, more or less, all suffer from transparency issues around the data used for training and from questionable handling of users' data.
They are smart tools, claim the AI enthusiasts. It is undeniable that they are useful. However, the difference between smart software and stupidly smart software becomes blurred, or at least harder to detect, in AI systems. When these tools fail, they fail with confidence, and investigating their thinking is sometimes harder than with people (when the tools allow their thinking to be investigated at all).
Confidence is persuasive for humans, leading to misplaced trust in AI capabilities and to taking AI results at face value. This is nothing new if we recall the incidents of people trusting their GPS over what they could see with their own eyes.
What kind of thinking do people need now, in light of these increasingly popular tools? If AI cannot be trusted but cannot be ignored, how should you articulate your own thinking around what AI tools tell you?
Questioning AI-generated answers puts human capabilities on a path of constant analysis, detection and exploration of issues. In essence, it pushes people to adopt many of the practices involved in critical thinking.
Critical thinking is now, more than ever, a required skill for building anything of meaningful complexity with these newly discovered capabilities.
Given the strong imbalance of computational power between a person and an AI program, it is necessary to leverage AI tools themselves to review and control AI-generated content effectively and efficiently. In the end, though, humans need the ability to manage and verify the knowledge AI provides.
Despite all the training data, a human decision is still necessary to handle AI abilities properly. It falls to the human to define what is right. But how can we do that if we cannot keep up with how AI thinks? Our thinking must embrace new ways of handling these new flows of information, these new interactions.
It is ironic that we train these tools on tons of data only to have to retrain them ourselves, right when we think they are already smart.
You never stop learning, indeed.