Proposing a new theory on Human-AI interaction, born out of my frustration with AI coding tools. Every time an AI writes code, I must perform two tasks:
- (A) Comprehension: understand the solution
- (B) Validation: simulate or reason over the solution to make sure it is correct
When collaborating with humans, either in person or on online forums, (B) Validation is solved by trust, long before I start on (A) Comprehension. This happens when we choose to talk to the experts instead of the newbies on the team. It also happens when we rank StackOverflow answers by peer approval and read the one with the green checkmark. I see it as a societal-level strategy for conserving energy, and probably the Darwinian explanation of trust.
But when collaborating with AI, (B) Validation must be performed manually each time, and it often requires (A) Comprehension upfront. So I often end up wasting a lot of time understanding an advanced solution that turns out to be wrong.
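Here is a minimal sketch of one way I can sometimes decouple the two, assuming the task lends itself to differential testing: check the AI's clever implementation against a naive baseline I fully understand, on random inputs. The names `ai_top_k` and `naive_top_k` are hypothetical stand-ins, not output from any particular tool.

```python
import heapq
import random

def ai_top_k(nums, k):
    # Stand-in for the clever, harder-to-read code the AI produced.
    return heapq.nlargest(k, nums)

def naive_top_k(nums, k):
    # Baseline I fully understand: sort descending, take the first k.
    return sorted(nums, reverse=True)[:k]

def spot_check(trials=1000):
    # Validate the AI's version by agreement with the naive one,
    # without reading the AI's version line by line.
    for _ in range(trials):
        nums = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        k = random.randint(0, len(nums))
        assert ai_top_k(nums, k) == naive_top_k(nums, k), (nums, k)
    print(f"{trials} random cases agree with the naive baseline")

if __name__ == "__main__":
    spot_check()
```

Of course this only works when a trivially correct reference exists, and it defers comprehension rather than eliminating it, which is part of why the fatigue persists.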
I want to call this problem "AI fatigue".
In the world of human-AI interactions, "recognition over recall" is increasingly difficult to achieve - how can we recognize a solution without understanding it first? If AI is operating at an intelligence level approaching our own, understanding becomes the bottleneck. I refuse to call this the "singularity" because the solution from AI is still based on data distilled from human-level intelligence. Fully understanding the solution should still be within reach for us, though the equitable distribution of that power of understanding remains a topic for another day.
Have you experienced AI fatigue? Any ideas on how to solve this? Is it possible to verify a solution without fully understanding it? Doesn't P ≠ NP suggest that verifying can be easier than solving? Will AI eventually output 42, stupefying the best of us? How will evolution respond to the cognitive pressure imposed by AI? Let me know your thoughts.