
Humanity vs Intelligence

Fabio Simões

March 27, 2026


I'm an introvert. I process things internally, reach conclusions on my own, and when I speak it's because I've thought enough to have something to say. For a long time I believed that was the right way to operate. That clarity before opening your mouth was rigor, not withdrawal.

AI showed me what I had forgotten.

The method

In meetings with Nvidia's entire board, Jensen Huang doesn't present decisions. He presents the path that led him there. The CEO of a company worth more than two trillion dollars explains, out loud, the premises he considered, the doubts he had, the trade-offs he weighed. Not as a performance of transparency -- as a method. Because while he exposes his reasoning, the people around him have a window to step in. To say: that premise is wrong. I know that data differently. Your conclusion makes sense, but the path has a problem you're not seeing.

The final decision might be the same. But it arrived tested.

The appearance

I heard that and recognized something: AI has spent the last few years learning exactly the same thing.

In the older versions of language models, the answer appeared instantly. Question, answer. No delay, no visible hesitation, no process. And we didn't trust it. We called it hallucination. We assumed it was guessing. The answer might have been right -- but it felt suspicious. Arbitrary.

What changed wasn't just the quality of the answers. It was the appearance of process.

When models started showing their reasoning -- the intermediate steps, the tested hypotheses, the provisional conclusions before the final answer -- something shifted in the relationship. We started trusting more. Not because the model got smarter. But because it became more readable. You could follow along. You could intervene. You could see where the thinking was going before it got there. You could identify with it.

The swap

And then came the uncomfortable part: while we were waiting for the machine to become more human, we were becoming more machine.

At some point, efficiency became synonymous with direct answers. Showing the process started to look like insecurity, or wasted time. We cut the path short, delivered only the destination, responded on autopilot -- thinking that was clarity. Meanwhile, we demanded from AI exactly what we had stopped doing ourselves: transparency of reasoning, humility of process, willingness to arrive somewhere different than planned.

When someone presents an idea without showing how they got there, we assume they didn't get there -- that they guessed, copied, invented with enough confidence not to be questioned. The speed of the answer, paradoxically, increases distrust. Whoever thinks slowly is rigorous. Whoever answers immediately is improvising.

Jensen Huang knows this. That's why he slows down in meetings.

What AI -- and the CEO of Nvidia -- are teaching is that showing your reasoning isn't a sign of insecurity. It's what makes thinking collaborative. It's what turns a position into an invitation. It's what separates someone defending an idea from someone defending the ego attached to it.

The black box

AI isn't truly transparent. What it shows is a simulation of process -- steps that look logical, reasoning that looks constructed. How it actually arrives at answers is still, largely, a black box. But the simulation is enough. It creates the sense of security that instant answers never did. The appearance of process is already sufficient to change the relationship.

For us, the value is different -- and stranger.

When we expose our reasoning out loud, we're not reporting a thought that was already finished. We're discovering it along the way. Speech isn't the transcript of what was thought -- it's part of the thinking. And sometimes it's only when someone reacts, agrees, questions, that we understand which connections were solid and which were just impressions. The other person isn't an audience. They're part of the process.

The wrong panic

The more technology advances, the more we place value on intelligence. As if it were the most precious attribute that exists -- what separates us, defines us, saves us. And now that AI has become intelligent, the panic is understandable: if the machine does what was always our differentiator, what's left?

But intelligence never built the world.

Humanity did. The compassion that makes us act even without guarantee of outcome. The empathy that reads what wasn't said. The intuition that recognizes something true before any data confirms it. These things include intelligence -- but they're much larger than it. And they're exactly what AI still hasn't reached.

The wrong competition is the intelligence one. We won't win that. We don't need to. What we need is to stop underestimating what has always been ours -- and what got forgotten while we were trying to be more efficient, more direct, faster.

What's left

AI reminded me of something we learn early and keep forgetting: that creativity demands exposure. Not just the destination -- the path. Especially when the path looks strange. Especially when the reasoning doesn't yet justify itself. The willingness to show the thinking before it's ready, to let people in while the idea is still crooked -- that courage isn't a detail of the creative process. It is the process.

And that courage is human. Not in the romantic sense. In the literal sense: only we can be afraid of exposing what we think and do it anyway.
