
My daughter is a college freshman. Last year, like many students her age, she sat down to write her college application essay. She outlined it carefully. She workshopped it with a writer friend of mine and chose, intentionally, not to use AI. She wrote it herself.
She was accepted to her dream school. A few weeks later, I received an email from someone in admissions. They commended her for writing her own essay. She even received a partial scholarship because of it.
What struck me first wasn’t just pride, though there was plenty of that. What struck me was that a young person sitting with her own thoughts, exploring them and shaping them into language, had become notable. Not exceptional, not extraordinary. Simply worth calling out.
That moment clarified something for me: We are losing the craft of being curious. We are beginning to reward the process of thinking because it's becoming rarer.
We live in a time when answers arrive instantly: instructions, explanations, summaries, strategies. Knowledge is abundant and increasingly convincing, even when it's wrong. The danger in accepting those answers is not only the risk of endorsing misinformation, but also of believing something is "right" because it sounds right, without knowing why.
When we skip the journey in order to save time, we lose something subtle and important. We lose connection to our own thinking. We lose judgment. We lose the felt sense of authorship that comes with wrestling with uncertainty and arriving somewhere earned.
AI didn’t create this erosion, but it does accelerate it. And I’m starting to see what that loss looks like at work. It shows up in work that moves fast but feels thin—decisions that are hard to defend once you scratch the surface, teams that produce a lot but struggle to explain why one idea is better than another. In a manufacturing and distribution company with thousands of frontline employees, leaders rolled out AI tools broadly but skipped training. People were producing more but couldn’t explain how they got there, or whether the outputs were actually right. As one executive put it:
"People still need to understand what they're doing at the beginning and end of these processes."
I learned the value of understanding long before AI entered the picture. When I was in art school, my first two years were spent drawing the human body. Bones. Muscles. Proportion. Again and again. We were explicitly told not to stylize. Not yet. We had to understand what was under the skin before we could distort it, improvise and develop a personal style. That discipline didn't slow us down; it made our work stronger.
In many organizations today, the order is reversing. AI makes things easy, so we accelerate output instead of understanding. It smooths the edges and fills the gaps. When we stop asking the next question, when we don't go down the rabbit hole or take things apart to see how they work, we lose the ability to know what good actually looks like.
Generative AI has arrived at exactly the moment when curiosity feels weakest. Attention is fragmented. We’re overstimulated. We’re rewarded for speed and polish. It’s never been easier to skip the messy middle step of actual thinking. And yet, I believe AI might be one of the best tools we have to reignite curiosity.
AI lowers the barrier to experimentation. It lets people try things they never would have attempted before. We’ve seen this most clearly in organizations that treat AI as enablement rather than replacement. In one case, frontline teams were encouraged to use AI simply to surface friction in their day-to-day work—what felt slow, redundant or unnecessary. The point was learning how work actually worked.
But great outcomes are not guaranteed. One leader put it simply in a recent conversation: "AI is an enablement tool, not a replacement." It's not about skipping the work, but about expanding what people can try, test and learn their way into.
If AI is treated like an answer machine, it will collapse the thinking journey. If it's treated like a curiosity amplifier or a thought partner, it can extend it. The difference has little to do with the tool itself and everything to do with how work is designed around it.
Leaders who are truly watching the horizon are paying attention to how people are thinking. They are not just looking for adoption and efficiency; they are looking for ideas and imagination. They ask how an idea came to be and make space for experimentation that doesn't have an immediate payoff.
One leader described it as building space for people to “exercise their fail muscles—low-risk places to try things, fail, and learn.”
The contrast is striking in organizations where experimentation technically exists but scaling is nearly impossible. Several leaders described a growing bottleneck: people can prototype freely, but layers of approval, risk review and governance shut down ideas before they can mature, stalling innovation and, over time, the impulse to experiment at all.
I don’t think the future of work is about choosing between speed and depth. I think it’s about what we’re willing to lose sight of in the name of efficiency. My hope isn’t that AI makes us faster; it already does. It’s that it helps us become more curious again—about how we think, what we build and why.
That won’t happen by accident. It will require leaders to model curiosity, make thinking visible and design work that rewards understanding.
The future of work isn’t something being thrust upon us—it’s being shaped by the choices we make right now. What do we reward? What do we rush? What do we protect?
Curiosity can’t be assumed anymore. It has to be cultivated. And taking time to think has to matter again.

