Zero-gravity leadership in the Age of AI

March 18, 2026 · 8 min read
Mark Newhouse, Guest Author

Leadership is being reshaped as AI rewrites the conditions under which organizations operate. Grounded in conversations with senior executives and informed by SYPartners’ research and client work, this perspective surfaces the capabilities leaders must now build to navigate change. The full point of view is available as a downloadable report.

When NASA began sending humans beyond Earth’s atmosphere, it quickly discovered that excellence under terrestrial conditions did not automatically transfer to orbit in zero gravity. Astronauts who relied on what “felt right” became disoriented. In space, instinct is no longer an authority. Orientation must be relearned, and confidence must be recalibrated.

The situation NASA faced is not unlike the one facing organizations in the age of AI.

For decades, leaders have operated under relatively stable physics. Information was scarce and expensive to synthesize. Expertise accumulated slowly through experience. Roles and workflows were structured and linear. Strategy unfolded over long cycles. Human capability defined the upper bound of output. AI alters those conditions. Information is now abundant and synthesized instantly. Tasks can be decomposed and run in parallel. Quality and cost curves can shift simultaneously. Experience-based authority competes with probabilistic outputs generated in seconds. And yet, most leadership models still assume traditional physics.

The risk is subtle. Much of what has made leaders successful to this point remains relevant. Strategic clarity, decisiveness, accountability and ethical grounding—none of that disappears. But in an AI-shaped environment, those same strengths can become counterproductive if applied reflexively.

  • Decisiveness without experimentation can lead to premature scaling.
  • Confidence without questioning can encourage overtrust in flawed outputs.
  • Planning without optionality can yield rigidity in a probabilistic system.
  • Delegation without firsthand engagement can create blind spots in rapidly evolving workflows.

In SYPartners' experience, the competencies that follow are the ones leaders must master so their organizations not only endure the zero-gravity conditions of AI, but thrive in them.

Apollo 11 astronaut Edwin Aldrin prepares for weightless conditions / NASA

From Creative Problem-Solving to Radical Imagination

Because AI alters the underlying rules that govern how professions, businesses, and markets operate, leaders must act with Radical Imagination: detecting rules and conventions so entrenched they are largely invisible, exposing and challenging them, and filling the expanded spaces in their wake with previously inconceivable ideas.

Radical Imagination is also a matter of scale. Leaders must be able to think in terms of exponential change rather than incremental improvement—the difference between 10% gains and 10× shifts in speed, precision, reach or cost. Doing so requires a fundamentally different frame for opportunity and risk.

Radical Imagination is not only strategic or technical. It is also deeply human. As AI reshapes the nature of work, leaders must help people make sense not only of changing roles but of new identities. Leaders will need to help their people grapple with what it means to be a programmer, researcher or manager—and create new ways of contributing and growing when the bulk of one’s time and talent is suddenly available to be directed in new ways.

Crucially, Radical Imagination does not mean reinventing everything. The strongest leaders are both imaginative about what needs to change and clear about what must be sustained. They understand the difference between what is durable and what is perishable, keeping current strengths and interests front of mind in a rush toward the unknown.

Developing Radical Imagination

Leaders with the skill of Radical Imagination often adopt the following practices:

  • Constraint-surfacing — Challenging implicit assumptions and norms to expand solution spaces.
  • Futurecasting — Starting from a possible future and reasoning backward to understand what would need to be true to reach it.
  • Scale reframing — Shifting mindsets from incremental improvement to exponential possibility and risk.

Jet shoe experiment on air bearing facility at NASA Langley Research Center / NASA

From Periodic Pilots to Continuous Experimentation

In the age of AI, what technologies can do is revealed through experience. Models evolve, capabilities shift, and interactions between humans and machines produce second- and third-order effects that cannot be fully anticipated. In this environment, leaders cannot rely solely on analysis, expertise or precedent.

The capacity to experiment becomes essential—both at the individual and enterprise levels. Learning through testing becomes more important because it is the primary means of managing the tension between speed and risk.

In AI-enabled environments, the temptation is either to rush forward on the basis of possibility or to hesitate in the absence of certainty. Experimentation enables leaders to avoid both extremes. They use experiments to move quickly and responsibly—gaining evidence, integrating insight and adjusting direction as new information emerges.

They also do this personally. Leaders experiment themselves as hackers and creators, discovering through direct engagement the possibilities and potential pitfalls that inform their vision and the goals they establish for others.

At the same time, they make experimentation legitimate for the organization. They signal what kinds of risk are acceptable, reward learning rather than just outcomes, and ensure insight is integrated rather than fragmented or localized. Without this leadership, experimentation either stalls or proliferates without coherence.

Developing Continuous Experimentation

Leaders capable of Continuous Experimentation often build the following practices:

  • Open-ended discovery — Exploring tools and technologies without predefined use cases to reveal novel capabilities.
  • Hypothesis-driven planning — Defining work in terms of unknowns to be tested rather than tasks to be completed.
  • Experimentation engines — Enabling experimentation as an ongoing way of working.
  • Proof-point scaling — Embedding solutions validated in testing into large-scale workflows and systems.

The flight directors' console in the Mission Control Center in Houston during the Gemini 5 flight / NASA

From Planned Execution to Dynamic Orchestration

In the age of AI, value is rarely created within neat boxes. It emerges at the seams between workflows, technologies, teams and partners across the organization and beyond. AI accelerates this by automating handoffs, unbundling jobs into tasks, and enabling work to move in parallel rather than sequence.

Managing work in this context is not about enforcing coordination against a fixed plan. It is about dynamically shaping how work flows—less like conducting a prewritten piece of music and more like leading a jazz ensemble: establishing shared principles, tempo and direction while allowing for improvisation, emergence and real-time recombination as conditions change.

Dynamic Orchestration is a leader’s ability to design and manage multithreaded work: breaking complex initiatives into modular components, distributing those components across functions, teams and machines, enabling multiple work streams to progress simultaneously, and integrating outputs and learnings from parallel efforts in real time.

Because capacity, expertise, and decision-making authority are increasingly distributed, Dynamic Orchestration often depends on the ability to shape outcomes without strong formal control. Leaders must increasingly manage by principle rather than protocol, mobilizing alignment through shared intent, trust and clarity of outcomes rather than positional power.

Developing Dynamic Orchestration

Leaders who thrive at Dynamic Orchestration often adopt practices such as:

  • Outcome-based design — Planning work and workflows by starting from intended outcomes rather than inherited processes.
  • Orchestrated operations — Actively running work across multiple concurrent streams, ensuring people and AI systems are coordinated in real time as work unfolds.
  • Coalition activation — Mobilizing peers and partners to act in concert, often without formal mandate.

Apollo 11 astronaut Neil Armstrong, the first to walk on the Moon / NASA

From Formal Decision-Making to Proactive Discernment

In the age of AI, judgment is no longer concentrated in formal decision-making moments. It is embedded in the flow of work itself. Systems now generate analysis, draft communications, recommend actions, simulate scenarios and act through agents—all requiring regular assessment and decision-making.

Proactive Discernment is about interrogating how work is shaped. This is less like approving a final recommendation and more like continuously calibrating direction as conditions evolve. The leader’s role is not simply to make the call, but to question inputs, test outputs, define boundaries and ensure that intelligence—human or artificial—is applied responsibly and coherently.

Proactive Discernment needs to be present in many aspects of a leader’s work. Three in particular are especially important:

  • Calibrating the contributions between humans and AI. Determining what should be delegated, what requires uniquely human judgment, when autonomy is appropriate and when escalation is necessary.
  • Assessing information sufficiency and accuracy. Evaluating the quality and potential bias of source data, the reliability and usability of outputs, and whether additional analysis adds clarity or simply creates noise.
  • Judging for quality and originality. Interrogating whether AI outputs reflect genuine creative intent, align with strategic objectives, or meet standards of craft and distinctiveness.

Because capability now outpaces experience, effective discernment cannot reside only at the top. Leaders must model rigorous engagement, design workflows that embed oversight and cultivate judgment in others, especially earlier-career talent. Proactive discernment becomes less a private virtue and more an organizational discipline.

Developing Proactive Discernment

Leaders who want to further develop Proactive Discernment can build the following practices:

  • Strategic questioning — Intentionally framing high-leverage questions to guide work from analysis to judgment.
  • Output interrogation — Critically examining outputs for bias, reliability and contextual fit.
  • Delegation-boundary setting — Regularly assessing what should be handled by humans versus machines.
  • Strategic-taste cultivation — Sharpening one's sense of quality and distinctiveness and applying it to the critical choices facing the business.

This article is an abridged version of a paper that emerged largely from conversations with senior executives across the SYPartners network. These discussions explored how AI is reshaping work, value creation, and organizational performance, and how those changes are redefining what’s required of leaders. The perspective is also informed by our ongoing signal gathering, client work, and internal reflection.

Written by Mark Newhouse, with substantial contributions from Nikki Cicerani, Kendra Cooke, Takuo Fukuda, Jonathan Kerrs, Nicolas Maitret, and Andrew Vaterlaus-Staby. Additional perspectives were provided by Jason Baer, Sabrina Clark, and Alberto Means.

Mark Newhouse is a Partner in the New York office of SYPartners.
