The Flux by Epistemix

Designing for Uncertainty with Mickey McManus

Season 1 Episode 22

In this episode of The Flux, host John Cordier sits down with innovation pioneer Mickey McManus to explore what it means to design for uncertainty in a world where complexity is the norm. Mickey draws on decades of experience, from leading Maya Design and shaping decision-support tools for DARPA to helping build the Future of Learning team at Autodesk, to share insights on how human-centered design, systems thinking, and simulation can empower better decisions when the stakes are high and clarity is scarce.

Together, John and Mickey unpack stories ranging from battlefield strategies in Iraq to fifth-grade prototyping workshops, illustrating how the right tools, environments, and mindsets can turn chaos into creativity. They dive into the importance of creating interaction physics in digital tools, the role of cognitive diversity in teams, and how simulation, both technological and social, builds resilience and foresight.

Whether you're a designer, strategist, educator, or technologist, this episode offers a compelling lens on how to build for adaptability, foster innovation, and empower decision-makers in an age of flux.

Welcome to The Flux, where we hear stories from people who have asked "what if" questions to better understand the world and talk about how data can help tell stories that impact decisions and create an intentional impact on the future. This is your host, John Cordier, CEO at Epistemix. In a world where the flux capacitor from Back to the Future does not yet exist, people have to make difficult decisions without always knowing how the future will play out.

Our guests are people who've taken risks, made decisions when uncertainty was high, and who have assisted decision makers by using data and models. We hope you can turn lessons from our podcast into foresight, so you or your organization can make better decisions and create an intentional impact for others.

Today we’re on with Mickey McManus. Mickey, how’s it going?
It’s going well.
How are you doing?
I’m doing well. A bunch of travel going on as of late, as I’m sure you have too. You’re currently in Boston, headed to Pittsburgh later tonight?
Yeah, a little bit of everything.
Nice.

For those who are tuning in, Mickey and I were able to briefly see each other last week in San Francisco. We bumped into each other at a side event at TechCrunch, where Mickey was the MC. The cool thing about having Mickey on this podcast is that when I first started Epistemix, I was working out of the office space that Mickey had designed for his company, Maya Design. I think it was on our second call that I was sitting in one of the curved rooms with the whiteboard.
Yeah, and I was like, “Are you… I think I recognize that.”
Yeah.

So maybe some background. One of the big things you’re known for is Maya Design. You want to talk about Maya a little bit?
You bet.

A little background: back in 1989, a million years ago, three researchers and professors at Carnegie Mellon founded Maya: Joe Ballay, who was building the design program there, along with Peter Lucas and Jim Morris. Joe built the design program over a number of decades. Some of his students were Tom and David Kelley, who studied at Carnegie Tech and later went on to create IDEO. He was super deep into design: the idea of form and function, and how you create a product, whether it's digital, physical, something you hold in your hands, or a space you're in, that becomes a window through which you reach out and do what you want to do. How the product fades away and becomes a superpower or a prosthetic that extends your reach.

Then there was Dr. Peter Lucas, a cognitive psychologist studying under Herb Simon. You've probably read The Sciences of the Artificial. He also worked with JJ Gibson, one of the founders of cognitive psychology. Peter was really interested in how people think, understand complex systems, and extend their reach from a behavioral or cognitive science standpoint.

Jim Morris came from Xerox PARC, where he worked on the Alto and Star computer systems, precursors to what Steve Jobs later saw and used to develop the Lisa and the Macintosh. Jim really understood engineering and how to put things together.

The company was called Maya, which stood for "Most Advanced Yet Acceptable." The idea came from Raymond Loewy, an industrial designer who came to America and helped invent product and industrial design. You've probably used something he designed: a Coca-Cola bottle, a Studebaker, or maybe you've been on the 20th Century Limited train. He would find wild technology in one domain and bring it to others.

Maya was a two-part idea. One part was to tame complexity and prove that human-centered design in a world of unbounded, malignant complexity could actually be a business. That was the Maya Design consultancy and lab. The second part was to pursue fundamental research into human-information interaction. A few years later, we bought Jim out and he returned to become Dean of Computer Science at Carnegie Mellon, where he helped build the Human-Computer Interaction Institute.

I wasn't one of the founders; I joined in 2001 as CEO and later became one of the owning partners with Pete and Joe. We focused on taming complexity. Part of our business was work-for-hire. As chips were getting embedded in everything and we started heading toward a trillion-node network, all that hidden complexity started to get in everyone's way. Instead of extending people's reach, it made things more difficult.

Another part of our business was deep research into how people make complex decisions. We did a lot of work for DARPA, about 30% of our work. It's often referred to as the Department of Strategic Surprise. We helped them understand how leaders make decisions in noisy, complicated environments where you can't really pin things down. It's always a mix of art and science, trying to level up from data to information, to knowledge, to wisdom.

That work led to building a system that was wildly successful. We sent our people to Iraq and Afghanistan to help deploy it. We sold the company we built around that system, Maya Viz, and it became part of General Dynamics Mission Systems. It's now used by Space Command, the Coast Guard, and for disaster relief. We learned a lot about how to build a system that lets end users, what we called "end-user scripters," build new tools we hadn't even imagined. It was very generative.

John: On our podcast a little while ago, we had a guest who talked about working with decision-makers and said that if you take 100% of the effort, the first 48% is understanding the challenge, the middle 4% is the modeling or engineering work, and the last 48% is building consensus around the strategy. What's your take on that? Does that framework fit the way you worked with end users, the people extending their reach and gaining those superpowers?

Mickey: That’s an interesting way to frame it. I think there are two parts to my answer.

A lot of the time, you're building a mental model of what's happening out there. We worked with retired generals and commanders. One general, for example, was responsible for taking the Green Zone in Iraq, the palace area where Saddam was supposedly keeping weapons of mass destruction. He told his commanders, "We've heard about the Republican Guard; it's going to be complicated. Just drive in and see what kind of resistance we're facing. Get in, get out. I just want to see if there's a path to get a supply line into the Green Zone."

There were theories that it would take months, even years. People worried about dirty bombs, sarin gas, who knows what. But the commander drove in, drove out. Then a little further in, and back out. Then all the way through and out another way. Eventually, they were all over the Green Zone and weren’t meeting resistance. They passed the palace, and someone said, “Can we take the palace? Can we keep it?”

The general was expecting a months-long siege, but overnight, they had the palace. Sometimes you've got to poke the hornet's nest to see if there are any hornets home. You have a mental model of what the challenge is, but until you act, you don't really know. That's where the OODA loop comes in: Observe, Orient, Decide, Act. If you can act faster than your adversary can process what's happening, you gain the advantage.

We saw this a lot. You need to poke the system to see what’s really going on and iterate faster than your competitors, whether that’s another army or a global pandemic.
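To make the tempo idea concrete, here is a toy sketch in Python, entirely an illustration rather than anything from the CPOF work itself: it just counts how many complete decision loops each side finishes in the same window, which is the advantage Mickey is describing.

```python
# Toy sketch of OODA tempo: the side with the shorter cycle completes more
# Observe-Orient-Decide-Act loops in the same window, so it is always acting
# on fresher information than its slower adversary.

def tempo_advantage(fast_cycle=1, slow_cycle=3, horizon=12):
    fast_loops = sum(1 for t in range(horizon) if t % fast_cycle == 0)
    slow_loops = sum(1 for t in range(horizon) if t % slow_cycle == 0)
    return fast_loops, slow_loops

fast, slow = tempo_advantage()
print(f"faster side completes {fast} loops to the slower side's {slow}")
```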

There's also the difference between planning and the plan. You can have a deliberative plan for the next pandemic: how you'll coordinate with mayors, law enforcement, hospitals. But when the real thing hits, you'll have to adapt. It's helpful to simulate those plans in advance. Fred Friendly Seminars did something like this on PBS back in the '70s. They'd start an episode with a scenario like, "There's an outbreak in New Jersey. What do you do?" Then real leaders, mayors, police chiefs, hospital heads, would role-play responses. Just the act of simulating helped expose norms and stress points.

Those simulations help build muscle memory. You can put a plan on a shelf and adapt it when the real thing happens. Optionality matters. We built an AI system with SRI, who also made Siri, for a DARPA program called Command Post of the Future. It had an AI agent named Mechanical Murph, Murph for Murphy's Law. Its job was to break things in the simulation. Because even in simulations, things need to go wrong.

If I take a step back, one of the most profound things we learned was this: no complex system starts as a complex system. It starts simple and evolves under pressure. We simulated an entire year every six weeks. Monday was January. Wednesday was June. Thursday was November. We tested 24 hours a day, and the system was always under evolutionary pressure.

We had people behind the scenes doing Wizard of Oz simulations, pulling the levers behind the curtain. We tested it to failure, then rebuilt it. We called it the "double helix": one strand was business doctrine, the other was technological potential. We pulled in ideas from CMU labs like SAGE and Visage. But it was grounded in real needs.

One commander said, "I can tune in to all my subordinates' desktops, but they can't see what I'm doing." That sparked a conversation. Half the commanders agreed to share their desktops for a test. The teams with "top sight," the ones who could see their commanders' work, won twice as many battles.

They could say, “He’s looking at ambulances. Why?” And maybe the commander was worried about IEDs. Ambulances can get places other vehicles can’t. So someone goes, finds license plate numbers, and shares them. That kind of insight allowed better collaboration.

We made the system more of a slippery slope toward sharing, not an all-or-nothing switch. And we tested new features every six weeks. Special ops teams would sit behind the glass and say, “Here’s how the regular Army does it. We do it differently.” We’d say, “Okay, let’s try it in the next test.”

General Chiarelli came to one of our simulations and said, "That's better than the billion-dollar system we have now. Can I deploy it in three months?" We said, "It's made of chewing gum and duct tape." He said, "I want it in three months anyway." So we locked people in a room, shoved pizzas under the door, hired QA people, and shipped it.

It was never turned off. It became a program of record. And because we made it easy for end-user scripters, people like Bob in accounting, we enabled innovation. Users could build tools in minutes. Those tools could be shared and reused. We made it all information-centric. If you shook something on the map, it shook on the timeline and budget, too. One version of truth.

John: When you’re building systems like this, there’s the planning phase, but also the chaos that comes after first contact. For people who might not be technical, how do you make tools usable in the moment?

Mickey: A really good tool feels like an extension of your body. A hammer has two ends; you know how to use it without a manual. But when you add features and hide them in menus, things get magical instead of physical. You lose interaction physics.

If you ask 10 people how to make one page in a Word doc landscape, you’ll get 10 different voodoo answers. That’s magic. Physics is predictable. Like gravity. You spill coffee, it falls. You can’t patch gravity. That predictability is crucial.

In our systems, we created a few core physics. Like, if you shake something on a map, it also shakes on the timeline. If you visualize it differently, you don’t lose the data. We stuck to those rules, even when tempted to break them for convenience.

Think of a glass of water and an iPad. I can stack one on the other. The designers didn’t plan that, but it works because physics doesn’t change. That’s what we aimed for. Solid, consistent interaction rules that users can trust and build on creatively.
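A minimal sketch of that "one version of truth" idea, assuming nothing about how MAYA actually implemented it: a single shared record that every view subscribes to, so a change made anywhere shows up everywhere it is visualized. The class and field names below are invented for illustration.

```python
# One information-centric record, many views: updating the record once
# re-renders every visualization that subscribes to it.

class Record:
    def __init__(self, **fields):
        self._fields = dict(fields)
        self._views = []

    def attach(self, view):
        self._views.append(view)
        view.render(self._fields)

    def update(self, **changes):          # the single place where data changes
        self._fields.update(changes)
        for view in self._views:          # every visualization sees the change
            view.render(self._fields)

class MapView:
    def render(self, f):
        print(f"[map]      {f['name']} at {f['location']}")

class TimelineView:
    def render(self, f):
        print(f"[timeline] {f['name']} due {f['due']}")

convoy = Record(name="Supply convoy", location="Route A", due="06:00")
convoy.attach(MapView())
convoy.attach(TimelineView())
convoy.update(location="Route B")   # "shake" it on the map; the timeline sees it too
```

Running it, both views render when they attach and again when the record changes, which is the behavior Mickey describes as shaking something on the map and watching it move on the timeline.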

Start with an appliance: push button, get toast. But let users pull off the cover and customize it. Make the complexity optional.

John: That’s helpful. Earlier, you mentioned the Future of Learning Team at Autodesk. When you were there, did you apply those same physics principles to inspire creativity?

Mickey: Completely. At Autodesk, I was a Fellow in the CTO's office and helped build the Future of Learning team. The CEO at the time, Carl Bass, decided to give away all our software to students, high school and college. Now, something like 15 million kids, six years old and up, use Tinkercad. But they can also use the same tools James Cameron used to make Avatar, or that Skidmore, Owings & Merrill and Arup use to design major buildings.

So now students have access to these powerful tools, but how do we help them use them effectively? We built a learning engine. Say you're building a coffee cup in a CAD tool. Where would you start?

John: I might start with the base. Maybe the shape?

Mickey: Exactly. You'd draw a sketch, maybe a flower shape. You'd set the diameter, say, two and a half inches. Then what?

John: I’d extrude or loft the sides to make a cylinder.

Mickey: Right. Maybe you'd loft it from the flower shape up to a circle at the top. Maybe it's five and a half inches tall. I can watch you use the sketch command, mirror, repeat, loft. Maybe now you realize it's solid; you need to hollow it out. You copy, shrink, subtract. That's a Boolean operation.

Then you chamfer the edge to round it off. I can see all that. I can see if you used presets or adjusted parameters. If you export for 3D printing, I know you made a physical part. So now I know what you can do, not just what's written on a resume.

That's what we built. If you set your goals, like "I want to become a product designer," it generates pathways to get there. You're two skills away from unlocking five more jobs? Here are five projects to do that. Here are teammates who know those tools, and you can subscribe to their work. We called it the Learning Engine. Parts of it are now rolling out in Autodesk tools.
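Here is a hypothetical sketch of that idea in Python: infer which skills a user has demonstrated from the commands they actually ran, then compare against a goal role to suggest what to learn next. The command, skill, and role names are made up for illustration; the actual Learning Engine is certainly richer than this.

```python
# Hypothetical skill inference: map observed commands to demonstrated skills,
# then compute the gap between what a user has shown and a goal role.

COMMAND_TO_SKILL = {
    "sketch": "2D sketching", "loft": "surface modeling",
    "boolean_subtract": "solid modeling", "chamfer": "detailing",
    "export_stl": "3D printing prep",
}

ROLE_REQUIREMENTS = {
    "product designer": {"2D sketching", "surface modeling", "solid modeling",
                         "detailing", "3D printing prep", "rendering"},
}

def demonstrated_skills(command_log):
    return {COMMAND_TO_SKILL[c] for c in command_log if c in COMMAND_TO_SKILL}

def skill_gap(command_log, goal):
    return ROLE_REQUIREMENTS[goal] - demonstrated_skills(command_log)

log = ["sketch", "loft", "boolean_subtract", "chamfer", "export_stl"]
print("still to learn for 'product designer':", skill_gap(log, "product designer"))
```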

John: Fascinating. So part of this is inspiring creativity, getting people to go, "Whoa, I made this!" But last week at TechCrunch, every company was suddenly an AI company. Do you think these large language models actually help with creativity? Or are they just trendy?

Mickey: I have strong opinions, because we've been doing generative design and learning for a long time. Tools like Dreamcatcher were about encoding goals, constraints, and obstacles, then using high-dimensional models to explore the design space and propose options.

Right now, I see two metaphors for AI. First: left brain vs. right brain. The left brain is the analytical side: classic machine learning for predictions, automation, optimization. We've had that for a while.

The right brain came into the picture in 2017 with the paper "Attention Is All You Need," which introduced transformers. That led to large language models. It allowed you to look at how every word relates to every other word, efficiently and in parallel, across massive data.
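For readers who want the mechanics, here is a minimal sketch of the scaled dot-product attention at the heart of that paper: every token's query is scored against every other token's key, in parallel, and those scores decide how much of each value gets blended in. The tiny shapes are just for illustration.

```python
# Minimal scaled dot-product attention: each word attends to every other word.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise word-to-word relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V                              # weighted mix of the values

rng = np.random.default_rng(0)
tokens, d_model = 4, 8                              # e.g. a 4-word sentence
Q, K, V = (rng.normal(size=(tokens, d_model)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8): one mixed vector per word
```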

Now we've got foundation models, frontier models. Language, images, code: they're all being treated as languages. So when you prompt a model to generate a cubist painting of dogs playing poker, it navigates this vast knowledge graph through Picasso, through poker, through kitschy black velvet paintings of Elvis. It diffuses something new. That's more like the right brain: associative, generative.

The second metaphor is Thinking, Fast and Slow, Kahneman's work. The left brain is slow thinking: symbolic reasoning, rules, logic. The right brain is fast: intuitive, creative. The most exciting models combine both. Like AlphaFold, which won the Nobel. It used both fast diffusion models and deep symbolic representations to figure out how proteins fold, solving a decades-old challenge.

So yes, this will unlock creativity. And business models. Because what used to take ten years of training, writing code, doing design, anyone can now try. We've democratized that. It doesn't matter if OpenAI shuts down tomorrow. Or Google pauses Gemini. The creative genie is out of the bottle. For the next 30 or 40 years, we're going to see people do astonishing things.

John: If we fast-forward 30 years, what do you think is the next frontier? Once all these tools become accessible, where will humans take them?

Mickey: One area is cognitive management. We've built tools for time management: watches, calendars, alerts. But we have no cognitive management tools. Nothing that says, "John, it's 4 p.m. in Boston. Your decision-making ability is tanking. Maybe you shouldn't schedule this meeting now."

What if you walk into a meeting, and the room senses everyone’s depleted and says, “You guys should be walking in a forest right now”?

We're strip-mining cognition. We treat attention as a resource to be exploited. But cognition should be a human right. We need tools that are personalized; my brain works differently from yours. One of my interns at Cornell, Netta, is working on this. She has ADHD and dyslexia but had to read 400 research papers. She does it in the morning, when she's fresh. I, on the other hand, love reading at night. We're totally different.

We also need better simulations. Every person now has the power to change the world or destroy it. We need sandboxes, like SimCity, where we can model systems. We need to explore non-linear consequences. There’s a huge opportunity for individual and system-level simulations. I’d love to see a flourishing of SimCity 2.0-type tools.
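As a small illustration of what a system-level sandbox can reveal, here is a toy contagion model in Python, a sketch of my own rather than an Epistemix model: a modest change in the contact rate flips the outcome from a fizzle to a large outbreak, exactly the kind of non-linear consequence worth exploring in simulation.

```python
# Toy SIR-style model: small parameter changes produce non-linear outcomes.

def epidemic_size(beta, gamma=0.2, days=200, population=10_000, seed=10):
    s, i, r = population - seed, seed, 0
    for _ in range(days):
        new_infections = beta * s * i / population   # contacts that transmit
        recoveries = gamma * i                       # infected people recovering
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
    return round(r)

for beta in (0.18, 0.22, 0.30, 0.40):
    print(f"contact rate {beta}: ~{epidemic_size(beta)} total infected")
```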

John: That brings us to our final two questions. First, we ask everyone to describe a "What the Flux" moment: something in your life or history where you look back and go, "If only we knew then what we know now."

Mickey: I was reading Building SimCity, which might be the best book I’ve read recently. It tells the story of a teacher working with kids in L.A. to build cardboard towns and learn about urban development. She got invited to run a simulation on the East Coast. On the plane, she bumps into her brother Frank. Frank Gehry. The organizers didn’t know they were siblings. He was invited as an architect, she as an educator. It’s this beautiful moment of systems design, education, and creativity coming together.

I’d love to go back to that moment and pull energy from it. What if we had taken systems thinking seriously and taught it to six-year-olds, like she did?

At Maya, we created LUMA Institute, which has taught over 100,000 people how to put people first using human-centered design. Every summer, we volunteered to teach it to inner-city fifth and sixth graders in towns like Elizabethtown, PA. These kids would learn how to identify real problems in their community, then design solutions. They'd sand wood, build prototypes, test with real people.

At the start of the week, they’d be unsure. By the end, they’d say things like, “You let us make stuff.” That broke my heart. Because in our system, we beat creativity out of kids. We treat them like passive consumers. And then when they’re 35, we expect them to suddenly be innovative at work.

That’s my What the Flux moment. I’d go back and change how we raise and educate kids. We need to let them build.

John: That’s a perfect note. Final question: for someone in high school or undergrad who wants to get into data-driven decision-making, build SimCity 2.0, or work in decision science where should they start?

Mickey: I’d recommend two things: FIRST Robotics and iGEM.

FIRST Robotics was created by Dean Kamen and Woody Flowers. It introduces systems thinking through hands-on robotics competitions. There’s Lego League for younger kids, and more advanced competitions in high school.

iGEM, the International Genetically Engineered Machine competition, came out of MIT and focuses on bioengineering. I attended the iGEM Grand Jamboree in Paris a couple of years ago. Inner-city kids from Puerto Rico were engineering E. coli to break down explosive materials in polluted lagoons caused by past military testing. The winning team was from a public school in Atlanta. They built a diagnostic tool to detect precursors to heart attacks in blood samples.

You can't build complex organisms or robots without systems thinking. Nature is the most glorious system of all. And we still don't understand it. Every day we discover genes aren't what we thought. Some are made of RNA, some create dozens of proteins, some modulate behavior in strange ways. The old dogma, DNA to RNA to protein, is outdated.

It’s humbling. And that’s the point. Choose something with glorious complexity and dive in. It will force you to grow.

John: That’s awesome. Mickey, thank you so much for taking the time to jump on the podcast.

Mickey: Thanks for having me.

John: I think this is an important one for younger folks, definitely, but also for organizations trying to foster creativity. Sometimes you’ve just got to let people build things.

Mickey: Yeah. Final note: three book recommendations. One is Building SimCity. You already ordered it. Second is The Nature of Technology by W. Brian Arthur, from the Santa Fe Institute. Third is Scale by Geoffrey West, also from Santa Fe.

Scale looks at hidden scaling laws in cities, organisms, and organizations. The Nature of Technology breaks down what we mean by “technology” and how it evolves. All three are about systems and they’re profound in different ways.

John: Love it. We’ll put links in the transcript. Thanks again, Mickey.

Mickey: Thanks for having me.
