
The Flux by Epistemix
Welcome to The Flux - where we talk data, decisions, and stories of people asking the what-if questions to create an intentional impact on the future.
The Tipping Point for Agent-Based Modeling with Rob Axtell
In this episode of The Flux, John Cordier interviews Rob Axtell from
George Mason University, where he leads the largest graduate program
in agent-based modeling (ABM) globally. Axtell shares his journey into
complex systems modeling and how the field has evolved since the
1990s. He explains how George Mason’s Ph.D. program in
Computational Social Science is shaping the next generation of experts
who go on to roles in government, research, and the private sector.
They discuss the power of agent-based models to simulate real-world
dynamics, from consumer behavior to macroeconomics, highlighting the
increasing availability of data and computing power that allows ABM to
compete with traditional models used by institutions like central banks.
Axtell emphasizes the need for more empirical grounding in ABM and
the potential to build large-scale, highly detailed models, including the
exciting possibility of simulating entire economies.
Axtell also touches on the importance of modeling social complexity at
the individual level, the challenges of past limitations in data, and the
unique potential of ABM to provide a more accurate picture of systems
like financial markets.
For those new to the field, Axtell offers practical advice on getting
started, emphasizing the value of tools like NetLogo as a gateway to
ABM. Whether you're a student, researcher, or data enthusiast, this
episode provides a deep dive into the cutting-edge applications of ABM
and its future impact.
00:00 Welcome to The Flux Podcast
00:18 Meet Rob Axtell: Expert in Agent-Based Simulation
01:07 Overview of George Mason's Computational Social Science Program
01:45 Career Paths for Graduates
03:34 Rob Axtell's Journey into Agent-Based Modeling
05:58 The Evolution and Impact of Agent-Based Models
08:37 Applications and Future of Agent-Based Modeling
11:35 Challenges and Opportunities in Agent-Based Modeling
14:06 The Importance of High-Fidelity Models
16:31 Policy Implications and Real-World Applications
29:41 Technical Advances and Future Directions
36:44 Advice for Aspiring Agent-Based Modelers
39:09 Conclusion and Final Thoughts
John: Welcome to the next episode of The Flux, where we talk about data, decisions, and the stories of people asking "what if" questions to better understand the world and create an intentional impact on the future. As always, we hope you can turn hindsight and lessons from our guests into foresight, so you can uncover how to make better decisions and create an intentional impact on the future. Today, we're on with Rob Axtell of George Mason University, where he runs the largest graduate program in agent-based simulation on the planet and gets to work on some really cool stuff. Rob, glad to have you on today.
Rob: Thanks for the introduction, John. Happy to be here talking to you guys about ABM and other things. As John said, at George Mason, we've been running a Ph.D. program in computational social science for about 15 years. One of the most important aspects of that is agent-based modeling and its related areas like social networks. At any given time, we have about 50 Ph.D. students, and over the last 15 years, we've graduated close to 100 students with degrees in CSS.
Let me just add that at George Mason, we also have the Center for Social Complexity, which is the research arm supporting our educational programs. The Center has a more theoretical focus and does a lot of big-picture research. For students interested in policy, we also have a computational policy lab that works on small-scale government projects or exploratory one-off studies. So between the research centers and the degree program, that's the work we do at George Mason.
John: What do most of the students go into after they complete one of those programs? Is it mainly government work, or are you seeing a mix of government and commercial opportunities lately?
Rob: That's a great question, John. Over time, it ebbs and flows. Of the 50 Ph.D. students, a non-trivial fraction, say 15 to 20, are part-time students who work somewhere in Washington, maybe at one of the big agencies or for DOD. We also have several military officers who come through our program and go on to teach at the Naval Postgraduate School, the Naval Academy, or similar places.
Part-time students typically return to their home institutions. But for the full-time students, it's been a fairly stable mix over time. Roughly one-third go into government roles, either directly or as contractors. About one-third move into research roles, whether in academia or at think tanks like the RAND Corporation or MITRE. Then about one-third go into the private sector, including startups focused on things like social networks or machine learning. So it's a pretty even split: one-third government, one-third research, and one-third private sector.
John: Before we dive deeper into the program, complex social systems, agent-based modeling, and some things you're excited about, do you have a personal story about how you got into agent-based simulation or modeling complex systems?
Rob: I've been around so long that I was there a little bit at the founding of ABM. Maybe the more relevant question is how I got into the study of complex systems. There's a very clear answer to that. In the late '80s, I was a natural scientist with an interest in policy and economics. When the microcomputer revolution happened and we all migrated from mainframe Fortran compilers, I got very excited about the burgeoning work on nonlinear dynamics, chaos, the Mandelbrot set, and all the cool graphics happening then.
That led me to the Santa Fe Institute. As a graduate student around 1990, while at Carnegie Mellon, I attended SFI's first Winter School on Complex Systems. I think it might have been the only Winter School because they didn't have a building yet, and it was held at the University of Arizona in January. It was a fantastic session. Murray Gell-Mann spoke about scaling laws for cities, Jeff West talked about quantum field theory, and other founders of complexity like Mitch Feigenbaum and Michael Fisher gave great talks.
From there, after finishing my Ph.D. in '92, I went to Brookings, where Josh Epstein and I began working together. We thought we were doing artificial life for social science: modeling social systems the way artificial life was being applied to biology. In fact, our first model, the Sugarscape model, was originally just called "artificial social life" because we thought there would only need to be one model. We didn't realize there would be many different models you'd want to build. That was the beginning for me with computing and ABM. Back then, we didn't even have the term "agent-based modeling"; we called it artificial life-type modeling.
John: When you see students come to the program, are there common models that give them their "aha" moments? Since the early 1990s, what have you seen get people into agent-based simulation or complex systems modeling?
Rob: Yeah, there's definitely a subset who have read the Sugarscape book and see how they could apply those ideas to their own interests. Sugarscape serves as a point of departure, not because it gives them the answers, but because they can see how to use the ideas.
The Schelling segregation model is also very foundational. It helps people think about why populations aren't homogeneously distributed across space or time, which draws students interested in inequality issues. Josh and I also did work on the spontaneous formation of classes, which is along similar lines.
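The Schelling dynamic is simple enough to sketch in a few lines. Below is a minimal, hypothetical Python version (the grid size, density, and 30% tolerance threshold are illustrative choices, not parameters from the episode): agents of two types sit on a grid and relocate to a random empty cell whenever too few of their neighbors share their type.

```python
import random

def schelling_step(grid, size, threshold):
    """Move each unhappy agent to a random empty cell.

    grid maps (x, y) -> 0 or 1 (the agent's type); coordinates absent
    from the dict are empty. An agent is unhappy when fewer than
    `threshold` of its occupied neighbors share its type.
    """
    empties = [(x, y) for x in range(size) for y in range(size)
               if (x, y) not in grid]
    moved = 0
    for (x, y), kind in list(grid.items()):
        neighbors = [grid[(x + dx, y + dy)]
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
        if neighbors and sum(n == kind for n in neighbors) / len(neighbors) < threshold:
            dest = random.choice(empties)
            empties.remove(dest)
            empties.append((x, y))
            del grid[(x, y)]
            grid[dest] = kind
            moved += 1
    return moved

random.seed(0)
size = 20
# Fill roughly 80% of the grid with agents of two types.
grid = {(x, y): random.randint(0, 1)
        for x in range(size) for y in range(size) if random.random() < 0.8}
for _ in range(50):
    if schelling_step(grid, size, threshold=0.3) == 0:
        break  # nobody wants to move; clustered neighborhoods have emerged
```

The punchline, which is why the model resonates with students, is that even this mild preference (agents tolerate being a local minority down to 30%) produces visibly segregated clusters.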
These days, though, there are so many ABMs across different domains that students often come in having seen a model related to their own field, like taxation, fisheries, or environmental science, and they want to gain the skills to build their own.
John: Are there fields where you’ve seen more people trying to apply agent-based modeling in the last 10 years?
Rob: That's a good question. It also ties into how we designed our program. Initially, we tried to span the social sciences (politics, sociology, anthropology, geography) so we had someone from each field who could advise students. Over time, some fields have had more interest than others.
Being in Washington, we get a fair number working on international relations and politics. We've had people in anthropology and geography, and quite a few in economics and finance. Sociology has been a little less common for reasons I’m not totally sure of. It’s partly self-selection: people come to us because they already want to do work in certain areas.
John: Given your background in finance and economic modeling, is there a subfield or new area you're excited about people exploring, or that you’re exploring yourself?
Rob: Fields with a lot of data are naturally attractive because the data help us figure out which models are best. Right now, there's a huge transformation happening around consumer behavior data: what people are purchasing, consumer sentiment, and so on.
There are commercial and government datasets available. Building better models of consumer behavior is an important and rapidly growing area. These models are also valuable for macroeconomists who want to understand spending patterns, durable goods purchases, credit usage, and financial asset investments.
Agent-based macroeconomics, which was small and speculative in the '90s, is now much more empirically grounded and capable of handling large-scale data. That's a big and exciting change.
Rob: It's been a major shift. We can now build much larger, higher-fidelity agent-based macro models that can genuinely compete with the traditional DSGE models used by central banks like the U.S. Federal Reserve or the Bank of England. Those older models typically use very low-dimensional numerical approaches with one very complicated "representative" agent. Only recently have agent-based models gained the empirical and computational power needed to challenge those.
John: Is the scale of agent-based models something you believe has historically held them back from broader adoption? Ten years ago, people were already talking about machine learning, and now there are tons of machine learning-specific graduate programs. Besides scale and better connection to data, what else has held ABM back, and what’s changing now to create a brighter future for complex social systems modeling?
Rob: Yes, definitely. Historically, we didn't have enough computing power or data to build strong agent-based models. Now, we're finally overcoming those barriers. But there's another factor too: competition. For ABM to gain traction, it has to outperform the existing mathematical or statistical models, and those traditional models have had decades of refinement and investment.
Agent-based macro models, for example, are only now starting to be able to match or beat the DSGE models in some areas. Historically, ABMs weren’t competitive because of data and computing limitations. Now that those are less of an issue, we’re seeing rapid progress.
John: On this podcast, our listeners come from a variety of fields. If you were giving a 30,000-foot view, what types of questions are particularly well-suited for agent-based modeling compared to more traditional models?
Rob: Great question. First, in the social sciences, most researchers prefer models that represent individual actors: people, firms, and so on. Traditional models often use averages or "representative agents," which can miss a lot of important dynamics.
For forecasting, things like neural networks might be fine; you don't always need to know how the sausage is made. But if you want real understanding of how systems work, you usually need models with individual behaviors. That naturally pushes you toward agent-based modeling.
That said, you don't always need to model every individual. Sometimes it's enough to model types or groups. But my personal view is: if you have enough data and computing power, why not model every individual? Especially in fields like epidemiology, where disease spreads through person-to-person contact, having a one-to-one representation of individuals can be crucial.
We're at a tipping point now. As data and computing improve, there's less and less reason not to build fully detailed models.
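As a concrete illustration of that one-to-one, contact-driven view, here is a minimal individual-level epidemic sketch in Python. The population size, contact rate, transmission probability, and infectious period are all made-up parameters for illustration; the point is the structure: infection passes only through explicit person-to-person contacts, which is exactly what an average-based model throws away.

```python
import random

def simulate_sir(n=1000, contacts_per_day=8, p_transmit=0.05,
                 days_infectious=7, days=120, seed=1):
    """Minimal individual-level epidemic: each person is S, I, or R,
    and infection spreads only via explicit person-to-person contacts."""
    rng = random.Random(seed)
    state = ["S"] * n          # everyone starts susceptible
    timer = [0] * n            # days of infectiousness remaining
    patient_zero = rng.randrange(n)
    state[patient_zero] = "I"
    timer[patient_zero] = days_infectious
    history = []
    for _ in range(days):
        infectious = [i for i in range(n) if state[i] == "I"]
        for i in infectious:
            # Each infectious person meets a few random others today.
            for j in rng.sample(range(n), contacts_per_day):
                if state[j] == "S" and rng.random() < p_transmit:
                    state[j] = "I"
                    timer[j] = days_infectious
        for i in infectious:
            timer[i] -= 1
            if timer[i] == 0:
                state[i] = "R"  # recovered; no longer transmitting
        history.append(state.count("I"))
    return state, history

state, history = simulate_sir()
peak = max(history)  # the epidemic curve's height, day by day
```

Because every transmission event is an explicit contact, interventions like reducing `contacts_per_day` or removing specific individuals can be tested directly, something a compartmental average cannot express at the person level.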
John: If the ABM community is at something of an inflection point, are there commonly held beliefs either inside or outside the ABM community that need to change to fully take advantage of the new possibilities?
Rob: Yes. Outside the ABM world, many people, especially in the natural sciences and computer science, are comfortable working with averages. That can work fine in those fields, but in the social sciences, averages often hide critical information.
For example, income distribution is extremely skewed. Incomes aren't centered nicely around a mean; a few people at the top massively skew the numbers. So the mean isn't even that meaningful. A good model needs to capture the entire distribution, including outliers, because those outliers can have enormous impacts.
Agent-based models are uniquely capable of capturing that full heterogeneity. That’s a big strength.
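The point about skewed incomes is easy to demonstrate. The sketch below draws a hypothetical Pareto-like income sample (the scale and tail exponent are invented, chosen only to be qualitatively heavy-tailed like real income data) and shows that the mean sits well above what most of the population earns:

```python
import random

random.seed(42)
# Hypothetical heavy-tailed income sample (Pareto tail, scale ~$30k).
incomes = sorted(30_000 * random.paretovariate(1.5) for _ in range(10_000))

mean = sum(incomes) / len(incomes)
median = incomes[len(incomes) // 2]
below_mean = sum(1 for x in incomes if x < mean) / len(incomes)

print(f"mean   ~ {mean:,.0f}")
print(f"median ~ {median:,.0f}")
print(f"{below_mean:.0%} of the sample earns less than the mean")
```

A model that carries only the mean would describe an "average agent" richer than the large majority of the population, which is exactly the distortion a distribution-preserving agent-based model avoids.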
Within the ABM community itself, I’d say we need to move from toy models like Schelling’s segregation model toward highly empirically grounded models. The field is making that shift, but it’s still in progress. With synthetic populations and detailed datasets, we can now build models that are genuinely realistic, not just illustrative.
John: Going back to your earlier point: have you seen real-world cases where decisions were based on averages and went badly wrong? Where a better ABM approach could have helped avoid major consequences?
Rob: Definitely. One clear example is the 2007–2008 financial crisis. Before the crash, many commentators said things like, “Sure, there are some risky subprime mortgages, but they’re a small fraction of the market. No big deal.” They were reasoning based on averages.
What they missed was how those risky mortgages were bundled with good ones in mortgage-backed securities. When the subprime loans started to fail, they brought down the entire system, not just the "bad" part. The average mortgage might have been fine, but the structure of the financial system made it highly vulnerable to failures at the margins.
That’s a classic case where understanding the tails of the distribution and the interactions between agents would have made a huge difference.
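The mechanism Axtell describes can be sketched with a toy simulation. All the numbers below are invented for illustration: each security bundles mostly sound loans with a few risky ones, and a rare market-wide shock raises everyone's default probability at once, correlating failures. The average loan looks fine, yet securities blow through their loss threshold far more often than the average alone would suggest.

```python
import random

def mbs_defaults(rng, shock):
    """One toy mortgage-backed security: 90 prime + 10 subprime loans.
    A common housing shock raises every loan's default probability,
    correlating failures across the pool. (All numbers hypothetical.)"""
    p_prime, p_sub = (0.10, 0.90) if shock else (0.02, 0.20)
    defaults = sum(rng.random() < p_prime for _ in range(90))
    defaults += sum(rng.random() < p_sub for _ in range(10))
    return defaults

rng = random.Random(7)
trials = 2_000
total_defaults = 0
failed = 0
for _ in range(trials):
    shock = rng.random() < 0.05       # rare market-wide downturn
    d = mbs_defaults(rng, shock)
    total_defaults += d
    failed += d > 15                  # security fails past this loss level

avg_default_rate = total_defaults / (trials * 100)   # per-loan average
failure_rate = failed / trials                       # per-security failures
```

If defaults were independent at the observed per-loan average of a few percent, sixteen-plus defaults in one pool would be essentially impossible; the correlated shock is what makes whole securities fail together, which is the interaction structure an averages-only analysis misses.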
John: The name of our podcast is The Flux, inspired by the flux capacitor from Back to the Future. We like to think about "what the flux" moments, times when the world is about to change dramatically. Are there areas today where you think we might be on the verge of one of those moments, where we really should be running more simulations to understand future possibilities?
Rob: I think everyone feels that way right now. The world in 2024 feels very different from 2004: geopolitically, socially, and economically. Simulations could definitely help.
The DOD runs lots of simulations around conflict outcomes. I don't have a security clearance, so I can't speak to those in detail. But even just in areas like elections (think about how much attention is paid to forecasts like Nate Silver's), we really should be running many more simulations, not just relying on one or two models.
Same thing with financial stability. We do some stress testing of banks today, but when it comes to large-scale, system-wide shocks, where everything is interlinked, I think we still know too little. More simulations would definitely help.
Across the board, policy decisions are often made without much computational modeling. In engineering, big projects often require simulations before they're approved. For policy, we don't have that standard yet, but we should.
John: In some cases, it seems like you need a little signal to get a lot of people paying attention, like COVID and Neil Ferguson's model early on. Are there early warning signs today that could or should kick off a wave of new modeling efforts?
Rob: That's a really interesting point. Honestly, if I knew what the next big trigger would be, I'd be out there making investments!
Herbert Simon, who I worked with a bit during grad school, used to say the hardest thing in research isn't doing the work; it's choosing the right problems. You never really know what's going to matter most until after the fact.
With COVID, once it was clear things were getting serious, people quickly pivoted models over to the pandemic. But even just three months earlier, that pivot wouldn't have seemed obvious. So recognizing the right moment is very hard. It depends a lot on context and right now, the world is pretty jumbled.
Rob: Maybe a different way to put it is that once a problem becomes visible enough like COVID did then the modeling community can jump into action. But predicting which problems will rise to that level ahead of time is incredibly tough. It’s a lot about context and timing.
John: We’ve talked with Josh Epstein and others about things like inverse generative modeling, large language models informing agents, and agents informing LLMs. Also, when running parameter sweeps, how to understand the range of outcomes from the same model. Are there any other technical advances, either at George Mason or elsewhere, that you’re excited about and think the modeling community should be paying more attention to?
Rob: Great question. I fully support the work on inverse generative social science. It's very hard to do, though. It's like trying to automate the scientific discovery process itself. In history, people like Newton and later Einstein built models to explain observed phenomena, but it took huge intellectual leaps. Automating that process is still a long way off.
In agent-based modeling, a big bottleneck is figuring out the right behavioral rules. Social scientists typically parameterize rules and then use data to estimate the parameters. That's not so different from microeconometrics: you have a model of behavior and use data to estimate it.
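That estimation loop can be sketched in a few lines. The "adoption" rule below is entirely hypothetical: one free behavioral parameter, a synthetic "observed" target generated by the model itself, and a crude grid search in the spirit of simulated-moments estimation.

```python
import random

def simulate_adoption(p_adopt, n=500, steps=20, seed=0):
    """Toy behavioral rule with one free parameter: each period, each
    non-adopter adopts with probability p_adopt * (current adopter share)."""
    rng = random.Random(seed)
    adopted = [False] * n
    adopted[0] = True  # a single initial adopter
    for _ in range(steps):
        share = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and rng.random() < p_adopt * share:
                adopted[i] = True
    return sum(adopted) / n

# Synthetic "observed" final adoption share (a stand-in for real data).
observed = simulate_adoption(p_adopt=0.6)

# Estimate the parameter by grid search: keep the candidate whose
# simulated outcome is closest to the observed moment.
candidates = [c / 100 for c in range(10, 100, 5)]
best = min(candidates, key=lambda c: abs(simulate_adoption(c) - observed))
```

Real estimation would use observed data, multiple moments, and many stochastic replications per candidate, but the logic is the same; and as the surrounding discussion notes, if the rule's functional form is wrong, no amount of this tuning rescues it.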
But if your model structure is wrong, if the underlying assumptions are wrong, then no amount of parameter tuning will fix it. That's a fundamental limitation.
As for LLMs, personally, I don't think they're the silver bullet for ABM. LLMs are great at qualitatively explaining things but not necessarily at predicting individual behavior. If you gave each agent its own LLM, you might get interesting results, but you wouldn’t necessarily understand why it's behaving a certain way. And interpretability is crucial if you're trying to build models for policy or social science.
LLMs can hallucinate and aren't always grounded in empirical data. Maybe someday, if we have infinite computing power, each agent could have its own neural network, but it's not obvious that would help us much with the core goals of modeling.
John: Two final questions. First, if you had to pick one area where agent-based simulation might be most impactful over the next 10 to 20 years, where would you focus?
Rob: Speaking from my own expertise, I have a very definite answer: building ultra high-fidelity, one-to-one scale models of entire economies.
Brian Arthur and Mitch Waldrop talked about this idea back in the early '90s: an "economy under glass," where you could simulate all the agents in a real economy like the U.S., not just a small sample. Imagine an economy with 330 million individual consumers and 6 million firms, all interacting in realistic ways.
Maybe we don't start with the U.S. Maybe it’s better to start with a smaller country like Latvia or Liechtenstein. But the idea is the same: build a full-scale synthetic economy.
Today, when policymakers consider changing something like the corporate tax rate, they’re guessing about the downstream effects. If we had a full synthetic economy model, we could run experiments virtually in minutes, not years. It would be a game-changer for policy, economic forecasting, and understanding social dynamics.
Now, my libertarian friends would probably worry that governments could use such models to exert too much control. But honestly, policymakers already try to do these things; they just do them poorly. With better models, we could at least base decisions on a clearer understanding of how systems might actually behave.
John: Final question. If you were talking to a high school student who's interested in getting into agent-based simulation, what would you recommend as a good on-ramp?
Rob: I encounter this a lot. One good starting point is NetLogo. It's a fantastic tool for beginners very user-friendly, open-source, and it has a rich library of models across natural and social sciences.
NetLogo is based on the StarLogo tradition from MIT, designed to help kids learn programming and modeling concepts. It’s powerful enough to do meaningful small-scale research projects.
The only caution is that if you become too comfortable with NetLogo, it can become a limitation. For larger-scale or higher-performance work, you’ll need to move into languages like Python, Java, or C. That’s why, at George Mason, we teach undergraduates using NetLogo, but by the time students reach my graduate course, I encourage them to move beyond it.
Still, for high school students or anyone starting out, NetLogo is a great on-ramp. It’s a fantastic way to get your hands dirty and start thinking like a modeler.
John: Cool. Rob, thank you so much for jumping on. There’s a lot of good context here for people.
Rob: Enjoyed it, John. Thank you so much for having me. I really appreciate it.