
The Flux by Epistemix
Welcome to The Flux - where we talk data, decisions, and stories of people asking the what-if questions to create an intentional impact on the future.
The Role of Sentiment and Social Dynamics in Public Health Models with Philippe Giabbanelli
In this episode of The Flux, we explore how modeling trust, belief, and social sentiment can unlock deeper insights in public health. Our guest, Philippe Giabbanelli, shares how agent-based models are evolving to capture not just behaviors, but also the opinions and social dynamics that drive them, such as vaccine hesitancy or trust in institutions. We discuss the challenges of representing complex human psychology in simulations, the importance of sentiment in shaping real-world outcomes, and how these models are being used to support decision-making in uncertain, high-stakes environments. If you’ve ever wondered how data can reflect what people think, not just what they do, this conversation offers a compelling look into the future of predictive modeling.
Welcome to The Flux, where we hear stories from people who have asked “what if” questions to better understand the world and talk about how data can help tell stories that impact decisions and create an intentional impact on the future. This is your host, John Cordier, CEO at Epistemix. In a world where the flux capacitor from Back to the Future does not yet exist, people have to make difficult decisions without always knowing how the future will play out.
Our guests are people who've taken risks, made decisions when uncertainty was high, and who have assisted decision makers by using data and models. We hope you can turn lessons from our podcast into foresight, so you or your organization can make better decisions and create an intentional impact for others.
Hey there. Welcome to The Flux, where we hear stories from people who have asked “what if” questions to better understand the world and talk about how data can help tell stories that impact decisions and shape the future. Today we're with Philippe Giabbanelli. Philippe, very glad to have you on the show.
Philippe Giabbanelli: Thank you for inviting me. Happy to be here.
John: Of course. Some of our listeners are very technical, others not so much. So let’s start with the less technical side. Why don’t you tell our listeners a little bit about your story and what some of your current roles are?
Philippe: Sure. I’m currently the Director for Modeling and Simulation in Health and STEM at Old Dominion University, which is in Norfolk, coastal Virginia. I’m part of VMASC, which is a large research enterprise focused on modeling and simulation. I focus particularly on health, and health can mean a lot of different things to different people. You could be looking at healthcare, or you could be looking within the body, more on the biomedical side. I do all of that within the portfolio, but personally, what I’ve worked on the most over the last 15 years or so is public health. I try to think about how people behave and how we can improve the environment so that healthy decisions become the default. So it’s about improving decision-making tools for policymakers, for community members, and also for modelers like myself, giving us better toolsets so we can be more responsive when the need arises.
John: Thinking about the decision-maker, how have tools for decision-making changed from, say, 2000 to now?
Philippe: That’s a long time span, 24 years now. I think you’re aware of this too, but for the audience, systems science has really taken off. I remember going to an event in Pittsburgh years ago that Pat Maberry put together with colleagues at the Institute for System Science and Health. It was one of the few events where you’d see people from public health being introduced to agent-based modeling and system dynamics. Now, we have more structured ways to test what’s in your mind. At the end of the day, we all make decisions, but the question is: what tools do you have for making them? Is it a gut feeling? An Excel spreadsheet? Moving stuff around in PowerPoint? Or do you have a tool where you can externalize your mental model, see the implications, identify the data you don’t have but need to collect, and then begin producing estimates and gauging confidence? The ability to externalize, test, and guide data collection is a key shift we’ve seen over the last 24 years.
John: So the ability to test mental models is something decision-makers are able to do now. We’ve seen things like “what if” questions: What policy should we implement? What happens if 5% of the population has a decreased willingness to get a flu vaccine? What are some of the mental models you’ve seen policymakers commonly ask about?
Philippe: Policymakers ask a lot of “what if” questions. They typically come with intervention packages in mind, but they don’t always express these as mental models. Instead, they ask the questions, and it’s up to facilitators or modelers to help externalize the mental models so we can answer those questions. For example, they might ask, “How can I reduce and prevent obesity in children? Should I focus on food? Physical activity?” There’s a large portfolio of possible intervention levers, but they’re unsure which to pull. What we can do is externalize the mental models of experts in that field, such as urban planners and epidemiologists, aggregate them, find the relevant data, and then run simulations to assess each potential intervention. But more importantly, it’s not just about answering one “what if” question. We need to look at intervention packages comprehensively. Public discourse often looks for a magic bullet, but in complex systems, doing one thing often doesn’t move the needle. There’s a saying in systems science that the effect is greater than the sum of the parts. It’s the synergies between interventions that matter. When you’re able to externalize people’s models and answer multiple questions, you can show them that the winning combination isn’t any one thing; it’s how the pieces work together.
John: Absolutely. At Epistemix, we talk about stacking interventions. When you stack the right ones, you create the conditions to enable real change. So, getting tactical, when it comes to visualizing stacked interventions, what techniques have you found most effective for technical modelers or data scientists to communicate back to decision-makers?
Philippe: It depends on several variables. We’ve published articles about what tools people tend to use, and maybe it’s not surprising that the best tool isn’t always the one being used, or that there’s no universally best tool at all. If your stakeholders are policymakers, you need to look at their current software ecosystem. You might have a great tool that’s powerful and well-suited to the task, but what if it doesn’t integrate with their ecosystem? What if it’s yet another separate platform they have to learn? What if they need to move their data around or face a steep learning curve? In many cases, I’m not focused on the “best” scientific visualization tools. I’m looking at their existing systems, available data, mental models, and skillsets, and then proposing a tool that fits.
For example, when we worked on obesity prevention in British Columbia, we dealt with senior decision-makers in the health ministry. We asked what tools they were already using and then proposed one that was graph-focused. That made sense because interventions form a kind of network, a graph. You intervene in one place, and there are ripple effects. We designed the tool around that, with visualizations to help them see those ripple effects. But then we had to ask: did it work? The answer was no, not really.
The issue wasn’t whether the tool could do the job; it was whether people could use it efficiently. So we conducted usability testing. We observed how long it took them to complete tasks, how confident they were in their answers, and whether their answers were correct. They could complete tasks with the software, but it took too long. So we pivoted to text.
Despite the appeal of beautiful visualizations, they don’t work for everyone in every context. Text is a more universal medium. We spent about three years creating narrative explanations so people could understand the models, how they were built, and what they suggested.
John: Is that text coming from the original “what if” question? How exactly are you using text?
Philippe: At first, the text was just to explain the model structure. Think of a model as a sizable network: 300 constructs and 900 relationships. That’s a lot to convey. We had to figure out how to break that down into bite-sized chunks for GPT models so they could generate fluent, accurate explanations without adding or omitting key information.
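To make the chunking idea concrete, here is a minimal Python sketch, not the group’s actual pipeline. It assumes the model is stored as a networkx directed graph and uses a hypothetical call_llm helper standing in for whichever chat API is available.

```python
# Minimal sketch (assumed setup, not the actual pipeline): split a large causal
# map into small batches of relationships so each prompt stays within an LLM's
# context budget, then ask for a faithful plain-language paragraph per batch.
# `call_llm` is a hypothetical placeholder for whichever chat API you use.
import networkx as nx

def describe_model_in_chunks(graph: nx.DiGraph, call_llm, max_edges_per_chunk: int = 15):
    """Yield one plain-language paragraph per chunk of the causal map."""
    edges = [
        f"'{u}' {'increases' if d.get('weight', 1) > 0 else 'decreases'} '{v}'"
        for u, v, d in graph.edges(data=True)
    ]
    for start in range(0, len(edges), max_edges_per_chunk):
        chunk = edges[start:start + max_edges_per_chunk]
        prompt = (
            "Explain the following causal relationships as one fluent paragraph "
            "for a policymaker. Do not add or omit any relationship:\n- "
            + "\n- ".join(chunk)
        )
        yield call_llm(prompt)
```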
That was about three years ago. Then we moved toward building complete reports. A good report isn’t just a bunch of unconnected sentences. You need paragraphs, logical groupings, and transitions to tell a coherent story. We presented a study in Pittsburgh at the Conceptual Modeling Conference on how to create these groupings and transitions. Then at the MODELS conference in Linz, Austria, we presented a tool that combines visual and textual elements.
You start with a high-level summary like an executive summary. If you’re a policymaker, that’s often all you want at first. You click on something that interests you or type a question, and the tool expands that section. You get a longer narrative alongside the corresponding part of the model. You can also navigate the model directly, click on parts of the graph, and get the relevant text. It’s a three-view interface: summary text, expanded narrative, and model visualization. We also experimented with automatically generating illustrations using DALL·E and Stable Diffusion to make the report more engaging. We’re probably about halfway to a fully automated system.
John: That’s incredible. Making models accessible to decision-makers has been a challenge for decades. If those papers are publicly available, we’ll make sure to link them in the podcast notes. So let me take a step back. What got you into modeling and simulation in the first place?
Philippe: Good question, and not one I get often, thank you. During my master’s and PhD at Simon Fraser University in British Columbia, I worked with Joseph Peters in the School of Computing Science. We were looking at how agents behave: virtual entities that model people in a computer. I was fascinated by how we could model thinking. Data-driven agent-based modeling was just starting then. I was interested in combining machine learning with agent-based models, in how you could automate agents to think like humans.
Then I connected with Diane Finegood, who had a visionary take on obesity prevention using systems thinking. She’d be a great guest on your show, by the way. That’s how I got into public health modeling. I brought the modeling and simulation tools, and together we worked on obesity, hypertension, and binge drinking, mostly eating behaviors in the early years. Over time, we expanded the portfolio to other conditions, but the core approach remained the same: externalize mental models, gather data, validate the models, and give people access to them throughout the process.
John: When you’re building more complex models over time, what’s most effective in getting people to that “aha” moment, where they understand the leverage points or how the model reflects behavior they recognize?
Philippe: Good question. First, I should say we don’t always add complexity incrementally. Sometimes we start with a large model and cut it down. There’s a concept called KIDS versus KISS: “Keep It Descriptive” versus “Keep It Simple.” If you build a toy model and add one thing at a time, you might never reach an optimal result. Also, you can’t keep going back to your stakeholders every time you tweak the model.
So we start by comprehensively mapping the problem space. We build a large conceptual model, then trim it based on the data we have. The goal is to be data-guided, not just to build what feels intuitive. Once we have a trimmed model, we check if it answers the original questions.
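As one way to picture the trimming step, here is a small Python sketch under assumed conventions (a networkx graph of constructs, invented construct names): keep only the constructs for which data exists, then check whether the trimmed model still connects the intervention levers to the outcomes the original questions are about.

```python
# Simplified stand-in for the data-guided trimming step described above.
import networkx as nx

def trim_to_available_data(full_model: nx.DiGraph, measured: set,
                           levers: set, outcomes: set) -> nx.DiGraph:
    """Keep only constructs we can quantify, then flag outcomes the trimmed
    model can no longer reach from any intervention lever."""
    trimmed = full_model.subgraph(measured).copy()
    answerable = {
        o for o in outcomes & set(trimmed)
        for l in levers & set(trimmed)
        if nx.has_path(trimmed, l, o)
    }
    missing = outcomes - answerable
    if missing:
        print(f"Warning: trimmed model can no longer speak to {sorted(missing)}")
    return trimmed

# Example with invented construct names:
full = nx.DiGraph([("school meals", "diet quality"),
                   ("peer norms", "diet quality"),
                   ("diet quality", "obesity")])
trimmed = trim_to_available_data(full,
                                 measured={"school meals", "diet quality", "obesity"},
                                 levers={"school meals"}, outcomes={"obesity"})
```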
As for “aha” moments, they come at many stages. Sometimes, building the model is the learning opportunity. For example, when we built a suicide prevention model for children and adolescents, we found that data was abundant for individual mental health but lacking when it came to environmental or societal factors like neighborhood violence or peer dynamics. That’s a big realization. Public health aims to improve environments and norms, not blame individuals, but the data often doesn’t support that goal.
John: Have you encountered friction when extracting mental models from decision-makers and translating them into simulations?
Philippe: Yes, especially in facilitated processes. Sometimes people are invited but don’t want to be there. They might believe their expertise can’t be externalized: “I’ve been doing this for 25 years; you won’t get it by asking questions.” Others may find the process inconsistent and question its validity. That’s why consistency and validation are so important.
We wrote a book with Gonzalo Nápoles this year about fuzzy cognitive maps. The first chapters are all about how to externalize mental models: what to look for, how to avoid issues like power dynamics in group settings, and why we prefer one-on-one interviews. Group settings can be efficient, but you risk dominant voices taking over. Individual interviews are time-consuming, but they produce richer data. Still, consistency and clear criteria for participant selection are essential if you want the models to be representative.
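For readers unfamiliar with fuzzy cognitive maps, here is a toy Python example of the basic mechanics: concepts hold activation values, a signed weight matrix encodes one stakeholder’s mental model, and a squashing function is applied repeatedly until the map settles. The concepts and weights below are invented purely for illustration, and published FCM studies use several variants of this update rule.

```python
# Toy fuzzy cognitive map: concept activations in [0, 1], a signed weight
# matrix capturing one stakeholder's mental model, and a sigmoid squashing
# function applied until the activations stabilize.
import numpy as np

concepts = ["stress", "snacking", "physical activity", "weight gain"]
W = np.array([                 # W[i, j]: influence of concept i on concept j
    [0.0,  0.6, -0.4,  0.0],   # stress -> more snacking, less activity
    [0.0,  0.0,  0.0,  0.7],   # snacking -> weight gain
    [0.0,  0.0,  0.0, -0.5],   # physical activity -> less weight gain
    [0.3,  0.0,  0.0,  0.0],   # weight gain -> more stress
])

def run_fcm(a0, W, steps=50, lam=1.0):
    """Iterate the map until activations stop changing (or steps run out)."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a_next = 1.0 / (1.0 + np.exp(-lam * (a @ W)))  # sigmoid squashing
        if np.allclose(a_next, a, atol=1e-4):
            break
        a = a_next
    return a

print(dict(zip(concepts, run_fcm([0.8, 0.5, 0.5, 0.5], W).round(2))))
```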
John: If you could redesign the model-building process, how would you do it?
Philippe: Time permitting, I’d always go with one-on-one interviews. They’re richer. I’d also insist on clear criteria for who gets included. That way, you can claim some level of generalizability. Many participatory modeling projects don’t clearly state how participants were selected, and that’s a problem. We published a survey on this a few years ago. Transparency and consistency are key.
John: One thing we’re always asking is, how do we make agent-based modeling as mainstream as machine learning? Any advice for those in the ABM community or people new to it on how to make it part of everyday work?
Philippe: That’s a great question, and one I care deeply about. First, you need accessible tools. NetLogo has done a great job here. It’s not for every use case, but it’s opened the door for many health, geography, and urban planning professionals. You can upload models to platforms like CoMSES, reuse parts, and build a community.
Then there’s the education side. When I was a tenure-track faculty member at Miami University, I taught simulation and thought a lot about how to integrate it into the curriculum. It’s tough because machine learning is trendy, and students think it leads to jobs. A course titled “Agent-Based Modeling” doesn’t attract students because they don’t know what it means. Branding matters.
One way around this is calling it “Predictive Modeling,” but then people expect machine learning. We also need courses that build appreciation for modeling, not just technical skill. People need to understand what modeling can do, what questions it can answer, and how long the process takes. And we need to demystify how models have shaped major decisions, like those during COVID, so people understand their value.
John: Are there any “killer use cases” for agent-based modeling, ones that could really disrupt or transform industries?
Philippe: I stick to health, so that’s my lens. Modeling has already had major impact. For example, needle exchange programs decades ago were based on modeling studies. During COVID, many decisions around university operations, such as online classes and closures, were guided by models.
One area I think is especially important is suicide prevention among youth. There aren’t many ABMs in that space, especially not for children. We need more, particularly because ABMs allow you to model subpopulations and heterogeneity. If you’re trying to ensure a policy is equitable, you need that level of detail.
John: You mentioned earlier that it’s not just about agents looking different, but thinking and behaving differently. Have you seen good models that include sentiment as a driver of behavior?
Philippe: Yes, especially in vaccine modeling. Many ABMs include beliefs, opinions, or intentions; sometimes the terminology varies, but the idea is the same. We’ve modeled how incorrect beliefs spread and how to change them. The harder part isn’t detecting false narratives; it’s changing them. We can spot misinformation on platforms like Twitter, but countering it is like playing whack-a-mole. Real disruption comes from changing environments and social norms so that misinformation doesn’t take root. That requires trust in institutions and political will.
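To show what sentiment as a driver of behavior can look like in code, here is a minimal agent-based sketch in Python, not any specific published model: each agent holds a belief about vaccine safety, nudges it toward its neighbors’ average at every step, and gets vaccinated once the belief crosses a threshold. The network structure, threshold, and influence rate are all invented for illustration.

```python
# Minimal belief-driven vaccination ABM (illustrative assumptions throughout).
import random
import networkx as nx

def simulate(n_agents=500, steps=30, threshold=0.6, influence=0.1, seed=42):
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n_agents, k=6, p=0.1, seed=seed)  # small-world social network
    belief = {i: rng.random() for i in g}  # heterogeneous initial opinions on vaccine safety
    vaccinated = set()
    for _ in range(steps):
        for i in g:
            neighbors = list(g[i])
            if neighbors:
                local_norm = sum(belief[j] for j in neighbors) / len(neighbors)
                belief[i] += influence * (local_norm - belief[i])  # social influence on sentiment
            if belief[i] >= threshold:
                vaccinated.add(i)  # behavior follows opinion, not the other way around
    print(f"Final coverage: {len(vaccinated) / n_agents:.0%}")
    return vaccinated

if __name__ == "__main__":
    simulate()
```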
John: Two more questions. First: What’s a “What the Flux” moment from your career, a moment where better foresight could have changed a decision?
Philippe: Great question. It’s easier to answer in hindsight. Before events, even experts don’t know what will happen. Afterward, everyone seems sure why it happened. That said, during COVID, some decisions could have been better. For example, prioritizing vaccine distribution by age was logical; it saved lives among the elderly, but it wasn’t optimal for reducing overall transmission. Older people typically have smaller social networks. Vaccinating more socially active groups might have reduced spread more effectively. But based on the data at the time, the decisions made sense.
Another example is the six-foot distancing rule. It was a simple message for public health, and simplicity was necessary, but it may not have been the most accurate or effective guidance. Still, in their shoes, I might have done the same.
John: Last question. For anyone interested in getting into agent-based modeling, whether high school students, undergrads, or career changers, what’s the best way to start?
Philippe: I’m glad you asked. If you don’t need credentials, the Santa Fe Institute offers excellent online courses. YouTube also has some well-structured tutorials. If you have programming experience, especially in Java, Nate Osgood from the University of Saskatchewan has great videos on AnyLogic.
If you’re in health or urban planning, NetLogo is a good entry point. There are many tools, so it depends on whether you need certifications, what your background is, and how deep you want to go. Not everyone needs to become a modeler. Sometimes it’s more important to build awareness and appreciation for what modeling can do. Understand the process, the timeline, the iteration; that’s key.
I’m also curious to see how platforms like Epistemix or Thread evolve. They’ve produced great public health work and could become another accessible entry point into the modeling space. I hope every tool finds its audience.
John: Absolutely. Well, thank you so much for being on the podcast. This was a great conversation. Hopefully our listeners learned something new about modeling, decision-making, and the value of translating mental models into accessible tools.
Philippe: Thank you for having me. I’ll be happy to share some resources for your listeners as well.
John: Sounds great. Thank you.