The Flux by Epistemix

Navigating Complexity: Timothy Clancy on Modeling and Policy-Making

Epistemix Season 1 Episode 10

In this episode of The Flux, hosted by John Cordier, CEO of Epistemix, we hear from Timothy Clancy, a researcher from the University of Maryland's START program. Recorded at the Complex Social Systems Society of the Americas Conference in Santa Fe, New Mexico, Clancy shares insights from his career, which spans government work and applied studies on national security. They discuss the evolution of modeling in policy-making, the challenges of visualizing complex data, and the importance of building trust with policy-makers. Clancy emphasizes the need for simplicity in communication, the enduring importance of integrating soft skills with technical models, and the crucial role of understanding organizational dynamics to drive impactful decisions. The conversation also touches on specific models he has worked on, including those related to countering ISIS and studying violence and instability, illustrating the importance of robust, scenario-driven simulations in informing policy.


00:00 Introduction to The Flux

00:47 Meet Timothy Clancy

01:03 Timothy's Background and Work

01:45 Challenges in Policy Making with Data

02:58 Visualization and Communication in Modeling

05:29 Timothy's Career Shift and Focus

07:00 Emerging State Actor Model

12:40 Modeling Sentiment and Radicalization

20:35 Counterfactuals and Policy Making

23:08 Navigating Policy Making Institutions

31:41 Conference Insights and Conclusion


John: [00:00:00] Welcome to The Flux, where we hear stories from people who have asked what-if questions to better understand the world, and talk about how data can help tell stories that impact decisions and create an intentional impact on the future. This is your host, John Cordier, CEO at Epistemix. In a world where the flux capacitor from Back to the Future does not yet exist, people have to make difficult decisions without always knowing how the future will play out.

Our guests are people who've taken risks, made decisions when uncertainty was high, and who have assisted decision makers. Transcripts provided by Transcription Outsourcing, LLC.

So welcome to the next episode of The Flux. We're on with Timothy Clancy, and we're sitting here at the Drury Hotel in Santa Fe, New Mexico, as part of the Complex Social Systems Society of the Americas Conference. So Timothy, thanks for joining us on the podcast [00:01:00] today.

Timothy: Thanks for having me today.

John: Yeah, absolutely. Where did you come in from?

Timothy: I come in from the University of Maryland. I work at the National Consortium for the Study of Terrorism and Responses to Terrorism, which is more simply known as START.

John: Okay, cool. How'd you get connected to the Complex Social Systems Society and the folks who are here today?

Timothy: A long journey through an earlier career in government, being interested in complex systems while in government, trying to help inform policy makers about the systems they live in and interact with. That led me down the road to studying complex systems. And some of my colleagues came out to this conference a few years ago and were very supportive of it.

And I do work that's more applied, looking at problems of national security, and I wanted to get a perspective from some of the best minds I could. So I wanted to come out here and stress test what we're working on, to expose it, as it were, to the questions of others.

John: Cool. This space of working with policymakers and with data and models, has it changed much over the last 10, 15 years or so, or are the same challenges coming up when you're trying to work with data?

Timothy: So about 15 [00:02:00] years ago, I was working in the Pentagon at IBM, and we would bring folks into the Pentagon and onboard them. And I would say, if you're working at this level, 48 percent of your problem is getting everyone to agree on the same problem. 4 percent is whatever fancy method you're using: agent-based, system dynamics, these modeling methods.

And 48 percent is getting everyone to agree on the same solution. So the soft skills really do still remain extremely important. And when you're dealing with policy makers, the way we use these models, I think, is to enable either of those things: using a model to help explain a problem and get consensus on what is actually causing the problem,

or using a model, whatever method you're using, there are many out there, to look through options and look through policies and say, hey, this may actually make it worse than it is now. This may be worse than doing nothing at all. Or this may be a good option, but it may have implications and consequences. So it's really using the models to help educate: getting everyone to the same point of agreement on the problem, and then getting concurrence on the solution to pick.

John: So one of the things that we've [00:03:00] seen time and time again, when working with clients, policymakers, even big companies, is visualizing the results of those models to make them useful for a decision maker. Have you seen any good tricks or tools come up on that side?

Timothy: It's always very challenging, because we tend to overestimate how simple we're being, and we still make it too complex.

So if I were to give any advice, I would always say, make it as simple as possible and as quick to grasp as possible, and understand that with these policy makers, you may be the 15th person they've talked to, you have a half hour, and they've seen it all. So you've got to convey what you're getting across very quickly.

And the models are very powerful for visualizing and displaying. But to get there, to get everyone to agree on the same solution, you first have to show them in a very simple way what the benefit is going to be. And that may be as simple as infographics. Once they get interested, you can do a deeper dive and show them more, like the movement of the agents or the policy graphs of a system dynamics model, something more of that visualization.

But they do want to see how the problem is going to be solved simply [00:04:00] so they can understand it. Then they appreciate going deeper with greater layers of visualization as a double click. 

John: Yeah, without a doubt. One of the books that I recommend all of our data scientists read is Escape from Model Land.

Timothy: Yeah.

John: It came out recently, and it talks to that exact point. There's so much time and effort that goes into the model building process itself. But at the end of the day, you're going to have a limited time span with that person who's going to have to make a decision, and they just need trust, some credibility, and

Timothy: And this is, if I may, where, when we talk about visualization and the models, I go back to the team. We tend to focus on who's on the modeling team: data scientists, analytics, and so on.

I ask people to reach beyond that team and say, who's your champion within the organization? Who's a trusted sponsor who can tell you whether that particular policymaker doesn't want to see graphs, doesn't want to see charts, or does want to see graphs, right? And so build into your team, explicitly, non-technical people who are really good at communication, people who are really good at storytelling, people who may understand, at whatever [00:05:00] institution or agency or Fortune 50 company you're working at, what the culture is like and what they like, and who can serve as that local guide and help you. Because you do put all that time into the model, and you can lose it in five minutes if you lose the interest of a stakeholder. So bring those folks onto your team, get them familiar and comfortable, and then have them help you, and be willing to let go of that really fancy chart or simulation or visualization that looks really cool but requires a PhD to understand what's going on in it.

John: Yeah, totally. So you were in the Pentagon doing some modeling, some data science work with policymakers. What came next?

Timothy: Oh, well, this was 15 years ago. I ended up going to Afghanistan for two years and working as a civilian accompanying the force. So I still worked for IBM, assigned to the Secretary of Defense, and then from there loaned out to mission commands in the U.S. military. So we did counter-roadside-bomb work. We did a simulation of the retrograde out of Afghanistan, looking at how all the materials and pieces would move and where the choke points might be.

And that really led to my [00:06:00] career shift into focusing on using models and simulations as a way to reduce violence and instability. I got bit by the bug. So I left IBM, left the consulting career, shifted more into the models, but still retained that sort of practical experience of working in these environments.

And always in my mind, and for the people we work with, when we're building a model, it's to solve a problem, and that problem has stakeholders. And what do those stakeholders need to know to solve that problem? So we always start with: what is the purpose? Who is the audience? What bias are you bringing in? And we make that very clear up front and gear it that way.

John: Cool. So in the agent-based modeling world, this guy Josh Epstein came up with the Epstein rebellion model, and most people at this conference are like, oh yeah, that's a great one. You have the cops, you have the insurgents, and you have just the normal people, and some normal people become insurgents. Some insurgents become normal people based on the presence of cops. They kind of look at it through those three lenses. But your approach is probably a bit [00:07:00] different.

Timothy: So we have a model called the Emerging State Actor Model. It was originally designed for counter-ISIS modeling, so it was one of the first models built to look at ISIS in Syria and Iraq. And it models an entire country, so it's not abstracted. It's looking at the terrain, the population centers. We model those population differences: Arab Shia, Arab Sunni, Kurdish Sunni.

We get those ethnographic differences. And at a high level, it's modeling how four playable sides, the state actor and the non-state actor, and then a foreign group supporting the state actor and a foreign group supporting the non-state actor, make choices over 10 years. And what they're working through is that population.

So that population, which the rebellion model abstracts, we model very explicitly, with these different ethnographic groups and their sentiments about how they view all these different actors and their actions. And you can do everything from non-violent peaceful resistance and protests all the way up to full irregular warfare.

So we're looking at what ISIS was doing, where you have combat between armored groups, and terrorist [00:08:00] attacks, and air power. We don't do conventional warfare, so what's going on in Ukraine and Russia right now would be a little outside the scope. But we have this range, and what we're really trying to do is show policymakers how, over the long run of 10 years, policies can seem better before they become much worse, or seem worse before they become much better, and then also create an understanding of what is worse than doing nothing at all. Because not intervening is always an option. And with the models, we're able to show, and I won't get too deep into the techniques, but we'll test everything: what goes on the news, what opinionators are talking about. We'll test those policies and say, this one here is worse than doing nothing at all. But then, because you have a simulation, and this is really where I think policymakers appreciate the power of simulation, you can begin to combine different policies into portfolios.

You can test them at different timing windows and sequences. So instead of A-B-C, you do B, wait a little bit, then A, [00:09:00] then C. You can get into weighting factors, and it really gives you a much crisper understanding of the kind of strategy and what you might see, because a lot of times things may look worse before they look better.

And policy makers are gun-shy about that sort of "I'm responsible for this." So it's about educating them in advance on where things may be more difficult, or where there's a very popular idea out there but following it is just going to make things much worse.
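The idea of testing policy portfolios at different timing windows, A-B-C all at once versus B first, then A, then C, can be shown with a deliberately tiny simulator. The policy names, effectiveness rates, and the single "instability" stock are all invented for illustration:

```python
def run_portfolio(schedule, horizon=120):
    """Simulate monthly 'instability' under a portfolio of policies.
    `schedule` maps a policy name to the month it switches on; each active
    policy removes a fixed fraction of remaining instability per month."""
    rates = {"A": 0.010, "B": 0.004, "C": 0.006}  # invented effectiveness rates
    instability = 1.0
    trajectory = []
    for month in range(horizon):
        for policy, start_month in schedule.items():
            if month >= start_month:
                instability *= (1 - rates[policy])
        trajectory.append(instability)
    return trajectory

# Same three policies, two different timing sequences.
all_at_once = run_portfolio({"A": 0, "B": 0, "C": 0})
staggered = run_portfolio({"B": 0, "A": 12, "C": 24})
```

In this toy, earlier always wins; a richer model would add the delayed-backlash effects Clancy describes, which is exactly why simulation beats intuition for sequencing decisions.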

John: So you try to think in terms of, here are interventions under a certain set of scenarios, and try to flip the thinking for a policymaker or that executive decision maker to ask, what are the conditions that create more optimal outcomes? And have you gotten to the point where policymakers are thinking, we just need to create the conditions, and that's where their mindset goes? Or is it still, we have to show them the conditions, and then they start to get into a new way of thinking about the decisions they're making and how they play out over time?

Timothy: Yeah, it really gets tricky, because different policy makers prefer different versions of that same thing, depending on their education [00:10:00] path. As you said, 10 years ago this was very new. Some are much more sophisticated, and you can dive in a little deeper, but a lot of times we're doing scenario forecasts.

We're not trying to say, here is the one answer. It's more like when you look at a hurricane map on a weather channel update: it's got, here's the path and here's the width over time of where it might go, right? So we lay the scenarios out and say, here's where it might shift left or right, and how you might change that shift when it comes to it.

We try to stay away from making a prediction that on a Thursday five years from now, at this point, this will happen. It's more saying, here's a scenario, and if you follow these steps, these are the indicators you should see along the way that it's working. And it gives them something to test, almost like a flight simulator or a digital twin of a human society that they can follow along with.

And also, if it's veering off path, they know they need to make a course correction or something like that. So we use this width and breadth of scenarios, many scenarios, to explore the space so they can begin to understand those implications and permutations.

John: I don't know of anybody who's cracked the code on [00:11:00] how to explain all those different scenarios in a tight, packaged way for policymakers. But hopefully we can get better at that.

Timothy: As I said, the 48 percent at the end is getting everyone to agree on the same solution. And it's really a soft skill; I don't know if it's necessarily a technical problem that can be solved with a cooler package. If you look at the companies, and I won't name one, but there's a big one with a P in front of it, that are very successful in this space,

they spend a lot of time and effort building those relationships. They're in the places, they're building the relationships. So I don't think this is a problem that can be solved, only aided, by having a better mousetrap. But on that mousetrap, I use the analogy of the jazz band again. You can't have a good jazz band with just five saxophone players.

You want one saxophone player, a trumpet player, a trombone, a good drummer, maybe a piano, maybe a guy with a banjo if you're doing bluegrass, but you need that variety. And if your team has someone who can help convey and convince, they may not understand a single thing about the model, they may not be able to do enough math to balance their [00:12:00] checkbook, but they understand how to communicate,

and that is so powerful to have. Now, and I'm very clear about this, you don't want to sacrifice the principles of your model. You don't want to lead people astray. You need to maintain that integrity. But if you build a jazz band and it works, then you get invited to build an orchestra, and that's where you begin to scale.

And then you can bring in all this richness and you can build capability. See, a lot of times with these models, it's not just, here's a cool idea, do this. It's, how do you make this an enduring capability in this institution, so they can have people inside their own organizations who have this capability and understand this? And you become the trusted agent within that; now you're maybe more guiding from the side than lecturing from the stage.

John: Absolutely. So within some of the models you've been talking about, is the level of detail all the way down to the individual person? Is sentiment something that's also changing across those 10 years?

Timothy: So I study what I call the life cycle of violence and instability. We have one model that is really tailored to the individual and how an individual [00:13:00] radicalizes to commit a public mass killing, a terrorist attack. That's the terror contagion hypothesis. It looks at individuals in society; it's fairly abstract, a simpler model overall, but it's designed to look at how radicalization works. At the other end is the emerging state actor model, which models a country, and it has entries for every single person in that country.

Now, they are not tracked as individual agents; you don't name a person and follow their life. But we know at any point in time how many people there are in a given area, what their genders are, what their age cohorts are. We cross-section that, and from age, gender, and area, we know what the social identities are: workers, educators, social activists, retired.

We model facilities, which we track down to the geography of where the facilities and services are located. So if there's a terrorist attack or a missile strike, you can degrade those, and these do change the sentiment. So in the emerging state actor model, every single tick is a calendar day.

And that tracks well with what we're doing with the [00:14:00] military, at least; they like knowing day-by-day stuff. So these sentiments, and I can get into a little more of how sentiment formation works, are tracking daily changes, reflecting the fact that sentiment itself changes over time.

And you're capturing this between the different populations. Again, in the ISIS case, Arab Sunnis may have a different sentiment toward these actors than Kurdish Sunnis. Or you look at Ukraine: Ukrainian-speaking Ukrainians and Russian-speaking Ukrainians may have different sentiments than Russian-speaking Russians.
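The bookkeeping described here, population counts cross-sectioned by area, group, gender, and age cohort, plus a per-group sentiment toward each actor that updates on a daily tick, might be organized along these lines. The keys, numbers, decay rate, and event shock below are hypothetical, not the actual ESAM implementation:

```python
from collections import defaultdict

# Counts cross-sectioned by (area, ethnographic group, gender, age cohort).
population = defaultdict(int)
population[("Mosul", "Arab Sunni", "male", "18-35")] = 40_000
population[("Mosul", "Kurdish Sunni", "female", "36-60")] = 12_000

# Sentiment of each group toward each actor, bounded to [-1, 1].
sentiment = {("Arab Sunni", "state_actor"): -0.30,
             ("Kurdish Sunni", "state_actor"): 0.20}

def daily_tick(sentiment, shocks, decay=0.01):
    """One calendar-day tick: sentiment drifts slowly toward neutral,
    then today's event shocks (e.g. a strike seen by a group) are applied."""
    updated = {}
    for key, value in sentiment.items():
        value *= (1 - decay)            # slow reversion toward neutral
        value += shocks.get(key, 0.0)   # today's shock, if any
        updated[key] = max(-1.0, min(1.0, value))
    return updated

# An airstrike attributed to the state actor worsens Arab Sunni sentiment only.
sentiment = daily_tick(sentiment, {("Arab Sunni", "state_actor"): -0.10})
```

The same structure carries over directly to the customer-segmentation reading that follows: swap ethnographic groups for customer segments and actors for competing brands.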

Myanmar is the same way. And if you are a business, you have a customer base. Customer segmentation has been around a long time. These models give you a way to take your customer segments and understand: this population views us in a certain way, and this one views us in a very different way, right?

And they're all mixing together. But you can understand and tailor your strategies to say, where do we need them to be, sentiment-wise? And if you're in a competitive environment, now you're looking at a multi-actor setting: where do we need to move so that we can change the sentiment of perhaps our competitors' customers, to bring them to us, or prevent [00:15:00] our customers from going to them, right? So it's just applying these same principles.

John: The common terrain is the human element.

Timothy: Yeah. That's the one thing that benefits these types of models: humans underlie them all, and there are cultural differences and all of that you've got to capture.

But where humans are a common underlying element, these models can often give insights beyond their original scope. So the terror contagion hypothesis was originally about radicalization to public mass killings. But we've used it for radicalization that leads to misinformation and disinformation. We've used it for looking at cyber hacktivism, populist waves of cyber attacks.

So you gain some fundamental insight from these models, and it informs you about other phenomena that are similar; it may not be exact, but it can be similar. And this is hard for the scientists, because they want it to be exact.

John: And you build models for purpose, and that's it.

Timothy: Exactly. But really, the policymakers need to understand sort of the structural physics of the socio-math: what's going on in this human population. They don't need to know the numbers. They need to know that sentiment is a lot of times based on [00:16:00] whom people look to out of self-similarity. Can I see myself in them?

Do I find them prestigious or notorious, and are they coherent? And that's common wherever we look; we see it's a common thread tying things together. Grievance, specifically when we're looking at the models we do for violence, is a very common underlying sentiment. And if I can tell a policymaker that when you generate grievance, it comes back over time to create these negative outcomes, then they have to balance their decision making on: this may seem better now, but it's going to create a kind of reservoir of grievance that's going to come back 18 months, five years, 20 years later.

So the policymakers have to trust that the models are built right. But if they can understand the insights about the role of grievance in violence or radicalization, if they can understand how sentiment works on self-similarity, and that we in the West, and I'm speaking here of the United States, aren't necessarily going to reach the people we want, because we are not self-similar to them, so we have to find an ally or a partner who is and work through them;

these are the insights that the policymakers [00:17:00] can use when evaluating strategies and how the interconnections play out.

John: When you're working with a policymaker or some executive, how do you ease them into it? Is there a little story, or an example of a model, where you're like, you know what, every time I tell this story to a policymaker, they seem to latch on and get what we're doing?

Timothy: I may have an advantage in that area, in that I'm working on violence and instability, and usually the problem I'm working on is one they're already very concerned about. So I started studying ISIS while I was still doing my PhD, building a model on ISIS before they broke out in Iraq.

So when they broke out in Iraq, you're the guy; not the only guy, but someone who had something to offer. And with these kinds of things, the questions are: why did it happen? What can we do about it? What should we do to keep it from happening again? They are already invested in the problem. Now, for broader problems, [00:18:00] and even past that initial problem, I do use analogies and metaphors a lot, and I know as scientists we sometimes don't like that. But we will describe the terror contagion hypothesis with a crude analogy to make people understand and relate it to something they may already know.

So we use the analogy of a sneeze. There are these cultural scripts that spread around. They're like germs. They radicalize certain people, and when a person commits a public mass killing, it's like a sneeze from a cold that then spreads more cultural scripts, and it continues. Now, we're not saying it's actually a germ, but that helps them get into the frame of, okay, I get the dynamic you're talking about.

"All models are wrong, but some are useful" is a common quote. Analogies and metaphors are the original models we had in language to describe complex phenomena that we couldn't really describe explicitly. So if I say a bird in the hand is worth two in the bush, that's a model of a fairly complex thing, but if I say it, everyone immediately gets it, and I can root the conversation there. And from that basis of understanding, I can go into more of what they want to know from the policy.

John: Yeah. 

Timothy: Yeah. 

John: That totally makes sense. [00:19:00] We started in epidemiology on the Epistemix side, and we're trying to move into this sentiment in the population, spread of information, misinformation. We view everything in the world as some form of an epidemic or contagion. We use contagion, because epidemic is inherently disease-oriented. An epidemic is something that spreads across a population or emerges in a population.

Timothy: And there are certain factors of whatever the unit of transmission is that will make it more virulent, to use the analogy, more severe when it hits. And we study that now with the terror contagions. We can distinguish the Columbine-style school shooting contagion from the great replacement theory contagion, which was Norway and New Zealand and the synagogue shootings. We can type these contagions, and we find markers in the actual historical record. So another thing we do with policy makers is, you've got to get outside the model itself and show that your model is connecting with something real-world. So when we describe [00:20:00] one of these markers that makes a contagion more or less severe, we show it in the model, and we have the cool charts, but then we pull from the actual record and say, this is what it looks like.

Here's an example, here's some evidence from the historical record. These models are hard to validate, but that builds confidence and trust in them, that the verisimilitude is close enough to the real world that it makes sense for them to make a policy off of it.

John: And a lot of the policy makers are looking for: great, this was able to reconstruct the past. I'm worried about something that's going to happen. How do I know? And it's about what you're connecting to in the real world that provides the signal.

Timothy: Yeah. And this is something we think about: all our models are built with counterfactuals in mind.

So from the very beginning, we'll ask, can we recreate the historical record? And then can we recreate a historical counterfactual? To give the simple example of ISIS: we modeled the Arab Spring through the rise of ISIS and the breakout in the Anbar offensive and the conflict. That was modeled.

But then we said, what if no one had intervened? No foreign entities had intervened at any point [00:21:00] in that conflict. That's a plausible counterfactual. The model produced a result. We take it to the experts; they go, okay, this seems about right. They feel good. And of course it was worse than the historical outcome, but that gave us a policy space to say, this is literally doing nothing at all.

Yeah. And this is what we know it was historically: 100,000 to 120,000 individuals in different coalitions, this much air power. Now we can test your policies and ask: is it worse than doing nothing at all, the counterfactual, or is it better than what we know historically? And from this, we then go forward and use counterfactual futures.
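The comparison described here, placing each tested policy against the recreated history and the do-nothing counterfactual, reduces to a small decision rule. The function, labels, and numbers are illustrative, assuming a "lower is better" outcome metric such as cumulative harm:

```python
def classify_policy(policy, historical, do_nothing):
    """Place a simulated policy outcome relative to two anchors:
    the recreated historical record and the 'no intervention'
    counterfactual (which, per the ISIS runs, was worse than history)."""
    if policy > do_nothing:
        return "worse than doing nothing at all"
    if policy < historical:
        return "better than the historical outcome"
    return "between history and non-intervention"

# Toy numbers: history = 100 units of harm, doing nothing = 200.
verdicts = [classify_policy(p, historical=100, do_nothing=200)
            for p in (50, 150, 250)]
```

The value of the two anchors is that every briefed result comes pre-positioned against both the world that happened and the world where nobody acted.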

So we did hypotheticals for some military work. We did a hypothetical ceasefire in Ukraine versus Russia, not saying that should happen, I'm not in politics here, but just to ask what would happen in a post-conflict stabilization. It was a 10-year simulation, and we did a future counterfactual. I think the scenario was that the line of control gets frozen,

Russia has occupying troops as a police force in the DNR and the LNR, and the future counterfactual is: in three years, Russia [00:22:00] either withdraws the troops per the agreement, or they stay. So future counterfactuals give you a way to examine some of these contingencies that may come up.

And to me, your model can't just recreate the history, because the history is subject to contingencies. It has to be able to create plausible counterfactuals in the past and plausible counterfactuals in the future. Because you really want a model that doesn't just recreate history; history is what we're trying to change, in a way. You want to be able to explore the whole space.

So you need a model that's robust enough and strong enough that it can get into these kinds of variations, where all sorts of outcomes can arise, and then you can trace back to understand why they came about. And then you have to build confidence and trust in those.

John: Yeah, one of the things you mentioned earlier was, once you've run these scenarios out, being able to see which trajectory you might be on, to then possibly course correct and look at another set of interventions that might be better under a different set of conditions. It's a really complicated thing for these policymakers to grapple with. And so there's this aspect of [00:23:00] the modeling community, the policymaking community, and the empathy between the two.

I don't know if you've seen a lot of tension around that.

Timothy: Oh, there absolutely is. And this is getting a bit meta, but a colleague of mine, Robert Lamb at START, he's my boss, but before I knew him as my boss, he had this great concept: there's the model you're building of the problem. But as scientists, as people influencing policymakers, and even as policymakers themselves, you need a model of the policymaking that sits above your model. If I'm working with the DOD, how does the DOD work?

How do transitions of officials work? You get a new principal deputy secretary or assistant deputy secretary who may have a different agenda. So you really need a model of the problem, and then a model for navigating the stakeholder institutions that are going to solve the problem. And sometimes our most powerful insight is how to arrange those stakeholders to build that.

Again, it always comes back to getting that consensus and then making it sustainable, because change happens among these policy makers. And if your solution can't survive [00:24:00] the next person coming in, then it probably won't endure, because very few things are ever solved in two years, and you have this churn of officials. So how do you build that capability?

We had this problem when we were in Afghanistan, and it was very marked, because we were trying to create solutions for some of the battle-damaged vehicles, but every six months a new officer would come in. And what we found helpful, and I don't want to extend this too much, is that if you build a baseball team and then have a new player come in, that new player falls in on the existing culture, the existing way of doing things.

They adapt. So we built up these habits and mechanisms so that whenever someone new came in, they would say, oh, this is how things are done, I'll adopt it. And it wasn't transition documents. It wasn't a volume we handed over. It was just a cultural package of ways of doing things. That's a little trickier with policymakers, because they're the boss.

They want to be the one deciding. But you can still build enduring capabilities that they can see the benefit of when they arrive, and realize they don't want to tinker with. And that way it survives the transition.

John: And that's exceptionally insightful for [00:25:00] anyone in the modeling community trying to build a bridge to policymakers.

So that's a really important point. One of our board members was doing fluid-dynamics-based simulations, working mainly with the automotive industry. You mentioned the meta-model above what you're actually solving for. So they were running all the simulations: any car that's built today has a digital twin, and they test it through digital wind tunnels and everything else.

When they were trying to calibrate those things, they actually had to build a model of the wind tunnel itself.

Timothy: Oh, yeah.

John: And they were able to see, oh, the way some of these wind tunnels are constructed, they cheat the results of how the car's going to perform in the real world. And so it's being able to take the model, cool, this is how you're viewing the world, how it works.

The decision-making level above that is exceptionally important.

Timothy: My background prior to simulations was Lean Six Sigma process problem-solving, and in the consulting firm I have now, we do what are called digital twins of the social system of a company.

So if you think about it, you've got your model of the problem, your model of the wind [00:26:00] tunnel, but also your model of your organization and how it works. What that helps you do is say, look, for this idea to be accepted and adopted, it's not just building new wind tunnels. It's building concurrence. It's changing the way we work. It might be an enterprise transformation. This is getting more into the private sector as opposed to government, but it's helping them understand how they need to change themselves. A lot of companies are like, oh, that's a cool idea.

I'm going to adopt it. And I'm like, you're not good at adoption, right? You've adopted 10 things, and it's a cycle: every three or five years we're trying something new. That's usually not indicative of a problem with the method; it's indicative of the organization itself having some underlying issue. So when we did work with a large network infrastructure company, they wanted to build new software, 15 million dollars' worth, to solve a certain problem.

And they asked us to model the system structure. When we did, we saw that the problem wasn't the technology. It was the incentives and disincentives between managers, leadership, and workers. And we said, the technology solution doesn't matter here if you don't change these incentives and disincentives. You need to adjust this model [00:27:00] above the model in order to get better adoption of the technology, whether that's your existing platform or you're feeling gung ho to go out and spend 15 million.

But if you spend 15 million and don't solve this problem, you're going to have the same results. So this is why I keep coming back to this: they may be reluctant to have a model of themselves, but sometimes that's how the enduring capability is built, because problems come and go. If you've helped them become better themselves, people's careers reflect that. We saw this in Lean Six Sigma, where people would advance to director, then to VP, and keep moving it forward, building models that help organizations, whether that's a government agency, a mission-based group, or a business department.

That model then becomes something they want to keep: a digital twin of themselves. Not a digital twin of a manufacturing thing, but a digital twin of the human environment, one you can use to test how a new policy of incentives will work. And they begin to use it as a regular tool for how they manage themselves.

And that can be very powerful. Again, it's a little more removed than countering ISIS [00:28:00], but you've got to figure out how these things work if you're going to eventually bring your model in to some of these other problems.

John: Yeah. Who's the person that most identifies with that? Is it a change management leader, or more of an ops or strategy type role that it falls onto? Because it's a cultural change, and that's tough.

Timothy: So, and this is going back in my history, but the criteria is they've got to be veteran enough that they're known and trusted already. They've got to have a network of relationships they can call upon, where they can pick up the phone, or Slack or Discord or whatever they're using these days, and work the relationships.

They need to be in an area close enough to the financials that they can help show the financial benefits. And within that set of criteria, it's usually operations, maybe change management, close to the executive. We don't try to go to the absolute top executive, because that gets so political and they get so torn by the day-to-day.

You want someone beneath that, but close enough that their top cover is a C-level person who says, for this, I've pinned the star on this person, and they have access to those [00:29:00] resources. Those criteria, those social criteria, are almost more important. I'd take someone who is the director of bathroom maintenance but has that veteran seniority.

Everyone will pick up the phone when they call; they have the relationships. Hey, I helped you out 10 years ago, help me out here. It's hard to overstate how powerful that can be. Operations and change management we can help with; you've usually got people for that. But the thing you can't recreate is that insider capability. Now, if you have a scenario, and we have some of these, where it's an outsider coming in trying to shake things up, we'll create a partnership. Again, it doesn't always have to be one person.

Sometimes a small team that divides and conquers will be better. So you'll have someone who's the outsider, brought in to make change and disrupt things, and you match them with an insider. Again, this is playing with those structural dynamics; if you modeled it out, you could very easily show why it works.

Sometimes you just do it by experience and things like that. But circumstance matters; one size won't fit all. You have to tailor it to what's going on. [00:30:00]

John: The 48 percent alignment: 4 percent model, 48 percent alignment. Exactly. I like how you put it as alignment; that saves a lot of words. It's simpler in some ways. Is that the key message when your team gets brought in? Is that the sweet spot?

Timothy: That's the secret sauce, I think. The way we get brought in is literally: this is a problem we haven't been able to solve for years, we've tried over and over, and we need someone to help us. I wake up in the morning, whether it's with the university or my firm, and ask, what is the hardest problem I can work on today?

So: public mass killings, terrorism, countering ISIS. These are hard problems; let's tackle the hard problems. And again, it speaks to your question of why stakeholders care: if you're working on their hardest problem, they care, right? They may not believe you, but they care about that problem, and now you have to build their confidence. But you don't have to convince them the problem is worth working on.

And so the secret sauce is definitely the change management and the social dynamics behind the tech, and the technical method has to be [00:31:00] solid. With these models, if you've never done it before, you're going to find somebody in IT who did a class 10 or 20 years ago and has read a book or two, and they're going to try and attack your model.

So you've got to be ready. And that's the difference: you've got to be flexible enough to have on your team someone who can talk to executives. And then the executive says, oh yeah, Joe over here has a PhD in some branch of mathematics and is going to rip apart your model, and if he doesn't believe it, I don't believe you; now go spend two hours convincing Joe.

So you've got to have that ability to shift, to talk technical, get into the weeds, and build confidence with Joe or Sally or whoever, because they're going to talk to the leader. That's why you need a team with these different roles in it, or you have to be able to shift yourself.

John: Yeah, 

Timothy: Definitely.

John: Anything you're looking forward to here in New Mexico at this conference? Any talks in particular?

Timothy: These are all complex systems, and they're all systems that deal with humans. So being at a computational social science conference that deals a lot with complex systems, I'm very excited about that.

Where I work in the [00:32:00] lab, we have data scientists, but we don't have a lot of people studying complex systems, or violence as a complex system. It's that band metaphor: I need people who are studying complex systems. And looking at the agenda, there are some amazing talks going on today across vastly different areas.

And sometimes the insights we get come from unexpected places, from different disciplines, so cross-discipline work matters. That's why I enjoy coming here: I don't want to hear the things I already know. I want to be challenged on the things I don't know, and then see how I can bring that into my effort.

And then also getting feedback, where people look at your work and say, hey, maybe make this little adjustment. That's always great to hear too.

John: Cool. Timothy, I appreciate you jumping on the podcast and taking some time outside of the conference.

Timothy: I appreciate the opportunity.
