SDC Seminar: Defining and Using Evidence in Conservation Practice

Video Transcript

Detailed Description

This is the Science and Decisions Center Seminar Series presentation of Defining and Using Evidence in Conservation Practice with Nick Salafsky, Director, Foundations of Success. Nick focuses on sharing new work on the development of shared evidence libraries based around theory-of-change pathways for key conservation actions. Nick draws on insights of evidence-based practice from different disciplines to define evidence as "the relevant information used to assess one or more hypotheses related to a question of interest." Nick concludes his presentation with a discussion of how to better promote and enable evidence-based conservation both in projects and across the discipline of conservation.


Date Taken:

Length: 00:56:20

Location Taken: US

Video Credits

Microsoft Teams: Scott Chiavacci, Nick Salafsky, Shonte Jenkins
Presenter: Nick Salafsky, Director, Foundations of Success


All right. Thanks, everybody, for joining. My name's Scott Chiavacci. I'm an ecologist with the Science and Decisions Center at the USGS headquarters in Reston, Virginia. Our center is a small interdisciplinary group, and we strive to promote and facilitate the use of science in decision making. When I met Nick Salafsky a few months back and learned about Foundations of Success, it seemed like a really nice fit with what the Science and Decisions Center is trying to do and what we're trying to share with our seminar series. I won't get into all the details about what Foundations of Success does; Nick will get into that in his talk. But just to give everybody a little bit of background on Nick: prior to co-founding Foundations of Success, he worked with the MacArthur Foundation and the Biodiversity Support Program.

He's also served as a member of the Board of Governors of the Society for Conservation Biology. Nick holds a Ph.D. in environmental studies and an M.A. in resource economics from Duke, and an A.B. in biological anthropology from Harvard. He will speak for roughly 40 minutes, and we will hold off on asking questions until Nick is done presenting. You should be able to enter any questions in the chat, or we'll take everybody off mute at the end to ask questions; everyone will be muted from the beginning. The plan is to have this go until two o'clock. If you have any questions for Nick after that that are really pressing, feel free to reach out to me, or at the end of the talk he can give you his email address. He's not a hard person to find.

You can look up all the information on Foundations of Success on their website. So without further ado, I will let Nick share his screen, and we will put everybody on mute, including myself. Thank you, Nick; take it away. Thanks, Scott. I hope you all can hear me and can see my screen; it should be a slide that says Defining and Using Evidence in Conservation Practice. Thank you for the opportunity to present this. It's always good to reach out to folks who are climbing similar mountains and working on similar things, sharing and learning from each other. I'm not going to go on video here, just to conserve bandwidth, but in terms of who I am, that's me, so you can see my picture. I wear a number of different hats that are relevant to this conversation.

So my primary hat is that I work for Foundations of Success, or FOS. We are a not-for-profit organization; we've been around about 20 years. We were based initially in the U.S., but we have now morphed into a global collective with branches or affiliates in Europe, Latin America, and elsewhere in the world as well. We are a conservation group, but we don't implement conservation projects; we try to make other people's conservation projects better. So we help organizations of all sizes to design, manage, monitor, and learn from their work. We're also part of something bigger called the Conservation Measures Partnership, or CMP. CMP is not a legal entity; it's a community of practice. It started over 15 years ago, when representatives of some of the groups that you see here got together for a half day to talk about performance measures and learning.

And we found we had so much to talk about that it turned into a regular thing, so CMP is now an ongoing thing. It has a bunch of the different leading conservation NGOs that you see here. We've been adding a number of the private foundations that are involved in conservation work, such as the Moore Foundation or Disney's Animals, Science and Environment. And then we've also been bringing in key government agencies, both state and federal, including the U.S. Fish and Wildlife Service, NOAA, and USAID. At some point it would be really fun to talk about maybe getting USGS involved in this as well; that's a conversation we could have at some point. Anyway, CMP is not a legal entity, but it's a forum by which people can get together and share and do things of interest. Our primary product is something called the Open Standards for the Practice of Conservation, which I'll abbreviate as the Conservation Standards. These are a set of best practices for designing, managing, monitoring, and learning from conservation work.

There's nothing radically new here: if you're familiar with Deming's plan-do-check-act, or, as I'll show you, even the scientific method, we're borrowing a lot of those concepts. But we're trying to do it in a standardized fashion and give people the tools to do conservation in a common way, and most importantly, so they can learn from one another. And then the last hat I wear that's relevant to this conversation is that I'm involved in managing Miradi software, which is a tool to help conservation teams implement the Conservation Standards and use them in their work. So I want to start with a quick pretest to this webinar; maybe in the chat box, on a piece of paper, or just in your head, I want you to think about three questions that I'm going to ask you. First, true or false: evidence is the relevant information used to determine whether a conservation action will lead to desired outcomes. Maybe take a moment and note, in your head or on a piece of paper, whether you think that's true or false. Second question, true or false: you should always review high-quality evidence before deciding on a course of action. Think about that question.

And then the third question, free response: where would you go to find available evidence about a conservation action you are considering using? Maybe think a little bit about the answer to that question as well. Okay, so that's the pretest. What I'm going to share with you now is work that was published about a year and a half ago in a paper called Defining and Using Evidence in Conservation Practice. It's in the journal Conservation Science and Practice, and it's free and open access, so you can download it; you can search for it or get the DOI. These are all the authors. You can see we pulled together a really interesting mix of people who were involved in conservation projects, but also people involved in the evidence-based conservation movement, as well as some key academics, agency folks, and donors.

So it was a big cross-section of folks through the Conservation Measures Partnership. As we got into it, we also realized we'd sort of been scooped, and that we actually had a couple of coauthors along for the ride: David Hume and Karl Popper. It is always a little bit embarrassing when you're working on a paper and you realize you've been scooped by several centuries by people who've already thought about these things. Both Hume and Popper had thought a lot about these things. But be that as it may, we got into this because, in the Conservation Standards and in the design of this Miradi software, we knew we wanted to bake evidence more explicitly into these standards. And so we thought, how hard can it be?

We all know what evidence is. So when we're thinking about a theory of change and action, let's say we're on an island somewhere and we care about breeding puffins, and we have a theory that rats are eating our puffin nests, doing nest predation on our puffin eggs. We have a theory of change that if we can put up barriers on boats and keep rats from getting to the islands, we might actually be able to achieve some conservation. But obviously, evidence is really just what's in that arrow: is there evidence that that action will actually lead to the desired outcome? Well, as we started to get into it, we realized that it was much more difficult than we thought, so that when we tried to find evidence, we found it very difficult. And when we actually looked at how the rest of science and conservation, and even the philosophers, think about evidence, it's really this big, murky, and messy word.

So in this paper and in this talk, I want to share three things with you: define and construct a typology of the use of evidence in conservation practice; develop a decision tree to guide practitioners in how to use available evidence in a given conservation situation; and then talk about how we operationally incorporate evidence in conservation practice at both a project and a discipline level. Let me start with that first part: what is evidence? If you think of your favorite crime scene TV show, we all know the detective is out there looking for evidence, and evidence, even in that context, could be a bunch of different things. It could be a physical item, so a bloodstained shirt, or in our conservation example, rat tooth marks on eggshell fragments from that endangered seabird's nest. So we have an aha: maybe it's the rats coming after our seabird's nest. But it could also be a set of accumulated facts or knowledge about a situation.

So maybe some witness observations about the presence of a murder suspect, or maybe we see rat poop near the seabird nesting site. It could be an assessment of the validity of the facts or knowledge: maybe testimony about the reliability of the murder witness, or maybe the research design guiding a systematic survey showing a higher likelihood of finding rat poop near damaged seabird nests. It could be a body of potentially relevant theory: maybe some ballistics research about bullets, or maybe research that shows rats are a cause of seabird nest predation in the region. And it could even be our confidence that an assertion about a situation is true: the jury's finding that the suspect committed the murder beyond a shadow of a doubt, or the p-value for a hypothesis you might have about your seabird nest predation. So evidence is really a whole bunch of different things.

So in this paper, we waded through all that and came up with a definition where evidence is the relevant information used to assess one or more hypotheses related to a question of interest. I want to pick up each of the bolded terms there and go into them in a little more detail. Starting with information: there's a whole bunch of different information out there, and when we talk about the evidence base, information scientists actually talk about this pyramid. You start with data, which in conservation might be information about your targets or your threats or your actions. There's information, which is processed data, everything from case studies all the way to randomized controlled trials. There's knowledge, and then there's wisdom at the apex of that pyramid. So you have basic data, primary studies, syntheses and systems, and then theories and principles at the top. Some purists would argue that the top two layers of that pyramid are not evidence, that they're actually knowledge or wisdom. But we've found, for the purpose of practicality, that you need to think about the whole pyramid as your evidence base, the information that you have accessible to you.

There are also a bunch of different types of hypotheses. Evidence, by our definition, is used to confirm or reject one or more hypotheses contained within each question of interest. And note that we're using hypothesis both ways: in the classical sense, where you have a null hypothesis that you're trying to overturn, or in a Bayesian sense, where you have a positive statement. This whole talk works either way; we tend to work more in the Bayesian construct, because that's how conservationists think: they have a hypothesis about something in the positive sense. So we might have the hypothesis "seabirds are successfully nesting in Eastern Bay," or we might have a hypothesis for our project: "if ecotourists demand green practices, this will result in their adoption." One of the things we find is that evidence is easier with well-formulated, or better-formulated, hypotheses. This is why, in conservation practice, when you set goals and objectives, we talk about setting SMART goals and objectives: specific, measurable, achievable, results-oriented, and time-bound. Because the more specific it is, the more testable it is, and the more you can assemble your evidence.

So how would you assemble evidence to answer the hypothesis "seabirds are successfully nesting in Eastern Bay"? It's much easier to assemble the evidence for hypothesis H1b: there are at least a hundred breeding pairs of ruby-crested puffins that fledge an average of greater than one chick during each of the last five breeding seasons. And same with the second example: instead of just saying "if ecotourists demand green practices," it's much easier to test a hypothesis that says "if more than 25% of likely ecotourists demand seabird-friendly practices, most boat operators will voluntarily install rat barriers." So we want to be more specific.

There are different types of hypotheses about any given conservation system, and there are a bunch of common ones. In science-geek speak, they can be univariate hypotheses, about the presence or absence of a factor or the status or change in status of a factor; or they can be bivariate or multivariate hypotheses, about association between two or more factors or causation between two or more factors. So in a typical set of questions that we might ask in a conservation planning setting, we might have hypotheses about the presence or absence of a factor: ruby-crested puffins are nesting in our bay, or rats are present on the bay islands. About the status or change in status of a factor: the puffins are currently nesting successfully, or the rat population has increased to critical levels. And then we also get into the causation hypotheses: rats are the primary cause of puffin nest predation; using a certain poison applied via a protocol will (forward-looking) control the rat population; or using a poison applied via a certain protocol did (backward-looking) control the rat population.

And so when we think about evidence in conservation, everyone fixates on G and H, which is "did this action work?" But you can see that evidence actually applies at many steps of the conservation process, in terms of thinking through these hypotheses about understanding our situation. So the classic hypothesis is only one of many. All right, the second part about hypotheses, and this is probably the most difficult thing I'm going to talk about all day today, is that there are different types of hypotheses. A specific hypothesis is about a specific case situation. So we might say rats are the primary cause of seabird nest predation on all Eastern Bay islands, or using a certain poison will control the rat population on all Eastern Bay islands. Those are different from, but related to, a generic hypothesis, which is about a generic situation that is often a composite of many specific case situations.

So for example: rats are a primary cause of seabird nest predation on small islands, or poisoning will control rat populations on small islands. We'll come back to this in a bit, but this is the really key concept, this difference between specific hypotheses and generic hypotheses. All right, the last thing I want to define here is types of evidence. When we talk about evidence, we can assess the evidence base, and the cases or things in the evidence base, on a number of different parameters. And this is where there's infinite confusion, because people use these words in all different ways, so we tried to be very clear about how we define them. First, there's the direction of the effect, which is the sign: supporting or positive evidence helps build the case for a hypothesis; refuting or negative evidence reduces the case for a hypothesis.

The strength of the effect is the magnitude: strong evidence convincingly supports or refutes a hypothesis; weak evidence only somewhat supports or refutes a hypothesis. And then there's reliability: more reliable evidence comes from a higher-quality source or evidence base, and less reliable evidence comes from a lower-quality source or evidence base. A lot of the geeks refer to this as internal validity. It's hard to sort these different things out, but let me try to give you an illustration. Let's say we're weighing the sources of evidence around a given assertion, a given hypothesis. One piece of evidence might have very high reliability, say a systematic review or a double-blind controlled study, while another has low reliability, say an anecdote. The weight of the evidence, its reliability, is one part.

So that's how much emphasis we should put on that source, regardless of its conclusion. The direction says: do you put that piece of evidence on the negative side of the balance or the positive side of the balance? And then strength refers to how far from the center point we put it: is it strongly supporting, only weakly supporting, or mixed support? How much force will that particular piece give in terms of making the case? And then the last part, which I haven't talked about yet but will come to, is relevance, which is a criterion of whether this source even belongs on this balance at all: is that source of evidence actually relevant to the question?

So coming back to this: you're going to weigh all these pieces of evidence to decide what your answer to your hypothesis might be. Again, we talk about the direction, the strength, and the reliability of our evidence; those are the three main things. And then there's relevance, which in the geek speak is often called external validity. External validity is really important when we think about specific versus generic hypotheses. Specific evidence applies to a specific hypothesis about a particular situation: observations that show rats are present on all the Eastern Bay islands. Generic evidence applies to a generic hypothesis and is derived from consideration of specific cases; this is what Hume and Popper talk about as inductive reasoning. And the reason all this matters, why relevance matters so much, is that there's something called Popper's induction fallacy. So let's say we have this idea that we're going to poison the rats on our island.

And we have accumulated evidence from a hundred different sources showing that poison is very effective at controlling rats on small islands. But let's say all hundred of those islands are dry islands, and we happen to be on a very wet and rainy island. So on the hundred and first island, we put our poison down and it all washes into the sea and nothing happens; it's not effective. This is why it matters: the evidence base that you see in the world is generally generic evidence, and you have to relate that to the specific conditions you are dealing with on your particular island. And we'll give you a flow chart that helps you think through that in just a second.
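The balance metaphor from the last few paragraphs (relevance as a filter for whether a source belongs on the balance at all; direction, strength, and reliability determining where and how hard it pushes) can be sketched in code. To be clear, this is my illustration, not something from the paper or Miradi: the class, the 0-to-1 scales, and the multiplicative scoring are all assumptions made for the sake of the sketch.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One piece of evidence bearing on a hypothesis."""
    relevant: bool      # does this source even belong on the balance?
    direction: int      # +1 supports the hypothesis, -1 refutes it
    strength: float     # magnitude of the effect: 0 (weak) .. 1 (strong)
    reliability: float  # quality of the source: 0 (anecdote) .. 1 (systematic review)

def weigh(evidence_base):
    """Sum the force each relevant piece applies to the balance.

    A positive total leans toward supporting the hypothesis, a negative
    total toward refuting it. Irrelevant sources are excluded entirely,
    mirroring the talk's point that relevance is a gate, not a weight.
    """
    return sum(e.direction * e.strength * e.reliability
               for e in evidence_base if e.relevant)

# A strong, reliable supporting review outweighs a weak refuting anecdote;
# the off-topic source, however strong, never touches the balance.
base = [
    Evidence(relevant=True,  direction=+1, strength=0.9, reliability=0.9),
    Evidence(relevant=True,  direction=-1, strength=0.3, reliability=0.2),
    Evidence(relevant=False, direction=-1, strength=1.0, reliability=1.0),
]
print(weigh(base))  # a positive total: net support for the hypothesis
```

Note how the wet-island problem shows up here: the hundred dry-island studies would all be marked `relevant=False` for the rainy island, and the apparent mountain of support would contribute nothing.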

One other aspect of evidence that I'm not getting into here is the concept of burden of proof. This is the idea that the level of evidence you need is situational, depending on the consequences of the decision you're making and the relative risk of action (a Type I error) versus inaction (a Type II error). With Kent Redford, I wrote a whole paper about this a few years back. We're not getting into that in this paper, but I encourage you to reference that paper if you want to think about this burden of proof question. All right, so that was a typology of what evidence is: this idea of evidence and information, and types of hypotheses. That's a lot of geeky stuff, so how do we actually make it accessible to practitioners? What we tried to do is develop a decision tree to guide practitioners in how to use available evidence in a given conservation situation.

In this case, I'm going to use the classic question that people ask about evidence, which is "does this action work?": will using X poison applied via protocol Y control (a.k.a. cause a reduction to a desired threshold level of) the rat population? And what you can see here is a little excerpt from a graphical theory of change, or results chain, which is one of the languages we use in Miradi and the Conservation Standards to show our theory or hypothesis: if we poison the rats on the islands, the poison will effectively kill the rats, the rats will be eliminated from the islands, predation on seabird nests will be eliminated, and then we'll achieve our goal of having healthy puffin and seabird populations. Of course, it also depends on no new rats getting access to the island, and, as I said before in that example, maybe there's an enabling condition here that rainfall patterns permit the use of poison. But this is the hypothesis that we have about this work.

So this is the decision tree that we're going to use. It looks a little bit scary, but let me walk you through it step by step. You start with a proposed action with clear goals and an explicit theory of change. Step one in the decision tree asks: do you have a well-formulated specific hypothesis about either this overall theory of change or a specific assumption in the theory of change? In other words, do we have a formulation of whether rat poison will actually work to eliminate the rats on our island? And I mean specific: on my specific island where I'm working. If yes, you go to step two; if no, you have to take the time to formulate a hypothesis. There's no point in talking about evidence unless you have a hypothesis. Step two asks: does direct or sufficient circumstantial project evidence support your specific hypothesis?

And you can be very confident, confident, needing more information, or think it's unlikely to be true. Let's see what this looks like graphically. If I have a lot of local knowledge, if we've been poisoning rats for years on our island, or if I'm working with indigenous peoples whose traditional knowledge tells you a certain action is going to work, and you're very confident that it's going to work locally, then get on with the work: you're very confident, so start implementing the action at scale and monitor implementation. Let's not spend time digging up evidence for something we know is going to work. Conversely, if we have evidence locally that poison just doesn't work on our islands, for whatever reason, then why are we bothering with this strategy? Let's get on with some other strategy. If you're more in the middle, you're more mixed.

If you're "confident, but..." or you need more information, then you have to go on to the next step. Step three asks: is this hypothesis critical to your action and overall project, where critical means being wrong would have major consequences? We put this in as a circuit breaker, because we don't want practitioners to feel like they're going to be paralyzed and can't move forward without getting evidence for something. At some point, you don't have time to research everything. So if it's not critical, don't waste a lot of time on it; maybe be quicker and dirtier about it. But assuming it is critical, you go to step four. At step four, we ask what the rest of the world knows about our hypothesis: is there generic evidence to support a generic version of our hypothesis? And that evidence can be convincing, potential or not clear, or it can clearly refute the hypothesis.

Going back to our diagram: if the rest of the world says, yeah, poisoning rats is a pretty dumb idea, it generally doesn't work, well then we probably want to think about something else. But even if the evidence from the world is convincing, even overwhelming, let alone if it's only potential or not clear, we still have to do one more step before we can go on. That last step is to ask: is the generic evidence relevant to our site conditions, and thus does it support our specific hypothesis? This is where we really have to ask ourselves the question: were all those other islands dry and ours wet, so this is not going to work? And that leads you to four outcomes, four choices. If you're very confident, you can then get on and implement the action at scale. If it's unlikely to be true, you're going to consider alternative actions.

And if you don't have any better candidates, triage that whole strategy and get on to something else. If you're "confident, but...", so you generally think it's going to work but you're not totally convinced, then you might want to implement the action at scale but invest a bit more in monitoring effectiveness. And if you're still uncertain, you don't know whether it's going to work, then you might want to consider alternative actions, or if not, you're going to pilot your work and use adaptive management to really close that uncertainty down over time. So you use this concept of adaptive management and learning by doing in contexts where you're uncertain and don't have enough information. Again, if you know what to do, get on with it; but if you don't, then you're going to go into this more deliberate, thinking approach and use the evidence.
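The five steps just walked through can be compressed into a short sketch. This is my illustration, not code from the paper or Miradi: the function name, the categorical input values, and the outcome strings are all assumptions that paraphrase the talk's wording.

```python
def evidence_decision(
    has_specific_hypothesis,  # step 1: well-formulated specific hypothesis?
    project_evidence,         # step 2: "very_confident" | "unlikely" | "mixed"
    is_critical,              # step 3: would being wrong have major consequences?
    generic_evidence,         # step 4: "convincing" | "potential_or_unclear" | "refutes"
    site_relevance,           # step 5: "very_confident" | "confident_but" | "uncertain" | "unlikely"
):
    """Walk the talk's five-step decision tree and return a course of action."""
    # Step 1: there's no point talking about evidence without a hypothesis.
    if not has_specific_hypothesis:
        return "formulate a specific hypothesis first"
    # Step 2: direct or sufficient circumstantial *project* (local) evidence.
    if project_evidence == "very_confident":
        return "implement at scale; monitor implementation"
    if project_evidence == "unlikely":
        return "consider alternative actions"
    # Step 3: the circuit breaker -- don't research non-critical assumptions to death.
    if not is_critical:
        return "move on; be quick and dirty about this assumption"
    # Step 4: what does the rest of the world (generic evidence) say?
    if generic_evidence == "refutes":
        return "consider alternative actions"
    # Step 5: is that generic evidence relevant to *our* site conditions?
    outcomes = {
        "very_confident": "implement at scale",
        "confident_but": "implement at scale; invest more in effectiveness monitoring",
        "uncertain": "pilot the action; use adaptive management",
        "unlikely": "consider alternatives, or triage this strategy",
    }
    return outcomes[site_relevance]

# The wet-island example: the world's evidence for poisoning is convincing,
# but its relevance to our rainy island is uncertain, so we pilot and adapt.
print(evidence_decision(True, "mixed", True, "convincing", "uncertain"))
# -> pilot the action; use adaptive management
```

The structure makes the talk's point visible: most paths exit before you ever consult the global evidence base, and the final branch turns entirely on relevance to local conditions.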

All right, so that's the second part of the presentation. I'll just pause for a second to give you a moment to breathe, and myself a moment to breathe; we'll hold questions until the end, but think about any questions you might have. All right, let's get into the third part of this work, which is how we operationally incorporate evidence in conservation practice at both a project and a discipline level. Again, we have these Open Standards for the Practice of Conservation; version 4.0 was just released earlier this year, in 2020. These are the steps in the standards, again, what we help people do: you assess your situation, you come up with a purpose for your project and talk about who's on the team, you plan your goals and strategies, you develop your monitoring plan, you implement, you analyze and adapt, and you share. What we've done as a result of this evidence work is clarify throughout the Open Standards how principles from both adaptive management and evidence-based conservation now inform the cycle.

So if you read version 4.0 of the cycle, as opposed to 3.0, you'll see a lot more about how we use evidence and bake it in. In particular, when it comes to the Miradi software, we've also tried to bake evidence into what the software supports. So again, say you have a theory of change that says we're going to poison the rats, but we're also going to bring in a campaign for voluntary use of rat barriers to keep new rats from accessing the islands; there are two parts to the strategy we're adopting in this theory of change. What we've built into Miradi is that you might now lay out, as a team, your hypothesis; this theory of change is your hypothesis about your strategy. And we've built in, for example, the ability to show uncertainty in relationships and to annotate or show the evidence for how you're making that assumption in your specific theory of change.

So you might have looked at a non-systematic review, and you're citing that to say why you're uncertain about whether those boat rat barriers are actually going to work. At least you're documenting what you know about the external evidence. We also give you a place to state your objectives, those hypotheses about what you need to accomplish: what are the notes on which you're basing those objectives, and what's your confidence rating? Are those objectives a rough guess, expert knowledge, external research, or onsite project research? It's okay to have a rough guess if that's all you have, but at least now, for our key factors, we're stating our confidence levels in a standardized fashion in Miradi. And then finally, we can start to track our performance in terms of both implementing our strategies and activities.

So these yellow things can be completed, on track, minor issues, major issues, or scheduled for the future, and you can document what you learned at that review period. Then, as part of that pause-and-reflect, you also document: did we achieve the results we were hoping to have? Are they on track, achieved, partially achieved, or not achieved? And it's this documentation of whether a specific instance is working that's going to start to help us, as you'll see, build up the evidence base to learn the conditions under which a given strategy might or might not work. So we're baking a lot into this software to support evidence and evidence-based practice in the work that practitioners and teams are doing. But what's more interesting is: can we do this more globally? And so what I'm showing you here is a napkin

that I sketched out in 2008, at the ConsBio meetings, I think in Tennessee. I was sitting down with Bill Sutherland and Andrew Pullin, folks who work a lot on synthesizing evidence around the world about conservation practice, and we came up with the sketch that you see here. What the sketch evolved into, in a paper a few years later, is a high-level theory of change for how we transform the discipline of evidence-based conservation. What we're ultimately about in our theory of change is that we would like to have more effective conservation actions and desired conservation impacts. To get there, evidence needs to be generated, evidence needs to be accessed, and that evidence has to be used. And in particular, there's a bunch of enabling conditions for evidence-based conservation.

So at the highest level, this is our theory of change about how we think evidence can improve how conservation gets done. As you might imagine, there's actually a lot of detail here, and I won't go too far into it. In terms of evidence being generated: ongoing project evidence has to be reliably and systematically documented, it has to be systematically harvested, and it has to be compiled and made accessible to others. Then that project evidence is analyzed to create generic evidence; you're going from specific project experience to generic evidence. Then people actually have to be able to find and access the evidence they need on a timely basis, and then they actually have to use it. Then, hopefully, they start taking actions that are supported by evidence and stop taking actions that are not supported by evidence. And to make all this happen, the conservation community has to be aware of it, and it has to have the incentives and the capacity.
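The step just described, rolling specific project evidence up into generic evidence, can be sketched as a toy aggregation. This is my illustration only: the record fields, the status vocabulary, and the `roll_up` function are assumptions for the sketch, not the schema of Miradi or any real evidence library.

```python
from collections import Counter

# One record per project instance: which strategy ran, under what site
# conditions, and whether its results were achieved at pause-and-reflect.
records = [
    {"strategy": "rat poisoning", "conditions": "dry island", "result": "achieved"},
    {"strategy": "rat poisoning", "conditions": "dry island", "result": "achieved"},
    {"strategy": "rat poisoning", "conditions": "wet island", "result": "not achieved"},
]

def roll_up(records, strategy):
    """Aggregate specific project results into generic evidence: a tally,
    broken out by site conditions, of how often the strategy worked."""
    tally = {}
    for r in records:
        if r["strategy"] == strategy:
            tally.setdefault(r["conditions"], Counter())[r["result"]] += 1
    return tally
```

Calling `roll_up(records, "rat poisoning")` shows the strategy succeeding on dry islands and failing on the wet one, which is exactly the condition-specific learning the theory of change is after: many documented specific cases, harvested and compiled, become generic evidence that still carries the conditions it came from.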

There's a whole bunch of things that have to happen here in this land of science skepticism. This is the whole question of whether evidence is even going to be adopted at all, or whether, given the skepticism about science, this is all kind of barking up a crazy tree and no one's going to pay attention to us. So we do have to think about that as well. Okay. And then what these hexagons show are the things that our community can collectively do in order to make this whole theory of change happen. And so what I want to share with you is just a little bit of extra work. This is not in the paper referenced, but it's some additional work that points to where all this is headed. It comes from some research we did for the Moore Foundation on conservation evidence libraries.

So the role of evidence libraries really has two parts in this overall chain. You need libraries to disseminate experience, but you also need libraries to accumulate experience. And so there's a lot we could aspire to. One of my favorite examples comes from the education world. As part of this research for Moore, we reviewed dozens and dozens of different evidence libraries in different fields. One of my favorites was the What Works Clearinghouse in education. This is put on by the US Department of Education. You can go online to the What Works Clearinghouse, and if you're a teacher or educator and you want to, say, use the SRA Real Math preschool math curriculum, a tool you're expecting to use, you can actually click on mathematics and then find these named interventions, programs, products, policies, and practices.

And you can get both systematic reviews as well as original studies of these tools. So here's the entry for SRA Real Math. You can see some of the results: they used two systematic, high-quality studies to review the effectiveness of this tool. You can see a quantitative measure of the change in mathematics achievement and the target population. It shows the conditions: was this done in free lunch programs, with schools serving poorer kids, or with different diversity mixes? You can get the full report, and these are paid third-party reviews. The Education Department spends a hundred million dollars a year creating these kinds of reviews and studies of the effectiveness of different kinds of curricula. So there are several evidence libraries starting to play a similar role in conservation that some of my coauthors are involved in.

So Conservation Evidence, which Bill Sutherland and company run, where you can look up different interventions; the Collaboration for Environmental Evidence, which is a network of folks around the world doing systematic reviews of conservation actions; and a new one called Evidensia that's looking at market-based approaches to conservation. So these are really cool new tools, libraries that are making evidence available to practitioners who are interested in these things. What we found in our paper is, if you look at different fields like medicine, education, development, and policing, some of them are much farther ahead than we are in the conservation sector. And if we analyze them, they tend to have simpler systems with clearer outcomes, and just as importantly, a lot more time and money to devote to this, as opposed to our fuzzier outcomes and less time and resources.

So if you look at, for example, the resources each field has: medicine spends trillions overall. They have practitioners, they have researchers, but they also have a whole army of clinical researchers and agency regulators and insurance companies who are all poring over the research and trying to translate it into evidence-based practice. Similarly in education, similarly in development. But in conservation, we have some project teams, we have some biologists and ecologists and social scientists, and then we have a few unpaid synthesizers and a few evaluators. So one reason we're a little bit behind is that we just don't have the resources dedicated to this yet that some of these other fields do. We need to develop a cadre of clinical researchers. But here's the real challenge in conservation for those researchers. If we go back to this pyramid and we talk about data, information, knowledge, and wisdom, really what we want is case study information collected to enable evidence synthesis, which in turn is then used to inform ongoing practice.

But the problem is these syntheses and systematic reviews and those libraries I showed you are mostly limited to searching the published literature, which is a small fraction of the overall case experience. And so if you look at the lower levels of the pyramid, from the point of view of those synthesizers, it's all dark matter. The results are maybe in internal organizational records or field notebooks, or are not even documented. Did our application of that rat poison actually work? We don't have access to all that case study information, all that data, that would allow us to have those syntheses and get to the wisdom over time. So we need to figure out how to do that, and we've been working on some tools that will help us get there. This is a quick little detour, but one of the tools you need is just a set of standard terms to describe conservation work.

So if you think about a typical conservation project, let's say you have a direct threat: you've got a bunch of cows sitting in your stream. If one group calls that cows, the next cattle, the next livestock, the next grazing, and the next ranching, you'll never realize that you're all dealing with the same problem if you put that into a database. And so you won't be able to connect the dots and see what works and what doesn't work. So particularly for the threats, the diseases so to speak, and the actions, our list of cures, just like medicine has a list of diseases and a list of treatments, we figured we need to do the same thing in conservation. So about 15 years ago, CMP set out to develop a taxonomy of threats and actions. We discovered the IUCN Red List

was doing the same thing at the same time. Oops. But the really cool thing was that there was about 80% overlap between our two efforts, and we were able to bring them together into a set of unified global classifications, which was all published in a paper in Con Bio, covering both threats and actions. But I want to draw your attention to the actions one here. This is the CMP actions classification, and what it is, is a list of every possible conservation action in the world. At some level it's hierarchical, but I would challenge you to think of an action that wouldn't fit into this classification system. And if you did come up with one, we could add it and change the classification system. So we now have a list of the diseases, but also of the cures that we have available to us. That was the first step.
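The database-matching problem behind all this can be sketched in a few lines. This is a hypothetical illustration, not the actual CMP or Miradi data model; the category label borrows the published IUCN-CMP threat class for livestock, but the synonym table and function are invented for this example:

```python
# Hypothetical synonym table: every local term a team might use for the
# same direct threat maps to one canonical classification entry.
# (The "2.3 Livestock Farming & Ranching" label follows the published
# IUCN-CMP threats classification; the lookup itself is illustrative.)
CANONICAL_THREAT = {
    "cows": "2.3 Livestock Farming & Ranching",
    "cattle": "2.3 Livestock Farming & Ranching",
    "livestock": "2.3 Livestock Farming & Ranching",
    "grazing": "2.3 Livestock Farming & Ranching",
    "ranching": "2.3 Livestock Farming & Ranching",
}

def classify_threat(local_term: str) -> str:
    """Map a team's own wording onto the shared taxonomy, if known."""
    return CANONICAL_THREAT.get(local_term.strip().lower(), "UNCLASSIFIED")

# Five projects, five different words -- one shared bucket, so a query
# across the database can finally see they all face the same problem.
reports = ["Cows", "cattle", "Grazing", "ranching", "livestock"]
buckets = {classify_threat(r) for r in reports}
```

With a shared taxonomy, `buckets` collapses to a single category, which is exactly what lets "what works" comparisons happen across projects.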

What we're now trying to do is iterate between specific and generic actions. So let's say we're working on a specific action at one particular place: we're going to Africa and we're helping set up ranger patrols to stop elephant poaching. We have a theory of change that says if we have ranger patrols, we'll have effective patrols deployed. If we do that, patrols will apprehend the poachers. If we do that, the presence of the patrols will deter poachers, elephant poaching will be reduced, and we'll have an increased elephant population. And the purple triangles represent indicators, things we could actually measure to see whether that specific action was working. Let's take another specific action. Now we're working in The Bahamas somewhere. We care about sea turtles on beaches. The threat is egg collection. And so we're working with community guards: effective community guards are organized,

they catch the egg collectors, their presence deters egg collectors, egg collection is reduced, and we have an increased sea turtle population. So these two specific instances are taking place in different parts of the world: different ecosystems, different biomes, different people, different stakeholders. Everything about these two systems is completely different. And yet, as you can see underneath, there's a logic behind the action they're taking, and you could create a generic action for a patrols and guards strategy, which you see here. And what we hold out is that if you could collect a whole bunch of these and compare them, you could start to determine the conditions under which a generic patrolling and guarding strategy might work. So for example, if the violators you're trying to catch come from your own internal community, then maybe having a community-based guard patrol won't work, because people don't want to rat on their neighbors.

If the threat is coming from the neighboring community, people would be really jazzed to do it, because then they're stopping the threat and protecting their own resources. And if the threat is coming from armed insurrectionists with AK-47s, the community-based patrol system won't work, because they're going to get blown out of the water by the firepower. So the answer is never "does a strategy work" or "does an action work"; it's always "under what conditions will it work?" And by iterating between specific and generic actions, we can start to come up with an understanding of what works and what doesn't work. So this is a foundation of a science of conservation. And just like E.O. Wilson can show you an ant on a red-tagged specimen, the type specimen that is a foundation for the science of biology, we're trying to create something similar in conservation. So coming back to the challenge in conservation, those lower levels of the pyramid, that dark matter that no one is seeing: we need to see that in order to develop those generic actions, and we need two additional evidence libraries.
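One way to picture this iteration between specific and generic: each specific project records its results chain and local conditions, tagged with the generic action it instantiates, so instances can be pooled and compared. The structure and field names below are a hypothetical sketch, not Miradi's or CAMEL's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SpecificAction:
    """A specific action: its results chain plus the local conditions."""
    name: str
    generic_action: str  # shared, CMP-style generic action type
    results_chain: list = field(default_factory=list)
    conditions: dict = field(default_factory=dict)

elephants = SpecificAction(
    name="Ranger patrols vs. elephant poaching (Africa)",
    generic_action="patrols_and_guards",
    results_chain=["patrols deployed", "poachers apprehended",
                   "poachers deterred", "poaching reduced",
                   "elephant population increases"],
    conditions={"violators": "armed external groups"},
)

turtles = SpecificAction(
    name="Community guards vs. turtle egg collection (Bahamas)",
    generic_action="patrols_and_guards",
    results_chain=["guards organized", "collectors caught",
                   "collectors deterred", "egg collection reduced",
                   "turtle population increases"],
    conditions={"violators": "neighboring community"},
)

# Instances sharing a generic action can be pooled, so the question becomes
# "under which recorded conditions did this strategy work?" rather than
# "does it work?" in the abstract.
pooled = [a for a in (elephants, turtles)
          if a.generic_action == "patrols_and_guards"]
```

The point of the shared `generic_action` tag is exactly the iteration described in the talk: many specific records accumulate under one generic pattern, and their differing `conditions` are what a synthesis would analyze.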

So we need libraries to disseminate evidence, but we also need libraries to help generate evidence. In particular, we need a library of specific conservation actions with a spatial interface. This is a place where people could record the specific actions they've taken: what did they do, what was the theory of change, what were the conditions under which the action was implemented, with standard indicators and measures, and maybe a map-based interface so you can see where the action took place. And then obviously you're going to need some control over who sees the information and what's shareable and what's not. So that's the first library: specific information. And then we also need a library of generic conservation actions with common measures that shows what that accumulated wisdom is and serves as the template or pattern for the specific work. And we think that this Miradi software we're building is one place, not necessarily the only one, where we can start to collect some of that information.

So shared projects in Miradi would be examples of specific real-world work that we could find. Building blocks are collections of generic templates or patterns. If you click through to Miradi's building blocks page, you'll come here, where we have different kinds of templates. And then, this is not quite live, it'll be live in about a month, there's the Conservation Actions and Measures Library, or CAMEL, where you'll be able to search by CMP action type or by complex strategies, or look for objectives and indicators, and find these generic actions and information about them. And at some point we can demo CAMEL once it goes live. The other thing we're developing that we're excited about is a new series in the journal Conservation Science and Practice, which is going to have generic theories of change for key conservation strategies.

So these are for more complex strategies. Each paper is going to have a definition and description of the strategy, talk about where it might be employed, have a basic and a detailed generic theory of change for that strategy, a set of generic objectives and indicators, and then an initial review of the evidence behind that strategy. But I want to be really clear: in this work we are not trying to do the systematic reviews. We're trying to set up the evidence base so that people like Bill Sutherland or Andrew Pullin and all the CEE folks can then have the database they need to do those really good systematic reviews. So it's all part of this evidence ecosystem that we're trying to build. If we're going to make all this happen, we need to get people on board to support this work, to contribute, to be part of it.

And that's one of the reasons I'm out here preaching and giving presentations: we'd love to invite all of you, as appropriate, to join the party, be a part of this ongoing work, and help us construct this stuff. So that's all I have to say. I'm going to end with my post-test, to come back to the pretest, and then we'll open up for discussion and questions. So, webinar post-test. True or false: evidence is the relevant information used to determine whether a conservation action will lead to desired outcomes. I hope you can see it was a slightly trick question, in the sense that that's certainly one of the questions you can answer with evidence, but there are many other kinds of questions in a conservation setting for which you can also use evidence. So it's actually kind of false, in the sense that you could look not just at actions, but also at the presence or status of certain factors. True or false:

you should always review high-quality external evidence before deciding on a course of action. I hope you figured out that this was also designed to be a little tricky, but if you go back to this flow chart here, the answer is false. If you know your action is going to work, don't waste time on evidence. And if you know your action is not going to work, don't waste time on evidence. It's only when you're in that middle ground that you really need to consult the evidence and go through this whole flow chart. And finally, where would you go to find available evidence about a conservation action you're considering using? Well, I hope you would think about coming to CAMEL and to Miradi and looking for information there, among other sources, as well as all the systematic reviews and all that other work. So that's how you reach me, per Scott's point. You can download the paper, and you're welcome to do a free trial of Miradi; it's software that supports this work and is evidence enabled. And let me pause there and see if there are any questions.

Thank you, Nick, for a fascinating and thought-provoking seminar. I'll ask that anybody with questions please type them in the chat or raise your hand, so we can try to control the order in which people ask questions rather than having everybody trying to talk over you. So, Frank, you had your hand raised. Do you have a question for Nick? Yeah. Nick, thank you very much for a great presentation. I'm in the same Science and Decisions Center as Scott, and I spent quite a number of years working in the NGO environmental wildlife community before coming to USGS. I guess one of my questions here is, in terms of using this process, I'm making a distinction between short-term and long-term decision making, either on the part of NGOs deciding where to put their money to protect endangered species, or with the example you gave with the US Fish and Wildlife Service.

And it seems to me like a long process to go through. My question is, where do you see the role of NGOs in this? And, given my experience overseas in West Africa with AID, there seems to be an issue with institutional memory and reinventing the wheel, I wouldn't say with every administration, but maybe every 10 years, with people coming in and out. So if you could respond to those concerns, I'd really appreciate it. Thank you. Sure, I'd be happy to. And I apologize, folks, that I'm not sharing my video, because I have to stop screen sharing and I might want to show a couple of diagrams here. So I apologize for not showing my face. But to answer your questions: absolutely, what you don't want to do is spend, to be extreme, $50,000 worth of research to evaluate the evidence for a quick-and-dirty five-day or five-hour decision you're making, right?

You have to balance your investment in your accumulation of evidence with the resources and the scale of the problem and the burden of proof and all those things I talked about; that's part of the flow chart. So I think you have to balance where you do the serious evidence research versus where you go on gut or on quicker decision-making processes, and I think those are entirely valid. That's what the flow chart is getting at. That said, where you have more serious discussions, where things matter more, then you have to start to bring in the evidence. We've been spending the last 10 years doing a ton of work with USAID, helping them take the USAID project and program management cycle and infuse it with these questions of evidence and adaptive management.

So I encourage you to go back and look at some of the stuff that's coming out of AID now, because they've actually made a fair amount of progress in that regard. But you want to spend that time at the bigger picture, right? Where you're coming up with mechanisms for how you're going to spend hundreds of thousands or millions of dollars, and then coming into pause-and-reflect sessions and bringing it in at the right time. And conversely, we've also used a lot of these tools at the other end of the scale. There's a group in Australia called Conservation Management. They've pioneered a version of the conservation standards called Healthy Country Planning, which they've been doing with Aboriginal groups in Australia and now with Indigenous groups up in Canada as well. They're able to take these concepts and put them into a form that communities can use and interpret and make theirs, to make the decisions they need to make. So I think there is real scope for using the power of the scientific method and the power of evidence to make better decisions, but you've got to scale it to the level at which you're working and what you're trying to do.

Great. Thank you, Nick. Appreciate the presentation.

All right, Nick, we've got two questions in the chat. I'll read the first one, from David Smith. He says: along with Hume and Popper, Thomas Chamberlin thought hard about evidence and inference as he developed the method of multiple working hypotheses. In conservation, there are often multiple plausible hypotheses that link possible actions to our desired outcomes. Can multiple hypotheses be incorporated into the theory-of-change decisions?

I think, in theory, yeah, I think we ultimately have to, right? Because you might want to think about a multivariate situation, or you might want to think: if this hypothesis is not working as planned, what's the alternative hypothesis? So yes, certainly in some situations we do need to do that, and be explicit about it. But every dimension you add doubles, or exponentially adds to, the amount of research and thinking you have to do, so it always comes up against that practicality question. I think the art of this is knowing when you need to think multi-dimensionally and when you can get away with simple. It's funny, because I'm working on a book right now about scale and conservation and complexity.

And one of the things, I don't have it quite in front of me, but everyone probably knows George Box's famous quote that all models are wrong, but some are useful. If you actually read his quote, it goes on to say that the best models are those that are simplest, that the mark of a great scientist is to make the simplest possible model, and that ever more complicated models are actually the mark of mediocrity. And it's really interesting to think about how we find, to use another quote, that simplicity on the far side of complexity, and cut through the work. And actually, one thing I wanted to say about the previous question as well, which I meant to add: there was a question of how you deal with institutional turnover and change. That's one of the reasons these theories of change and these ways of documenting your ideas become so important, because at least you can capture in systematic ways what you've learned and what you've not, what mistakes you've made, and where things have worked and not worked. You have some prayer, at least, of then passing it on to your successor when you move on from a project. So I think that's part of that question as well.

And there's a question from Lydia Olander: do you ever consider co-benefits or trade-offs in your theory-of-change models and evidence assessment, and if so, how? So that starts to get into the question of not just multiple actions, but also multiple outcomes. And if you have multiple outcomes that you're working towards, how do you balance those tensions? In our models, we do put in several related outcomes, or we'll put in conservation outcomes and try to weigh and balance those against human wellbeing outcomes. And how do you balance those things out? Again, in the interest of simplicity, we try to take these complex systems models and take slices of them through these pathways, these theories of change, and understand those, and then expand them to look at competing hypotheses or competing outcomes as needed, when we need to introduce that level of complexity.

And then of course, if you want to get really serious about it, there are all kinds of more serious decision-support tools, linear programming and all the kinds of things in structured decision making, which again I don't think are inconsistent with this way of thinking. It's just that sometimes you need to bring in more of that rigor, and you have the data where you can bring in that rigor; and sometimes you're looking more at conceptual ideas and you may not have the quantitative rigor, but we still want you to have the conceptual rigor and to understand the theories of change you're trying to work on. Are there any other questions? Carl seems to have his hand up. Hi Nick, thank you for a really fascinating presentation. It's very thought provoking. I was intrigued, looking at your initial three questions and listening to the discussion during the presentation, and one topic kept coming up in

my mind, and it relates to the issue of uncertainty. You mentioned adaptive management in the Miradi model. And I have, I guess, two parts to the question. One is, could you say a little bit more about how you treat uncertainty? And the second question relates to the importance and prioritization of evidence. All evidence is not equal, and all evidence has different costs associated with getting it. Are there methods that you've considered for prioritizing which evidence is most important in deciding on a course of action?

Those are two questions where we could probably spend an entire hour talking about both of them, or either of them, but let me do my best to at least scratch the surface. So your first question is really about uncertainty and different kinds of uncertainty. One of my colleagues in the Measuring Impact project, Natalie Dubois, and some other folks have a recent paper; I'll have to look for it, and I'm going to be paraphrasing it. But she has a really interesting set of observations in that paper: is the uncertainty going into the work you're doing, or is it uncertainty that's emerging after the work? There are uncertainties at different levels, and evidence can be used in different ways to plug different types of uncertainties, different uncertainty gaps.

But coming back to this work: going into it, I would have said the conservation standards and those process steps, going through that cycle, are about doing adaptive management. But going through this evidence work, we realized it's not always about adaptive management. You can go through the steps in that cycle and take action, and if you know your action is going to work, you get on with it and do the work, and you're not doing adaptive management. You're just doing conservation work, because you know the strategy or approach is going to work. If you're uncertain about whether your action is going to work in your conditions and you don't have enough evidence, that's when you're in the adaptive management case. And unfortunately in conservation, as opposed to medicine, we're so frequently in that adaptive management case that we tend to think adaptive management is everything we do, but I don't think it necessarily is. And so, thanks to Natalie and some of this other thinking, I think this whole work about different kinds of uncertainties, and understanding what uncertainty gaps you're plugging, becomes really important as you think through the work. So that's maybe an initial thought on your first question. Does that make sense?

It does. It does. And I just wanted to get a little more perspective on your thinking relating to uncertainty, and, as far as prioritizing evidence, how you decide which evidence is most critical and most important for a decision.

Yeah. So there's actually a really easy answer to that question, one that I'm very confident in, but as you'll see, it's not very helpful. The really easy answer is: I think project teams should collect the least amount of evidence they need in order to make a good decision, right? You don't want to collect more, because collecting that evidence is expensive. So you want to find the least amount of evidence you need to make a decision. Now, how you define that and how you figure that out gets a lot harder than saying it. But I think a lot of the art of this, and this is the art of hypothesis construction, the art of science, the art of project design, is knowing when you have enough, or knowing when you can get by,

and when you really need to find that additional information. So maybe we get to a world like medicine, where you have the equivalent of the Cochrane Collaboration and, for a given treatment, you can go online and find, like at the Department of Education site I showed you, the evidence on the conditions under which it's going to work. I think we can get there. We're not close yet, but in the future it will be a lot easier to get the evidence you need as we develop that. Of course, that doesn't take into account the other question of multifactorial problems and different combinations in these complex systems we're working in. But I think we can get there. In the short term, I would settle for: find the best evidence you can in a reasonable amount of time. Which I know is a cop-out, but it's the best I can do now. That's a good, simple rule of thumb to consider. So thank you.

Any other questions? Thanks, Scott, for finding Natalie's paper too! That's great. Oh, sure. Definitely worth reading. Yeah. And then there was another question about one of your own papers that you mentioned previously. I put a link in there as well to the Biological Conservation paper, but for those interested, just search Google Scholar for Salafsky and there's a whole library, a library of evidence, if you will. So, any other questions? We'll go... Clyde? Frank, did you raise your hand again? I think it was Clyde Casey whose name popped up. Yeah, he goes by Frank. Oh, he goes by Frank. Okay. That's all right. He may have just still had his hand up. Well, if there are no further questions, I want to say thanks again to Nick. This has been really fascinating, and hopefully some folks can take this back and explain it to conservation practitioners, and maybe we'll see a little bit more of this and extend this conversation into wider circles. Thank you, everyone, for joining us, and hopefully we'll be having some more SDC seminars in the near future. So thank you again, Nick. Really appreciate your time.