This section describes concepts and stages of structured decision making, suggests a few simple ways to implement common elements of decision analysis, and notes when assistance from others with specialized training would be useful, or even necessary.
- Multi-criteria analysis helps managers make environmental management decisions requiring tradeoffs among many desired outcomes of management action.
- Hierarchically organized objectives and clearly defined criteria (especially when using qualitative measures) are the starting points for transparent decision modeling.
- The “facts” modeling side of decision analysis describes the effects of management activities on features of the environment that decision makers and stakeholders care about.
Much of the material in this section is taken from “Multi-criteria Evaluation for Ecosystem Services: A Brief Primer” by Lynn Maguire. It is accessible at https://sites.nicholasinstitute.duke.edu/ecosystemservices/research/publications/.
Many texts present multi-criteria analysis at various levels (see Recommended Reading at the end of this section). Training in structured decision making is available for federal agency land managers and staff analysts (e.g., http://nctc.fws.gov/courses/programs/decision-analysis/index.html). An increasing number of federal agency consultants have such training.
This section suggests how and where structured methods of decision making, specifically multi-criteria evaluation, might help agencies integrate ecosystem services into their decision processes. It can be used as a comprehensive model addressing all planning stages up to the decision process, or it can provide insight on specific stages, such as problem scoping. It can be carried out in great depth and with full quantification, or it can be used as a conceptual framework in qualitative form. It is appropriately described as a non-monetary approach for comparing stakeholder preferences and analyzing tradeoffs. It can be an alternative to monetary valuation, or it can be used in combination with monetary valuation. Although the focus here is on ecosystem services, the methods are useful generally for decision problems in which tradeoffs must be made among multiple resource management objectives.
This section is based on multi-attribute utility analysis (MAUA).1 Because much of the material presented here also applies to other varieties of multi-criteria analysis, the section refers to the MAUA approach more generally as multi-criteria decision analysis (MCDA). And because different MCDA methods, different agency planning processes, and different ecosystem services approaches use different terminology for similar concepts and procedures, this guidebook calls attention, where possible, to corresponding terms from different practice domains.
Elements of this guidebook’s MCDA framework explicitly link to all decision analysis steps, from initial scoping to the actual decision. The key elements are
- constructing an objectives hierarchy that identifies the target services (outcomes) of concern to decision makers and stakeholders and specifying empirical indicators for those outcomes,
- creating a conceptual diagram (or means-end model) that links management actions to their likely impacts on the provision of services as captured in the indicators and forming a matrix showing how management alternatives will affect outcome indicators, and
- characterizing stakeholder preferences for varying levels of changes in the provision of the target services and stakeholder priorities among target services to support a calculation of overall value for each management alternative.
The MCDA framework can be used as a comprehensive and explicit analysis that covers all stages of the decision process, or it can be used to inform specific stages of the decision process, with other stages carried out intuitively, and often implicitly. The description that follows points out specific MCDA products short of a complete analysis that might inform the decision process.
The MCDA approach is illustrated with a hypothetical decision about management of forested wetlands on a national forest (see details). Briefly, a national forest wants to enhance forested wetlands for their own sake and as essential habitat for several at-risk species (two birds, a salamander, and a fish). Land managers are considering two alternatives to status quo management: (1) modifying water releases from upstream reservoirs to increase drought-year flows to benefit forested wetlands on the national forest, and (2) restoring a former wetland on the national forest through damming and dredging. The first alternative would affect upstream agricultural interests; the second would benefit downstream landowners whose lands are subject to flooding, as well as improve forest conditions for recreationists who fish, watch birds, and canoe.
Constructing an Objectives Hierarchy
One product of the scoping stage of a planning process is a list of objectives (e.g., desired conditions or outcomes) that resource managers want to influence by taking action. By organizing these objectives hierarchically (Figure 1), managers can link the major categories of objectives (e.g., agricultural production, at-risk species and their habitats) to more specific aspects of those categories that are important (e.g., the types of at-risk species that might be affected by management—two bird species, a fish, and a salamander). Constructing an effective objectives hierarchy is both a science and an art.
Figure 1. A possible hierarchy of objectives and measures.
Consulting the full range of potential users of forest goods and services is essential to making sure the objectives hierarchy is complete. In the ecosystem services literature, these users are often termed beneficiaries (or stakeholders). Different beneficiary groups (e.g., hunters, birdwatchers) may emphasize different goods and services and may assign different values to receipt of the same service (e.g., an increase in numbers of an at-risk bird species).
One sometimes controversial point in articulating ecosystem services as objectives of land management is deciding which services have value in themselves and which have value only for their contributions to the production of another ecosystem service. It is generally obvious that ecosystem services with use value, such as recreation or agricultural production, have value in themselves (final ecosystem services) and therefore belong in the objectives hierarchy.2 Whether or not non-use values, such as the value associated with the existence of at-risk species or their habitats, belong in the hierarchy depends on whose perspective is used to frame the decision context. Some stakeholders may value at-risk species as ends in themselves; others may not. Some stakeholders (beneficiaries) may value habitats for at-risk species as ends in themselves (and thus belonging in the objectives hierarchy); others may regard them as intermediate ecosystem services, important only for their contribution to final ecosystem services (and thus belonging not in the objectives hierarchy but in a means-ends network). The hierarchy in Figure 1 includes both use values (e.g., fishing, wildlife viewing) and non-use values (e.g., at-risk species, wetlands) among management objectives.
The objectives hierarchy specifies empirical measures that will be used as indicators for the more general objectives identified by stakeholders. As indicators, these measures are subject to a variety of considerations that apply to all ecological indicators. These considerations include clarity and precision, repeatability, potential bias (e.g., as applied by different evaluators), and use of proxy measures.
If an objectives hierarchy is going to be used as the starting point for a quantitative analysis of tradeoffs among conflicting objectives, it should be reviewed by a specialized consultant to ensure that the objectives structure accords with the assumptions necessary for such an evaluation.
Comprehensive, Not Redundant
Organizing ecosystem services objectives hierarchically can help ensure that the ecosystem goods and services reflect the needs and desires of the full suite of stakeholders. For example, if improved recreation is a general objective, the hierarchy might specify the needs and desires of fishers and canoeists. An individual who fishes and also canoes is not double-counted in this scheme.
A hierarchical organization helps managers detect omissions (e.g., other at-risk species or habitats that should be included) as well as redundancy, which can lead to double-counting in evaluation of alternatives. Figure 1 includes both at-risk species and their habitats, because wetland habitat provides value in its own right—i.e., value beyond its role in supporting at-risk species.
The objectives hierarchy in Figure 1 includes objectives important to stakeholders beyond the boundaries of federal land area addressed by the planning process. These stakeholders include (1) upstream farmers whose production may be affected by changes in reservoir management, (2) downstream landowners whose risk of flooding may be affected by reservoir management upstream and by wetlands restoration on the national forest land, and (3) members of the public who might never visit the forest, but who derive satisfaction from its existence. Making the objectives hierarchy comprehensive by including objectives important to this wider set of stakeholders facilitates creation of management alternatives that can garner widespread support. Iterations of the hierarchy during agency discussions and stakeholder engagement may provide opportunities to identify or remove services that are not critically affected by the decision and to add other key services.
Values, Not Actions
One principle of constructing objectives hierarchies like the one in Figure 1 is to include only services with value in themselves (i.e., final services) and to exclude services that are valuable only because of their contribution to final services (i.e., the production of intermediate services). The subset of services or objectives that have value in themselves are often monitored or measured with biophysical indicators (such as area of wetland habitat). A common mistake in creating objectives hierarchies is to include management activities that might be taken to achieve underlying objectives (e.g., restore wetlands). Such activities don’t belong in the objectives hierarchy, because they have value only in terms of their effects on the features of the environment that are of fundamental interest (e.g., at-risk species dependent on wetlands). As described below, these management activities belong in means-ends models (conceptual diagrams)—i.e., graphical, mental, or mathematical models used to depict relationships between actions taken and objectives achieved. These means-ends models show how a single action might affect many of the indicators that measure accomplishment of underlying objectives, whereas an objectives hierarchy divides overarching objectives into their components.
Objectives hierarchies, including the suite of indicators used to evaluate success in achieving objectives, can be the starting point for many subsequent analyses of management alternatives, including multivariate statistical analysis, multi-objective optimization, and multivariate simulation modeling, in addition to the form of multi-criteria analysis described here.
Creating a Means-Ends Model
A means-ends network (also known as a conceptual diagram, path model, or influence diagram) illustrates how management activities propagate through the ecosystem to effect changes in the objectives identified in the objectives hierarchy (Figure 1)—that is, it is a model that links the objectives—the ends—to the means for achieving those ends (Figure 2). Unlike the objectives hierarchy, which is a static measurement model, a means-ends network is an action-oriented process model.
A means-ends model can become a tangled web of arrows, showing that an action (e.g., modification of the schedule of reservoir releases) can affect many ecosystem elements and processes, both proximate and distant in space and time (e.g., reservoir releases affect dry-season flows upstream of national forest wetlands, changing their extent and status, which, in turn, affects the health and numbers of wetland-dependent species, on which both passive and active recreation may depend). The objectives are represented as endpoints in the means-ends model. As a general principle, intermediate structures and processes should be included in a means-ends model only if they lead directly or indirectly to impacts on other target services. The construction of means-ends models (conceptual diagrams) is elaborated in Means-Ends Diagrams as a Tool for Incorporating Ecosystem Services into the Planning Process.
The complexity of the means-ends model(s) can be reduced by focusing on the key ecosystem service objectives that are most likely to be affected by the decision, most likely to be valued by beneficiaries, or both.
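A means-ends network can be represented minimally as a directed graph and traversed to see which objectives an action ultimately touches. The sketch below is illustrative only: the node names are invented paraphrases of the wetland example, not the contents of Figure 2.

```python
# Minimal means-ends network as an adjacency map: edges run from
# management actions through intermediate ecosystem processes to
# objective endpoints. Node names are illustrative paraphrases.

means_ends = {
    "modify reservoir releases": ["dry-season flows"],
    "dam and dredge former wetland": ["wetland extent"],
    "dry-season flows": ["wetland extent"],
    "wetland extent": ["at-risk species numbers", "flood events"],
    "at-risk species numbers": ["wildlife viewing", "fishing"],
}

def endpoints_reached(graph, action):
    """Every node reachable from an action; nodes with no outgoing
    edges are the objective endpoints the action can affect."""
    seen, stack = set(), [action]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(endpoints_reached(means_ends, "modify reservoir releases"))
```

Tracing reachability this way is one simple check that every intermediate node actually leads, directly or indirectly, to a target service.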
Figure 2. A possible means-ends network.
Evaluating Performance of Alternatives
The next step in MCDA is to use the means-ends model to distinguish the varying levels of services provided by different management alternatives. At this stage of MCDA, evaluation of management alternatives uses the measures (indicators) specified in the objectives hierarchy. These indicators might be purely biophysical indicators (e.g., area of wetland habitat), but it is better if they are benefit-relevant indicators that describe how an ecological resource meets a social need (e.g., miles of stream accessible to fishing).
It is common to present the anticipated performance of several management alternatives in a matrix with alternatives as row or column headings and measures of performance (indicators) on the other dimension of the matrix. Table 1 shows such a matrix for the wetland example. It is simplified to include only three alternatives and four measures, including implementation cost over a 10-year period calculated as net present value (NPV).
Table 1. Matrix showing the performance of three alternatives for restoring forested wetlands in terms of three ecological measurement scales and implementation cost.
Measures evaluated for each alternative:
- Number of bird 1 (breeding pairs on forest)
- Wildlife viewing at walkway site (qualitative scale), with levels such as “One iconic sp < 5,” “One iconic sp < 5, one > 5,” and “Both > 5”
- Flood events (annual average)
- Implementation cost ($MM NPV)
Note: Each measurement scale represents an objective in Figure 1. The wildlife viewing measure refers to opportunities to view individuals of one or both of two bird species especially associated with the wetlands in question.
A matrix like this can make it easy to see if there is one alternative that is better (or no worse) on all measures than all other alternatives (and can be chosen without further analysis) or one that is worse (or no better) on all measures (and can be discarded without further analysis). There are no such clear winners or clear losers here.
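The dominance screen just described can be sketched in code. Everything below is illustrative: the function names and example scores are invented, and each measure is assumed to have been re-oriented so that higher is always better (e.g., cost and flood frequency negated or rescaled).

```python
# Illustrative dominance screen for an alternatives/measures matrix.
# Assumption: every measure is re-oriented so higher is better.
# Scores are invented, not taken from Table 1.

def dominates(a, b):
    """True if a is at least as good as b on every measure and
    strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def nondominated(alternatives):
    """Names of alternatives that no other alternative dominates."""
    return [name for name, scores in alternatives.items()
            if not any(dominates(other, scores)
                       for other_name, other in alternatives.items()
                       if other_name != name)]

# Hypothetical re-oriented scores for the three alternatives:
alts = {
    "status quo":      (200, 0.00, 0.0, 1.0),
    "modify releases": (220, 0.86, 0.5, 0.3),
    "restore wetland": (210, 1.00, 1.0, 0.0),
}
print(nondominated(alts))  # here no alternative dominates the others
```

When the screen removes nothing, as in this example, the full analysis of satisfaction and weights described below is what breaks the tie.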
Alternatives matrices are not solely the province of MCDA; they are applied in other decision frameworks, including conjoint analysis and choice experiments for monetary valuation. General guidelines for their development are detailed in Displaying Assessment Results: Alternatives Matrices and Other Tools.
Expressing Relative Satisfaction with Performance on Individual Measures
Sometimes creating the alternatives/measures matrix is the final step of formal analysis, and expressions of relative satisfaction for different levels of performance and any tradeoffs among objectives are made intuitively, and often implicitly, during the decision process. A more formal consideration of relative satisfaction and tradeoffs requires establishment of a relationship between performance on each measure and a unitless scale (usually 0-1, sometimes 0-100) that describes how relative satisfaction changes over the range of performance levels encountered in an analysis of particular alternatives. These relationships, which are often called value functions or utility functions, serve the dual purposes of (1) putting unlike measures on a common scale so that they can be combined and (2) expressing relative satisfaction with different levels of performance for a single measure.
Relative Satisfaction (Value/Utility) Functions
It is common (but not always warranted) to simply assume a linear relationship between relative satisfaction and performance level, as in Figure 3 for number of breeding pairs of bird 1. A linear relationship is more likely to reflect relative satisfaction accurately when the management alternatives change performance relatively little compared to the status quo, as is the case for breeding pairs of bird 1, which varies only 10% from the status quo for any of the new management alternatives.
Figure 3. A linear relationship showing that the increase in relative satisfaction for each additional breeding pair of Bird 1 is the same over the range of 200 to 220 breeding pairs.
When performance levels vary more widely, as they do in this example for costs and flood events (Table 1), assuming a linear relationship may not be adequate. The shape of the relationship between relative satisfaction and performance is tied to the range of performance levels encountered in a particular problem. Therefore, the shape for numbers of breeding pairs ranging from 2 to 2000 may differ from the shape for numbers of breeding pairs ranging from 200 to 220 or for numbers ranging from 2 to 22. This is one reason among many that a relationship derived for one set of alternatives in one context may not be meaningful for another context.
A common nonlinear relationship is diminishing marginal increases in relative satisfaction as the level of performance increases—i.e., the increment in satisfaction from an additional breeding pair is larger for small numbers of breeding pairs than it is for larger numbers of breeding pairs. The shapes of value or utility functions can be different for different stakeholders. Downstream landowners, who most immediately feel the pain of flood events, might experience a larger boost in relative satisfaction than upstream landowners from decreasing flood event frequency (Figure 4). Utility curves can also express more complicated issues in stakeholder preferences, such as levels of risk-aversion; such nuances are illustrated in Maguire (2014).3
Characterizing these relationships can be daunting. There are structured methods for eliciting value and utility functions (see Clemen 2001), and it is best to use the services of specialized consultants to implement them. Information can be collected through face-to-face interaction with decision makers or stakeholder representatives or remotely through surveys or social media.
A stopgap approach is to simply draw shapes on a graph that appear to capture the way relative satisfaction increases or decreases with performance level and read off the relative satisfaction that corresponds to a particular level of performance. It will often be the case that the choice of a particular alternative is not highly sensitive to the exact form of the relationship between satisfaction and performance.
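A stopgap value function can also be written down directly rather than drawn. The sketch below is hypothetical: the linear case mirrors the 200–220 breeding-pair range from Figure 3, while the concave case (diminishing marginal satisfaction) uses an invented curvature parameter.

```python
import math

# Two illustrative value functions on a 0-1 satisfaction scale.
# Ranges follow the text; the curvature parameter rho is invented.

def linear_value(x, worst, best):
    """Linear relative satisfaction over the range [worst, best]."""
    return (x - worst) / (best - worst)

def concave_value(x, worst, best, rho=0.3):
    """Diminishing marginal satisfaction: the first gains in
    performance raise satisfaction most. Smaller rho = sharper curve."""
    t = (x - worst) / (best - worst)
    return (1 - math.exp(-t / rho)) / (1 - math.exp(-1 / rho))

# Breeding pairs of bird 1 over 200-220 (linear, as in Figure 3):
print(linear_value(210, 200, 220))             # 0.5

# Flood probability: worst = 0.2 (status quo), best = 0.0, so
# satisfaction rises as the probability falls:
print(round(concave_value(0.1, 0.2, 0.0), 2))  # ~0.84, above the linear 0.5
```

Note that "worst" and "best" need not be in increasing order: passing worst = 0.2 and best = 0.0 for flood probability makes the function rise as flooding declines, which matches the curves in Figure 4.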
Figure 4. Curves representing an increasing increment in relative satisfaction for upstream and downstream landowners as the probability of flooding begins to decrease from the status quo of 0.2.
Relative Preference for Qualitative Measures
Levels of quality of wildlife viewing (e.g., more than five of both iconic species, Table 1) are not given numerical labels because these labels are often misinterpreted as indicators of relative satisfaction. To express relative satisfaction for use in a more formal analysis of alternatives, numerical values between 0 and 1 must be assigned to the levels of the qualitative scale. The first step is to order the verbal categories from worst to best. This order might differ for different users, although the worst and best categories are likely to be obvious. For this scale, seeing none of either iconic species will receive a numerical value of 0 (worst). Seeing both in numbers greater than five will receive a numerical value of 1 (best). As mentioned above, the order of the four intermediate categories isn’t entirely obvious because one stakeholder group might prefer seeing both species in smaller numbers to seeing only one species but in larger numbers, and another stakeholder group might prefer the opposite.
There are a number of techniques for obtaining expressions of relative preference for qualitative measures from stakeholders and decision makers. One of these, the ratio method, is discussed in Maguire (2014).4 The hypothetical output of using the ratio method is shown in parentheses below the descriptions of levels of the wildlife viewing measure in Table 2.
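The ratio method itself is not detailed here, but its output can be sketched. In the code below the raw judgments are invented; they are chosen so that rescaling the worst level to 0 and the best to 1 reproduces the hypothetical 0.14 and 0.86 values shown in Table 2.

```python
# Rescaling invented ratio-method judgments so the worst level maps
# to 0 and the best to 1. Raw numbers are illustrative only.

def ratio_to_scale(raw):
    """Linearly rescale raw judgments to the 0-1 satisfaction scale."""
    lo, hi = min(raw.values()), max(raw.values())
    return {level: (v - lo) / (hi - lo) for level, v in raw.items()}

raw_judgments = {
    "none of either": 10,
    "one iconic sp < 5": 20,
    "one iconic sp < 5, one > 5": 72,
    "both > 5": 82,
}
scaled = ratio_to_scale(raw_judgments)
print(round(scaled["one iconic sp < 5"], 2))           # 0.14
print(round(scaled["one iconic sp < 5, one > 5"], 2))  # 0.86
```

The ordering step described above happens before this rescaling: the raw judgments must already reflect an agreed worst-to-best ordering of the verbal categories.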
Table 2. Performance evaluations and corresponding relative satisfaction for the three alternatives for restoring forested wetlands.
Measures, with relative satisfaction values in parentheses:
- Number of bird 1 (breeding pairs on forest)
- Wildlife viewing at walkway site (qualitative scale): “One iconic sp < 5” (0.14), “One iconic sp < 5, one > 5” (0.86), “Both > 5” (1)
- Flood events (annual average)
- Cost ($MM NPV)
Note: Performance levels for each of the three alternatives are translated to a 0–1 scale (in parentheses) expressing relative satisfaction using Figure 3 for number of breeding pairs and Figure 4 (downstream) for flood-risk reduction. The methods for determining relative satisfaction values for wildlife viewing and cost can be found in Maguire (2014).5
The alternatives matrix in Table 2 captures stakeholder satisfaction with the varying levels of performance for each of the target services. Comparing the alternatives requires that preferences be integrated over the different services.
Using Weights to Express Tradeoffs among Objectives
In addition to numerical expressions of relative satisfaction for the levels of individual measures, a formal evaluation of tradeoffs among multiple measures requires some expression of the priority accorded each measure. A common expression of priorities among a suite of measures is a set of fractional weights that add up to 1. These weights reflect willingness to accept worse performance on one measure in order to secure better performance on another, i.e., willingness to make tradeoffs among conflicting objectives. In a multi-criteria analysis, weights can help address the concern that gains in one ecosystem service might be accompanied by losses in another service (or in another valued objective).
There are a variety of structured ways to assess weights (see Clemen 2001); using a specialized consultant to implement these methods is a good idea. If that is not possible, a stopgap approach is to use a visual representation of weights, such as a bar with segment lengths proportional to the weight on each measure (Figure 5). The fact that the length of the whole bar is fixed at 1 requires that any increase in weight on one measure be compensated by decreases in the weight on one or more of the other measures.
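The fixed-length bar in Figure 5 is algebraically equivalent to normalizing raw priority ratings so they sum to 1. In the sketch below the raw ratings are invented, chosen so the result matches the weights shown in Figure 5.

```python
# Normalizing invented raw priority ratings into fractional weights
# that sum to 1, matching the weights shown in Figure 5.

def normalize(raw):
    """Divide each rating by the total so the weights sum to 1."""
    total = sum(raw.values())
    return {measure: v / total for measure, v in raw.items()}

raw_ratings = {"breeding pairs": 11, "wildlife viewing": 6,
               "flood events": 28, "cost": 55}
weights = normalize(raw_ratings)
print(weights["cost"])                       # 0.55
print(round(sum(weights.values()), 6))       # 1.0
```

The normalization is also what enforces the compensating behavior described above: raising one raw rating necessarily shrinks every other measure's share of the bar.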
Figure 5. A visualization of the relative weights assigned to the four measures of performance in the wetland restoration example.
Beyond the algebraic constraint that weights over all services must sum to 1.0, three nuances warrant particular emphasis. First, weights depend on the ranges of performance. Second, weights might not be transferable from one decision context to another. Third, weights may differ among stakeholder groups.
Weights Depend on the Ranges of Performance
As with the relationships that express relative satisfaction, willingness to make tradeoffs is tied to the range of possible performance levels that might be encountered when evaluating a particular set of alternatives. It is easy to see that this is so by imagining that the range of costs in Table 2 is $500,000 to $550,000 instead of $100,000 to $1 million. If the ranges for the other three performance measures (breeding pairs, wildlife viewing, and flood events) remain the same, the impact that different levels of cost have in determining overall satisfaction with each alternative will be far lower when the range of costs is narrow than when the range is wide. The weight on cost will be lower in the former case than in the latter. (And, because the weights must add up to 1, the weights on the other three measures will be correspondingly larger in the former case.)
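This thought experiment can be mirrored with swing weighting, in which each raw rating reflects how much the worst-to-best swing on that measure matters given its range. The ratings below are invented; only the direction of the change is the point.

```python
# Invented swing-weighting ratings under two cost ranges. When the
# cost range narrows, the cost swing earns fewer points, and the
# normalized weights on the other measures grow accordingly.

def swing_weights(points):
    """Normalize swing-importance points into weights summing to 1."""
    total = sum(points.values())
    return {measure: p / total for measure, p in points.items()}

# Wide cost range ($0.1M-$1M): the cost swing matters a lot.
wide = swing_weights({"pairs": 20, "viewing": 10,
                      "floods": 50, "cost": 100})
# Narrow cost range ($0.50M-$0.55M): the cost swing matters little.
narrow = swing_weights({"pairs": 20, "viewing": 10,
                        "floods": 50, "cost": 10})

print(round(wide["cost"], 2), round(narrow["cost"], 2))  # 0.56 0.11
```

Because the other three ratings are unchanged, shrinking the cost rating automatically inflates the other measures' normalized weights, exactly as the text describes.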
Weights Might Not Be Transferable from One Decision Context to Another
Unlike relationships that express relative satisfaction, wherein the range of performance levels for a single measure affects only the pattern of relative satisfaction for that measure independent of the performance levels on other measures, the weights for a set of measures have meaning only in relation to each other and only in relation to the ranges of performance on all measures for a particular problem. It is not credible to elicit weights for one set of performance levels in one decision context and then apply those weights in another context without first verifying that the performance levels and the particulars of the decision context are similar enough to justify such a transfer (and see Further Discussion, below).
Weights Differ among Stakeholder Groups
For most contentious decisions, the fundamental disagreements among stakeholder or user groups are about the priorities placed on different objectives, as expressed by weights. Capturing these differences by eliciting separate sets of weights for different users is very helpful to both decision makers and user groups. Attempting to gloss over differences in priorities by eliciting weights from only one or a few perspectives, or by averaging weights across user groups, is a recipe for continued contention. Eliciting weights from all user groups can sometimes suggest where compromises that satisfy some of the needs of each group can be found. It also can help address concerns about distributional equity by identifying the values and preferences of groups defined by ethnicity or other cultural or socioeconomic markers.
Combining Value for Multiple Services
Articulating the combined value of a suite of ecosystem services has been the target of much research and the expressed desire of federal regulatory and budgetary organizations. The type of multi-criteria analysis described here offers one way of meeting that need by yielding numerical values that describe the relative capacities of a set of management alternatives (usually including the status quo) to produce desired ecosystem services. These numerical expressions of relative merit are tied to a particular decision context, a particular set of alternatives, and particular characterizations of relative satisfaction with performance and priorities among conflicting objectives. This type of analysis easily blends measures that are typically monetized (e.g., financial costs of implementing management actions) with those that are not easily monetized (e.g., the experience of viewing iconic wildlife species).
Estimating utility and weights for multiple services is a process subject to uncertainty from various sources (e.g., imprecision in ecological indicators, choice of stakeholder subjects). Maguire (2014) discusses possible methods, such as sensitivity analysis, for dealing with these uncertainties.6
Sometimes decision makers do not want to create numerical values that express the relative merits of each alternative but instead prefer to intuitively integrate in the decision process information about performance, relative satisfaction, and weights. When a combined score for each alternative is wanted, a commonly used method is to calculate an overall value on a 0–1 scale by adding up, for all measures, the weight on each measure multiplied by the relative satisfaction associated with performance on that measure. Table 3 includes the weights shown in Figure 5 as well as the performance and corresponding relative satisfaction values in Table 2 and reports the overall value for each of the three analyzed alternatives (e.g., (0.11)(0) + (0.06)(0.14) + (0.28)(0) + (0.55)(1) = 0.56 for the status quo).
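The status-quo calculation quoted above can be reproduced directly. The weights and satisfaction values below are the ones given in the text; only the function and variable names are invented.

```python
# Weighted additive value: sum over measures of weight times
# relative satisfaction. Values are those quoted in the text for
# the status quo alternative.

weights = {"breeding pairs": 0.11, "wildlife viewing": 0.06,
           "flood events": 0.28, "cost": 0.55}
status_quo = {"breeding pairs": 0.00, "wildlife viewing": 0.14,
              "flood events": 0.00, "cost": 1.00}

def overall_value(weights, satisfaction):
    """Overall 0-1 value of one alternative under a weight set."""
    return sum(weights[m] * satisfaction[m] for m in weights)

print(round(overall_value(weights, status_quo), 2))  # 0.56
```

Repeating the same sum with a different stakeholder group's weights, or with perturbed weights, is the basic move behind the sensitivity analyses mentioned earlier.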
Table 3. A full representation of a multi-criteria analysis of tradeoffs in performance for three wetland restoration alternatives evaluated with four performance measures.
Measures and weights (w):
- Number of bird 1 (breeding pairs on forest) (w = 0.11)
- Wildlife viewing at walkway site (qualitative scale) (w = 0.06): “One iconic sp < 5” (0.14), “One iconic sp < 5, one > 5” (0.86), “Both > 5” (1)
- Flood events (annual average) (w = 0.28): (0) 0.15 (0.18)
- Cost ($MM NPV) (w = 0.55)
Note: The weights from Figure 5 have been added to Table 2, and overall values for each alternative have been calculated by summing the weight times the relative satisfaction value associated with performance across the four measures.
As discussed in the overall framework, the decision process can play out at varying levels (from fully participatory to top-down authority), and other factors that were not included in formal analysis may enter into the decision (agency mandate, equity issues, jobs, costs). Information provided by multi-criteria analysis is advisory to the decision but does not necessarily dictate the decision itself.
Advantages and Disadvantages
The characteristics of structured decision making that are likely to mean the most to agency practitioners are ease of use, transparency, and minimization of the potential for misleading results. From a practitioner’s point of view, the type of MCDA presented here—multi-attribute utility analysis—may appear (1) too hard to implement, too much work to implement, or both; (2) limited by the decision context for which the analysis was made; and (3) useful only for comparison, with calculated values having no absolute meaning. Disadvantages 2 and 3 apply equally to other kinds of multi-criteria decision analysis and, therefore, are not disadvantages of MAUA in particular.
Monetary valuation methods, to be used in benefit-cost analysis or some other type of economic analysis, are sometimes promoted as solutions to disadvantages 2 and 3. However, for the reasons described in the Monetary Valuation and Benefits Assessments sections, values may be neither as transferable to other contexts nor as absolute as many assume they are.
The remaining disadvantage, that MAUA is too much work to implement or is too hard for non-specialists to implement competently, has some merit, as described in Going It Alone versus Engaging Specialized Consultants below. Some alternative types of multi-criteria analysis referenced in Maguire (2014) attempt to reduce the data required to implement an analysis and to simplify, or even automate, the judgments that must be elicited from decision makers or stakeholders (e.g., by requiring only pairwise comparisons of alternatives instead of numerically expressed evaluations or by applying a set of rules for reducing the dimensions of a decision problem).7 The availability of user-friendly commercial software for implementing some of these methods has undoubtedly enhanced their use in environmental applications. However, some of these alternate methods have structural flaws that can lead to results that do not accord with common sense, and many of them incorporate assumptions that are not wholly transparent to users. Thus these methods can lack transparency and produce potentially misleading results.
MAUA has several advantages as a tool for multi-criteria decision analysis: (1) It might be difficult to implement, but the problems being addressed are genuinely difficult for a host of reasons (e.g., disputes among parties, technical disagreements, limited information, conflicting goals and mandates). MAUA helps to identify and articulate these difficulties. (2) Going through the steps of MAUA (i.e., stating objectives, developing measurement criteria, evaluating performance, assessing relative satisfaction and weights) obliges decision makers and stakeholders to address all these sources of potential difficulty explicitly, even when they choose to do so only qualitatively. (3) Addressing all these stages of analysis explicitly enhances transparency, an especially important characteristic for public decision making. (4) The dependence of analytical results on decision context, and the inherently relative nature of those results, is real. That MAUA makes these limitations more obvious than some other types of analysis is to its credit rather than to its detriment.
Comparison of MCDA to Other Valuation Methods
In this guidebook, MCDA is presented as one of three ways of assessing the social impact of changes in the provision of ecosystem services. In the simplest alternative, methods based on socially relevant indicators, the decision maker implicitly invokes stakeholder satisfaction by accounting, as comprehensively as possible, for issues related to stakeholder access, stakeholder numbers or demographics, rarity and substitutability, and so on, but the stakeholders themselves are not directly involved in this process. In terms of how decision maker and stakeholder preferences are made explicit in a decision process, the essential choice is between monetary valuation and MCDA (or, as mentioned above, a combination of the two). The primary distinction between valuation and MCDA is the replacement of dollar values (or other currency recognized by stakeholders for trade or barter) with a unitless measure of relative satisfaction or preference (utility).
Expressions of Relative Satisfaction
Monetary valuation of non-market goods or services, such as wildlife viewing quality, is an alternative way of expressing relative satisfaction with different levels of performance. Monetization allows market and non-market goods and services to be compared on a common scale, but it is sometimes difficult, or perceived as inappropriate, to come up with monetary values. For example, many Native American tribes are culturally reluctant to assign monetary value to the gifts of nature. In such cases, the relative satisfaction expressed as utility offers an alternative approach.
Cost-effectiveness and Cost-utility Analysis
In valuation, an explicit aim is often to compare management alternatives in terms of their cost-effectiveness or to perform a full benefit-cost analysis. Cost-effectiveness analysis (CEA) reports how much it costs to increase performance of a particular measure by one unit (e.g., acres, number of animals). Because only one measure can be evaluated at a time, tradeoffs among non-monetized measures cannot be expressed.
Cost-utility analysis (CUA) involves much the same calculation as CEA, but it uses a multi-attribute function (weight times value or utility of each measure) to aggregate all measures except cost into a unitless metric on a 0–1 scale, as in the MCDA presented above. A CUA combines only the first three measures in Table 3 into overall value (on a unitless 0–1 scale), leaving the fourth measure, cost, in dollars. Overall value of a particular alternative divided by its dollar cost would then be used to compare the cost efficiency of different alternatives in enhancing the combined utility of the non-cost measures. (Note that this task would require re-estimation of the weights for the three measures other than cost.)
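The CUA calculation described above can be sketched as follows. The measure names, weights, utilities, and dollar costs are hypothetical placeholders, not the contents of Table 3: the point is only the structure of the calculation, combining non-cost measures into a unitless 0-1 value and dividing by cost.

```python
# Hypothetical sketch of cost-utility analysis (CUA): non-cost measures
# are combined into a unitless 0-1 value, which is divided by dollar cost.

def cost_utility_ratio(weights, utilities, cost_dollars):
    combined = sum(weights[m] * utilities[m] for m in weights)  # unitless, 0-1
    return combined / cost_dollars  # combined utility per dollar

# Two hypothetical alternatives sharing three non-cost measures.
weights = {"habitat": 0.5, "recreation": 0.3, "water_quality": 0.2}
alt_a = cost_utility_ratio(
    weights, {"habitat": 0.8, "recreation": 0.4, "water_quality": 0.6}, 120_000)
alt_b = cost_utility_ratio(
    weights, {"habitat": 0.6, "recreation": 0.7, "water_quality": 0.5}, 80_000)

# The alternative with the higher ratio delivers more combined utility per dollar.
print(alt_b > alt_a)
```

Note that a cheaper alternative with somewhat lower combined utility can still win on cost efficiency, which is exactly the tradeoff CUA is designed to surface.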
Capturing Diversity in Stakeholder Preferences
MCDA is often used to emphasize heterogeneity in preferences among different groups of stakeholders, so multiple alternative matrices may be created to show the range and diversity of perspectives. In contrast, economic valuation typically produces a single matrix, because it is commonly used to generate a single aggregated value across stakeholders for analyses such as benefit-cost analysis. Such an aggregated value would incorporate the size of each stakeholder group and its relative preferences for predicted changes in services. There is no real reason that preferences elicited using MCDA could not be aggregated, nor that economic valuation instruments could not be used to capture within-group heterogeneity; practitioners of the two approaches have simply pursued different applications with different aims.
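If an analyst did want to aggregate per-group MCDA results into a single number, one simple approach is a population-weighted mean of each group's overall utility for an alternative. The sketch below uses hypothetical group names, sizes, and utilities; it is one possible aggregation rule, not a method prescribed by this guidebook.

```python
# Hypothetical sketch: aggregating per-group MCDA utilities for one
# alternative into a single population-weighted value (0-1).

def aggregate_across_groups(group_sizes, group_utilities):
    """Population-weighted mean of each group's overall utility."""
    total_population = sum(group_sizes.values())
    weighted_sum = sum(group_sizes[g] * group_utilities[g] for g in group_sizes)
    return weighted_sum / total_population

# Hypothetical stakeholder groups and their overall utilities for one alternative.
sizes = {"anglers": 2_000, "hikers": 6_000, "tribal_members": 500}
utilities = {"anglers": 0.3, "hikers": 0.7, "tribal_members": 0.9}

print(round(aggregate_across_groups(sizes, utilities), 3))  # 0.618
```

Weighting by head count is only one choice; it submerges exactly the between-group differences that MCDA is usually applied to highlight, which is why practitioners often report the per-group matrices instead.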
Context Dependency of Social Impacts
As noted above, the relative satisfaction or preferences (utilities) elicited from stakeholders in MCDA are context-dependent in that the values depend on several factors: (1) which stakeholders are being engaged to inform expressions of preference (value or utility functions and weights), (2) the type and magnitude of ecological services that are produced by the management alternatives being evaluated, and (3) other particulars of the decision context (e.g., geographic, temporal). Because of this context dependency, expressions of stakeholder preferences cannot easily be transferred to other decision contexts. The same caveats apply to utilities derived from broad surveys, because these utilities reflect the performance levels and management alternatives offered for evaluation. Moreover, the same caveats apply to monetary values estimated through non-market valuation methods, because these are estimated relative to or contingent on a specific decision context.
Transferring Expressions of Relative Satisfaction
Under a monetary valuation approach, benefits-transfer methods are sometimes used to transfer dollar values (or functional relationships between dollars and levels of a performance measure) to different user groups and different decision contexts. As noted above, such transfers are significantly limited. Recent methodological developments in benefits-transfer analysis and meta-analysis are promising in that they suggest relatively robust approaches to benefits-transfer modeling. In principle, similar approaches might be developed for MCDA. Devising robust methods for transferring estimates of social impacts remains a key challenge in generalizing the ecosystem services approach so that it can be implemented in different decision contexts without requiring new monetary or non-monetary valuations.
Going It Alone versus Engaging Specialized Consultants
Non-specialists can employ to good effect many of the elements of structured decision making: articulating objectives, scrutinizing objectives for completeness and redundancy, defining measures clearly, scrutinizing proposed measures for implicit inclusion of relative satisfaction (where it does not belong), and recognizing instances in which different decision makers or user groups may have different beliefs or different preferences that impinge on the decision structure (e.g., differing assessments of performance, differing relationships of relative satisfaction to performance, differing priorities among objectives).
Other elements of structured decision making benefit greatly from the experience and judgment of specialized consultants in decision making. These elements include (1) scrutinizing objectives hierarchies to make sure that they accord with the assumptions for independence of different parts of the hierarchy, which are necessary to support subsequent stages of the analysis (i.e., eliciting relative satisfaction with performance levels, eliciting weights to express priorities, and forming a combined overall value of alternatives by summing the products of relative satisfaction and weight for each measure) and (2) identifying where and how to use sensitivity analysis to illuminate essential tradeoffs and establish how robust the results of an analysis are to changes in the ingredients used to compose it. (For example, sensitivity analysis can help reveal whether different curves relating relative satisfaction to performance are likely to change the ranking of management alternatives.)
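A simple form of the sensitivity analysis described above is to vary one weight across its plausible range and check whether the ranking of alternatives flips. The sketch below uses hypothetical measures, utilities, and a simplifying assumption (the remaining weight is split evenly between the other two measures); a specialist would typically explore many more dimensions than this.

```python
# Hypothetical one-way sensitivity analysis: vary the weight on one
# measure and see whether the ranking of two alternatives changes.

def overall(w_habitat, utilities):
    # Simplifying assumption: the remaining weight is split evenly
    # between the other two measures.
    w_other = (1.0 - w_habitat) / 2
    return (w_habitat * utilities["habitat"]
            + w_other * utilities["recreation"]
            + w_other * utilities["water_quality"])

# Hypothetical utilities (0-1) for two management alternatives.
alt_a = {"habitat": 0.9, "recreation": 0.2, "water_quality": 0.3}
alt_b = {"habitat": 0.4, "recreation": 0.8, "water_quality": 0.7}

for w in (0.2, 0.4, 0.6, 0.8):
    leader = "A" if overall(w, alt_a) > overall(w, alt_b) else "B"
    print(f"habitat weight {w:.1f}: alternative {leader} ranks first")
```

In this hypothetical case the ranking flips once the habitat weight exceeds 0.5, which tells the decision maker that the choice hinges on how strongly habitat is prioritized relative to the other measures.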
Some elements of MAUA use complex procedures to elicit numerical representations of beliefs and preferences (e.g., eliciting expert opinion to fill in gaps in performance predictions, eliciting relationships for relative satisfaction with different levels of performance, eliciting weights to express priorities among objectives). Although some simplified methods of performing these tasks are offered here, non-specialists will face many challenges and therefore will benefit from the services of specialized consultants for these portions of an analysis. They can also take advantage of the expertise of the growing number of agency consultants trained in structured decision making.
Anyone at any level can clarify complex decision problems by taking a structured approach such as that recommended here. Even the most limited use of MAUA concepts in a purely qualitative fashion can help managers make decisions in a consistent and transparent way. Effort invested in articulating objectives and organizing them hierarchically and then defining transparent measurement scales is especially likely to improve any type of analysis that may follow. Even if they are not brought together in a full MAUA, any one or more of the stages of structured decision making (e.g., creating a performance matrix, defining relationships between satisfaction and performance for individual criteria, assessing priorities among criteria) can help inform decisions. But, where the stakes are high, engaging the help of consultants versed in structured decision making is wise. Not doing so is tantamount to attempting an economic analysis of benefits and costs without engaging the help of economists.
MCDA offers a structured framework for estimating the relative preferences of stakeholders for changes in the provision of ecosystem services as affected by management alternatives. In this framework, decision makers are encouraged to embrace the complexity of the decision in terms of stakeholders, specific objectives, and the web of ecological interactions through which management actions can propagate and affect human systems. The approach can be applied to goods and services that are difficult to monetize, and it can be implemented with a variety of data, including expert opinion and qualitative metrics. There are intricacies and complications to the approach, but MCDA can be a useful tool in assessing the social impacts of managing ecosystem services.
Best practice questions for the use of multi-attribute utility analysis:
To follow best practices the assessor should be able to answer yes to ALL of these questions:
- Is an expert trained in multi-criteria analysis methods involved?
- Are the measures of preference tied to the decision context for which the preference evaluation input was obtained? Is a quantified difference in the provision of a service being evaluated?
- Are the preferences of all parties/stakeholders affected by a decision being assessed to ensure a transparent process? (If the assessment involves services and interests outside an agency’s authorities, collaboration may be necessary.)
- Are different preferences being assessed to reflect different marginal changes if the scale or other elements of the analysis are changing?
Clemen, R.T. (with T. Reilly). 2001. Making Hard Decisions. 2nd ed. revised. Pacific Grove, CA: Duxbury Press.
This comprehensive reference for the technical aspects of decision analysis covers elicitation of expert opinion, assessment of value and utility functions, and assessment of weights (and many other topics).
Department of Communities and Local Government. 2009. Multi-Criteria Analysis: A Manual. www.communities.gov.uk.
This manual aimed at non-specialist readers presents a helpful overview of the steps in multi-criteria analysis. Chapter 5 discusses decision context, objectives and measures, and evaluations of performance. Chapter 6 covers expression of relative satisfaction, determination of weights, and calculation of overall value using purchase of a toaster as an example. Chapter 7 presents some more complex examples that will resonate with environmental managers.
Gregory, R., L. Failing, M. Harstone, G. Long, T. McDaniels, and D. Ohlson. 2012. Structured Decision Making: A Practical Guide to Environmental Management Choices. Oxford, UK: Wiley Blackwell.
This book describes the use of multi-criteria decision analysis for real-world environmental decision making involving multiple stakeholders. It is not a how-to manual.
Hammond, J.S., R.L. Keeney, and H. Raiffa. 1999. Smart Choices. Cambridge, MA: Harvard Business School Press.
This non-mathematical presentation of the concepts of decision analysis is aimed at the general public and uses everyday decisions (such as buying a house). It presents some simplified tools for expressing relative satisfaction and weights and for determining overall value.
Thompson, M.P., B.G. Marcot, F.R. Thompson, S. McNulty, L.A. Fisher, M.A. Runge, D. Cleaves, and M. Tomosy. 2013. The Science of Decisionmaking: Applications for Sustainable Forest and Grassland Management in the National Forest System. General Technical Report WO-88, U.S. Department of Agriculture, http://www.fs.fed.us/rm/pubs_other/rmrs_2013_thompson_m004.pdf.
This Forest Service Technical Report synthesizes key points from the body of work on structured decisionmaking and illustrates how it can be relevant for land management planning in National Forests and Grasslands.