Leslie M. Reid, Research Geomorphologist
Robert R. Ziemer, Research Hydrologist
Thomas E. Lisle, Research Hydrologist
USDA Forest Service Pacific Southwest Research Station
Redwood Sciences Laboratory, Arcata, CA
Environmental problems are never neatly defined. Instead, each is a tangle of interacting processes whose manifestation and interpretation are warped by the vagaries of time, weather, expectation, and economics. Each problem involves livelihoods, values, and numerous specialized disciplines. Nevertheless, federal agencies in the Pacific Northwest have been given the task of evaluating environmental issues quickly over large areas (USDA and USDI 1994a, REO 1995). Similarly, the Washington State Forest Practices Board promotes watershed analysis as a way to develop long-term land-use plans for large watersheds (Washington Forest Practices Board 1993), and California now gives blanket approval of Timber Harvest Plans for large areas of private land once a sustained yield plan is approved. All of these procedures require large-scale analyses of the causes and effects of past and future environmental changes. Similar efforts in other areas have labels like "watershed plans" and "ecosystem management plans."
In each case, analysis requires the description of past and present environmental changes, evaluation of their causes, identification of the issues and resources that are potentially influenced by the changes, and prediction of how conditions might change in the future. The areas subject to evaluation usually range from several tens to several thousands of square kilometers, and the time allotted for analysis is generally on the order of several months to a year. Results are used to guide land-use planning, design restoration work, and interpret future changes. Fulfilling such a task requires new ways of approaching problems.
In the past, each aspect of environmental change usually was examined independently. Most problems were first simplified to make them tractable and then were assigned to a single discipline. Detailed information was then gathered and integrated from the point of view of the discipline involved to yield a picture of the problem, its causes, and its solution. For example, if salmon were scarce in a Pacific Northwest watershed, fisheries biologists would improve habitat by building structures in channel reaches shown by stream inventories to have little woody debris. Whether woody debris was naturally lacking, whether those structures disrupted other river uses, whether they could withstand high winter flows, whether that aspect of the environment was important for sustaining fish populations, and whether the structures actually worked to produce more salmon were usually not addressed to any significant degree.
As a mandate grew for evaluating more complicated issues, different methods began to be used. Most depended on reducing the complexity of the problem to a readily tractable level, usually by simply refining existing strategies. Approaches to complex problems thus continued to follow largely mono-disciplinary, data-oriented strategies that could be abstracted into well-defined protocols, or "cookbooks." Environmental indices or surrogate measurements became popular for assessing everything from regulatory compliance (e.g. the "Total Maximum Daily Loads" used by the EPA) to the potential for cumulative watershed effects (e.g. the "Equivalent Roaded Acres" used by Forest Service Region 5). Inventory became an important tool in support of this strategy since a map of the distribution of index values provided all of the information necessary for planning. Serious efforts were made to identify the seven indices that would allow the "health" of a channel to be "measured" in the Pacific Northwestern United States (USDA and USDI 1994b), and a parallel effort to identify the indices needed to describe watershed health throughout the United States was also initiated. The sufficiency of an analysis was conveniently evaluated on the basis of whether the analysis procedure was followed, not on the basis of whether the results were valid and useful for a particular application. Although the index approach fulfilled institutional requirements for consistency and simplicity, its shortcomings were increasingly recognized: it was not valid for the types of problems to which it was being applied.
Now the scale and complexity of the required analyses have grown yet again, and approaches that rest on measuring site-specific indices or on accumulating details until a general picture appears are no longer feasible: even channels cannot be inventoried over a 500-km² watershed in a 2-month period. In addition, each watershed provides a different suite of conditions, problems, and expectations, so no single set of procedures can be used to evaluate every watershed. It thus becomes necessary to develop efficient strategies for figuring out what physical, biological, and socio-economic interactions are important in a large area, and for figuring out how best to evaluate those interactions in that particular area.
As regulatory and land-management agencies wrestled with the institutional requirements for broad-scale environmental analysis, other methods were being developed for ad-hoc applications on much smaller scales. In some cases, people needed to know what had caused a particular impact. In others, people needed to know what impacts a proposed activity might have. At this scale, evaluations could be tailored to the particular needs of the specific application. Thus there were no institutional requirements for consistency or simplicity but, instead, an overriding requirement that the results be valid. Two complementary strategies for evaluating environmental problems were developed: the "bottom-up" and the "top-down" approaches.
The bottom-up approach proceeds from the point of view of a damaged resource or issue of concern, which is usually located near the bottom of the watershed. First, the types of existing or potential damage to the resource are identified. Next, the mechanisms that could produce those impacts are described. Finally, the historical changes in watershed characteristics (e.g. soils and vegetation, usually at upslope or upstream locations) that might have influenced these mechanisms are identified. Each step requires information from an increasing number of disciplines. This approach has often been used in "forensic" environmental studies: an impact has occurred, and an investigation is carried out to identify its cause.
In contrast, the top-down approach identifies land-use changes that have occurred in the watershed (often primarily in the uplands) and compares them to the natural disturbance regime. Changes in land-surface characteristics are documented, and the likely effects of these changes on hydrological, biological, social, and geomorphological processes are inferred. Each of these changes, in turn, is then examined to identify those influencing the focal problem (e.g. declining fish stocks or water quality downstream). Because each chain of causality involves influences that cross disciplinary boundaries (e.g. a change in vegetation influences runoff volume, and thus the utilization of water resources), this approach is also inherently interdisciplinary. The top-down approach has often been applied to "prophylactic" environmental studies: a project is proposed, and an investigation is carried out to identify its potential impacts.
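The parallel logic of the two strategies can be sketched in code. The following Python fragment is purely illustrative: it assumes a small, invented cause-effect graph (the node names and links are hypothetical, not findings from any actual watershed), and shows the bottom-up strategy as a walk from a damaged resource back toward candidate causes and the top-down strategy as a walk from a land-use activity forward along the same links.

```python
# Hypothetical sketch: the bottom-up and top-down strategies viewed as
# traversals of one cause-effect graph. Node names and links are invented
# for illustration; a real analysis would build them from field evidence.
CAUSES = {  # effect -> plausible direct causes (read bottom-up)
    "fewer salmon":             ["loss of pool habitat", "warmer summer flows"],
    "loss of pool habitat":     ["less woody debris", "channel aggradation"],
    "channel aggradation":      ["increased sediment supply"],
    "increased sediment supply": ["road construction", "timber harvest"],
    "warmer summer flows":      ["loss of riparian shade"],
    "loss of riparian shade":   ["timber harvest"],
}

def bottom_up(impact, depth=0):
    """Trace from a damaged resource back toward candidate upslope causes."""
    print("  " * depth + impact)
    for cause in CAUSES.get(impact, []):
        bottom_up(cause, depth + 1)

EFFECTS = {}  # invert the same links so they can be read top-down
for effect, causes in CAUSES.items():
    for cause in causes:
        EFFECTS.setdefault(cause, []).append(effect)

def top_down(activity, depth=0):
    """Trace from a land-use change toward its potential downstream impacts."""
    print("  " * depth + activity)
    for effect in EFFECTS.get(activity, []):
        top_down(effect, depth + 1)

bottom_up("fewer salmon")    # forensic: what could have caused this impact?
top_down("timber harvest")   # prophylactic: what might this activity affect?
```

In a real analysis, of course, the links would be hypotheses to be tested against field evidence rather than fixed data; the sketch only captures the direction in which each strategy works through them.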
The top-down approach is particularly useful for assessing impacts that occur away from the site of the triggering land-use activity. Most activities directly affect only a few land-surface characteristics, such as vegetation, topography, or soils. Most off-site impacts arise downstream of the triggering activity, and they can only occur because of changes in the production, transport, and storage of watershed products such as water, sediment, organic material, and chemicals. Changes in the primary characteristics can be evaluated to determine their influence on the watershed products and thus on the impacts of concern.
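The layering described above can be given a similarly hedged sketch. The fragment below, with invented entries throughout, maps a hypothetical activity to the primary land-surface characteristics it alters, and from those to the watershed products that could carry the change downstream to off-site impacts.

```python
# Illustrative only: an activity alters a few primary land-surface
# characteristics; those alter the production, transport, or storage of
# watershed products; the products carry the change to off-site impacts.
PRIMARY_EFFECTS = {    # activity -> land-surface characteristics altered
    "road construction": ["topography", "soils"],
    "timber harvest":    ["vegetation", "soils"],
}
PRODUCT_RESPONSES = {  # characteristic -> watershed products affected
    "vegetation": ["water", "organic material"],
    "topography": ["water", "sediment"],
    "soils":      ["water", "sediment", "chemicals"],
}

def offsite_pathways(activity):
    """List the watershed products through which an activity can act off-site."""
    products = set()
    for characteristic in PRIMARY_EFFECTS.get(activity, []):
        products.update(PRODUCT_RESPONSES.get(characteristic, []))
    return sorted(products)

print(offsite_pathways("timber harvest"))
# ['chemicals', 'organic material', 'sediment', 'water']
```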
The top-down and bottom-up approaches work well for specific problems, but they require expertise and professional judgment instead of fixed protocols because the problems encountered are different everywhere. Both provide a means for understanding what is important in an area, and this is the primary objective for the new generation of environmental analyses. For these new applications, however, work must be carried out at a very different scale. Instead of being able to focus on a particular activity or impact, analysts must now evaluate the full variety of impacts that have occurred or might occur from many different activities. If the top-down or bottom-up strategies are to be applied to these broader problems, the strategies must be modified to make them workable.
Blind adherence to either of these approaches, of course, produces a rapidly broadening tree of potential causes and effects. The art of the analysis thus lies in discovering, as efficiently as possible, which branches can be lopped off and ignored. This can be accomplished in part by combining the two approaches. The first steps of each approach are the easiest, so carrying out the two in parallel provides an efficient way to evaluate the causes of a particular problem in a particular area, and can also be used to survey and prioritize the full range of issues and changes that may occur in an area (Table 1). The conceptual structure provided by the combined approach also gives a framework for gathering and interpreting reconnaissance-level information. This organization shows what types of information can most efficiently focus the analysis, and it allows easier ranking of the importance of the various branches.
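One way to picture the pruning, continuing the invented-graph assumption of the earlier sketches, is as a two-ended search: only the causal links that lie on some path connecting an observed land-use change to the focal impact survive, and every other branch is lopped off.

```python
# A sketch of the pruning idea over hypothetical (cause, effect) links:
# run the cheap first steps of both strategies, then keep only the links
# that lie on a path from an observed activity to the focal impact.
LINKS = [
    ("timber harvest", "loss of riparian shade"),
    ("loss of riparian shade", "warmer summer flows"),
    ("warmer summer flows", "fewer salmon"),
    ("road construction", "increased sediment supply"),
    ("increased sediment supply", "turbid water supply"),  # different issue
]

def reachable(start, pairs, forward=True):
    """Nodes reachable from `start` following the links in one direction."""
    found, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for cause, effect in pairs:
            src, dst = (cause, effect) if forward else (effect, cause)
            if src == node and dst not in found:
                found.add(dst)
                frontier.append(dst)
    return found

def relevant_links(activities, impact, pairs):
    """Keep only links whose endpoints lie on an activity-to-impact path."""
    down = set().union(*(reachable(a, pairs, forward=True) for a in activities))
    up = reachable(impact, pairs, forward=False)
    keep = down & up  # nodes on a connecting path survive the pruning
    return [(c, e) for c, e in pairs if c in keep and e in keep]

print(relevant_links(["timber harvest", "road construction"],
                     "fewer salmon", LINKS))
# only the shade-temperature chain survives; the sediment branch is pruned
```

Because the inexpensive first steps of each traversal are run before any detailed data gathering, most branches can be discarded before they are ever investigated in the field.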
[Table 1. Steps in the combined bottom-up and top-down analysis strategy.]
The method outlined in Table 1 for combining the bottom-up and top-down strategies has been applied to problems as diverse as prioritizing watershed restoration needs in northern Tanzania (Reid 1990) and identifying flood hazards on Kauai (Reid and Smith 1992). In each case, the approaches were carried out in parallel (steps 1A-3A and 1B-3B), allowing the studies to focus increasingly on the most important topics. Identification of those topics then allowed the field and office work to concentrate on the most critical unknowns at a level of precision no greater than that needed to address the objectives. Each study area was divided into subareas likely to respond uniformly to the types of changes of concern (step 5), and the types of information needed to understand the problems were identified (step 6). Field and office work were then carried out to provide the necessary information in each subarea. This framework for problem solving encourages progressive focusing on the most significant aspects of the problem, and it provides a structure that allows the relevance of particular pieces of information to be evaluated.
Although there is much discussion and concern about what constitutes an appropriate analysis scale, adoption of these approaches essentially makes scale irrelevant. Instead, information is used from whatever scales are most useful for understanding the fundamental principles governing each aspect of the problem. Regional information on a species' range, for example, may be combined with site-specific habitat sampling to evaluate the relative importance of different habitat types within a particular watershed. The same application may also require information from scales as disparate as those of DNA analysis and global climatic modeling.
Also important to the application of these strategies is acknowledgment that some essential information will always be unknowable. As long as these information gaps are identified, they can be evaluated for their significance and then worked around. In practice, however, more effort often goes into completing GIS coverage than is directed toward unraveling the fundamental unknowns that actually affect the understanding of an area. For the present levels of inquiry, complete data coverage is not necessary, and variations in data standards are irrelevant as long as the standards are understood.
Despite their history of application, these understanding-based strategies are met with discomfort by federal and state agency personnel who are in a position to use them. In the agency context, the aspects of these approaches that are new, and therefore uncomfortable, are: 1) that the particular methods used depend on the setting and so cannot be codified into a cookbook, 2) that analysis requires very little site-specific data, and 3) that the strategies require an overtly open-minded, interdisciplinary approach.
The perceived necessity for a cookbook stems in part from agency culture and in part from environmental regulations that judge compliance on the basis of procedure. In the past, both oversight and quality control rested on procedural consistency. Only when results are routinely given peer review for validity can quality assurance needs be met in the absence of a consistent procedural protocol. The reverence for data and the devaluation of qualitative understanding also stem from agency culture and regulations. Numbers are assumed to be capable of proving things, while qualitative information is perceived to be subject to interpretation.
Much of the apparent complexity of environmental problems arises from their interdisciplinary nature. No one person has the professional background required to understand fully the nuances of any one problem. A problem might thus be inherently no more complicated than one that falls entirely within a single discipline, but it will seem more complicated because of the arbitrary boundaries that western science has drawn around disciplines. In this sense, part of the difficulty of problem solving arises not from the problem itself but from the socio-cultural impediments to people from different disciplines working together, just as management of a river on the boundary between two nations is more complicated than management of a river located within a single country. These problems will become easier as representatives of different disciplines develop the skills needed to work with one another and as examples of successful interdisciplinary analyses are recognized.