In 1991, digital maps of rural areas were not available. Typically, paper topographic maps are themselves prepared in layers, with separate masters for contours, detail (roads, buildings, streams), symbols, and annotation (e.g., place names). It was therefore possible to go back to the original masters, select only the contour and detail layers, and have these scanned and vectorized. In this way, scanned 1:5000 scale topographic maps, edge-matched into seamless base mapping, were used to co-register and thereby integrate all the other data sets. There were five sets of other data:
Historic flood events: Two well-documented flood events (1988 and 1989) were mapped from oblique photographs of the floods taken from a helicopter, from press photos, and from questionnaire interviews of residents. The flood extents were thus mapped with certain and uncertain boundaries differentiated, and could be used as part of the model calibration.
Drainage structures: This documented the location and attributes of all bridges, weirs, and culverts that would influence channel flow. These were collected by field survey, during which channel cross sections were measured for direct input to the simulation model.
Sub-basin parameters: These were a series of map layers derived mostly from API to produce the lumped parameters at modeling nodes. These are discussed in detail below.
Development scenarios: These were a series of future development scenarios that would replace current land use in the lumped parameterization of “what if”-type analyses and included various mitigation options, such as river training.
Floodplain elevations: Spot heights were digitized from 1:1000 topographic maps in order to build up a detailed floodplain DEM, both to extend the field-surveyed channel cross sections to the edge of the floodplain and to act as a key data layer in the postprocessing of the simulation outputs.
The process of preparing the lumped parameters for the simulation modeling is as follows. A land use classification is created that reflects runoff characteristics and that can be consistently equated to the U.S. Soil Conservation Service's (SCS) runoff curve numbers (CN). These are numbers in the range 0 to 100 and can be loosely interpreted as the proportion of rainfall contributing to storm runoff, depending on the nature of soils and vegetation. In the Hong Kong case, with intense rainfall on steep slopes that quickly exceeds the infiltration capacity, runoff is influenced more by land cover types than by soil types, which were therefore not mapped separately. The highest CN is usually assigned to urban areas, where there is little infiltration or storage; a low CN would be assigned to woodland; and the lowest to, say, a commercial fish pond, where all the rainfall is retained within the pond.
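The standard SCS relationship between rainfall, CN, and storm runoff depth can be sketched as follows. This is the conventional metric form of the curve number equations (potential retention S = 25400/CN − 254 mm, initial abstraction Ia = 0.2S), not code from the study itself; the rainfall values are illustrative.

```python
def scs_runoff_depth(rainfall_mm: float, cn: float) -> float:
    """Storm runoff depth (mm) from the standard SCS curve number method."""
    if not 0 < cn <= 100:
        raise ValueError("CN must be in (0, 100]")
    s = 25400.0 / cn - 254.0   # potential maximum retention, mm
    ia = 0.2 * s               # conventional initial abstraction, mm
    if rainfall_mm <= ia:
        return 0.0             # all rainfall absorbed or stored
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# A high CN (urban) converts most rainfall to runoff;
# a low CN (woodland) retains much of it.
print(scs_runoff_depth(100.0, 95))
print(scs_runoff_depth(100.0, 60))
```

The same 100 mm storm yields far more runoff at CN 95 than at CN 60, which is the behavior the land cover classification is designed to capture.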
Eighteen classes of land cover were mapped by API and digitized into GIS. Tropical terrain tends to be characterized by a distinct break of slope between the steeper mountainous terrain and the gently sloping valley floors (noticeable in Figure 6.6 and Figure 6.7). This is an important feature in the simulation modeling that needs to be distinguished if realistic unit hydrographs are to be developed, and so this too was mapped as a polygon boundary between “upland” and “lowland” zones. Land cover polygons together with upland and lowland areas for a small catchment are given in Figure 6.9(a). The engineers would identify the proposed locations of modeling nodes. These would be screen-digitized into GIS and snapped to the drainage network.
The coordinates of each node and along stream distance to the next node were output to the simulation model. From API, the subcatchments subtended by each node are identified and digitized (Figure 6.9(b)). The three layers—land cover, break of slope, and subcatchments—are then overlaid (Figure 6.9(c)) and tables produced giving total area for each class of land cover in the upland and lowland areas of each subcatchment. These are used to produce an area-weighted average CN for the upland and lowland portions of each subcatchment and are the lumped parameters used as input to the flood simulation modeling.
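The overlay-and-weight step can be illustrated with a small sketch. The rows below stand in for the table produced by the GIS overlay (one record per land-cover class per zone per subcatchment); all names, areas, and CN values are hypothetical.

```python
from collections import defaultdict

# Hypothetical overlay output: (subcatchment, zone, area in ha, class CN).
overlay = [
    ("sub1", "upland",  12.0, 70),   # woodland
    ("sub1", "upland",   3.0, 90),   # village
    ("sub1", "lowland",  8.0, 85),   # paddy
    ("sub1", "lowland",  2.0, 10),   # fish pond
]

def lumped_cn(rows):
    """Area-weighted average CN per (subcatchment, zone) pair."""
    sums = defaultdict(lambda: [0.0, 0.0])   # key -> [sum(area*cn), sum(area)]
    for sub, zone, area, cn in rows:
        acc = sums[(sub, zone)]
        acc[0] += area * cn
        acc[1] += area
    return {key: v[0] / v[1] for key, v in sums.items()}

print(lumped_cn(overlay))
# upland: (12*70 + 3*90) / 15 = 74.0; lowland: (8*85 + 2*10) / 10 = 70.0
```

Each resulting value is one lumped parameter: a single CN for the upland or lowland portion of a subcatchment, ready for input to the flood simulation.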
The hydraulic modeling used for the flood simulation was pseudo-2D, solving the full St. Venant equations to simulate variations of flow in space and time. The St. Venant equations assume that channel discharge can be calculated from the average cross-sectional velocity and depth and are based on a mass balance equation (6.1) and a momentum balance equation (6.2), which assumes that water is incompressible (Beven, 2001):
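In a conventional statement of these two balances, consistent with the variable definitions that follow and with the friction slope expressed through the Darcy–Weisbach coefficient (the original presentation may differ in detail), the equations read:

\[
\frac{\partial A}{\partial t} + \frac{\partial (vA)}{\partial x} = I \tag{6.1}
\]

\[
\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x} + g\,\frac{\partial h}{\partial x} = g\left(S_o - \frac{f\,P\,v^2}{8\,g\,A}\right) \tag{6.2}
\]

Here the term \(f P v^2 / (8 g A)\) is the Darcy–Weisbach friction slope \(S_f = f v^2 / (8 g R)\) written with the hydraulic radius \(R = A/P\).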
where A = cross-sectional area, P = wetted perimeter, v = average velocity, h = average depth, So = bed slope, I = lateral inflow per unit length of channel, g = gravitational acceleration, f = the Darcy–Weisbach uniform roughness coefficient.
In the pseudo two-dimensional approach, flow along reaches is modeled as 1D (at a node) using mean velocities, while overflow into floodplain storage, and flow across the floodplain, can also be simulated at representative nodes. Structures such as bridges, weirs, and dams are fully described. Calibration of the models was achieved using historical rainfall data, corresponding records from gauging stations giving velocity and stream level at a number of locations, and reference to the historic flood extents mapped in GIS. Output from the simulation is a time series for each node giving the height and velocity of the flow. Since the simulation output refers to a series of points (the nodes), the data need further processing in order to visualize the flood extents and assess the impacts.
Figure 6.10 summarizes this postprocessing carried out in GIS. At the time it was not possible to use the entire simulation output of height and velocity. Today, this could be resolved as a series of multiple maps brought together in an animation to show the progress of the flood from initial rise until it drains away. But in 1991 that technology wasn't quite with us, and with the need to evaluate multiple scenarios, time was tight anyway. With no satisfactory way of extrapolating velocities over the floodplain, postprocessing focused on flood height. Because the catchments are relatively small and all nodes reach their peak flow within a short time of each other, we could take the maximum flow height at each node and use this to extrapolate flood extent and depth across the entire floodplain. This is not as straightforward as it may sound.
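A deliberately crude sketch of the idea, before the complications are addressed: assign each DEM cell the peak water level of its nearest node and subtract the ground elevation, flooring at zero. The nearest-node assignment and all values here are hypothetical simplifications; the actual postprocessing handled the extrapolation with much more care.

```python
import math

# Hypothetical inputs: a tiny floodplain DEM (ground elevation, m) and the
# peak water-surface elevation (m) simulated at two nodes.
dem = [
    [2.0, 2.2, 2.5],
    [1.8, 2.0, 2.4],
    [1.5, 1.9, 2.3],
]
nodes = [((0, 0), 2.1), ((2, 2), 2.0)]   # ((row, col), peak water level)

def flood_depths(dem, nodes):
    """Depth grid: nearest node's peak water level minus ground, floored at 0.

    Nearest-node assignment is a simplification used only for illustration.
    """
    depths = []
    for r, row in enumerate(dem):
        out = []
        for c, ground in enumerate(row):
            _, level = min(
                nodes, key=lambda n: math.hypot(n[0][0] - r, n[0][1] - c))
            out.append(max(level - ground, 0.0))
        depths.append(out)
    return depths

for row in flood_depths(dem, nodes):
    print(["%.1f" % d for d in row])
```

Cells whose ground elevation exceeds the extrapolated water level come out dry (zero depth); the remainder form the mapped flood extent, with depth as the attribute.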