DATA ACCURACY AND QUALITY
The quality of data sources for GIS processing is an ever-increasing concern among GIS application specialists. With the influx of GIS software on the commercial market and the accelerating application of GIS technology to problem-solving and decision-making roles, the quality and reliability of GIS products are coming under closer scrutiny. Much concern has been raised about the error that may be inherent in GIS processing methodologies. While research is ongoing, and no definitive standards have yet been adopted in the commercial GIS marketplace, several practical recommendations have been identified that help to locate possible error sources and define the quality of data. The following review of data quality focuses on three distinct components: accuracy, quality, and error.
The fundamental issue with respect to data is accuracy. Accuracy is the closeness of observations to the true values, or to values accepted as being true. This implies that observations of most spatial phenomena are usually only estimates of the true value. The difference between observed and true (or accepted as true) values indicates the accuracy of the observations.
Basically, two types of accuracy exist: positional and attribute accuracy. Positional accuracy is the expected deviance in the geographic location of an object from its true ground position. This is what we commonly think of when the term accuracy is discussed. Positional accuracy itself has two components: relative and absolute accuracy. Absolute accuracy concerns the accuracy of data elements with respect to a coordinate scheme, e.g. UTM. Relative accuracy concerns the positioning of map features relative to one another.
Often relative accuracy is of greater concern than absolute accuracy. For example, most GIS users can live with the fact that their survey township coordinates do not coincide exactly with the survey fabric; however, the absence of one or two parcels from a tax map can have immediate and costly consequences.
Attribute accuracy is equally important. It also reflects estimates of the truth. Interpreting and depicting boundaries and characteristics for forest stands or soil polygons can be exceedingly difficult and subjective, as most resource specialists will attest. Accordingly, the degree of homogeneity found within such mapped boundaries is not nearly as high in reality as it would appear to be on most maps.
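Absolute positional accuracy is often quantified by comparing digitized coordinates against independently surveyed check points, commonly as a root-mean-square error. A minimal sketch (the function name and sample coordinates are illustrative, not from any particular standard):

```python
import math

def positional_rmse(observed, true):
    """Root-mean-square error between observed and true (accepted)
    positions. Both arguments are lists of (x, y) pairs in the same
    units, e.g. metres in a UTM zone."""
    sq = [(ox - tx) ** 2 + (oy - ty) ** 2
          for (ox, oy), (tx, ty) in zip(observed, true)]
    return math.sqrt(sum(sq) / len(sq))

# Surveyed check points (true) versus their digitized positions (observed)
true_pts = [(500000.0, 5400000.0), (500120.0, 5400080.0)]
observed_pts = [(500004.0, 5400003.0), (500117.0, 5400084.0)]
print(positional_rmse(observed_pts, true_pts))  # 5.0
```

Here each check point deviates by 5 metres, so the RMSE is 5 metres; a statement such as "+/- 5 m at the 1:20,000 scale" could then accompany the data set.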
Quality can simply be defined as the fitness for use of a specific data set. Data that is appropriate for use with one application may not be fit for use with another. Quality is fully dependent on the scale, accuracy, and extent of the data set, as well as on the quality of the other data sets to be used. The recent U.S. Spatial Data Transfer Standard (SDTS) identifies five components of a data quality definition. These are:
- lineage;
- positional accuracy;
- attribute accuracy;
- logical consistency; and
- completeness.
The lineage of data is concerned with historical and compilation aspects of the data such as the:
- source of the data;
- content of the data;
- data capture specifications;
- geographic coverage of the data;
- compilation method of the data, e.g. digitizing versus scanned;
- transformation methods applied to the data; and
- the use of any pertinent algorithms during compilation, e.g. linear simplification, feature generalization.
The identification of positional accuracy is important. This includes consideration of inherent error (source error) and operational error (introduced error). A more detailed review is provided in the next section.
Consideration of the accuracy of attributes also helps to define the quality of the data. This quality component concerns the identification of the reliability, or level of purity (homogeneity), in a data set.
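One simple way to estimate the purity of a mapped polygon is the fraction of ground-truth observations that agree with the mapped class. A hypothetical sketch (the function name and sample data are illustrative):

```python
def attribute_accuracy(mapped_class, field_checks):
    """Fraction of field-checked samples that agree with the mapped
    class: a simple stand-in for the 'purity' of a mapped polygon."""
    agree = sum(1 for observed in field_checks if observed == mapped_class)
    return agree / len(field_checks)

# A stand mapped as "spruce", with ten ground-truth observations:
checks = ["spruce"] * 8 + ["pine", "aspen"]
print(attribute_accuracy("spruce", checks))  # 0.8
```

A polygon labelled 100% spruce on the map may thus be only 80% spruce on the ground, which is exactly the homogeneity problem noted above.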
This component, logical consistency, is concerned with determining the faithfulness of the data structure for a data set. This typically involves spatial data inconsistencies such as incorrect line intersections, duplicate lines or boundaries, or gaps in lines. These are referred to as spatial or topological errors.
The final quality component involves a statement about the completeness of the data set. This includes consideration of holes in the data, unclassified areas, and any compilation procedures that may have caused data to be eliminated.
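Taken together, the five components above could be carried alongside a data set as a small quality-metadata record. The sketch below is a hypothetical structure, not the SDTS encoding itself; all field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    """Hypothetical per-data-set quality record covering the five
    components discussed above."""
    lineage: dict                  # source, capture specs, transformations, ...
    positional_accuracy_m: float   # expected deviation from true ground position
    attribute_accuracy_pct: float  # estimated purity/homogeneity of attributes
    logically_consistent: bool     # topology checked: no gaps, slivers, dangles
    completeness_notes: str        # holes, unclassified areas, eliminated data

report = QualityReport(
    lineage={"source": "1:20,000 forest inventory sheets",
             "capture": "manual digitizing"},
    positional_accuracy_m=20.0,
    attribute_accuracy_pct=85.0,
    logically_consistent=True,
    completeness_notes="two unclassified polygons in NW corner",
)
print(report.positional_accuracy_m)  # 20.0
```

Encoding this kind of record with the data, rather than leaving it on the hardcopy legend, addresses the loss of reliability information discussed later in this section.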
The ease with which geographic data in a GIS can be used at any scale highlights the importance of detailed data quality information. Although a data set may not have a specific scale once it is loaded into the GIS database, it was produced with levels of accuracy and resolution that make it appropriate for use only at certain scales, and in combination with data of similar scales.
Two sources of error, inherent and operational, contribute to the reduction in quality of the products that are generated by geographic information systems. Inherent error is the error present in source documents and data. Operational error is the amount of error produced through the data capture and manipulation functions of a GIS. Possible sources of operational errors include:
- mis-labelling of areas on thematic maps;
- misplacement of horizontal (positional) boundaries;
- human error in digitizing; and
- GIS algorithm inaccuracies.
While error will always exist in any scientific process, the aim within GIS processing should be to identify existing error in data sources and to minimize the amount of error added during processing. Because of cost constraints, it is often more appropriate to manage error than to attempt to eliminate it. There is a trade-off between reducing the level of error in a database and the cost to create and maintain it.
An awareness of the error status of different data sets will allow users to make a subjective statement on the quality and reliability of a product derived from GIS processing.
The validity of any decisions based on a GIS product is directly related to the quality and reliability rating of the product.
Depending upon the level of error inherent in the source data, and the error operationally produced through data capture and manipulation, GIS products may possess significant amounts of error.
One of the major problems currently existing within GIS is the aura of accuracy surrounding digital geographic data. Hardcopy map sources often include a map reliability or confidence rating in the map legend, which helps the user determine the fitness for use of the map. However, this information is rarely carried through the digital conversion process.
Often, because GIS data is in digital form and can be represented with high precision, it is considered to be totally accurate. In reality, a buffer exists around each feature that represents its actual positional tolerance. For example, data captured at the 1:20,000 scale commonly has a positional accuracy of +/- 20 metres, meaning the actual location of a feature may vary 20 metres in either direction from its identified position on the map. Considering that the use of GIS commonly involves the integration of several data sets, usually at different scales and of different quality, one can easily see how errors can be propagated during processing.
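One common rule of thumb for this kind of propagation is to combine the positional tolerances of the sources by root-sum-of-squares, which assumes the errors in each source are independent and roughly normally distributed. A sketch under those assumptions (the function name and figures are illustrative):

```python
import math

def combined_tolerance(*tolerances_m):
    """Combine independent positional tolerances by root-sum-of-squares.
    Assumes each source's errors are independent and roughly normal,
    so their variances add."""
    return math.sqrt(sum(t ** 2 for t in tolerances_m))

# Overlaying a 1:20,000 layer (+/- 20 m) with a 1:50,000 layer (+/- 50 m):
print(round(combined_tolerance(20.0, 50.0), 1))  # 53.9
```

The overlay is somewhat worse than its least accurate input, which is why mixing scales deserves an explicit quality statement.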
Several comments and guidelines on the recognition and assessment of error in GIS processing have been promoted in papers on the subject. These are summarized below:
- There is a need for developing error statements for data contained within geographic information systems (Vitek et al., 1984).
- The integration of data from different sources and in different original formats (e.g. points, lines, and areas), at different original scales, and possessing inherent errors can yield a product of questionable accuracy (Vitek et al., 1984).
- The accuracy of a GIS-derived product is dependent on characteristics inherent in the source products, and on user requirements such as the scale of the desired output products and the method and resolution of data encoding (Marble and Peuquet, 1983).
- The highest accuracy of any GIS output product can only be as accurate as the least accurate data theme involved in the analysis (Newcomer and Szajgin, 1984).
- Accuracy of the data decreases as spatial resolution becomes more coarse (Walsh et al., 1987); and
- As the number of layers in an analysis increases, the number of possible opportunities for error increases (Newcomer and Szajgin, 1984).
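The last two points can be illustrated numerically. If each layer's accuracy is read as the probability that it is correct at a given point, and the layers' errors are assumed independent, then the chance that every layer is correct at that point is the product of the individual accuracies. A sketch under that simplifying assumption:

```python
from functools import reduce

def composite_accuracy(layer_accuracies):
    """Probability that all layers are simultaneously correct at a
    point, assuming each layer's errors are independent of the others."""
    return reduce(lambda a, b: a * b, layer_accuracies, 1.0)

# Three fairly accurate layers still yield a noticeably weaker overlay:
print(round(composite_accuracy([0.90, 0.90, 0.90]), 3))  # 0.729
```

The product can never exceed the smallest factor, consistent with the least-accurate-theme rule above, and it shrinks as further layers are added.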