Cleaning and Rescue of Ship Soundings

Data quality is the most important aspect of bathymetric prediction. High resolution satellite gravity data are needed not only to interpolate among the sparse soundings but also to identify which soundings are bad. Many soundings from remote areas of the oceans were collected before shipboard computers and satellite navigation were available. Moreover, all of the ship sounding data were collected over a 30 year period on a variety of ships from many countries and with many different chief scientists aboard. Thus the quality of the data is highly variable, and many entire cruises or sections of cruises are bad [Smith, 1993]; only the most recent (since ~1987) GPS-navigated multibeam data are reliable. Typical errors include navigation errors, digitizing errors, typographical errors due to hand entry of older soundings, reporting of depths in fathoms instead of meters, incorrect sound velocity measurements, and even computer errors in reading punch cards. One bad section of a cruise in an isolated region will introduce a seafloor topographic feature that does not exist; named examples include the Islas Orcadas Seamounts in the Weddell Sea and the Novara Knoll in the Southern Indian Ocean [Canadian Hydrographic Service, 1984].

The high resolution gravity field provides the information needed to assess the accuracy of the ship sounding data. Our approach is to identify bad cruises through comparison with an initial prediction based on the gravity and either eliminate them or attempt to fix problem areas (data rescue); rescue is especially important for soundings that fill a large data gap. We maintain four data bases of standard underway geophysical data (navigation, depth, gravity, and magnetics) in a GMTPLUS format that is easily accessible through GMT routines. There is considerable overlap among the data bases, although each contains some unique cruises. Statistics on the number of good and bad cruises in each data base are given below.

Data base   Good   Bad
WS          2185   564
SIO         1415   182
NGDC        1253   813
BB           125   848

WS - Wessel and Smith data base, a derivative of the original Lamont data base. SIO - Scripps Institution of Oceanography data base, Geological Data Center. NGDC - National Geophysical Data Center data base. BB - Brownbook derivative of the Lamont data base. (WS and BB contain some identical data, so WS is searched first.)

The automation, maintenance, and rescue of the ship data are largely funded by the NSF Division of Ocean Sciences. In addition to these data, we are preparing for the possible declassification of the US Navy Ocean Survey data [Medea Report, 1995].

Data preparation and assembly are an ongoing process; the current data are already sufficiently good to construct a global bathymetric grid. Here is one recipe (Nettleton's Method) that we are developing.

Nettleton's Method

1) Grid available bathymetric soundings on a 2 minute Mercator grid that matches our gravity anomaly grid. To avoid seams, all work is done on a global grid between latitudes of ±72°. Coastline points from GMT provide the zero-depth estimates. A finite-difference, minimum-curvature routine is used to interpolate the global grid [Smith and Wessel, 1990]. This gridding program requires at least 256 Mbytes of computer memory.
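
A minimal Python sketch of this gridding step is given below. It is illustrative only: the actual routine is the finite-difference minimum-curvature algorithm of Smith and Wessel [1990], whereas the sketch substitutes scipy's thin-plate-spline interpolator (the continuous minimum-curvature surface), and the synthetic soundings, grid spacing, and extents are invented for the example.

# Sketch: gridding sparse soundings with a minimum-curvature-type surface.
# A thin-plate spline stands in for the finite-difference routine of
# Smith and Wessel [1990]; soundings and grid geometry are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Synthetic "soundings": (x, y) positions in km and depths in m (negative down).
xy = rng.uniform(0.0, 500.0, size=(200, 2))
depth = -4000.0 + 1500.0 * np.exp(-((xy[:, 0] - 250.0)**2 +
                                    (xy[:, 1] - 250.0)**2) / (2 * 60.0**2))

# Interpolate onto a regular grid (a stand-in for the 2-minute Mercator grid).
gx, gy = np.meshgrid(np.arange(0.0, 500.0, 4.0), np.arange(0.0, 500.0, 4.0))
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
spline = RBFInterpolator(xy, depth, kernel='thin_plate_spline', smoothing=0.0)
bathy = spline(grid_pts).reshape(gx.shape)
print(bathy.shape, bathy.min(), bathy.max())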

2) Separate the grid into low-pass and high-pass components using a Gaussian filter (0.5 gain at 160 km). Filtering and downward continuation are performed with a multiple strip, 2-D FFT that spans 0°-360° longitude to avoid Greenwich edge effects.
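
A sketch of the filter split follows, assuming a Gaussian whose wavenumber-domain gain falls to 0.5 at a 160 km wavelength. The single periodic FFT used here is a simplification of the multiple-strip Mercator treatment described above, and the grid values are synthetic. The same low-pass gain applied to the gravity grid gives the high-pass filtered gravity of step 3.

# Sketch: low-pass/high-pass separation with a Gaussian wavenumber filter
# whose gain is 0.5 at a 160 km wavelength (parameterization assumed).
import numpy as np

def gaussian_split(grid, dx_km, dy_km, half_gain_wavelength_km=160.0):
    """Return (low_pass, high_pass) components of `grid`."""
    ny, nx = grid.shape
    kx = np.fft.fftfreq(nx, d=dx_km)            # cycles per km
    ky = np.fft.fftfreq(ny, d=dy_km)
    kmag = np.hypot(*np.meshgrid(kx, ky))       # |k|
    k_half = 1.0 / half_gain_wavelength_km      # wavenumber where gain = 0.5
    low_gain = np.exp(-np.log(2.0) * (kmag / k_half) ** 2)
    low = np.real(np.fft.ifft2(np.fft.fft2(grid) * low_gain))
    return low, grid - low

# Example: split a synthetic bathymetry grid sampled every 4 km.
demo = np.random.default_rng(1).normal(-4000.0, 300.0, size=(256, 256))
low, high = gaussian_split(demo, dx_km=4.0, dy_km=4.0)
print(low.mean(), high.mean())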

3) Form high-pass filtered gravity using the same Gaussian filter.

4) Downward continue the high-pass filtered gravity to the low-pass filtered bathymetry assuming Laplace's equation is appropriate. A depth-dependent Wiener filter is used to stabilize the downward continuation.
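
The sketch below illustrates one way such a stabilized downward continuation could look. It applies standard Wiener deconvolution of the upward-continuation operator exp(-2π|k|d) with a white noise-to-signal ratio; the exact depth-dependent Wiener filter used in the prediction may differ, and the grid spacing, depth, and noise parameter are placeholders.

# Sketch: downward continuation of high-pass filtered gravity to the mean
# (low-pass) seafloor depth, stabilized by a Wiener-type filter (assumed form).
import numpy as np

def downward_continue(grav_high, dx_km, dy_km, depth_km, noise_to_signal=1e-4):
    """Downward continue a gravity grid by depth_km with Wiener stabilization."""
    ny, nx = grav_high.shape
    kx = np.fft.fftfreq(nx, d=dx_km)
    ky = np.fft.fftfreq(ny, d=dy_km)
    kmag = np.hypot(*np.meshgrid(kx, ky))
    up = np.exp(-2.0 * np.pi * kmag * depth_km)   # upward-continuation operator
    wiener = up / (up**2 + noise_to_signal)       # stabilized inverse operator
    G = np.fft.fft2(grav_high)
    return np.real(np.fft.ifft2(G * wiener))

grav = np.random.default_rng(2).normal(0.0, 10.0, size=(256, 256))  # mGal, synthetic
g_down = downward_continue(grav, dx_km=4.0, dy_km=4.0, depth_km=4.0)
print(g_down.std())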

5) Accumulate high-pass filtered soundings and corresponding high-pass filtered/downward-continued gravity into small (160 km) overlapping areas and perform a robust regression analysis. In sediment-free areas, the topography/gravity transfer function should be flat and equal to 1/(2πGΔρ), so in the space domain a linear regression is appropriate. This works well on young seafloor but not on old seafloor, where sediment cover destroys the correlation between topography and gravity. In these cases we assume the seafloor is flat and set the topography/gravity ratio to zero. Finally, there are intermediate cases where topographic depressions are sediment filled while the highs protrude above the sediments, so the topography/gravity relationship is non-linear. It is these partially sedimented areas that make the bathymetric problem difficult and inherently non-linear. Continental margins and shelves pose similar problems.
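
One possible form of the windowed robust regression is sketched below: a regression through the origin solved by iteratively reweighted least squares with Huber-style weights, with the ratio forced to zero when the gravity-topography correlation is weak. The weighting scheme, correlation threshold, and synthetic window data are assumptions of the sketch, not the method's actual parameters.

# Sketch: robust topography/gravity ratio in one 160 km window
# (regression through the origin via IRLS with Huber-style weights).
import numpy as np

def robust_ratio(topo_hp, grav_hp, n_iter=10, huber_c=1.5):
    """Robust slope s such that topo_hp ~ s * grav_hp."""
    s = np.sum(grav_hp * topo_hp) / np.sum(grav_hp**2)   # ordinary LSQ start
    for _ in range(n_iter):
        resid = topo_hp - s * grav_hp
        scale = 1.4826 * np.median(np.abs(resid)) + 1e-12
        r = np.abs(resid) / (huber_c * scale)
        w = np.where(r <= 1.0, 1.0, 1.0 / r)             # Huber weights
        s = np.sum(w * grav_hp * topo_hp) / np.sum(w * grav_hp**2)
    return s

rng = np.random.default_rng(3)
grav_win = rng.normal(0.0, 15.0, 500)                     # mGal, synthetic window
topo_win = 80.0 * grav_win + rng.normal(0.0, 200.0, 500)  # m, true ratio 80 m/mGal
topo_win[:20] += 3000.0                                   # a few bad soundings
ratio = robust_ratio(topo_win, grav_win)
# If sediment cover destroys the correlation, assume a flat seafloor:
if abs(np.corrcoef(grav_win, topo_win)[0, 1]) < 0.3:      # threshold is an assumption
    ratio = 0.0
print(ratio)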

6) Regional topography/gravity ratio estimates are gridded and multiplied by the high-pass filtered/downward-continued gravity to form high-pass filtered predicted bathymetry.

7) The total predicted bathymetry is equal to the sum of the high-pass filtered predicted bathymetry and the low-pass filtered bathymetry.

8) Finally, the pixels constrained by ship soundings or coastline data are reset to the measured values and the finite-difference, minimum curvature routine is used to perturb the predicted values toward the measured values. Measured depths are flagged so they can be extracted separately. This final step dramatically increases the accuracy and resolution of the bathymetric grid in well surveyed areas so it agrees with the best hand-contoured bathymetric charts.