Atmospheric Boundary Layer Flows: Their Structure and Measurement

J. C. Kaimal and J. J. Finnigan

Print publication date: 1994

Print ISBN-13: 9780195062397

Published to Oxford Scholarship Online: November 2020

DOI: 10.1093/oso/9780195062397.001.0001


7 Acquisition and Processing of Atmospheric Boundary Layer Data

Oxford University Press

Much of what we know about the structure of the boundary layer is empirical, the result of painstaking analysis of observational data. As our understanding of the boundary layer evolved, so did our ability to define more clearly the requirements for sensing atmospheric variables and for processing that information. Decisions regarding the choice of sampling rate, averaging time, detrending, ways to minimize aliasing, and so on, became easier to make. We find we can even standardize most procedures for real-time processing. The smaller, faster computers now within the reach of most boundary layer scientists offer virtually unlimited possibilities for processing and displaying results even as an experiment is progressing.

The information we seek falls, for the most part, into two groups: (1) time-averaged statistics such as the mean, variance, covariance, skewness, and kurtosis and (2) spectra and cospectra of velocity components and scalars such as temperature and humidity. We discuss the two separately because their sampling and processing requirements differ, and a proper understanding of those requirements is essential for the successful planning of any experiment. In this chapter we discuss these considerations in some detail, with examples of methods used in earlier applications.

We will assume that the sensors collecting the data have adequate frequency response, precision, and long-term stability, and that the sampling is performed digitally at equally spaced intervals. We also assume that the observation heights are chosen with due regard to sensor response and terrain roughness. For calculations of means and higher order moments we need time series long enough to include all the relevant low-frequency contributions to the process, sampled at rates fast enough to capture all the high-frequency contributions the sensors are able to measure. Improper choices of averaging time and sampling rate can compromise our statistics; we need to understand how these two factors affect our measurements in order to make sensible decisions on how long and how fast to sample.
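As an illustrative sketch (not taken from the book), the first group of quantities the abstract names can be computed directly from a digitally sampled series. The series `w` and `T` below are synthetic stand-ins for vertical velocity and temperature, and the moment formulas use the plain population (1/n) normalization; the book's own processing conventions may differ.

```python
# Sketch: time-averaged statistics (mean, variance, skewness, kurtosis,
# covariance) from an equally spaced, digitally sampled time series.
# The data here are synthetic; all names are illustrative assumptions.
import math
import random

def moments(x):
    """Return mean, variance, skewness, and kurtosis (1/n normalization)."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    var = sum(d * d for d in dev) / n
    sd = math.sqrt(var)
    skew = sum(d ** 3 for d in dev) / (n * sd ** 3)
    kurt = sum(d ** 4 for d in dev) / (n * var ** 2)  # Gaussian -> approx 3
    return mean, var, skew, kurt

def covariance(x, y):
    """Covariance of two equally sampled series, e.g. a w'T' flux analogue."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # synthetic w' (m/s)
T = [0.5 * v + random.gauss(0.0, 0.5) for v in w]     # correlated scalar

mean, var, skew, kurt = moments(w)
print(f"mean={mean:.3f} var={var:.3f} skew={skew:.3f} kurt={kurt:.3f}")
print(f"cov(w,T)={covariance(w, T):.3f}")
```

For real flux data one would first remove the mean (or a trend) over the chosen averaging period before forming the products, which is exactly where the averaging-time and detrending decisions discussed above enter.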

Keywords: Aliasing, Block averaging, Cosine tapering, Decimation, Ergodic hypothesis, Folding frequency, Hamming window, Nyquist frequency, Obukhov length, Padding
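Several of these keywords (aliasing, folding frequency, Nyquist frequency) describe one standard effect of sampling too slowly. A minimal sketch, with an assumed sampling rate of 10 Hz chosen purely for illustration: a sinusoid above the Nyquist frequency f_N = f_s/2 produces exactly the same samples as one folded back below f_N.

```python
# Sketch: aliasing. With sampling rate fs = 10 Hz, the Nyquist (folding)
# frequency is fs/2 = 5 Hz; an 8 Hz wave aliases to |8 - 10| = 2 Hz.
# All values are illustrative assumptions, not from the book.
import math

fs = 10.0                    # assumed sampling rate (Hz)
f_true = 8.0                 # signal frequency above Nyquist (5 Hz)
f_alias = abs(f_true - fs)   # folded frequency: 2 Hz

t = [n / fs for n in range(100)]                              # sample times
above = [math.cos(2 * math.pi * f_true * tk) for tk in t]     # 8 Hz sampled
folded = [math.cos(2 * math.pi * f_alias * tk) for tk in t]   # 2 Hz sampled

# At the sample instants the two series agree to rounding error,
# so the 8 Hz energy appears at 2 Hz in any computed spectrum.
max_diff = max(abs(a - b) for a, b in zip(above, folded))
print(f"max |difference| at sample points: {max_diff:.2e}")
```

This is why high-frequency content beyond what the sensor and sampling rate can resolve must be filtered or sampled out before spectra are computed.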
