The Taylor hypothesis (TH) as applied to rainfall is a proposition about the space-time covariance structure of the rainfall field. Specifically, it supposes that if a spatiotemporal precipitation field with a covariance Cov(r, τ) that is stationary in both space r and time τ moves with a constant velocity v, then the temporal covariance at time lag τ is equal to the spatial covariance at space lag r = vτ; that is, Cov(0, τ) = Cov(vτ, 0). Qualitatively, this means that the field evolves slowly in time relative to the advective time scale, which is often referred to as the frozen-field hypothesis. Of specific interest is whether there is a cutoff or decorrelation time scale beyond which the TH fails for a given mean flow velocity v. In this study, the validity of the TH is tested for precipitation fields using high-resolution gridded Next Generation Weather Radar (NEXRAD) reflectivity data produced by the WSI Corporation, employing two different statistical approaches. The first method is based on rigorous hypothesis testing; the second is based on a simple correlation analysis that neglects possible dependencies between the correlation estimates. Radar reflectivity values from the southeastern United States are used, with an approximate horizontal resolution of 4 km × 4 km and a temporal resolution of 15 min. During the 4-day period from 2 to 5 May 2002, substantial precipitation occurs in the region of interest, and the motion of the precipitation systems is approximately uniform. The results of both statistical methods suggest that the TH might hold for the shortest space and time scales resolved by the data (4 km and 15 min) but that it does not hold for longer periods or larger spatial scales. In addition, the simple correlation analysis tends to overestimate statistical significance because it fails to account for correlations between the covariance estimates.
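
The defining identity Cov(0, τ) = Cov(vτ, 0) can be illustrated with a minimal synthetic sketch (this is not the study's method; the grid size, advection speed, and smoothing length below are purely illustrative). For a perfectly frozen 1-D field f(x, t) = g(x − vt), the correlation between the field at times 0 and τ at fixed locations equals the spatial correlation at lag vτ at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "frozen" field: f(x, t) = g(x - v*t), advected at constant
# speed v with no internal evolution, so the TH holds by construction.
n = 512            # number of grid points (illustrative)
v = 3              # advection speed, grid points per time step (illustrative)
tau = 5            # time lag in steps (illustrative)

# Smooth white noise to give the field a nonzero correlation length.
g = np.convolve(rng.standard_normal(n), np.ones(16) / 16, mode="same")

def corr(a, b):
    """Sample correlation of two equal-length series."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

field_t0 = g                        # field at time 0
field_t1 = np.roll(g, v * tau)      # advected field at time tau (periodic domain)

# Temporal correlation at lag tau: same locations, different times.
temporal = corr(field_t0, field_t1)
# Spatial correlation at lag v*tau: same time, shifted in space.
spatial = corr(g, np.roll(g, v * tau))

print(temporal, spatial)  # equal for a frozen field: Cov(0, tau) = Cov(v*tau, 0)
```

In real rainfall the field also evolves as it advects, so the temporal correlation decays faster than this frozen-field prediction; the study's question is at which lags that discrepancy becomes statistically significant.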