This study notes that the absence of convergent and discriminant validity in assessment center ratings, despite evidence of content-related and criterion-related validity, is paradoxical within a unitarian framework of validity. It also empirically demonstrates an application of generalizability theory to examining the convergent and discriminant validity of assessment center dimension ratings. Generalizability analyses indicated that person, dimension, and person-by-dimension effects contributed large proportions of the total variance in assessment center ratings. In contrast, exercise, rater, person-by-exercise, and dimension-by-exercise effects contributed little to the total variance. Correlational and confirmatory factor analysis results were consistent with the generalizability results. Together, these findings provide strong evidence for the convergent and discriminant validity of the assessment center dimension ratings, a finding consistent with the conceptual underpinnings of the unitarian view of validity and inconsistent with previously reported results. Implications for future research and practice are discussed.
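To make the variance-decomposition logic concrete, the following is a minimal sketch of a generalizability-style analysis for a fully crossed person × dimension × exercise design with one rating per cell. It uses simulated data with assumed variance components (large person, dimension, and person-by-dimension effects; small exercise-related effects), not the study's data, and it omits the rater facet and the study's actual design for brevity. Components are estimated by solving the expected-mean-square equations of the random-effects ANOVA model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fully crossed design: P persons x D dimensions x E exercises.
# True components below are assumptions chosen to mirror the reported pattern.
P, D, E = 200, 5, 6
s2 = {"p": 1.0, "d": 0.8, "e": 0.05, "pd": 0.6, "pe": 0.05, "de": 0.05, "res": 0.3}

# Simulate ratings as a sum of independent random effects.
X = (rng.normal(0, np.sqrt(s2["p"]), (P, 1, 1))
     + rng.normal(0, np.sqrt(s2["d"]), (1, D, 1))
     + rng.normal(0, np.sqrt(s2["e"]), (1, 1, E))
     + rng.normal(0, np.sqrt(s2["pd"]), (P, D, 1))
     + rng.normal(0, np.sqrt(s2["pe"]), (P, 1, E))
     + rng.normal(0, np.sqrt(s2["de"]), (1, D, E))
     + rng.normal(0, np.sqrt(s2["res"]), (P, D, E)))

m = X.mean()
mp, md, me = X.mean(axis=(1, 2)), X.mean(axis=(0, 2)), X.mean(axis=(0, 1))
mpd, mpe, mde = X.mean(axis=2), X.mean(axis=1), X.mean(axis=0)

# Mean squares; with one observation per cell, the three-way interaction
# is confounded with error and serves as the residual term.
MS = {
    "p": D * E * np.sum((mp - m) ** 2) / (P - 1),
    "d": P * E * np.sum((md - m) ** 2) / (D - 1),
    "e": P * D * np.sum((me - m) ** 2) / (E - 1),
    "pd": E * np.sum((mpd - mp[:, None] - md[None, :] + m) ** 2) / ((P - 1) * (D - 1)),
    "pe": D * np.sum((mpe - mp[:, None] - me[None, :] + m) ** 2) / ((P - 1) * (E - 1)),
    "de": P * np.sum((mde - md[:, None] - me[None, :] + m) ** 2) / ((D - 1) * (E - 1)),
}
resid = (X - mpd[:, :, None] - mpe[:, None, :] - mde[None, :, :]
         + mp[:, None, None] + md[None, :, None] + me[None, None, :] - m)
MS["res"] = np.sum(resid ** 2) / ((P - 1) * (D - 1) * (E - 1))

# Solve the expected-mean-square equations for the variance components.
est = {
    "res": MS["res"],
    "pd": (MS["pd"] - MS["res"]) / E,
    "pe": (MS["pe"] - MS["res"]) / D,
    "de": (MS["de"] - MS["res"]) / P,
}
est["p"] = (MS["p"] - MS["pd"] - MS["pe"] + MS["res"]) / (D * E)
est["d"] = (MS["d"] - MS["pd"] - MS["de"] + MS["res"]) / (P * E)
est["e"] = (MS["e"] - MS["pe"] - MS["de"] + MS["res"]) / (P * D)

total = sum(est.values())
for k, v in est.items():
    print(f"sigma2_{k}: {v:.3f} ({100 * v / total:.1f}% of total)")
```

Under these assumed components, the person, dimension, and person-by-dimension terms dominate the estimated total variance while the exercise-related terms stay small, which is the pattern the abstract describes as evidence of convergent and discriminant validity.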