In the present study, we provide a systematic review of the assessment center literature with respect to specific design and methodological characteristics that potentially moderate the construct-related validity of assessment center ratings. We also conducted a meta-analysis of the relationships between these characteristics and construct-related validity outcomes. Results for rating approach, assessor occupation, assessor training, and length of assessor training were in the predicted direction: a higher level of convergent and a lower level of discriminant validity were obtained for the across-exercise compared to the within-exercise rating method, for psychologists compared to managers/supervisors as assessors, for assessor training compared to no assessor training, and for longer compared to shorter assessor training. Partial support was also obtained for the effects of the number of dimensions and assessment center purpose. Our review also indicated that relatively few studies have examined both construct-related and criterion-related validity simultaneously. Furthermore, these studies provided little, if any, support for the view that assessment center ratings lack construct-related validity while at the same time demonstrating criterion-related validity. The implications of these findings for assessment center construct-related validity are discussed.