The objective of this paper was to comparatively evaluate different methods for setting cutoff scores in content and criterion-related validity settings. The cutoff score methods were compared with reference to the predictor standard error of measurement, the number of correct and incorrect classification decisions obtained with each cutoff score procedure, and the overall actual criterion performance of individuals passing each cutoff score. Results indicated that although the various methods produced slightly different cutoff score values, all of the values fell within the standard error of measurement. In addition, although differences in cutoff values were reflected in differences in the number of correct and incorrect decisions, differences in the actual criterion performance of individuals passing each cutoff score were minimal. The results are discussed in terms of their implications for applied decision making, especially in situations where criterion data are unavailable and the use of tests is based on content-related validity evidence.