The Angoff procedure is one of the most widely used methods for setting cutoff scores in content-related validity settings. However, it typically yields low interrater reliability, and little is known about the accuracy of the resulting cutoff scores. The present study compares two frame-of-reference rater training approaches in terms of the reliability and accuracy of cutoff scores generated by the Angoff procedure. Results indicate that both frame-of-reference training strategies produced significantly higher interrater reliability and more accurate decisions than a non-frame-of-reference condition.