Pre-employment testing not only helps employers assess applicants' job-specific skills but also strengthens the employers' legal defensibility. To ensure legal defensibility, you must check the validity and reliability of your assessments.
Validity is the degree to which a pre-employment screening test assesses what it purports to assess and how well it evaluates applicants' competencies.
Construct validity ensures that the pre-employment test measures only the intended characteristic and no other variables. In other words, you measure a trait or skill that is part of the overall skill set needed for success on the job. To measure construct validity, you would check whether scores from the assessment correlate with scores from other established tests that measure the same characteristic.
Content validity ensures that the content of a test is relevant and measures the requirements and qualifications for the job role. If you were hiring for a job that required transcribing content, testing the rate at which the candidate types would rank high in content-related validity.
Criterion-related validity indicates that the test demonstrates a correlation or other statistical relationship between test performance and job performance: those who score high on the test tend to perform better on the job.
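A minimal sketch of how such a correlation might be checked, using entirely hypothetical test scores and post-hire performance ratings (the data and variable names are illustrative, not from any real validation study):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: pre-employment test scores and later supervisor
# ratings for the same six applicants. A strong positive correlation
# is evidence of criterion-related validity.
test_scores = [62, 75, 81, 90, 55, 70]
job_ratings = [3.1, 3.8, 4.0, 4.6, 2.9, 3.5]

r = pearson_r(test_scores, job_ratings)
print(f"criterion validity coefficient r = {r:.2f}")
```

In practice the coefficient would be computed over a much larger sample, and thresholds for an acceptable value depend on the role and the criterion measure.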
Test reliability depends on how consistently the test measures the skills required for a specific job role. A test can be considered reliable if a person takes it repeatedly and the results are similar for every attempt.
Test-retest reliability indicates repeatability, obtained by giving the same test twice at different times to the same group of applicants.
For example, if a test designed to assess technical skills is given to a set of applicants twice within two weeks, the correlation between the results of the two attempts indicates the reliability of the test.
Parallel-forms reliability indicates the stability of the test when different forms of a pre-employment test are administered.
For example, to check whether a logical reasoning test is reliable, create a set of questions that evaluate logical reasoning and divide the test into two halves. The outcomes of the two halves should be similar; if they are, it means all the items measure the same characteristic and can be used interchangeably.
The test is likely reliable if two or more raters give the same score and reach the same assessment decision. Inter-rater reliability is useful because evaluators may interpret the same results differently; raters may disagree on how well certain responses demonstrate the construct or skill being assessed.
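One common way to quantify rater agreement is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A sketch with hypothetical hire/no-hire decisions from two interviewers:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical decisions from two interviewers on eight applicants.
rater_1 = ["hire", "hire", "no", "hire", "no", "no", "hire", "no"]
rater_2 = ["hire", "hire", "no", "no",   "no", "no", "hire", "no"]

kappa = cohens_kappa(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
```

Kappa ranges from below 0 (worse than chance) to 1.0 (perfect agreement); values above roughly 0.6 are commonly read as substantial agreement.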
Internal consistency reliability means the test is reliable if the different items within it that measure the same characteristic yield similar results.
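Internal consistency is often summarized with Cronbach's alpha. A minimal sketch, using hypothetical per-item scores for five applicants on a 4-item test:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from per-applicant item scores (rows = applicants)."""
    k = len(item_scores[0])            # number of items
    columns = list(zip(*item_scores))  # per-item score lists
    item_var = sum(pvariance(col) for col in columns)
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical scores on a 4-item test (1-5 scale) for five applicants.
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

An alpha of roughly 0.7 or higher is a common rule of thumb for acceptable internal consistency, though the appropriate threshold depends on how the test is used.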