
SmartList allows you to screen applicants by watching two-minute video responses to customer interaction scenarios, technical questions, or team handling situations. Recorded responses can improve screening by focusing on how applicants perform rather than on past accomplishments. These one-way video responses can also replace a first in-person conversation and support better screening decisions.
The effectiveness of these assessments depends on the assessors' ability to rate responses reliably and consistently across different sittings. Rating responses requires assessors to remain neutral and to apply focused listening and concentration.
Key considerations while reviewing video responses:
Candidates commonly record these video responses from their homes. Assessors may sometimes find the home environment untidy or poorly kept. In these situations, it is essential to focus on the candidate's response and not let attention wander to the surroundings, and to refrain from forming assumptions about candidates from such first impressions.
The environment, for example the available lighting, plays a significant role in video quality, while equipment such as the microphone determines the audio quality of the response. There may be instances when the video is of inferior quality and hampers viewing. In such cases, assessors should be trained to focus on the audio and judge on the basis of the evidence it contains. Only when absolutely necessary should the candidate be asked to record a second attempt.
Most of us are camera conscious because we rarely face situations that demand speaking in front of a camera. Most candidates also find it difficult to speak fluently when they start recording. You may see long pauses, a few fillers ("umm"), and some initial discomfort. It is vital for assessors not to take decisions or record observations based on the first few seconds of a response. Watching the entire video before judging something as an enduring behavioural trait is essential.
Assessors may also come across candidates who read their responses from their screens or from small pieces of paper. This usually stems from the discomfort of speaking in front of a camera. Rather than pre-judging, assessors should hear out the entire response and rate its quality, as decisions based on a single act can unjustly impact candidates.
Candidates may not know what attire is appropriate for recording the responses, so it is fine to find candidates recording in casual clothing. We should avoid reading this as a personality trait, such as a casual attitude.
While assessing videos, assessors have to process a considerable amount of information. Strictly following the defined rating guidelines and assessment dimensions helps improve rating accuracy.
It is also crucial that assessors minimize concentration lapses and attend to candidates' responses with full attention. Keeping away from mobile phones and email while listening to responses helps immensely.
Allowing assessors enough time to hear and see all responses can help minimize errors and biases.
Key considerations while recording observations:
While recording observations, the assessor must focus on both the behaviour displayed and the content of the response. The assessor's task is to check:
- What each candidate did non-verbally
- What each candidate said or did not say
- Details of time, including consistent periods of silence
The focus should be on recording specific observations, not on making general statements or assuming why someone said something. Final ratings should be anchored to the rating scale available for each response type.
Training assessors from the user department or the recruitment team is important, as it helps keep ratings consistent across assessors. This can take the form of a briefing about a specific recruitment activity. A little practice can help achieve consistency by minimizing errors and biases while rating.