You recorded your HireVue. You answered the questions. Now you are sitting there wondering what the recruiter actually pulled up on the other side. Was it a transcript? A score? Did the AI flag something you said in your second answer?
Here is the full picture of what recruiters see on HireVue — the dashboard, the AI scoring system, and the scorecard that human reviewers use to rank you against every other candidate in the pool.
This is not a guess. The mechanics covered here come from HireVue's published documentation, IO Psychology research, and the scoring framework their enterprise clients configure before a single candidate ever records a response.
What the Recruiter Dashboard Actually Shows
When a recruiter opens HireVue after your submission, they are not staring at a raw video file. They see a structured candidate management interface with four key components.
The first is your competency tier placement. HireVue places candidates into bands — typically Top, Middle, and Bottom — based on how their responses scored relative to other candidates who applied for the same role. Recruiters use these tiers to filter their review queue. If you land in the Top band, your profile gets reviewed first. If you land in the Bottom band, many companies never watch your video at all.
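To make the banding idea concrete, here is a minimal sketch of percentile-based tier placement. This is an illustration of the concept only, not HireVue's actual implementation: the cutoffs, the aggregate score, and the function name are all invented for this example.

```python
# Illustrative only: bands candidates into Top/Middle/Bottom by their
# rank within the applicant pool for a single role. Real platforms use
# proprietary scoring; the cutoffs here (roughly thirds) are assumed.

def assign_tiers(scores, top_cut=0.67, bottom_cut=0.33):
    """Map candidate IDs to a band based on relative rank in the pool."""
    ranked = sorted(scores, key=scores.get)  # ascending by score
    n = len(ranked)
    tiers = {}
    for rank, candidate in enumerate(ranked):
        percentile = (rank + 1) / n
        if percentile > top_cut:
            tiers[candidate] = "Top"
        elif percentile > bottom_cut:
            tiers[candidate] = "Middle"
        else:
            tiers[candidate] = "Bottom"
    return tiers

pool = {"A": 82, "B": 45, "C": 91, "D": 60, "E": 30, "F": 74}
print(assign_tiers(pool))
```

The key point the sketch captures: your band depends on the rest of the pool, not on an absolute threshold. The same answer can land in different tiers depending on who else applied.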
The second is your competency breakdown report. Rather than a single aggregate score, the dashboard shows a recruiter how you scored on each individual competency the company decided to measure. A role might evaluate five to seven competencies. You get a separate score for each one.
The third is your video playback. Recruiters can watch your recorded responses, jump between questions, and replay specific sections. In practice, most recruiters do not watch every video at 1x speed. Top-tier candidates get full reviews. Mid-tier candidates often get selective question playback.
The fourth is the human scorecard. After watching the video, the recruiter enters their own ratings. The AI score and the human score are separate data points. The recruiter's manual evaluation is always the final input before a candidate is moved forward or rejected.
How HireVue AI Scoring Actually Works
The most common misconception about HireVue is that the AI is reading your facial expressions. This is outdated. HireVue publicly confirmed they moved away from facial movement analysis years ago after significant criticism from researchers and regulators.
What the AI actually does is analyze the verbal content of your responses using natural language processing and large language models. The system transcribes your audio, then maps the text of your answers against a scoring model built for that specific role. It is looking for whether the language patterns and response content align with what high performers typically say when answering the same question.
The model is what HireVue calls "static and deterministic." It is trained in a controlled environment by Industrial-Organizational Psychologists before deployment. It does not learn or update in real time during live assessments. That means the same answer, given by two different people, gets the same score regardless of how their face looks or how their voice sounds.
In my experience reviewing how enterprise clients configure these systems, the AI output functions more like a screening filter than a hiring decision. It surfaces candidates worth human attention, not candidates who have been automatically hired or rejected.
The Scorecard: What Humans Look for After the AI Runs
Here is the part candidates never see because it lives entirely on the recruiter's side of the platform.
Before your HireVue link was sent, the company built a structured rubric. That rubric defines what a strong, average, and weak response looks like for each interview question. The scoring framework is typically built on Behaviorally Anchored Rating Scales (BARS), which means each rating level has specific behavioral descriptors, not vague adjectives.
A 5 on the "Problem-Solving" competency might require the candidate to have: described a specific technical obstacle, walked through their decision-making process, and quantified the outcome. A 2 might indicate the candidate gave a general statement without a concrete example.
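The BARS structure described above is easy to picture as data: each rating level maps to specific behavioral anchors, and a response earns the highest level whose anchors it fully satisfies. The rubric below is hypothetical, invented to mirror the Problem-Solving example; real rubrics are built per role by the employer and are not public.

```python
# Hypothetical BARS rubric sketch. Anchor text is invented for
# illustration; actual anchors are defined by each employer's IO team.

PROBLEM_SOLVING_BARS = {
    5: ["described a specific technical obstacle",
        "walked through the decision-making process",
        "quantified the outcome"],
    3: ["gave a concrete example",
        "explained the approach without measurable results"],
    2: ["gave a general statement with no concrete example"],
}

def score_response(observed_behaviors, rubric):
    """Return the highest rating whose anchors were all observed."""
    for rating in sorted(rubric, reverse=True):
        if all(anchor in observed_behaviors for anchor in rubric[rating]):
            return rating
    return min(rubric)  # nothing matched: default to the lowest level

observed = {"described a specific technical obstacle",
            "walked through the decision-making process",
            "quantified the outcome"}
print(score_response(observed, PROBLEM_SOLVING_BARS))  # → 5
```

Notice what this structure implies for candidates: a reviewer checks anchors, not vibes, so a polished answer that hits two of three anchors still cannot earn the top rating.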
Recruiters rate you on a numeric scale for each competency, enter notes, and submit their scorecard. In organizations that use collaborative hiring, multiple reviewers do this independently before comparing scores.
The practical implication is significant. The recruiter is not watching your video with a vague sense of "do I like this person." They are working through a checklist. They are looking for specific things. And if your answer does not include the specific behaviors their rubric defines, the score suffers regardless of how polished your delivery was.
The 7 Competencies HireVue Evaluates Most Frequently
The specific competencies vary by role and by company configuration, but IO Psychology research and HireVue's published documentation confirm a consistent set of dimensions that appear across most deployments.
The most common ones recruiters evaluate in real assessments are:
Communication Skills. Clarity, structure, and relevance. Can you answer the question directly and explain your thinking without rambling?
Teamwork and Collaboration. Evidence of working across teams, resolving friction, and deferring or advocating appropriately. The AI looks for language patterns that indicate experience working within group dynamics, not just asserting "I am a team player."
Problem-Solving. Did you walk through a logical framework? Did you identify constraints and tradeoffs? Vague answers score poorly here because the model is trained to look for structured reasoning.
Adaptability. Evidence of changing course when circumstances shifted. Companies are looking for a concrete story, not a philosophy statement about being flexible.
Drive for Results and Initiative. Did you do something without being told? Did you quantify the outcome? This competency rewards specificity. "I increased conversion by 18 percent after identifying a drop-off we had never measured before" scores better than "I took initiative to improve results."
Conscientiousness. Reliability and follow-through signals. Interviewers are trained to look for language that indicates you close loops, follow up, and do not let things fall through the cracks.
Dependability. Consistency under pressure. This one surfaces in how you describe handling competing priorities or tight deadlines.
What Actually Gets You Rejected Before a Human Watches Your Video
This is the question that matters most, and it has a direct answer.
Candidates get filtered out before human review when they land in the Bottom competency tier. The AI generates the tier placement. The recruiter then decides how deep into each tier they review, based on how many open slots they need to fill.
In high-volume hiring cycles at companies using HireVue for campus recruiting or mass-applicant roles, the Bottom third of the pool frequently goes unwatched by any human. The recruiter simply does not have time, and the AI tier acts as the first gate.
What lands you in the Bottom tier is consistently the same: answers that are too generic, too short, or that fail to include concrete behavioral evidence. "I am a strong communicator" scores low. "I restructured our internal documentation after our team grew from four to twelve people, which cut onboarding time from three weeks to eight days" scores high because it is specific, measurable, and demonstrates the competency through action.
The other fast track to the Bottom tier is going off-topic. If the question asks for a time you handled conflict and you pivot to talking about a project you are proud of, the NLP model detects the mismatch between the expected behavioral pattern and what your response actually contains.
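The mismatch idea can be illustrated with a deliberately simple toy: compare the words in a response against the vocabulary the question's expected behavioral pattern would use. HireVue's actual models use NLP and large language models, which are far more sophisticated than this; the bag-of-words cosine similarity below is only a sketch of the underlying concept, with all example text invented.

```python
# Toy illustration of topical-mismatch detection. This is NOT how
# HireVue scores responses; it only demonstrates the idea of measuring
# how well a response overlaps with a question's expected content.
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between word-count vectors of two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

expected = "conflict disagreement resolved team compromise discussion"
on_topic = "I resolved a team conflict through discussion and compromise"
off_topic = "I shipped a project I am proud of ahead of schedule"

print(cosine_similarity(expected, on_topic) >
      cosine_similarity(expected, off_topic))  # → True
```

Even this crude measure separates the on-topic answer from the off-topic one, which is the intuition behind why pivoting away from the question is so costly: the model has a target pattern, and your response either overlaps with it or it does not.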
If You Opted Out of AI Scoring, Here Is What Changed
HireVue gives candidates the option to opt out of AI scoring. If you selected that option, your video is still reviewed, but it goes directly to a human recruiter using the same rubric and scoring criteria.
Practically speaking, the difference is prioritization speed. AI-scored candidates get tier placements and move into the review queue faster. Manually reviewed candidates depend on recruiter bandwidth. In high-volume roles, this means an opt-out candidate may wait longer before any human touches their profile.
The evaluation standards are identical either way. The rubric does not change. The competencies being measured do not change. The only thing that changes is whether a model or a human is the first to score your responses.
Now that you know what is being scored, the logical next step is understanding what to expect on the timeline side. How long after your HireVue submission should you hear back? We break that down in detail in our HireVue interview response time guide.
If you are on the employer side evaluating whether HireVue is worth the investment, our HireVue pricing guide breaks down exactly what companies pay before the sales call.
