I came across a Reddit post last week that should concern every BPO hiring leader.
A candidate applied for an entry-level work-from-home call center position. She scored above 80% on all the cognitive reasoning tests: pattern recognition, logical thinking, problem-solving. Strong performance across the board.
She was rejected.
The reason? A personality assessment, which accounted for 5/6 of the total evaluation score, dragged her overall rating below 70%. The algorithm decided she wasn't a culture fit before a human ever reviewed her application.
Here's the comment that stuck with me: "The assessment is definitely trying to filter out neurodivergence."
When your recruiting software is systematically screening out qualified candidates because they think differently, you're not using evidence-based hiring. You're using algorithmic discrimination with a veneer of science.
The Harver Problem: When Assessments Become Barriers
Harver has captured significant BPO market share with its assessment-first approach. They market "scientifically validated assessments" backed by "over 35 years of rich data insights" and claim to "lower 90-day attrition rates by 25%."
The company serves 1,300+ customers globally, processes 100+ million candidates, and explicitly targets "leading BPOs, contact centers, and retail organizations."
But when you dig into verified candidate experiences and user reviews, a troubling pattern emerges.
The 150-Question Barrier:
Harver assessments include 15+ modules covering behavioral, cognitive, skills-based, and job-specific testing. One Capterra reviewer notes: "Candidates fill out about 150 questions, could be demotivated to apply. Creates hurdle for people."
Think about what this means in practice. You're competing for talent in a market where candidates have multiple offers: the average candidate is off the market in just 10 days, according to Phenom research, while time-to-hire for contact center roles averages 38 days, according to Fidelity National Information Services.
You're already at a disadvantage. Now you're asking candidates to spend 60-90 minutes completing 150 questions before a human even sees their application.
One candidate described their experience: "Interview, written paragraph, 4-person interview, 2 videos, intro video, assessment, shadowing day—then rejected." All of that investment from both the candidate and the hiring team, only to be screened out by an algorithm at the end.
The Personality Test Overweight Problem:
Multiple candidates report that Harver's personality assessments carry disproportionate weight in final scoring. One Reddit user detailed: "The personality test weighed 5/6 of the whole evaluation. Even scoring 80%+ in reasoning tests, the personality component dragged my average below 70%."
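To see how lopsided that weighting is, run the arithmetic yourself. Here's a minimal sketch using the 1/6-versus-5/6 split the candidate described (the specific scores are illustrative):

```python
# Weighted-average scoring with the split the candidate described:
# cognitive counts for 1/6 of the total, personality for 5/6.
COGNITIVE_WEIGHT = 1 / 6
PERSONALITY_WEIGHT = 5 / 6

def overall_score(cognitive: float, personality: float) -> float:
    """Combine component scores (0-100) into a single weighted rating."""
    return COGNITIVE_WEIGHT * cognitive + PERSONALITY_WEIGHT * personality

# A candidate who aces the validated cognitive components...
print(overall_score(cognitive=85, personality=65))  # -> 68.3: rejected at a 70% cutoff

# ...needs a near-perfect personality result just to clear the bar:
# with cognitive at 85, personality must reach 67+ to hit 70 overall.
```

Under that weighting, the cognitive score barely moves the needle. The personality score effectively is the decision.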
This is problematic for three reasons:
Predictive Validity: Schmidt's meta-analysis shows cognitive ability tests have validity coefficients of 0.5-0.6, meaning scores correlate strongly with actual job performance. Personality tests have far lower validity, especially for entry-level positions where the role is highly structured.
Lack of Transparency: The candidate scored above 80% on the components that actually predict job success, but was rejected because of personality scoring. The algorithm turned subjective personality traits into numerical scores without any human judgment or context.
Disparate Impact: When personality tests become the primary filter, you're screening for conformity rather than capability. As one candidate noted: "Cognitive tests automatically discriminate against people with disabilities. No reasonable accommodation offered."
The Neurodivergence Filter:
This is where algorithmic screening crosses the line into discrimination.
Multiple candidates report: "Assessment definitely trying to filter out neurodivergence." Another stated: "Harver discriminates against disabilities with cognitive tests and offers no reasonable accommodation."
Consider what you're actually filtering for when personality tests dominate your hiring decision: not the ability to do the job, but the willingness and ability to answer self-report questions the way the algorithm expects.
The research on neurodiversity in the workplace is clear: when properly supported, neurodiverse employees often outperform neurotypical peers on tasks requiring attention to detail, pattern recognition, and systematic thinking.
Yet Harver's approach—150 questions, personality-weighted scoring, no human review—systematically screens these candidates out before you ever meet them.
The False Precision Problem:
Here's what bothers me most as someone who believes in data-driven hiring: Harver's approach creates the illusion of scientific precision while making fundamentally subjective decisions.
They market "scientifically validated assessments" and "predictive analytics." The platform generates numerical scores. The branding emphasizes I/O psychology credentials.
However, when a candidate scores 80% or higher on cognitive tests (the components actually validated to predict job performance) and is rejected due to a lower personality test score (with lower validity), that's not science. That's pseudoscience dressed up with impressive-sounding methodology.
One Reddit user summed it up: "Cognitive tests provide zero information on applicant's ability to do the job. How do brain games relate to WFH call center entry level position?"
They're exactly right to be skeptical.
The Cost to Your Business
This isn't just a fairness issue. It's a business problem with quantifiable costs.
Candidate Drop-Off: When your application process includes 150 questions, you lose candidates before they complete it. Josh Bersin's research shows 60% of candidates abandoned slow/complex applications in 2024. Every qualified candidate who drops out is a hire you never make.
Lost Talent Pipeline: Candidates who complete your assessment and get algorithmically rejected don't just disappear. They tell their networks. They post on Reddit. They leave reviews on Indeed and Glassdoor. One bad experience with discriminatory screening creates ripple effects across your employer brand.
Compliance Risk: The EEOC is increasingly scrutinizing AI-powered screening tools for disparate impact. When your assessment systematically screens out candidates with disabilities, you're creating legal liability. The fact that Harver offers "no reasonable accommodation" according to multiple candidates compounds this risk.
Opportunity Cost: Every qualified candidate your algorithm rejects is someone your competitors might hire. In a tight labor market where BPOs compete aggressively for talent, algorithmic screening gives your competition an advantage.
What Evidence-Based Assessment Actually Looks Like
The solution isn't abandoning assessments. We've already established that skills-based assessments are 2-3x more predictive than resume screening.
The solution is evidence-based assessment design that actually predicts job performance without discriminating against protected classes.
Here's what that means in practice:
Weight components by predictive validity: Schmidt's meta-analysis shows cognitive ability tests have 0.5-0.6 validity coefficients for predicting job performance. They should be weighted accordingly in your scoring model, not subordinated to personality tests with lower validity.
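Here's a minimal sketch of what validity-proportional weighting could look like. The cognitive figure comes from Schmidt's range; the other validity values and the component names are illustrative placeholders for your own validation data:

```python
# Weight each assessment component in proportion to its predictive
# validity, instead of letting a low-validity component dominate.
# Only the cognitive ~0.5-0.6 range comes from Schmidt's meta-analysis;
# the other figures are placeholders for your own validation study.
VALIDITY = {
    "cognitive": 0.55,    # Schmidt meta-analysis: 0.5-0.6
    "work_sample": 0.50,  # illustrative
    "personality": 0.20,  # illustrative: markedly lower
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine 0-100 component scores, weighted by each one's validity share."""
    total_validity = sum(VALIDITY[name] for name in scores)
    return sum(scores[name] * VALIDITY[name] / total_validity for name in scores)

print(weighted_score({"cognitive": 85, "work_sample": 80, "personality": 60}))
# -> 79.0: strong performance on the predictive components carries the score
```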
Assess the skills that actually matter: For contact center roles, that means verbal communication, stress management under pressure, multitasking capability, and empathy in customer interactions. These can be measured through work sample tests and situational judgment scenarios.
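For instance, a situational judgment item can be scored against an explicit key derived from how top performers answer. This sketch is purely illustrative; the scenario, options, and point values are made up for the example:

```python
# A situational-judgment item for a contact center role: candidates pick
# a response, and the scoring key reflects top-performer consensus.
# All content here is illustrative, not drawn from any real assessment.
SJT_ITEM = {
    "scenario": "A caller is frustrated after being transferred twice "
                "and starts raising their voice.",
    "options": {
        "A": "Apologize, summarize their issue back, and commit to owning it.",
        "B": "Transfer them to a supervisor immediately.",
        "C": "Explain that the previous agents followed standard procedure.",
    },
    "key": {"A": 2, "B": 1, "C": 0},  # points per option, keyed to empathy and ownership
}

def score_sjt(item: dict, answer: str) -> int:
    """Score a single SJT response against its scoring key."""
    return item["key"][answer]

print(score_sjt(SJT_ITEM, "A"))  # -> 2
```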
Keep assessments short: TestGorilla research shows 92% of employers using multi-measure testing were more satisfied with the results. Multiple short assessments work better than one marathon 150-question battery. Aim for 20-30 minutes total.
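A simple way to hold that line is to budget candidate time explicitly. A sketch, with illustrative module names and durations:

```python
# Enforce a total time budget across multiple short assessments
# instead of one 150-question marathon. All durations are illustrative.
BATTERY = {
    "cognitive_screen": 8,       # minutes
    "typing_work_sample": 5,
    "situational_judgment": 10,
    "brief_behavioral": 5,
}
BUDGET_MINUTES = 30

total = sum(BATTERY.values())
assert total <= BUDGET_MINUTES, f"Battery runs {total} min, over budget"
print(f"Total candidate time: {total} minutes")  # -> 28 minutes
```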
Keep humans in the loop: When a candidate scores well on cognitive and skills tests but doesn't meet personality criteria, flag the application for human review rather than auto-rejecting it. Context matters. A candidate who's nervous during an assessment might be excellent once comfortable in the role.
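In code, the difference between auto-rejection and human review is a single branch. A sketch, with illustrative thresholds rather than validated cutoffs:

```python
# Route borderline cases to a recruiter instead of auto-rejecting them.
# The thresholds below are illustrative placeholders, not validated cutoffs.
def route_candidate(cognitive: float, skills: float, personality: float) -> str:
    """Return a routing decision for a candidate's component scores (0-100)."""
    if cognitive >= 70 and skills >= 70:
        if personality >= 60:
            return "advance"
        # Strong on the validated predictors, weak on personality:
        # a human reviews the file instead of the algorithm rejecting it.
        return "human_review"
    return "reject"

print(route_candidate(cognitive=85, skills=80, personality=55))  # -> human_review
```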
Offer reasonable accommodation: Candidates with disabilities are entitled to reasonable accommodation under the ADA. If your assessment platform doesn't provide it, you're in legal jeopardy. Period.
Audit for adverse impact: Regularly audit your assessment results by protected class. If you're rejecting significantly higher percentages of candidates with disabilities, neurodivergent candidates, or members of other protected groups, your assessment has disparate impact regardless of intent.
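The standard check here is the EEOC's four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, that's evidence of adverse impact. A minimal audit sketch, with illustrative counts:

```python
# Four-fifths (80%) rule audit: compare each group's selection rate
# to the highest group's rate. The counts below are illustrative.
applicants = {"group_a": 400, "group_b": 150}  # e.g., by disability status
selected   = {"group_a": 120, "group_b": 18}

rates = {g: selected[g] / applicants[g] for g in applicants}
benchmark = max(rates.values())

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({flag})")
# group_a: selection rate 30.0%, impact ratio 1.00 (ok)
# group_b: selection rate 12.0%, impact ratio 0.40 (ADVERSE IMPACT)
```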
The Gartner Standard
Gartner research provides a useful framework: recruiting functions that excel at workforce-shaping behaviors (including skills-based hiring) achieve a 24% increase in quality of hire, with new hires performing successfully 20% faster.
Notice what Gartner emphasizes: quality of hire measured by actual job performance, not algorithmic personality scores.
When 81% of employers now use skills-based hiring (up from 56% in 2022, according to TestGorilla) and 90% of them report a reduction in mis-hires, the market is clearly moving toward evidence-based assessment.
The question is whether you're implementing it correctly, with validated predictors of job performance, or poorly, with lengthy personality tests that filter out neurodivergent talent.
The Path Forward
I'm a strong advocate for data-driven hiring. Assessments, when designed correctly, dramatically improve hiring outcomes. The research backs this up unequivocally.
But I also believe that "data-driven" doesn't mean "abdicate judgment to algorithms."
When your platform auto-rejects candidates who score 80%+ on cognitive tests because of personality scores, you're not using data to make better decisions. You're using data to justify discriminatory outcomes you'd never defend if a human made them.
BPOs need high-volume hiring solutions that maintain quality. But quality doesn't mean algorithmic conformity. It means hiring people who can actually do the job, even if they think differently than your algorithm expects.
Your recruiting software should expand your talent pipeline, not systematically exclude qualified candidates based on how they answer personality questions.
If your current platform is doing the latter, it's time to ask whether you're really using evidence-based hiring or just expensive discrimination with scientific branding.
Get a Demo - See how Journeyfront's assessment platform predicts job performance without discriminating against neurodivergent candidates.