The setup: what role we were filling
The brief was deliberately generic — a mid-market B2B SaaS company looking for their first or second SDR. No fancy niche, no unusual requirements. Just a standard profile that most scaling companies would recognize.
Title: Sales Development Representative (SDR)
Company type: B2B SaaS, 50–200 employees
Target market: Mid-market / enterprise sales cycles
Comp: $50K–$65K base + uncapped commission
Location: Remote-first, US-based
We gave the system the same information a hiring manager would put in a job description. The goal: see what comes back when there's no curated database, no pre-screened pool, and no recruiter doing manual outreach.
The criteria we screened against were the six dimensions used in scoring: outbound experience, quota attainment data, company type, tenure, tool familiarity, and career progression.
The process: how AI sourced and screened
Most recruiting tools stop at sourcing — they find names and emails, then hand off to a human for everything else. We ran the full pipeline automatically, from initial discovery to ranked shortlist.
Broad sourcing across professional networks
The system searched across LinkedIn, job boards, and professional databases for candidates matching the base criteria. No pre-existing database — everything sourced fresh for this role.
Profile analysis and initial scoring
Each candidate profile was analyzed across six dimensions: outbound experience, quota data, company type, tenure, tool familiarity, and career progression. Candidates received an initial match score.
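To make the scoring concrete, here is a minimal sketch of how a weighted rubric over those six dimensions could produce an initial match score. The dimension names come from the pilot; the weights, the 0–100 scale, and the example candidate are hypothetical, not the system's actual parameters.

```python
# Hypothetical weighted rubric over the six screening dimensions.
# Weights and the 0-100 per-dimension scale are illustrative assumptions.
WEIGHTS = {
    "outbound_experience": 0.25,
    "quota_data": 0.20,
    "company_type": 0.15,
    "tenure": 0.15,
    "tool_familiarity": 0.15,
    "career_progression": 0.10,
}

def match_score(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each 0-100)."""
    return round(sum(WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in WEIGHTS), 1)

# Example candidate with made-up per-dimension scores.
candidate = {
    "outbound_experience": 90,
    "quota_data": 85,
    "company_type": 80,
    "tenure": 70,
    "tool_familiarity": 75,
    "career_progression": 65,
}
print(match_score(candidate))
```

A real system would derive the per-dimension scores from profile text rather than take them as inputs; the point here is only that a transparent weighted rubric yields a single comparable number per candidate.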
Signal enrichment
Top-scoring candidates were cross-referenced for additional signals: recent activity, public commentary on sales topics, any open-to-work indicators. This filters out stale profiles.
Shortlist generation with ranked scoring
The top tier was compiled into a structured shortlist with scores, key highlights, and an AI assessment note summarizing why each candidate was selected. Delivered as a shareable link.
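Mechanically, the shortlist step reduces to sorting scored candidates and keeping everyone above a cutoff. The sketch below assumes a 75-point threshold (consistent with the scores reported in the results section, but not a confirmed system parameter); the candidate records and highlights are invented for illustration.

```python
# Hypothetical shortlist builder: rank by score, keep candidates above a cutoff.
# The 75-point threshold and the sample pool are illustrative assumptions.
def build_shortlist(candidates: list[dict], cutoff: float = 75.0) -> list[dict]:
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return [c for c in ranked if c["score"] >= cutoff]

pool = [
    {"name": "Candidate A", "score": 88, "highlight": "140% of quota, 2 yrs outbound"},
    {"name": "Candidate B", "score": 62, "highlight": "inbound-only experience"},
    {"name": "Candidate C", "score": 79, "highlight": "SaaS SDR, Outreach + Salesforce"},
]
for c in build_shortlist(pool):
    print(c["name"], c["score"], "-", c["highlight"])
```

The ranked output, plus the per-candidate highlight and assessment note, is what gets packaged into the shareable shortlist link.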
No human reviewed the long list before shortlisting. No one cold-called candidates. No recruiter was involved in the sourcing phase. The entire process from role input to shortlist delivery ran without manual intervention.
The results
The numbers held up better than we expected. The filtration rate — from broad sourcing to final shortlist — was roughly 20%, which is in line with what good human recruiters achieve on a carefully curated search.
The shortlisted candidates averaged 75+ on our match scoring rubric, with the top tier clustering between 83 and 94. All 24 had verifiable outbound SDR experience in B2B SaaS, most had quota attainment data in their profiles, and all were flagged as currently active (not passive or stale).
"The filtration rate was ~20% — from 100+ candidates down to 24 interview-ready. That's not far off what a specialist recruiter delivers, but it happened in 48 hours, not 4–6 weeks."
The comparison against traditional recruiting is stark:
| Metric | Shortlist (AI) | Traditional Recruiter |
|---|---|---|
| Time to shortlist | 48 hours | 4–6 weeks |
| Cost per hire (recruiter fee) | $0 agency fee | $15,000–$20,000 |
| Candidates on shortlist | 24 | Typically 3–5 |
| Human time required | ~2 min to submit role | Weeks of interviews & briefings |
| Scoring transparency | Full 6-dimension scores | Recruiter's subjective opinion |
What good SDR candidates actually look like
The most interesting output from the pilot wasn't the shortlist itself; it was the patterns we found across the top scorers. Four traits consistently separated the 83–94 score range from the 60–74 range.
If you're reviewing SDR candidates manually, these four signals are the fastest filter. Most recruiters don't screen for them explicitly — which is why candidate pools feel noisy. Our system weights all four in the scoring rubric by default.