1. The setup: what role we were filling

The brief was deliberately generic — a mid-market B2B SaaS company looking for their first or second SDR. No fancy niche, no unusual requirements. Just a standard profile that most scaling companies would recognize.

The Role

Title: Sales Development Representative (SDR)
Company type: B2B SaaS, 50–200 employees
Target market: Mid-market / enterprise sales cycles
Comp: $50K–$65K base + uncapped commission
Location: Remote-first, US-based

We gave the system the same information a hiring manager would put in a job description. The goal: see what comes back when there's no curated database, no pre-screened pool, and no recruiter doing manual outreach.

The criteria we screened against:

1–3 years outbound SDR experience
B2B SaaS background (not SMB / e-commerce)
Quota attainment data or specific metrics mentioned
Familiarity with Outreach, Salesloft, or Apollo
Progression signal (promotions, quota increases)
Currently open to new opportunities

2. The process: how AI sourced and screened

Most recruiting tools stop at sourcing — they find names and emails, then hand off to a human for everything else. We ran the full pipeline automatically, from initial discovery to ranked shortlist.

Step 1: Broad sourcing across professional networks

The system searched across LinkedIn, job boards, and professional databases for candidates matching the base criteria. No pre-existing database — everything sourced fresh for this role.

Step 2: Profile analysis and initial scoring

Each candidate profile was analyzed across six dimensions: outbound experience, quota data, company type, tenure, tool familiarity, and career progression. Candidates received an initial match score.
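
The scoring step can be pictured as a weighted combination of the six dimensions. A minimal sketch, assuming a simple linear rubric; the dimension names come from this write-up, but the weights and combination rule are illustrative assumptions, not the production scoring logic:

```python
# Illustrative sketch of six-dimension match scoring.
# Weights are assumptions chosen to sum to 1.0; the real rubric may differ.
WEIGHTS = {
    "outbound_experience": 0.25,
    "quota_data": 0.20,
    "company_type": 0.15,
    "tenure": 0.10,
    "tool_familiarity": 0.15,
    "career_progression": 0.15,
}

def match_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into one 0-100 match score."""
    return round(sum(WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in WEIGHTS), 1)

# A hypothetical candidate: strong outbound history and tooling, shorter tenure.
candidate = {
    "outbound_experience": 90,
    "quota_data": 85,
    "company_type": 80,
    "tenure": 70,
    "tool_familiarity": 95,
    "career_progression": 75,
}
print(match_score(candidate))  # 84.0
```

A linear rubric like this is what makes the per-dimension scores reportable later; a missing dimension simply contributes zero rather than blocking the score.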

Step 3: Signal enrichment

Top-scoring candidates were cross-referenced for additional signals: recent activity, public commentary on sales topics, any open-to-work indicators. This filters out stale profiles.

Step 4: Shortlist generation with ranked scoring

The top tier was compiled into a structured shortlist with scores, key highlights, and an AI assessment note summarizing why each candidate was selected. Delivered as a shareable link.
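
The deliverable described above (score, highlights, assessment note per candidate) suggests a simple record shape. A sketch under stated assumptions; the field names are hypothetical, inferred from the description rather than taken from the actual system:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one shortlist entry, inferred from the described
# deliverable: a score, key highlights, and an AI assessment note.
@dataclass
class ShortlistEntry:
    name: str
    match_score: int              # 0-100 composite across the six dimensions
    highlights: list[str] = field(default_factory=list)
    assessment_note: str = ""     # summary of why the candidate was selected

entry = ShortlistEntry(
    name="Example Candidate",
    match_score=88,
    highlights=["118% of quota for 3 quarters", "Outreach + Apollo stack"],
    assessment_note="Strong outbound metrics with a named tool stack.",
)
print(entry.match_score)  # 88
```

Structuring the shortlist as records like this is what makes it renderable as a shareable, sortable link rather than a prose report.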

What we didn't do

No human reviewed the long list before shortlisting. No one cold-called candidates. No recruiter was involved in the sourcing phase. The entire process from role input to shortlist delivery ran without manual intervention.

3. The results

The numbers held up better than we expected. The filtration rate — from broad sourcing to final shortlist — was roughly 20%, which is in line with what good human recruiters achieve on a carefully curated search.

100+ candidates sourced in the initial pull, fully automated
24 candidates on the final shortlist, interview-ready (~20% filtration rate)
75+ average match score across shortlisted candidates (scale of 0–100)

The shortlisted candidates averaged 75+ on our match scoring rubric, with the top tier clustering in the 83–94 range. All 24 had verifiable outbound SDR experience in B2B SaaS, most had quota attainment data in their profiles, and all were flagged as currently active rather than passive or stale.

"The filtration rate was ~20% — from 100+ candidates down to 24 interview-ready. That's not far off what a specialist recruiter delivers, but it happened in 48 hours, not 4–6 weeks."

The comparison against traditional recruiting is stark:

Metric                        | Shortlist (AI)          | Traditional recruiter
Time to shortlist             | 48 hours                | 4–6 weeks
Cost per hire (recruiter fee) | $0 agency fee           | $15,000–$20,000
Candidates on shortlist       | 24                      | 3–5 typically
Human time required           | ~2 min to submit role   | Weeks of interviews & briefings
Scoring transparency          | Full 6-dimension scores | Recruiter's subjective opinion

4. What good SDR candidates actually look like

The most interesting output from the pilot wasn't the shortlist itself — it was the patterns we found across the top scorers. Four traits consistently separated the 83–94 score band from the 60–74 band.

📊 Specific quota data in their profile
Top candidates mentioned specific numbers — percentage of quota hit, number of meetings booked per week, pipeline generated. Vague language like "exceeded targets" scored lower.
Example: "Averaged 28 meetings/month, 118% of quota for 3 consecutive quarters"

📈 Upward progression within 12–24 months
The best candidates showed movement — either a promotion, a quota increase, or a move to a faster-growing company. Flat tenure without progression was a yellow flag.
Example: SDR → Senior SDR at 14 months, then AE track offered

🛠 Named their outbound stack
Candidates who listed specific tools (Outreach, Salesloft, Apollo, ZoomInfo, Gong) signaled they'd actually done the work. Generic "CRM experience" scored much lower.
Example: "Outreach sequences + Apollo for prospecting + Gong for call review"

🎯 B2B SaaS deal cycles, not SMB velocity
SDRs from enterprise/mid-market SaaS backgrounds outperformed those from high-volume SMB or e-commerce environments. The qualification habits are different, and it shows.
Example: "Booked discovery calls for $30K–$80K ACV deals, multi-stakeholder"

What this means for your search

If you're reviewing SDR candidates manually, these four signals are the fastest filter. Most recruiters don't screen for them explicitly — which is why candidate pools feel noisy. Our system weights all four in the scoring rubric by default.
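
If you want to apply the four signals as a manual pre-screen, they reduce to a quick text check. A minimal sketch, assuming you have profile text to scan; the keyword lists and patterns here are illustrative assumptions, not the weighted rubric the system actually uses:

```python
import re

# Hypothetical quick filter for the four signals described above.
# Keyword lists and regexes are illustrative, not the production rules.
TOOLS = ("outreach", "salesloft", "apollo", "zoominfo", "gong")

def four_signal_check(profile_text: str) -> dict[str, bool]:
    text = profile_text.lower()
    return {
        # 1. Specific quota data: a percentage or a meetings-per-period figure
        "quota_data": bool(re.search(r"\d+\s*%|\d+\s*meetings", text)),
        # 2. Upward progression: promotion or track-change language
        "progression": any(k in text for k in ("promoted", "senior sdr", "ae track")),
        # 3. Named outbound stack rather than generic "CRM experience"
        "named_stack": any(t in text for t in TOOLS),
        # 4. B2B SaaS deal-cycle language, not SMB velocity
        "b2b_cycles": bool(re.search(r"acv|multi-stakeholder|enterprise", text)),
    }

profile = ("Averaged 28 meetings/month, 118% of quota. Promoted to Senior SDR. "
           "Outreach + Apollo stack. Booked discovery for $30K-$80K ACV deals.")
print(four_signal_check(profile))
```

A candidate hitting all four checks is roughly what the 83–94 band looked like in this pilot; the scoring system weights these same signals rather than treating them as pass/fail.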