How to Find AI Jobs: A Job-Hunt Playbook
Most AI job hunts in 2026 fail at one of three predictable steps: candidates apply to the wrong tier of company for their evidence, send the same generic resume to dozens of openings, or run out of momentum after the first month with no plan for the second. The market is open, the volume is real, and the pay is competitive, but the search rewards a discipline that most candidates do not bring on day one. This is the playbook for the search itself, separate from the role-specific guides elsewhere in this hub. It covers the four discovery channels worth your time, what an AI-specific resume actually looks like in 2026, the recruiter outreach pattern that works, the portfolio that gets phone screens, the cadence of a productive 60-day search, and the negotiation specifics that the major employers expect you to know.
Table of contents
- The four discovery channels
- Building the AI-specific resume
- Recruiter outreach that works
- Portfolio that gets calls
- The first 60 days of an AI job hunt
- Negotiating an AI offer
- Frequently asked questions
- The bottom line
The four discovery channels
Where you actually find AI jobs in 2026 splits into four channels with very different yields and time costs. Most candidates over-invest in the lowest-yield channel and under-invest in the highest.
Channel 1: company careers pages directly. The channel that deserves the largest share of your time. Pick 30-50 target companies based on what they build, where their funding came from, and how their public technical writing reads. Bookmark each careers page, check weekly, and apply to the specific roles that match your evidence. The yield per application is far higher than on the aggregators because the postings are accurate and you are competing against a smaller, self-selected pool. The time cost is real, but it amortises over the search.
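Checking 30-50 careers pages every week is tedious enough that a small script helps. A minimal sketch in Python, under stated assumptions: the URLs and state-file name below are placeholders, and many careers pages render listings with JavaScript, in which case you would point this at the site's underlying jobs API instead. The idea is simply to hash each page, compare against last week's hashes, and report what changed.

```python
import hashlib
import json
import pathlib
import urllib.request


def page_fingerprint(html: bytes) -> str:
    """A stable fingerprint of a page's raw content."""
    return hashlib.sha256(html).hexdigest()


def changed_pages(current: dict, previous: dict) -> list:
    """Compare two {url: fingerprint} dicts; return URLs that are new or changed."""
    return sorted(url for url, fp in current.items() if previous.get(url) != fp)


def weekly_check(urls, state_file="careers_state.json"):
    """Fetch each careers page, diff against the saved state, report changes.

    `state_file` is an assumed local path; the first run reports every URL
    as changed because there is no prior state to compare against.
    """
    state_path = pathlib.Path(state_file)
    previous = json.loads(state_path.read_text()) if state_path.exists() else {}
    current = {}
    for url in urls:
        with urllib.request.urlopen(url, timeout=30) as resp:
            current[url] = page_fingerprint(resp.read())
    state_path.write_text(json.dumps(current))
    return changed_pages(current, previous)
```

Run `weekly_check([...])` against your target list on a fixed day each week; a non-empty result is your cue to open those pages and look at the new postings.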
Channel 2: warm referrals through your network. The highest-signal-per-application channel. A referral from any current employee, even a peer-level one, gets your application read by a human within 48 hours instead of going through algorithmic resume filters. The hit rate to phone-screen on a referred application is roughly 4-6x the hit rate on a cold application. Spend serious time building this channel: list every person in your extended network who works at any of your target companies, reach out individually, and ask for the referral specifically.
Channel 3: founder-posted threads on Hacker News. The monthly "Who is hiring" thread on Hacker News is one of the most reliable AI job sources we have tracked through 2024-26. Founder-posted, mostly real, often includes direct contact information, and reliably has fully-remote AI roles that do not appear on the major aggregators. Volume per month is small but the signal-to-noise ratio is excellent. Check it on the first business day of each month.
Channel 4: LinkedIn and aggregators. The highest-volume, lowest-signal channel. The volume is real, but the spam-and-ghost-posting ratio is high enough that this should not be your primary channel. Use it for opportunistic discovery and for letting strong inbound recruiter messages find you, not as the foundation of the search.
The single biggest mistake we see in candidate searches is treating Channel 4 as the primary effort. The time split that works in 2026: roughly 50% of search time on Channel 1, 30% on Channel 2, 10% on Channel 3, and 10% on Channel 4. Most candidates we have advised had the proportions roughly reversed when we met them.
Building the AI-specific resume
An AI-specific resume in 2026 differs from a generic technical resume in three ways that hiring managers explicitly screen for.
The summary statement is replaced with a one-line AI specialism. Instead of a paragraph summary, list one line at the top that names your specific AI specialism: "AI engineer, RAG and evaluation systems, shipped to 50K MAU" beats "Senior software engineer with 10 years of experience and a passion for AI". The first line tells the reviewer what bucket to put you in. Without it, the reviewer will guess, and the guess will be wrong half the time.
Each role bullet emphasises shipped AI work, with metrics. Generic engineering bullets do not signal AI fit. The specific bullets that do: "Built a retrieval-augmented support assistant against a 12K-document corpus; shipped to 8K monthly users with a 78% deflection rate against the prior keyword-search baseline." Numbers, scope, and outcome. Bullets without numbers are screened against bullets with numbers and lose.
A dedicated "AI projects" section near the top. Above the work history, a section listing 2-3 substantive projects with one-line descriptions and links. The links matter more than the descriptions; reviewers click through. Projects that do not have a link (a personal repo with no readme, a private codebase, a course assignment) score worse than no project at all because the absence of a link reads as "cannot show the work".
The standard resume mistakes that hurt AI applications: leading with education for senior candidates (move it to the bottom), listing every framework you have ever touched (top three relevant frameworks only), and including every job from a fifteen-year career (the last three relevant roles only, with a one-line earlier-career summary). Reviewers spend 20-40 seconds on the first pass; the resume must read clearly in that window.
Recruiter outreach that works
Cold recruiter outreach has a low base rate, but a disciplined version works in 2026 if your evidence is strong. The pattern that produces responses:
Pick the right person. Internal recruiters at the company you want, not external agency recruiters. The recruiter's title at the company should match your role family (an AI engineering recruiter, not a generic SWE recruiter). LinkedIn search filters narrow this fast.
One specific role mention. Reference a specific open posting on the company's careers page, with the role title and posting URL. Generic outreach ("interested in AI roles at your company") reads as a low-effort scattergun and is usually ignored.
One specific evidence link. Include exactly one link to your strongest piece of public work, with one sentence explaining why it is relevant to the role. The link should be your single best evidence; saving it for later is a mistake because there often is no later.
One sentence on availability. Tell the recruiter when you can interview and what your time-zone availability looks like. Recruiters are time-pressed; specific, actionable details get you scheduled faster than vague enthusiasm.
Total message length: 4-6 sentences. Longer outreach has a worse response rate; shorter outreach lacks the specifics that make it actionable. Send the message as an InMail or via the company's careers page contact form, not via cold email to a guessed address. The specific channel matters less than getting all four elements right.
Portfolio that gets calls
The portfolio that produces phone screens in 2026 has four properties that distinguish it from the tutorial portfolios most candidates put together. We have tracked this across roughly 200 candidate searches.
| Property | What it looks like | What it is not |
|---|---|---|
| Solves a real problem | A tool you actually use, or one that has 50+ real users | Tutorial reproductions of public examples |
| Shipped to users | Public URL, live, working, monitored | A repo with a screenshot but no live link |
| Includes evaluation | Public eval methodology, test set, scored regression history | "Tested by hand and it works" |
| Has a written walkthrough | 2,000-4,000 word engineering post on design choices and failure modes | A README that just says "to run, type X" |
The single most underweighted element on most candidate portfolios in 2026 is the written walkthrough. Engineers at the labs and at top startups read engineering blogs heavily, and a substantive written piece is often what triggers a referral conversation that turns into a phone screen weeks later. The piece does not have to be perfect; it has to be specific, technical, and honest about what did not work.
Two examples of portfolio approaches that hire well in 2026: a candidate who built a small AI-native tool used by 200 paying customers, with a public eval methodology and a long-form blog post on retrieval design choices, was hired by an Anthropic-tier startup six months after the post went up. A candidate who contributed three non-trivial features to a popular open-source agent framework, each with thoughtful PR discussions, was hired into a frontier-lab AI engineering role nine months later. Both candidates skipped the "ten different demos" pattern in favour of a small number of substantive projects with public artefacts.
The first 60 days of an AI job hunt
A productive AI job hunt has a cadence that most candidates do not naturally produce. The 60-day pattern that works:
Days 1-7: target list and resume. Pick 30-50 target companies. Research each and write down what they build, who funds them, and one sentence on why you want to work there. Update the resume to the AI-specific format above. Get the resume reviewed by one senior person whose feedback you trust.
Days 8-21: portfolio polish and warm outreach. Audit your public work. If your strongest piece is more than 12 months old, write something new. Reach out to every person in your network who works at one of your target companies; aim for 15-25 specific conversations.
Days 22-35: applications begin in disciplined batches. Apply to 5-10 target companies per week, one application at a time, each tailored to the specific role. Track applications in a simple spreadsheet (company, role, date applied, contact, status, next step). Schedule recruiter screens as they come in.
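The tracking spreadsheet can be as simple as a CSV file. A minimal sketch with the columns listed above; the filename, status values, and default next step are assumptions to adapt to your own workflow:

```python
import csv
import datetime
import pathlib

# Column set from the playbook above; extend as needed.
COLUMNS = ["company", "role", "date_applied", "contact", "status", "next_step"]


def log_application(path, company, role, contact="",
                    status="applied", next_step="follow up in 7 days"):
    """Append one application row, creating the file with a header if needed."""
    p = pathlib.Path(path)
    new_file = not p.exists()
    with p.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "company": company,
            "role": role,
            "date_applied": datetime.date.today().isoformat(),
            "contact": contact,
            "status": status,
            "next_step": next_step,
        })


def open_loops(path):
    """Rows that still need action: anything not yet rejected or closed."""
    with open(path, newline="") as f:
        return [r for r in csv.DictReader(f)
                if r["status"] not in ("rejected", "closed")]
```

Reviewing `open_loops()` once a week tells you at a glance which conversations are going stale, which is the whole point of tracking.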
Days 36-49: interview loops and reapplications. Interview prep specifically for each company; do not generic-prep. Reapply to roles you missed in the first batch as new postings appear. Continue warm outreach to companies you cannot reach via cold application.
Days 50-60: offers, decisions, and re-strategy. By day 50, you should have at least one offer or one strong loop in progress. If not, the strategy needs adjustment: usually it means the target tier is too high for the current evidence, and the right move is to add 1-2 mid-tier targets while continuing the higher-tier loops.
The two patterns that fail: applying to 100 companies in week one with no targeting, and applying to 5 companies in week one and waiting for them to decide before doing anything else. Both lose. The disciplined batching pattern wins because it produces parallel loops at the right level of attention.
Negotiating an AI offer
AI offer negotiation in 2026 has specific patterns that differ from generic SWE negotiation. The major employers expect candidates to know how to negotiate AI offers, and candidates who do not are visibly under-paid relative to candidates who do.
Always negotiate. The base rate of pay increase from a single round of negotiation in 2026 is roughly 8-15% of total comp, and the recruiter has authority to move within an internal band that is typically 20-30% wide. There is essentially no downside to a polite, well-framed negotiation; recruiters expect it and the candidates who skip it are the ones who feel under-paid two years later.
Negotiate the equity, not just the base. Equity is usually the largest line item in AI offers, and the recruiter has more flexibility on equity than on base. Asking for an extra $50K of base is harder than asking for an extra $100K of four-year equity, and at a company whose valuation is rising, the equity ask is often worth more by the time it vests.
Ask explicitly about refresh policy. Equity refresh grants (additional grants given annually after the initial four-year grant) are now standard at frontier labs and major AI companies. The recruiter rarely brings this up; ask directly, and ask whether the refresh is calibrated against the original grant value or against current market.
For private companies, ask about secondary sale windows. Frontier labs and late-stage private companies often have periodic windows when employees can sell some of their vested equity to outside investors. The frequency, the cap, and the historical pricing methodology are all consequential and often negotiable as part of the package.
For OpenAI specifically, understand PPU mechanics. The PPU instrument is bespoke and the recruiter's first PPU number is rarely their best number; ranges of 30-60% upside have been reported. We discuss this in detail in our OpenAI hiring guide.
The standard negotiation move that works in 2026: a competing offer or strong recruiter interest from a peer-tier company. If you do not have a competing offer, a serious recruiter conversation that has reached the on-site stage at a peer company can serve the same purpose. The negotiation framing should be calm, specific, and oriented around what would make the offer easier to accept rather than around the company being unfair.
Frequently asked questions
How long does an AI job hunt take in 2026?
Median time from start to written offer is roughly 8-14 weeks for candidates with strong existing AI evidence, and 4-9 months for candidates building evidence during the search. The variance is large; some candidates close in three weeks, others in six months. The variables that predict faster searches are: a tight target list, strong existing portfolio with public artefacts, an active referral network, and disciplined application cadence.
Should I take a counter-offer from my current employer?
Usually not, with caveats. The structural problem with counter-offers is that they reveal the employer was paying you below market until they had to match an external offer; the trust dynamic is usually permanently affected. Empirical data on tech counter-offers supports leaving: roughly 70-80% of candidates who accepted a counter-offer left within 18 months anyway, often at lower pay than the original external offer would have produced. The exception: a counter-offer that includes a meaningful new role or scope (not just a pay match) is worth taking seriously.
How do I evaluate offers from companies of very different stages?
The cleanest approach is to convert each offer into expected total comp over four years, valuing the equity under three discrete scenarios (worth zero, worth its current paper value, worth twice its current paper value) and weighting each scenario by a probability you set yourself. The wider the equity distribution, the more weight you should put on the zero scenario. Late-stage and public-company equity is closer to a known quantity; early-stage equity is closer to a lottery ticket and should be valued accordingly. Cash-heavy offers from non-tech employers are often more valuable than candidates expect once you discount uncertain equity.
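The conversion is simple arithmetic. A minimal sketch of the three-scenario valuation; every dollar figure and probability weight below is invented for illustration, not a market benchmark, and the weights are the judgment call you have to make yourself:

```python
def expected_offer_value(base, bonus, equity_paper,
                         p_zero, p_paper, p_double, years=4):
    """Expected total comp over `years`, valuing equity under three scenarios:
    worth nothing, worth its current paper value, or worth twice paper value.
    The three probabilities must sum to 1."""
    assert abs(p_zero + p_paper + p_double - 1.0) < 1e-9, "weights must sum to 1"
    cash = (base + bonus) * years
    expected_equity = equity_paper * (0 * p_zero + 1 * p_paper + 2 * p_double)
    return cash + expected_equity


# Hypothetical late-stage offer: equity is closer to a known quantity,
# so the zero scenario gets little weight.
late = expected_offer_value(base=250_000, bonus=25_000, equity_paper=600_000,
                            p_zero=0.1, p_paper=0.7, p_double=0.2)

# Hypothetical early-stage offer: bigger paper number, but the wide
# distribution means heavy weight on the zero scenario.
early = expected_offer_value(base=180_000, bonus=0, equity_paper=1_200_000,
                             p_zero=0.6, p_paper=0.25, p_double=0.15)
```

Under these invented weights the smaller-looking late-stage package comes out ahead, which is exactly the kind of result the paper numbers alone hide.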
What if I get rejected from every company on my target list?
Treat it as evidence that the target tier was too high for the current evidence, not as a reason to stop. Add 5-10 mid-tier companies to the list (one tier down from the original), continue applications there, and use the time to build one or two more substantive public artefacts that bring the original tier into reach. Most candidates we have advised who initially failed at their target tier eventually landed there 6-12 months later after building more evidence. The single worst response is to pause the search entirely; the cadence is what produces results.
How do I handle gaps in employment when applying to AI roles?
Be honest and specific. A six-month gap during which you built one substantive AI project and wrote about it reads better than three years of continuous unrelated employment. Hiring managers in AI specifically understand the value of focused study time and self-directed building. The presentation matters: list the gap as "independent AI engineering work" with a specific deliverable, not as "sabbatical" or "personal time".
Are AI job interviews scheduled to test on weekends?
Sometimes for take-homes, almost never for live interviews. Take-home assessments at most major AI employers are sent on a Friday with a Monday or Tuesday return, which functionally means weekend work for most candidates. Live interview rounds are scheduled in business hours. If you cannot accommodate weekend take-home work, communicate this explicitly with the recruiter; some companies will adjust the timeline.
Should I use LinkedIn Premium or other paid services?
LinkedIn Premium has marginal value for AI job searches in 2026 because the highest-yield channels (company careers pages, warm referrals, Hacker News threads) do not require it. Paid resume-review services have variable quality; the better signal is one in-network senior person reviewing your resume directly. Paid "curated job board" services we have tested in 2025-26 produced essentially no offers; we recommend skipping these.
The bottom line
An AI job hunt in 2026 rewards a specific discipline most candidates do not bring on day one. Pick a focused target list of 30-50 companies, restructure your resume around an AI-specific format with shipped-work bullets and a dedicated projects section, build a small portfolio with substantive public artefacts including written walkthroughs, run disciplined application batches over 60 days with parallel warm outreach, and negotiate every offer including the equity and refresh policy. The market is open and the pay is competitive, but the candidates who find the right roles are the ones running the search as a structured project, not as a series of one-off applications. Read the role-specific guides in our AI careers hub to choose the right tier of company for your current evidence, and the broader market context in our AI careers pillar.
Last updated: May 2026
