Google DeepMind Careers: Inside the Hiring Process

Google DeepMind is the only frontier lab whose hiring is best understood through two completely different lenses at once. The London-headquartered research organisation that merged with Google Brain in 2023 still runs the most research-heavy interview process in the industry, with paper-discussion rounds for engineers and a meaningful PhD preference. The integrated Google AI engineering org that emerged from the same merger now operates at a scale none of the other frontier labs match: thousands of engineers, dozens of product surfaces, and a very different funnel that looks much more like Google's standard SWE loop. Which version of the company you encounter depends entirely on which job description you applied to. This guide covers how to read those distinctions, what the interview loop actually contains in 2026, the realistic PhD-vs-no-PhD calculus, and the compensation bands.

DeepMind vs Google AI vs Google DeepMind

Until April 2023, three things existed: DeepMind, the London-based research lab Google had acquired in 2014; Google Brain, the Mountain-View-based research organisation inside Google itself; and Google AI, the umbrella name for AI research efforts across the company. The April 2023 merger combined DeepMind and Google Brain into a single organisation called Google DeepMind. Google AI continues to exist as a brand and umbrella, but the unified research and applied AI organisation is the one a candidate now interviews with.

The unification was not a paper exercise. By 2026, Google DeepMind is the largest single AI organisation in tech by headcount, with offices in London (still HQ for research), Mountain View, Zurich, Paris, New York, Tel Aviv, and Tokyo. The Gemini model family, Gemini-powered Google products (Search, Workspace, Cloud Vertex AI), and frontier research efforts like AlphaFold and Project Astra all sit inside the same org chart. The merged structure has two practical consequences for candidates: research roles still feel research-heavy, while engineering roles increasingly look and feel like Google SWE roles with an AI specialty, and the interview loops differ accordingly.

Reading the job description carefully matters more here than at any other lab. A "research scientist" posting on the careers page will run a research-style loop with paper discussions; an "applied software engineer, Gemini" posting will run a Google SWE loop with one AI-specific round. The titles are not interchangeable.

Research vs engineering tracks

The cleanest cut for candidates is research vs engineering, and within engineering, research engineering vs applied engineering vs platform engineering.

Research scientist roles produce papers, propose new architectures, and run multi-month research programmes. Approval to publish is not automatic but is the cultural default. The org maintains a strong publication track record at NeurIPS, ICML, ICLR, and Nature for biology-adjacent work. Headcount within research scientist tracks is in the low hundreds globally, with the densest cluster still in London.

Research engineer roles pair with research scientists and own the experimental infrastructure: training runs, dataset curation, evaluation harnesses, model exports. Headcount is roughly 2-3x that of research scientists, and the role is more accessible to non-PhD candidates. This is the highest-impact track for a strong engineer who wants to do research-adjacent work without the PhD path.

Applied engineering roles ship Gemini-powered features into Google products. Search, Workspace, Cloud, Android, and Pixel all have applied AI engineering teams. Headcount is large (thousands), the loop is closer to the standard Google SWE loop, and the work is product-shipping rather than novel research.

Platform engineering roles build the infrastructure underneath Gemini training and inference: TPU systems software, distributed training frameworks, inference serving stacks. The bar on systems depth is high, the work is closer to a hyperscaler infra role than to ML, and compensation is among the highest at the company because the talent pool is small.

Across the three engineering tracks, research engineering pays best for candidates with research-adjacent backgrounds. Applied engineering pays best for candidates with strong Google SWE-style profiles. Platform engineering pays best for candidates with hyperscale infra backgrounds, regardless of AI experience.

The PhD vs no-PhD reality

The PhD-vs-no-PhD question is more loaded at Google DeepMind than at OpenAI or Anthropic. The honest position in 2026:

Research scientist roles effectively require a PhD. A handful of exceptions exist (candidates with sufficient first-author publications and a strong research record built outside academia), but they are exceptions. If you do not already have a PhD or a publication record that would justify one, applying to research scientist roles is unlikely to clear the screen.

Research engineer roles do not require a PhD, but strong research literacy reads favourably. Candidates who can discuss recent papers in detail, who have implemented or extended methods from papers in their own work, and who show genuine interest in the team's research questions score visibly higher. A pure engineering candidate without research interest can still be hired into the role, but the bar on engineering quality is correspondingly higher to compensate.

Applied and platform engineering roles do not weight a PhD at all. The loop reads as a Google SWE loop with one ML-specific round. Candidates from Google, Meta, Apple, Stripe, and similar companies regularly succeed in these tracks without any research background.

For non-PhD readers, the practical implication: target research engineering or applied engineering. The realistic outcome of applying to research scientist roles without a PhD is a polite decline at the recruiter screen, regardless of how strong the engineering profile is. We discuss the broader path question in our AI careers pillar.

The interview loop

Google DeepMind's loop sits between OpenAI's compressed three-week process and Anthropic's drawn-out ten-week process. Five to ten weeks of elapsed time is typical, depending on track. The structure differs by track but the components are recognisable.

Research scientist loop: recruiter screen, two technical screens (one ML depth, one general technical), a paper-discussion interview where the candidate presents one of their own papers and is challenged on the methodology, an on-site of four to five interviews including research-track-fit, a hiring committee, and an executive review. Total: 8-10 weeks.

Research engineer loop: recruiter screen, one or two technical screens, a paper-discussion interview (lighter than the research scientist version, often discussing a paper from the team rather than the candidate's own), an on-site of four to five interviews mixing engineering and research literacy, hiring committee. Total: 6-8 weeks.

Applied engineer loop: recruiter screen, one or two technical screens (algorithmic coding plus an ML-specific round), an on-site of four to five interviews including system design, algorithms, ML system design, and Googleyness/leadership, hiring committee. Total: 5-7 weeks. Closely resembles the standard Google SWE loop with one AI-specific overlay.

The signal that distinguishes successful candidates in the paper-discussion round, in our hiring-manager interviews, is the ability to identify what is novel and what is incremental in a paper, and to articulate the specific weakness in the methodology. Candidates who treat the paper as authoritative and only discuss its strengths score notably worse than candidates who can articulate one or two genuine concerns with the work, even if those concerns are minor. Scientific taste is the underlying signal.

Compensation in 2026

Google DeepMind compensation is structured as Google compensation: cash base, target bonus, and Google stock units (GSUs) vesting over four years. There are no PPUs of the kind OpenAI grants and no privately held equity to manage. The simplicity is a structural advantage when comparing offers, and the GSUs are immediately liquid.

| Track | Level | Base salary (USD) | GSU value (4-year) | Annualised total comp |
| --- | --- | --- | --- | --- |
| Software engineer (applied) | L4 | $210K | $320K | $320K |
| Software engineer (applied) | L5 | $280K | $680K | $480K |
| Software engineer (applied) | L6 | $340K | $1.4M | $730K |
| Software engineer (applied) | L7 | $400K | $2.4M | $1.05M |
| Research engineer | L5 | $300K | $800K | $520K |
| Research engineer | L6 | $360K | $1.6M | $790K |
| Research scientist | L5 (PhD entry) | $340K | $1.0M | $610K |
| Research scientist | L6 | $420K | $2.0M | $960K |
| Research scientist | L7 | $500K+ | $3.5M+ | $1.4M+ |
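The annualised figures in the table combine base salary, target bonus, and a straight-line four-year GSU vest. A minimal sketch of that arithmetic, assuming a hypothetical 15% target bonus rate (actual Google bonus targets vary by level and are not stated in this guide):

```python
def annualised_total_comp(base: float, gsu_grant: float,
                          bonus_rate: float = 0.15,
                          vest_years: int = 4) -> float:
    """Annualised comp = base + target bonus + straight-line GSU vest.

    bonus_rate is a hypothetical placeholder, not a published Google
    figure; swap in the rate quoted in an actual offer letter.
    """
    return base + base * bonus_rate + gsu_grant / vest_years

# L5 applied-SWE band from the table: $280K base, $680K GSUs over 4 years.
print(f"${annualised_total_comp(280_000, 680_000):,.0f}")
```

Run against the L5 applied-SWE row, this gives roughly $492K, in the same ballpark as the $480K annualised figure in the table; the residual is the difference between the assumed and actual bonus rate.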

The London-based bands run roughly 70-80% of the US bands when converted to local-currency total comp, with one important quirk: London-based GSUs are paid out in USD-denominated stock, so the comp tracks USD even when the base salary is in GBP. London engineers benefit from this when the dollar strengthens.

Google DeepMind holds two structural advantages over OpenAI and Anthropic on compensation. First, the equity is immediately liquid: there are no secondary-market windows to navigate and no PPU complexity. Second, Google's level system is well mapped to the broader industry, so using an offer at Google DeepMind to anchor a counter-offer elsewhere is mechanically easier than doing the same with an OpenAI PPU package.

The structural disadvantage: at frontier-lab compensation peaks (L7+ research scientist, L7 software engineer), Google DeepMind pays 15-25% less than the equivalent senior offer at OpenAI. Senior candidates negotiating between the two often see this gap; some prefer the liquidity at Google over the higher headline at OpenAI. Compare across employers in our AI careers compensation pillar.

Internal mobility from Google

Internal Googlers have a meaningful mobility advantage. Google DeepMind absorbs hundreds of internal transfers per year from Google's product engineering organisations, bypassing the external interview loop entirely or replacing it with a single team-fit conversation. Engineers at Google with even modest AI experience and good performance ratings have a clearer path into Google DeepMind than equivalently strong external candidates.

For external candidates, this has two implications. First, it raises the bar at the external loop, because internal mobility absorbs much of the natural "medium-strong" candidate pool. Second, joining Google as an applied SWE first and transferring after 12-18 months has been a viable indirect path for candidates who want Google DeepMind specifically but cannot clear its external loop. This path takes longer but the cumulative success rate is higher than two cycles of external Google DeepMind applications. We discuss similar two-step strategies in our job-hunt playbook.

Frequently asked questions

Is DeepMind still based in London?

Yes. London remains the headquarters for the research organisation, with Mountain View as the secondary research hub and roles distributed across Zurich, Paris, New York, Tel Aviv, and Tokyo. London is the strongest single hiring centre, particularly for research scientist and research engineer roles. UK Skilled Worker visa sponsorship is well-established for candidates who do not hold UK work authorisation.

How does Google DeepMind compare to Google AI on the careers page?

"Google DeepMind" is the actual organisational name as of the 2023 merger. "Google AI" remains an umbrella brand for AI work across Google, including parts of Google DeepMind but also some product-side AI work outside the core research and applied AI org. For most candidates, the relevant org is Google DeepMind specifically; the careers page filters this clearly when you select "Google DeepMind" as the team.

Does Google DeepMind hire for AI safety roles?

Yes, in two organisations: the AGI Safety team within research, and the Responsible AI / model governance teams within applied. Both have grown substantially through 2024-26, partly driven by EU AI Act compliance work. The research-side AGI Safety roles favour candidates with formal ML or alignment research backgrounds; the applied-side Responsible AI roles take a wider range of backgrounds. Salary in both runs at parity with engineering, which has not always been true historically.

Can I move from research scientist to research engineer or vice versa?

Internal mobility between research tracks is supported but uncommon. Most engineers who try this in their first 12-24 months find that the role expectations and the cultural defaults of the two tracks are different enough that the move requires a deliberate change in working pattern. Candidates who do move successfully tend to do so after building credibility in their first track for 18-24 months. Plan for the role you are interviewing into, not the role you might want to switch to.

What is the work-life balance like?

Better than at OpenAI or Anthropic by most accounts. Research scientists report relatively standard 40-50 hour weeks with seasonal intensification around paper deadlines. Engineering teams shipping Gemini-powered product features report higher-pressure cycles around major model releases, but the baseline is sustainable. London teams in particular have a noticeably more conventional work pattern than San Francisco-based frontier-lab teams.

Does Google DeepMind sponsor visas for the US offices?

Yes, including H-1B and O-1 sponsorship through Google's standard global mobility process. The internal mobility team is among the most experienced in the industry. The bottleneck remains the H-1B cap lottery rather than employer willingness. Many strong candidates who could not clear the H-1B lottery have used the London office as a faster route into the company, then transferred internally to a US office after the relevant work-authorisation pathway opened.

How many publications do I need to be competitive for research scientist?

For an entry-level research scientist coming directly from a PhD, two to four first-author papers at NeurIPS, ICML, ICLR, or equivalent venues is the typical bar. The papers should show a coherent research direction rather than scattershot one-off contributions. For senior research scientist roles, a multi-year publication record with clear impact (citation volume, downstream work building on the candidate's research) matters more than count.

The bottom line

Google DeepMind in 2026 is two organisations under one name. The research organisation continues to run a research-heavy hiring process with a meaningful PhD preference and a paper-discussion round that screens for scientific taste. The applied AI engineering organisation runs a much larger Google SWE-style loop with a single AI-specific round. Read the job description carefully and target the right track for your background. London is the strongest single hiring centre, and the lab's work-life balance and immediate stock liquidity are structural advantages over OpenAI and Anthropic. If you are a non-PhD candidate, target research engineering or applied engineering rather than research scientist; the realistic odds make this the right call. Internal Google mobility into Google DeepMind is a viable indirect path for candidates who would not clear the external loop. Compare the lab against alternatives in our OpenAI hiring guide and the broader market in our AI careers hub.

Last updated: May 2026