How to Learn AI in 2026: A Complete Roadmap

The honest answer to "how do I learn AI" depends on a question almost no roadmap asks first: what do you plan to do with it? Someone who wants to build models from scratch and someone who wants to sell them to enterprises follow different curricula, and the most common reason people stall after three months is that they picked a roadmap built for a different person. The good news is that in 2026 the resources are abundant and most of the best ones are free. The bad news is that abundance is the new obstacle. This roadmap cuts through the noise. It splits learners into three honest tracks, gives each a 90-day plan with named courses and project milestones, names the certifications worth your money and the ones that aren't, and tells you which platforms deserve a paid subscription. If you finish this article and still don't have a next step, the problem isn't AI, it's that you haven't picked which version of "learning AI" you actually want.

Why the standard 'learn AI' advice is wrong in 2026

Most "learn AI" guides written before 2024 told you to start with linear algebra, matrix calculus, and a refresher on probability. That advice has not aged well. It was tuned for a world where building useful AI meant training your own models from labeled data. In 2026, the dominant skill is no longer training, it is wielding pre-trained foundation models effectively. The change in starting point is real and the implication is uncomfortable for traditionalists: you can do useful work in AI now without ever solving a partial derivative.

This does not mean math is irrelevant. If you intend to do research, fine-tune transformer architectures, or work on the systems that train new foundation models, you still need the math. But that population is small. The much larger population (applied developers, product builders, analysts, marketers, operations leads) is doing valuable work with APIs, retrieval pipelines, agent frameworks, and prompt patterns. The skills that matter for them are shipping software, evaluating outputs, designing prompts, choosing the right model, and reasoning about cost and latency.
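That last skill, reasoning about cost, is just arithmetic on token counts, and it is worth internalising early. The sketch below is a back-of-envelope model; the per-million-token prices in the example are hypothetical placeholders, not any provider's real rates.

```python
# Back-of-envelope cost model for an API-based feature.
# The prices used below are HYPOTHETICAL -- check your provider's
# current per-token rates before trusting any number.

def estimate_cost(prompt_tokens: int, output_tokens: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one request, given per-million-token prices."""
    return (prompt_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# Example: a 2,000-token prompt and a 500-token answer at assumed rates
# of $0.50 / $1.50 per million input / output tokens, 10,000 calls a month.
per_call = estimate_cost(2_000, 500, price_in_per_m=0.50, price_out_per_m=1.50)
monthly = per_call * 10_000
print(f"per call: ${per_call:.6f}, monthly: ${monthly:.2f}")
```

At those assumed rates the example works out to $0.00175 per call, about $17.50 a month. Running this arithmetic before building is the difference between a feature that ships and one that gets killed by the first invoice.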

The other piece of broken advice is the order of operations. Old roadmaps started with theory and ended with projects. The reverse works far better in 2026. Build something tiny on day one, observe what breaks, then learn the theory that explains the failure. The OpenAI Cookbook, Anthropic's documentation, and the LangChain quickstart all assume you'll learn by doing. They are right. Two weeks of building and reading errors will teach you more about how language models actually behave than two months of lectures.

Steelman the other view: a strong theoretical foundation does compound over decades. People who took the long path twenty years ago in machine learning are now the people running labs at Google DeepMind and Anthropic. That career exists. It just isn't the career most readers are after, and most readers will move faster by inverting the order, projects first, theory when needed.

Three paths in: technical builder, applied user, manager/leader

Three distinct learner types use the same word, "AI", to mean three different things. The first step in any honest roadmap is to admit which one you are.

The technical builder wants to write code that calls models, trains models, fine-tunes models, or wires up agent systems. They will read documentation, read source code, debug API responses, and care about token economics. Their endpoint is an engineering job: ML engineer, AI application developer, MLOps, prompt engineer, AI research engineer. The right curriculum is heavy on coding, light on slide-deck theory.

The applied user wants to use AI to do their existing job better. They are marketers, lawyers, analysts, writers, designers, teachers, customer support leads, and operations managers. They will not write Python. They will configure tools, design workflows, evaluate outputs, and chain prompts. Their endpoint is a 30 to 70 percent productivity gain in their current role. The right curriculum is heavy on tool fluency and prompt design, light on code.

The manager or leader wants to make defensible decisions about AI in their organisation: when to buy, when to build, when to wait, which vendor, what risk, which use case to fund. They will not write code or design prompts at scale. Their endpoint is good judgment. The right curriculum is heavy on case studies, vendor analysis, ethics, regulation, and economics, and light on technique.

Most failed self-study attempts come from picking the wrong path. A marketer who takes a linear algebra course quits because nothing she learns connects to her work. A founder who takes a "ChatGPT for productivity" course can't evaluate the AI startup he's about to fund. Pick the path that maps to your endpoint. Be honest about which one that is.

| Path | Best for | Time to first useful skill | End-state job |
| --- | --- | --- | --- |
| Technical builder | Developers, engineers, technical founders | 2-4 weeks (first deployed app) | AI engineer, ML engineer, applied scientist |
| Applied user | Marketers, analysts, writers, lawyers, ops | 2-3 weeks (first workflow saving hours/week) | AI-augmented version of current role |
| Manager/leader | PMs, VPs, founders, board members | 4-6 weeks (first defensible vendor decision) | Decision-maker who isn't fooled |

If you genuinely span two paths (many founders do), pick the one closer to your weakness. A technical founder learning AI for the first time should follow the manager track to fill the strategic gap, then pick up the technical work as needed.

One specific failure pattern is worth naming, because almost every self-described "AI learner" falls into it. The pattern is to start as a manager-track learner ("I just want to understand what's happening"), drift into reading technical content because the technical content sounds smarter, then abandon both because the technical content is over your head and the manager content seems shallow by comparison. The cure is to commit to your path in writing on day one. Not "I'll see how it goes" but a written commitment, ideally to another person who will check on you in week six. Path commitment beats path optimisation. The wrong path completed is worth more than the right path abandoned.

One more honest note. The lines between paths are blurrier than the table suggests. An applied user who gets fluent enough may end up writing small Python scripts after all. A technical builder may discover that they enjoy the strategy work more than the engineering. A manager who gets deep into evaluations may, three years later, be doing prompt engineering full-time. The labels are useful for the first 90 days. After that, follow the work that makes you most useful to the people you serve.

The 90-day beginner roadmap (week-by-week)

Ninety days is enough time to go from zero to genuinely productive on any of the three paths. Less than that and you are still memorising vocabulary. More than that and you are procrastinating. The plan below is week-by-week for the technical builder, with parallel notes for the other tracks.

Weeks 1–2: Orientation and first hands-on contact. Read the OpenAI and Anthropic API quickstarts. Sign up, get an API key, run the hello-world request. Read Andrej Karpathy's "Intro to Large Language Models" talk on YouTube once and his "Let's build GPT from scratch" if you want depth. Build a tiny script that takes a question and returns an answer. That is week one. In week two, modify it to take a long document and answer questions about it. That is your first retrieval-augmented generation (RAG) system, even if you don't call it that yet.
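The week-two exercise can be sketched without any API key at all, because the retrieval half of RAG is just "find the relevant chunks, paste them into the prompt". The version below uses naive keyword scoring where a real system would use embeddings, and leaves the model call as the one-line request from week one; all names are illustrative.

```python
# Minimal retrieval-augmented prompting, sketched without a real model call.
# Keyword counting stands in for embedding similarity; the pipeline shape
# (chunk -> score -> select -> prompt) is the same either way.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score_chunk(chunk_text: str, question: str) -> int:
    """Naive relevance score: how many question words appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk_text.lower().split() if w in q_words)

def build_prompt(document: str, question: str, top_k: int = 3) -> str:
    """Pick the top_k most relevant chunks and paste them above the question."""
    chunks = chunk(document)
    best = sorted(chunks, key=lambda c: score_chunk(c, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The final step -- sending build_prompt(...) to a chat endpoint --
# is the same hello-world API call you made in week one.
```

When this version answers badly, you will know exactly which half failed, retrieval or generation, and that habit of isolating the failing stage is the real lesson of week two.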

Weeks 3–4: A real project, end to end. Pick a problem that bothers you in real life (sorting your inbox, summarising research papers, generating standups from your commit log) and build a working tool. Deploy it somewhere. Streamlit, Vercel, or a Telegram bot will do. Constraint: it must be used by a human other than you for a week. This forces you to handle real-world inputs, weird edge cases, and the small ego shock of seeing a stranger break your prompt in 30 seconds.

Weeks 5–6: Evaluations and reliability. Read the section on evals in the OpenAI Cookbook. Write 50 test cases for your week-3 project. Find the failure rate. Try to reduce it by half through prompt iteration, model swap, or retrieval improvements. This is the work that separates hobbyists from people who get hired. Most beginners never do it.
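The eval loop itself is unglamorous code, which may be why beginners skip it. A minimal sketch is below: `my_app` is a placeholder for your week-3 project's entry point, and the grader is a crude substring check, the simplest useful kind; exact-match and model-graded checks slot into the same loop.

```python
# Skeleton of the week-5 eval loop. `my_app` stands in for your real
# pipeline; the test cases and grader are illustrative placeholders.

def my_app(question: str) -> str:
    """Placeholder for your real pipeline (prompt + model call)."""
    return "Paris is the capital of France."

TEST_CASES = [
    {"input": "capital of France?", "must_contain": "Paris"},
    {"input": "capital of Spain?", "must_contain": "Madrid"},
]

def run_evals(app, cases) -> float:
    """Run every case through the app, print failures, return failure rate."""
    failures = 0
    for case in cases:
        output = app(case["input"])
        if case["must_contain"].lower() not in output.lower():
            failures += 1
            print(f"FAIL: {case['input']!r} -> {output!r}")
    return failures / len(cases)

rate = run_evals(my_app, TEST_CASES)
print(f"failure rate: {rate:.0%}")  # the stub fails the Madrid case: 50%
```

Fifty real test cases in this shape take an afternoon to write, and the printed failure list becomes your prompt-iteration to-do list for the rest of the fortnight.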

Weeks 7–8: A second project with agents. Build something that takes multiple steps and uses tools. Use the OpenAI Assistants API or LangGraph or Anthropic's tool-use API. Examples: an agent that researches a topic and produces a brief, an agent that triages your GitHub issues, an agent that books meetings. Get something working end-to-end, even if it is brittle. The point is to feel where agents fail in 2026 (long-horizon planning, hallucinated tool calls, runaway loops).
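Underneath every agent framework is the same loop: the model either requests a tool call or returns a final answer, and your code executes tools and feeds results back until it answers or hits a step cap. The sketch below hard-codes the "model" as a stub so the loop itself is visible; in a real build, `decide()` is a tool-use API call, and all names here are illustrative.

```python
# The agent loop, with the model stubbed out so the control flow is visible.
# In production, decide() is a tool-use API call returning either a tool
# request or a final answer.

import json

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "lookup": lambda args: {"tokyo": "rainy"}.get(args["city"], "unknown"),
}

def decide(history: list[str]) -> str:
    """Stub model: request a tool on the first turn, then answer."""
    if len(history) == 1:
        return json.dumps({"tool": "lookup", "args": {"city": "tokyo"}})
    return json.dumps({"answer": f"Weather report: {history[-1]}"})

def run_agent(task: str, max_steps: int = 5) -> str:
    """Loop until the model answers, executing any tool calls it requests."""
    history = [task]
    for _ in range(max_steps):          # hard step cap prevents runaway loops
        move = json.loads(decide(history))
        if "answer" in move:
            return move["answer"]
        result = TOOLS[move["tool"]](move["args"])
        history.append(str(result))     # feed the tool result back to the model
    return "gave up: step limit reached"

print(run_agent("What's the weather in Tokyo?"))
```

The step cap and the "gave up" branch are not decoration: runaway loops and hallucinated tool names are exactly the 2026 failure modes named above, and this is where you handle them.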

Weeks 9–10: Theory you actually need. Now that you have built three things, read or watch one solid resource on transformer architecture and one on RAG. The 3Blue1Brown series on neural networks is excellent. Read Lilian Weng's blog posts on prompting and on hallucination. You will absorb ten times more theory at this point than you would have in week one, because every concept now has a hook in your memory.

Weeks 11–12: Specialisation and a portfolio piece. Pick a specialisation (fine-tuning, multimodal, voice agents, code generation, evals, structured outputs) and go deep. Build one polished, documented, deployable thing. Write up the build on a personal blog or LinkedIn. This becomes the thing you point hiring managers at. A working demo is worth ten certifications.

For the applied user, compress the same plan: weeks 1–4 become tool fluency in three apps (ChatGPT or Claude, plus a domain tool like Cursor for coders or Jasper for marketers, plus an automation layer like Zapier or Make), weeks 5–8 become workflow design and prompt libraries, weeks 9–12 become measurable productivity gain in your real job. For the manager, weeks 1–4 are intensive reading (Harvard NLP's The Annotated Transformer, Stratechery on AI, the Latent Space podcast), weeks 5–8 are vendor evaluation case studies, weeks 9–12 are leading one real AI initiative inside your team.

Two practical tactics make any of these tracks dramatically more likely to finish. The first is the public commitment: tell three people, in writing, that you are doing this 90-day plan and that you'll send them a link to your finished portfolio piece on day 91. Public commitments work because the social cost of quitting is real. The three people don't need to be technical or even interested in AI; they need to be willing to ask "how's the project?" once a fortnight. The second is the project-first sequence: every week, the question to ask is "what are you shipping this week?" not "what are you studying this week?" If the answer to the first question is nothing, the second question's answer doesn't matter.

The hardest weeks of the plan are 5 and 6, where the project from week 3 is now behaving badly on real inputs and the easy momentum of weeks 1-4 has run out. Almost every learner considers quitting at this point. The right move is to stay on the broken project. Do not start a new one. Reliability work feels less rewarding than novelty, but it is the actual content of professional AI engineering in 2026. Most working AI engineers spend more of their time on evaluation, debugging, and edge-case handling than on greenfield building. The week-5 plateau isn't a sign you're failing; it's a sign you've reached the part of the work that pays the salary.

Where the major platforms actually fit (Coursera, DeepLearning.AI, Google, MIT, OpenAI Academy)

The branded course providers occupy different niches. They are not interchangeable, and choosing one because it is famous wastes time.

Coursera is best when you want a structured spine and accountability. The Andrew Ng-led Machine Learning Specialization remains the gold-standard introduction for anyone who wants to understand what's actually happening inside the models. It is a six-month commitment if you take it seriously, and the coding assignments use Python and matter. Coursera's value is its consistency: the videos are produced, the auto-grader works, and the certificate from a recognised university or company carries weight at the resume-screening stage.

DeepLearning.AI (Andrew Ng's own platform, with content also published on Coursera) is the best place for short, deep specialisations on a specific topic. Their Generative AI with Large Language Models, Building Systems with the ChatGPT API, and LangChain for LLM Application Development short courses are roughly 1-3 hours each, free, and unusually good. If you only do one thing this month, it should probably be one of these.

Google's AI courses through their Cloud Skills Boost platform are excellent for anyone who plans to deploy on Google Cloud or work with Vertex AI and Gemini. Their Generative AI Learning Path is free and the labs use real GCP infrastructure. The trade-off is that the framing is Google-centric: you will learn Vertex AI but not the equivalents on AWS or Azure.

MIT Open Learning and MIT OpenCourseWare host the actual lectures from the actual MIT classes. Introduction to Deep Learning (6.S191) and Linear Algebra (18.06) are both available free and are taught at the level of a smart undergraduate. This is the right resource for the technical builder who wants foundations without paying for a degree.

OpenAI Academy is the newest entrant and skews practical. It pairs short courses with hands-on labs that use the OpenAI API directly. The advantage is that the content is updated as the API evolves; the limitation is that everything is OpenAI-flavoured.

For most readers, the right combination is one Coursera or DeepLearning.AI specialisation as the spine, plus DeepLearning.AI short courses or OpenAI Academy modules as the topical fillers, plus MIT open courses if you need the math. We covered this stack in detail in our 2026 AI courses comparison.

Free vs paid: what's worth paying for, what isn't

The honest answer is that almost everything you need to learn AI in 2026 is free. The OpenAI Cookbook is free. Anthropic's documentation is free. Andrej Karpathy's lectures are free. The MIT courses are free. Hugging Face's NLP course is free. Google's Generative AI Learning Path is free. Stanford's CS229 lectures are on YouTube. There is no shortage of high-quality free material.

What you pay for is structure, accountability, and specific outcomes. Pay for a Coursera specialisation if you have shown that you do not finish unstructured material: the deadlines and graded assignments are the product, not the videos. Pay for a bootcamp only when you need accelerated job placement support and you have done the math on the salary lift. Pay for a 1:1 mentor if you can find one whose job you want to have.

Do not pay for: any course that promises "become an AI engineer in 6 weeks" with no prerequisites, any certification from a vendor you've never heard of, any "AI Masters" diploma sold by a non-accredited online school. The market is full of these. They are designed to extract money from people who don't yet know enough to evaluate the offering. We covered this in our breakdown of when to pay for AI courses.

The one consistent paid recommendation is a ChatGPT Plus or Claude Pro subscription, around 20 dollars a month. You will use this every single day if you are serious. Combined with API credits of perhaps 50 dollars over your first three months, your total cash outlay to learn AI from zero in 2026 is under 200 dollars if you stick to free courses.

AI certifications that move the needle

Certifications matter most for one specific use case: getting past the resume screen at a large enterprise that uses keyword matching on applications. They are far less useful at startups, which look at your portfolio, and almost useless in research, which looks at your publications.

The certifications that hiring managers actually recognise in 2026 are short. Microsoft's AI-900: Azure AI Fundamentals is broad, vendor-flavoured, and signals that you know the words. The AWS Certified AI Practitioner is its AWS counterpart. Google's Generative AI Leader certification covers the strategic side. None of these will teach you a deep skill, but each will teach you the vocabulary of one cloud and put a recognisable badge on your resume.

The certifications that signal more depth are the role-based ones: Microsoft Certified: Azure AI Engineer Associate (AI-102) and the equivalent AWS Certified Machine Learning Engineer - Associate. These require demonstrating implementation skills, not just terminology. They are appropriate after you've actually built things.

The certificates that hiring managers do not care about: anything from a vendor with no public adoption, anything labelled "AI Master Certificate" sold for several thousand dollars by an online-only school, anything that is essentially a course completion badge from a free platform. We covered this in detail in our 2026 AI certifications guide.

| Certification | Issuer | Cost (USD) | Worth it? |
| --- | --- | --- | --- |
| AI-900 Azure AI Fundamentals | Microsoft | ~165 | Yes. Entry-level, recognised, broad |
| AWS Certified AI Practitioner | AWS | ~100 | Yes; same niche as AI-900, for AWS shops |
| Generative AI Leader | Google Cloud | ~99 | Yes for managers; less so for engineers |
| AI-102 AI Engineer Associate | Microsoft | ~165 | Yes, after some real building |
| AWS ML Engineer Associate | AWS | ~150 | Yes, after some real building |
| "AI Master Certificate" online schools | Various | 1,500-5,000 | No |
| Coursera specialisation completion certs | Coursera/Universities | 50/mo | Yes, if you actually finish |

Tooling you should own before any course

You will not get far learning AI without a working dev environment. People underestimate the friction of getting one set up and quit before they start. Get this out of the way in week one, ideally on day one.

For the technical builder. You need: Python 3.11 or newer, a code editor (VS Code or Cursor; Cursor is purpose-built around AI assistance and worth the 20 dollars a month), Git, a GitHub account, an OpenAI API key, an Anthropic API key, and a terminal you don't fear. On Windows, install via the official Python installer or use Anaconda. On Mac, use Homebrew. On Linux, you already know. Add a single virtual environment, install openai and anthropic, run a hello-world. If this takes you a full day, that is normal; do it once and never again.
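The whole setup fits in a handful of commands. A minimal sketch for macOS or Linux follows, assuming `python3` is already 3.11 or newer; the environment name is arbitrary, and the install and key lines are left as comments because they need network access and provider accounts.

```shell
# One-time setup for the builder track (macOS/Linux, python3 >= 3.11 assumed).
python3 -m venv ai-env        # isolated environment, created once
. ai-env/bin/activate         # activate it in the current shell
python -V                     # confirm the venv's Python is on PATH
# SDK install and keys (needs network; keys come from each provider's console):
#   pip install openai anthropic
#   export OPENAI_API_KEY=...
#   export ANTHROPIC_API_KEY=...
```

On Windows the activation line becomes `ai-env\Scripts\activate`; everything else is the same.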

For the applied user. You need: a paid subscription to either ChatGPT Plus or Claude Pro (do not make do with the free tier; the rate limits will quietly cripple your learning), at least one specialised tool relevant to your domain (for writers, this might be Sudowrite or Lex; for analysts, it might be Hex or Mode; for designers, Figma's AI features or Magnific), and an automation layer like Zapier, Make, or n8n. Optional but useful: a tool with built-in browsing such as Perplexity or one of the agent platforms.

For the manager. Your tooling is reading and conversation. Subscribe to one source-of-truth newsletter (Stratechery for strategy, Latent Space or The Sequence for technical depth, Import AI for research roundups). Set up a recurring 30-minute weekly slot to talk to one technical AI person, internal or external, and bring a real question.

One rule applies to all three paths: pick one stack and stick to it for the first 90 days. The temptation to switch tools, frameworks, or even paths is strong, especially when something new launches every fortnight. Resist. The cost of switching is paid in confusion and lost momentum, and the new tool is rarely better enough to justify the switch. By month four, you will have enough fluency to evaluate alternatives properly. Until then, every "I should try the new framework" is a procrastination instinct in disguise. The single biggest predictor of whether someone finishes the 90 days is whether they tool-hop. Don't.

A note on hardware. You do not need an expensive laptop or a GPU to learn AI in 2026. Almost all useful applied work happens through cloud APIs that run on servers thousands of miles away. A standard laptop with a working internet connection is sufficient. The single hardware investment worth considering is a second monitor: having documentation on one screen and code on the other is genuinely faster. If you plan to fine-tune or train models locally, you'll eventually want either a Mac with Apple Silicon (M-series chips have become very competent for local model work) or a desktop with an Nvidia RTX 4090 or better. Most readers will not need either. Renting GPU time on RunPod, Lambda, or Modal is cheaper for occasional use.

How to build a portfolio that gets you hired

The most valuable investment a self-taught learner can make is a portfolio of three to five real, working projects with code, demos, and a write-up. This is what hiring managers actually evaluate. They will not read your transcripts. They will read your README and click your demo link.

What makes a portfolio piece work in 2026: it solves a real problem (yours or a friend's, not a tutorial's), it is deployed somewhere a stranger can click it, the source is on GitHub with a clean README, the README explains the architecture and what would break it, and there is at least one substantive design decision you can defend in an interview. "I tried both retrieval and fine-tuning and chose retrieval because the corpus updates weekly" is the kind of sentence that closes interviews.

Avoid: clones of well-known tutorials, projects that only run on your laptop, projects with no README, and projects that depend on private API keys you cannot share for a demo. Recruiters and engineering managers see thousands of these and ignore them.

Three project archetypes that age well and are still impressive: a domain-specific RAG system over a corpus you actually use (your work documents, a textbook, a podcast archive); a structured-output extractor that takes messy unstructured input and returns reliable JSON for a downstream system; and an agentic workflow that completes a multi-step task with tools (browsing, calculator, email, database). One of each, polished, and you have a credible portfolio.
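The second archetype has one non-negotiable design rule: never hand model output to a downstream system without validating it. A minimal sketch of that core is below; the model reply is a hard-coded stub, and the invoice-style schema is an illustrative example, not a prescribed format.

```python
# Core of the structured-output archetype: parse model JSON, enforce a
# schema, retry on failure, and fail loudly rather than silently.
# The model reply is stubbed; the schema is an illustrative example.

import json

REQUIRED = {"name": str, "amount": float, "currency": str}

def validate(raw: str) -> dict:
    """Parse model output and enforce the schema, raising on any mismatch."""
    data = json.loads(raw)                      # raises on non-JSON output
    for field, ftype in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"bad type for {field}: {type(data[field]).__name__}")
    return data

def extract(text: str, model_call, max_retries: int = 2) -> dict:
    """Ask the model for JSON; re-ask on invalid output, then give up loudly."""
    for _ in range(max_retries + 1):
        try:
            return validate(model_call(text))
        except (json.JSONDecodeError, ValueError) as err:
            last_error = err                    # remember why the attempt failed
    raise RuntimeError(f"extraction failed after retries: {last_error}")

# Stubbed reply standing in for a real API response:
stub = lambda text: '{"name": "Acme invoice", "amount": 1200.0, "currency": "EUR"}'
print(extract("Invoice from Acme for 1200 EUR", stub))
```

The validate-retry-fail pattern is the substantive design decision this archetype lets you defend in an interview: what happens when the model returns almost-JSON, and who finds out.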

The README is more important than the code. A README that explains the problem, the design choices, the trade-offs you considered, and the failure modes you observed is the document a hiring manager actually reads. It signals seniority. Junior engineers ship code; senior engineers explain why. Spend an afternoon on every README. Read three other engineers' READMEs you admire and steal their structure. The standard parts: a one-paragraph problem statement, a labelled architecture diagram (a hand-drawn one is fine; Excalidraw is excellent), the design decision section, an honest section on what doesn't work yet, and a "how to run this locally" block that actually works.

One specific anti-pattern: do not submit your portfolio to dozens of jobs as a generic block. The application that lands the interview is the one where the cover letter ties one specific project to one specific responsibility in the job description. "I built X, which solved a problem in Y way; I noticed your team is working on something similar with the Z constraint, and I'd want to discuss how my approach would or wouldn't transfer." Three lines. This converts at multiples of the rate of generic applications. The portfolio is the artefact; the cover-letter framing is what makes the portfolio relevant to the specific role.

For projects that don't fit one of the three archetypes, ask whether they show off the same skills the archetypes do: handling real messy input, producing structured output, multi-step planning, evaluation under uncertainty. If the project doesn't show at least one of those, it is probably a tutorial-clone in disguise and won't differentiate. Replace it with one that does.

Pitfalls that waste six months of your life

The pattern of failure is so consistent that it is worth naming the traps directly. If you find yourself in any of these, stop and re-route.

The tutorial loop. You watch one course, then another, then another, and somehow never ship anything. The cure is a calendar. Set a hard date by which you will deploy your first project. If you have watched more than 20 hours of video without writing code, you are in the loop.

Math paralysis. You are convinced you cannot learn AI until you "really understand" linear algebra. You buy textbooks. The textbooks intimidate you and you slow down. Cure: skip ahead to the project, get stuck on a real problem, learn the math you need to unstick yourself, repeat. The math gets learned in service of building, not in advance of it.

Tool-hopping. A new framework releases every two weeks (LangChain, LlamaIndex, Haystack, AutoGen, CrewAI, LangGraph, DSPy, Pydantic AI). You learn the basics of each and master none. Cure: pick one, ship something, then evaluate whether changing tools will pay back the switching cost. Almost always it won't.

The impostor spiral. Twitter (or X, or whatever it is this week) shows you brilliant 22-year-olds shipping miracles. You feel hopelessly behind. Cure: unfollow the highlight reel for 30 days and commit to comparing your week-12 self to your week-1 self.

Specialisation too early. You decide in week two that your speciality is going to be multimodal agents, then can't follow the multimodal-agents content because you don't yet understand basic agents. Cure: 90 days of breadth before going deep.

Ignoring evaluations. You build something that works on your three test inputs and call it done. It blows up on a real user's input the next day. Cure: in week six, sit down and write 30-50 test cases, run them, fix the failures. This habit alone separates working engineers from hobbyists.

The "production parity" delusion. Your project works on your laptop with your API key, your prompts, and your three test inputs. You imagine that "shipping" is just a matter of putting it on a server. It isn't. Production differs from development in a dozen specific ways: rate limits, retries, observability, secrets management, cost monitoring, error handling for partial outputs. The cure is not to read about production, it is to deploy the project somewhere a stranger uses it, and let the production realities educate you. Pain is the teacher.

Reading without writing. You consume Twitter threads, blog posts, and newsletters voraciously. You feel current. You haven't shipped anything. Reading without writing produces an illusion of competence that survives until the first interview, where it dies fast. The cure is a hard ratio: for every hour of reading, an hour of writing code or producing artefacts. Track it for a week if you don't believe you have the imbalance. Almost everyone does.

How to keep up after the roadmap ends

Day 91 is when most people quit, because the structure goes away and the field still moves at terrifying speed. The trick is shifting from a learning routine to a maintenance routine that takes 30-60 minutes a day at most.

A defensible weekly cadence: one long-form read (a paper, a Stratechery, a Latent Space episode) per week; one paragraph in a personal notes doc summarising what you learned and what you would change in your projects; one small experiment or refactor on an existing project per week; one conversation with another practitioner per week (Discord, meetup, coworker, anywhere you can debate a design choice out loud).

Subscribe to two or three newsletters at most. The Sequence, Import AI, and Latent Space cover most ground. Do not subscribe to ten; you will skim them all and absorb none. Pick the one or two voices whose taste matches yours and trust them to surface what matters.

The single most underrated routine is rebuilding one of your old projects every six months. Models change, tools change, prices change. The thing you built in week eight is probably 30 percent obsolete by month nine. Rebuilding it forces you to confront what's new and integrate it. Most learners skip this and stagnate at whatever they knew when they finished the roadmap.

Browse all our learning guides when you need a refresher on a specific topic. The fundamentals don't change as fast as the tooling does, and the tooling content gets dated quickly enough that you should always check the publication date.

Frequently asked questions

How long does it take to learn AI from scratch?

For useful productivity gains, two to four weeks of focused effort if you are an applied user, and six to twelve weeks for a technical builder to ship their first real project. The "complete" answer is that AI is now a continuous-learning field: there is no graduation point, only a maintenance routine. Aim for 90 days of intense ramp-up followed by a 30-60 minutes-a-day maintenance cadence indefinitely.

Do I need to know Python to learn AI?

If you want to be a technical builder, yes. Python is still the lingua franca, and JavaScript or TypeScript is a strong second for full-stack AI work. If you are an applied user or manager, no. You will spend your time in well-designed UIs and prompt libraries. The line is whether you plan to call APIs directly. If you do, learn enough Python to be dangerous in two weeks; you don't need to be a software engineer first.

Is it too late to start learning AI in 2026?

This is the wrong question. The right one is whether the demand for AI-fluent workers will outstrip supply through 2030. The data on enterprise AI hiring suggests yes, even discounting hype, every Fortune 500 company is staffing AI initiatives, and most are short-handed. The window for "early-mover advantage in your career" is mostly closed; the window for "competent practitioner" is wide open and likely will be for years.

Should I get a degree to learn AI, or self-study?

For applied work and engineering jobs in industry, self-study with a strong portfolio works. Hiring managers at every major lab and many top startups will hire self-taught engineers who can demonstrate work. For research positions and roles that require novel model architectures, a graduate degree (specifically a PhD) is still the standard path. Most readers want the first kind of job. We covered this trade-off in our self-taught vs bootcamp vs degree comparison.

Which is better, Coursera or YouTube, for learning AI?

Coursera if you struggle with structure and don't finish unstructured material. YouTube if you finish what you start and want to move faster than a 6-month curriculum allows. The key is honesty: most learners overestimate how disciplined they are. If you have abandoned three free courses in the last year, the structure of a paid Coursera specialisation may be the difference between learning and not learning.

Do I need a powerful computer or GPU to learn AI?

No. In 2026 almost all useful applied work uses cloud APIs (OpenAI, Anthropic, Google, AWS Bedrock). A standard laptop with a working internet connection is enough. You only need a GPU if you plan to fine-tune or train models locally, and even then, renting from RunPod, Lambda, or Modal is cheaper for occasional use than buying hardware.

What's the best AI certification for getting hired?

For entry-level breadth, AI-900 (Microsoft) or AWS Certified AI Practitioner. For mid-level depth on cloud platforms, AI-102 or AWS Certified ML Engineer Associate. For management roles, Google's Generative AI Leader. None of these will get you hired alone; they unlock the resume screen at large enterprises, but a portfolio of real projects beats them in interviews. We compared them in detail in our certifications guide.

How much money does it cost to learn AI?

Under 200 USD if you are disciplined. The breakdown: a ChatGPT Plus or Claude Pro subscription at 20 USD per month for three months (60 USD), API credits for hands-on building (50-100 USD), one optional Coursera specialisation if you need structure (50 USD per month for one to two months, optional). Free everywhere: documentation, the Andrew Ng courses, Karpathy's lectures, MIT OCW, Hugging Face NLP, Google Generative AI Learning Path, OpenAI and Anthropic cookbooks, all major AI YouTube channels.

The bottom line

Pick a path before you pick a course. Decide whether you are a technical builder, an applied user, or a manager, and stay in that lane for the first 90 days. Set up your tools on day one, build something in week two, deploy it in week four, write evaluations in week six, ship a polished portfolio piece by week twelve. Pay only for structure you genuinely need: a Plus subscription, API credits, and at most one paid specialisation. Skip the certifications that aren't from Microsoft, AWS, or Google. Build the portfolio. Move on. The gap between people who can talk about AI and people who can do useful work with AI is now mostly motivational, not informational. Almost everything you need to know is free, public, and findable. The ones who get there are the ones who actually open a code editor in the next 24 hours and write the first 10 lines.

Last updated: May 2026