Custom GPTs and Projects: When They Are Worth Building

OpenAI shipped Custom GPTs at DevDay in November 2023 with a pitch that suggested everyone would build one. Two years later the GPT Store has hundreds of thousands of public GPTs, the vast majority of which are barely-distinguishable wrappers around the same handful of system prompts. The honest truth most users discover too late: 90% of "I should build a Custom GPT for this" instincts are better served by a saved prompt in your notes app. The other 10% — the ones that genuinely pay back — are the ones this guide is about. We cover what Custom GPTs and Projects actually do, where they earn their keep, where they do not, and walk through a worked example of building one in 30 minutes.

What a Custom GPT actually does

A Custom GPT is a packaged ChatGPT instance with three things baked in: a system prompt that sets behaviour, optional uploaded knowledge files (PDFs, spreadsheets, references the model can search at query time), and optional tool access (web browsing, code interpreter, image generation, custom actions to your APIs). The user starts a chat against this packaged instance. The packaged behaviour applies to every message in every chat started against the GPT, including new chats.

What Custom GPTs do not do, despite occasional confusion: they do not fine-tune the underlying model. The base model is still GPT-4o or whichever the picker provides. The Custom GPT is a configuration layer on top. This matters because expectations of "trained on my docs" are misplaced. The docs are made available at query time through retrieval, not folded into the model's weights.
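To make the retrieval point concrete, here is a deliberately crude sketch. Bag-of-words matching stands in for the real semantic search, and the document texts are invented; the point it illustrates is that uploaded knowledge enters the context per query while the model's weights never change.

```python
# Toy sketch of query-time retrieval, for illustration only.
# OpenAI's actual pipeline uses semantic embeddings, not word overlap.
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def score(query, chunk):
    """Crude relevance: how many query words appear in the chunk."""
    return len(tokens(query) & tokens(chunk))

def retrieve(query, chunks, k=2):
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

knowledge = [
    "Refund policy: customers may request a refund within 30 days.",
    "Brand voice: write in plain, warm, direct sentences.",
    "Glossary: 'activation' means the first successful sync.",
]

# Retrieved chunks are pasted into the model's context for this one
# query; nothing about the model itself is "trained" on them.
context = retrieve("What is our refund policy?", knowledge)
```

This is why "trained on my docs" is the wrong expectation: the documents are looked up and quoted into the conversation, not absorbed.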

The right mental model is closer to a saved prompt with a pinned reference shelf and a few buttons. Useful, sometimes powerful, never magic.

Custom GPT vs Project (different things)

Projects, announced 13 December 2024, look superficially similar to Custom GPTs but do something quite different. The distinction matters because using the wrong one is the most common cause of disappointment.

A Custom GPT is global. It exists outside any specific chat. Anyone you share it with, or anyone in the GPT Store if it is public, can start a chat against it from anywhere. The behaviour is consistent. It is the right tool for codified, reusable behaviour — a brand-voice editor, a code reviewer for a specific style guide, a triage assistant for a known process.

A Project is local. It bundles related chats, files, and a Project-level instruction in one container, owned by you. New chats inside the Project inherit the instruction and have access to the files. It is the right tool for ongoing, evolving work — a thesis, a launch plan, a client engagement — where context accumulates and the system prompt itself evolves over weeks.

Feature                      | Custom GPT                      | Project
Sharable                     | Yes (publicly or by link)       | No, private to you
Reusable across many chats   | Yes                             | Yes, within the Project
System prompt evolves        | Less often, more deliberately   | Often, as work progresses
Knowledge files              | Yes                             | Yes
Custom actions to APIs       | Yes                             | No
Best for                     | Codified behaviour for many users | Ongoing personal or team work

Putting client-specific files in a Custom GPT and sharing it publicly is one of the easiest ways to leak confidential information; that material belongs in a Project or, for teams, in a Custom GPT inside ChatGPT Team where access is controlled.

Cases where Custom GPTs pay back

Long, stable system prompts. If you have a 1,500-word style guide that should apply to every editing task you do, and the guide changes only quarterly, a Custom GPT pays back fast. Pasting that guide into every new chat is a chore. Baking it into a GPT removes the chore.

Reference knowledge that should ground answers. A product spec. A contract template. A glossary of internal terms. A style book. The model's retrieval over the uploaded files surfaces the right references at query time, which is more reliable than asking it to remember.

Behaviour you want shared. If five people on your team should all use the same prompt structure, a Custom GPT inside ChatGPT Team is the right answer. Everyone sees the same thing. Updates propagate. Drift across team members is reduced.

Tool integrations. Custom GPTs can call external APIs through actions. A GPT that can look up customer data in your CRM and draft a contextual reply is the kind of integration that justifies the build. This is where the technology starts to look genuinely useful, and it is also where the build cost climbs — actions require an OpenAPI spec and authentication.
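For a sense of the shape of that build cost, here is a minimal sketch of the kind of OpenAPI spec an action needs. The endpoint, server URL, and schema are hypothetical placeholders, not a real API:

```yaml
# Hypothetical OpenAPI spec for a CRM-lookup action.
# Path, server URL, and schema are placeholders for illustration.
openapi: 3.1.0
info:
  title: CRM Lookup
  version: "1.0"
servers:
  - url: https://crm.example.com/api
paths:
  /customers/{email}:
    get:
      operationId: getCustomerByEmail
      summary: Fetch a customer record by email address
      parameters:
        - name: email
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Customer record
          content:
            application/json:
              schema:
                type: object
                properties:
                  name: { type: string }
                  plan: { type: string }
                  lastContact: { type: string, format: date }
```

The `operationId` and `summary` matter more than they would in ordinary API documentation: they are what the model reads when deciding whether and how to call the action.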

Constrained personas with hard rules. A bot that should never make medical claims. A teaching assistant that should not give answers, only Socratic prompts. A roleplay persona with explicit limits. The system-prompt slot enforces rules that a free-form chat will not.

Cases where they do not

One-off prompts you happen to use a lot. If your prompt is "rewrite this email warmer," a Custom GPT is overkill. A saved prompt in your notes app, dragged in as needed, is faster to maintain and equally effective.

Tasks that depend heavily on context that changes per chat. If every use needs a different brief, a different audience, a different goal, the Custom GPT framing fights you. The system prompt assumes stability. Where there is no stability, plain chats win.

Replacing a search engine. "A Custom GPT that knows about [topic]" is a category of failed builds. The model with web search, given a fresh question, beats a Custom GPT with stale uploaded references for most current-information tasks.

Trying to fine-tune behaviour through volume of examples. Custom GPTs are not fine-tuning. Uploading 200 example outputs in the hope that the model will absorb the style does not work the way it would for fine-tuning. Five precise examples in the system prompt outperform 200 in a knowledge file.

Mass-market public GPTs in saturated categories. The GPT Store is full of duplicate "best resume builder," "best email writer," "best brainstorming partner" GPTs. A new entrant in those categories almost never finds an audience. Build for narrow specifics, not for broad demand.

Worked example: building one in 30 minutes

The brief: a Custom GPT for a marketing team's monthly customer interview synthesis. They run 10 to 15 customer interviews each month, transcripts arrive in a shared drive, and the team needs a consistent synthesis: top three themes, three illustrative quotes per theme, two surprising findings, and a list of follow-up research questions. Currently this takes a researcher half a day per round. The build:

Step 1 — Open Explore GPTs in the sidebar, click Create. The interface gives you a chat with the GPT Builder on the left and a preview chat on the right. The GPT Builder asks you what the GPT should do. Tell it.

Step 2 — Refine the system prompt by hand. The GPT Builder produces a draft system prompt you can edit in Configure. Replace its boilerplate with the actual brief, in the actual voice of the team, with explicit constraints: number of themes, length of quotes, how to handle conflicting themes, what counts as "surprising," how follow-up questions should be phrased.
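As a sketch of what that refined prompt might look like for this brief (wording illustrative, not a template the team must use):

```
You synthesise monthly customer interview transcripts for the marketing team.

For each batch of transcripts, produce:
1. The top three themes, each named in five words or fewer.
2. Three illustrative quotes per theme, verbatim, under 40 words each,
   attributed as "Interview N".
3. Two surprising findings. A finding is surprising only if it contradicts
   last month's synthesis or a stated internal assumption.
4. Follow-up research questions, phrased as open questions a researcher
   could ask in a 30-minute call.

If two themes conflict, report both and flag the conflict; do not merge them.
Always consult the attached style guide before writing.
```

Note how every vague word in the brief ("surprising", "follow-up") gets an operational definition; that is where most of the editing time goes.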

Step 3 — Upload the knowledge files. The team's interview synthesis style guide. Five examples of past good syntheses, with their original interview transcripts. Optionally, a glossary of product terms so the GPT does not misread internal jargon.

Step 4 — Configure the conversation starters. Three suggested first messages: "Synthesise the attached transcripts," "Compare this month to last month," "What patterns emerge across the three customer segments?" Conversation starters lower the friction for first-time users.

Step 5 — Test on a real input. Paste a real transcript. Read the output critically. Iterate the system prompt against the gaps. Repeat until three test inputs produce satisfactory outputs without further intervention.

Step 6 — Share with the team and review weekly for the first month. The first month of any Custom GPT in real use surfaces the cases the build did not anticipate. Plan to spend an extra hour each week tightening the system prompt.

Total elapsed: roughly 30 minutes for the initial build, plus four hours over the first month of refinements. The half-day per round of synthesis collapses to under an hour. The payback is fast and durable as long as the work itself does not change shape.

One nuance the worked example glosses over deliberately: the system prompt should reference the knowledge files explicitly. "Always consult the style guide in the attached file before producing a synthesis. Cite specific past examples by name when applying patterns." Without those instructions, the model treats the knowledge files as background reading it may or may not use. With them, retrieval is reliable. This single-line addition to the system prompt is the difference between a Custom GPT that uses its uploaded knowledge and one that ignores it.

Distribution and the GPT Store

The GPT Store launched January 2024. OpenAI introduced revenue-sharing for top GPTs in 2024 and expanded it through 2025. The economics for individual builders are real but modest — most paid GPT Store creators in 2026 earn less than a side-project blog. The exceptions are tools in narrow professional verticals where the GPT solves a real problem and the audience pays for tools generally.

The Store's first two years taught some structural lessons. Discoverability is hard; SEO-style category-leader GPTs with strong reviews and clean naming win. Public attention is a fraction of what it was at launch; the early gold rush is over. Quality wins more often than novelty does. And the Custom GPTs that genuinely accumulate reviews and usage are the ones with deep, narrow, specific use cases — not the broad ones.

For most teams, the right answer is private GPTs in ChatGPT Team or Enterprise, not public Store distribution. Internal tools have a clear audience, a clear feedback loop, and a clear value path. Public distribution requires marketing skill on top of build skill, and most builders do not have both. We treat the broader marketing-and-distribution question in the AI tools hub.

Frequently asked questions

Do Custom GPTs train on my data?

The conversations users have with your Custom GPT follow the standard ChatGPT data policy of the user's account. If they are on Free or Plus with default settings, OpenAI may use the conversations for training. If they are on Team or Enterprise, no. The GPT itself, its system prompt, and its knowledge files are not used to train the base model.

Can I make money from a Custom GPT?

OpenAI's revenue-sharing programme exists, with eligibility tied to usage and country. Most builders earn modest amounts; a small number with strong professional-grade GPTs earn meaningful side income. The honest expectation: do not build for the GPT Store as a primary income stream. Build for a real workflow, then consider distribution as a secondary outcome.

What is the difference between a Custom GPT and an AI agent?

An AI agent typically means a system that can take multiple steps, use tools, and act in the world — often beyond chatting. A Custom GPT can incorporate tool use through actions and is in that sense a simple agent, but the term agent is usually reserved for more autonomous systems. Our AI agents hub covers the distinction in depth.

Can I update a Custom GPT later?

Yes, freely. Edits to system prompt, knowledge files, and configuration are possible at any time. Users of the GPT see the updated behaviour immediately on their next chat. Versioning is light — there is no built-in way to roll back to an earlier system prompt other than restoring it manually, so keep your own copy of the prompt in version control.

What is the maximum knowledge file size?

OpenAI publishes limits per file and per GPT, with the totals shifting over time. As of 2026 a Custom GPT can hold tens of thousands of pages of reference material across multiple files. The practical limit is not size but signal. Uploading fewer, better-curated files almost always outperforms uploading more.

The bottom line

Custom GPTs are a real feature, used poorly by most builders and well by a small minority. The single best heuristic before building one: would you maintain this same configuration manually for the next six months? If yes, a Custom GPT pays back. If no — if the prompt changes weekly, if the audience is too small, if the use is one-off — a saved prompt or a Project is the better tool.

The other heuristic, less often stated but equally true: a Custom GPT is good when at least three other people will use it. The setup cost rarely pays back for a single user, because that user could maintain the same configuration in a notes app at lower cost. The break-even is the moment you stop being the only consumer of your own prompt. Build less, more deliberately, and the GPTs you ship will be the ones that get used. Our pillar ChatGPT guide covers the broader feature context, and the full hub has further depth on every capability that pairs with Custom GPTs.

Last updated: May 2026