GPT Builder is bad. Here's how to build GPTs without it.
Plus, a few confessions from a Prompt Thinker
Here is a hard truth: You’re probably not building GPTs in the most effective way possible, but it’s totally understandable. Like children, they didn’t exactly come with a user manual. I had this realization myself a few weeks back while building a few GPTs for work, and since then, I’ve spent a ton of time rebuilding my entire GPT library and creating new ones using a new approach. One I want to share with you now. It's taken me a while to put it into words, and it’s not a catch-all, but it should help you reframe how you’re approaching your GPT builds and help you achieve a higher level of output.
Now, if you're newer to the world of GPT creation, I suggest starting with Part 1 and Part 2 of this series. But if you're an old hand at this, keep on reading.
Let’s start with why I think most of us are building GPTs in the wrong way: GPT Builder. If you’re familiar with it, then you know how to use the Create tab. It feels like magic at first, but in truth, it's as useful as a chocolate teapot when it comes to training your tool. I've tried every which way to make it work, but I've never managed to create a truly impressive GPT with it. Here’s what I mean when I say the Create tab is… just bad:
Instead of working with me to deconstruct my prompt, enhance my thinking, or get clear on what I want the tool to do, it moves straight to making a name and image. It does this almost every time. While this example isn’t concrete proof, I’ve spent enough time conversing with these tools to know when it’s not taking my requests to task. Instead, it feels like a pretty weak set of pre-defined protocols that gives the illusion of building a tool, without actually getting into the meat of it.
So, what's the solution? It's simple – ditch the "Create" tab and build in the "Configure" section. And no, this isn’t the big revelation. Just the first shift that I humble suggest.
Let’s continue.
Another truth: Creating a great GPT is not easy. It requires detail and nuanced prompt structuring, trial and error, and a team of amazing people to help you test and learn. With this in mind, I want to highlight the three core principles I apply to desigining GPTs. If you stick to them, you should be well on your way to building seriously powerful tools:
Keep it simple
Think like a UX designer
It takes a village to build a GPT, and GPT Builder has no place in that village
Let’s look at each of these principles with some examples.
#1 - Keep it simple
When I say keep it simple, I mean make your GPT clear, directional, and focused. Don't try to create a jack-of-all-trades tool; instead, make it a master of one.
Take this Proofreader GPT, for example. I've given it a clear persona (expert proofreader), focused its functions, and built it with a clear end goal in mind. Here’s the custom instructions built into the configure tab:
I know what you’re thinking, “I thought you said to keep it simple”. While it’s a long set of instructions, it’s still a good example of keeping it simple. Here’s why:
1. A clear persona keeps the GPT focused. By giving your tool a specific role - this is one thing only: an expert proofreader. By doing this, you’re giving it a job description. This helps the GPT stay on task and consistent. It's like hiring a specialist instead of a generalist – you know exactly what you're getting.
2. Focused functions lead to predictable results. When you limit your GPT's functions to a specific set of tasks, you're reducing the chances of unexpected or inconsistent outputs. It's giving your GPT a script to follow – it might not be the most thrilling conversation, but it'll allow the user to focus less on snake charming the GPT and more on getting the job done reliably.
3. A clear end goal provides a roadmap. By defining the desired outcome of the interaction, you're giving your GPT a destination to work towards. This helps keep the conversation on track and ensures that the user gets what they came for.
#2 - Think Like a UX Designer
Almost every person who builds a GPT starts by building it for themselves. It’s a big problem for Open.AI’s GPT store because no one builds a GPT for you… quite like you. But a real superpower for GPTs is the ability to build for your teams, leaders, friends, and other prompters who may find your GPT useful.
But whether you’re building a GPT for an audience of one, or you’re doing it in hopes of mass adoption, you need to think about it from the perspective of a UX designer. What is the user experience and how do you get the GPT to not only consistently the right service, but to help nudge the user in the direction of its strengths?
Let's take a look at Meeting Notes Pro as an example.
When redesigning this GPT, I had to consider not just its primary function, but also the various "branches" of work it needed to tackle. The goal was to create a tool that was intuitive, efficient, and valuable to any user. It needed to solve for multiple inputs: messy notes, compiling notes, and analyzing meeting transcripts. It’s still simple and focused because it approaches one problem: getting better notes. There’s a lot more to consider when the inputs from users are more varied. To add more complexity to it, I needed to make sure that users knew exactly what to do with the tool should they choose to use it.
Here’s why this is a solid example of experiential design in a GPT:
Clear purpose and user expectations set the stage. By communicating the GPT's capabilities upfront for the user, you're setting the stage for a successful interaction. In this one, I’m nudging the user to simply click the one Conversation Starter available so that the GPT runs a defined script. It then gives the user an explanation, or a menu, if you like food metaphors. It's like a first date – you want to put your best foot forward and let your users know what they can expect from the relationship.
Actionable output is the main course. Your GPT's responses are like the main course of a meal – they need to be satisfying, well-prepared, and easy to digest. Whether it's providing information, answering questions, or assisting with tasks, your GPT's output should be concise, organized, and actionable. You want your users to leave the conversation feeling full and satisfied, not hungry for more. With this GPT, I gave it a very specific template to follow for each choice. That makes the output consistent and keeps users confident in what is returned.
Flexibility and user control are the cherry on top. Giving your users the freedom to navigate the conversation in a way that suits their needs while always bringing them back to the core strengths of the tool allows them to be creative and explore but within the constraints of what the GPT is trained on. They can go wild in another chat window – your GPT is focused on a particular goal. By providing multiple paths or options at key decision points to achieve that goal, you're letting your users take control of their experience.
#3 - It takes a village to build a GPT
If you’ve been following Prompt Thinking for any amount of time, I know what you’re thinking – “Does this guy know how to use anything other than ChatGPT?” A fair question. And with this comes the confession I promised at the beginning: I’ve fallen in love with Claude Opus. 😲😲
I've been using Claude Opus as the "man behind the curtain" to help me augment my GPT design. Instead of relying solely on ChatGPT or EVER using GPT Builder, I've found that building my instructions with Claude, having it think like a UX designer building "chatbots," and iterating with it on the desired behavior is incredibly powerful. You of course have to drop the finished product into the Customer Instructions, but building outside GPT Builder is a major unlock.
So, one of my biggest pieces of advice is… use more GenAI tools to build your GenAI tools. I often open up a ChatGPT window, Gemini, or Claude (or sometimes run all three to test the differences in output) and experiment. Using GenAI to build your GenAI… shocking advice from a Prompt Thinker.
But that’s not all, when I say it takes a village, I mean it literally. GenAI is great at helping with the refinement and polishing of the tool design, and I love my little army of augmented machines, but you need more than other GenAI’s to truly get to an amazing tool.
Which brings me to my next point, and I’ll keep saying it until it becomes my catchphrase: Think first, prompt later.
It’s the most important philosophy for us to keep in mind as we augment our thinking. A key part of thinking first and prompting later is thinking with other smart people first. You can't build great GPTs alone. Find your tribe (to extend the village metaphor) of prompt thinkers and tinkerers. I look for two distinct types of people for input when building my tools:
1. The end-users: These are the folks who will benefit most from using your GPT. They're like your beta testers, putting your creation through its paces and providing invaluable feedback. Listen to them, learn from them, and iterate based on their experiences.
2. The trusted advisors: These are the people you trust to give you candid, no-holds-barred feedback. They're like your own personal Simon Cowell, likely without so much Botox and sass, telling you when your GPT is a bit pitchy or needs to work on its stage presence. Embrace their criticism, because it's the only way to improve your output.
Here's the last secret I’ll divulge – some of the best GPTs I've made have been pulled out of the minds of people who barely use GenAI. It’s these interactions that allow for novel ideation around use cases, and of course, it’s an opportunity to show them the magic behind GenAI. It’s a win-win.
So, that’s it. That’s the GPT series. Go forth and build, my friends.
Happy prompting.
Awesome article and advice. I wish I had read it this morning! Going to need to pick your brain on the UX. I’ve been thinking a lot about UX and the behavioral design of GPTs.