Prompt Engineering at Olympia

Tips and tricks for communicating effectively with your AI team

Obie Fernandez
Jul 13 · 6 min read


Olympia’s pre-configured personas mark an exciting new chapter in AI technology. For the first time in history, we can have natural, productive conversations with non-human entities as if they were real people. It doesn’t feel right to call them bots (and we shun the term), because they are real characters in our lives that can take initiative, express opinions, remember what you’ve discussed previously, and even interact with each other.

Their backstories, directives, and toolsets aren’t just window dressing — they’re the bedrock that enables them to respond naturally to a wide range of conversational prompts. Yet, despite this level of sophistication, we find ourselves (along with everyone else in the world exploiting AI large language models) living in a period that I call an “intermediate technology phase”.

This phase is similar to the infancy of search engines like Google, where in order to really harness their power, you needed to understand their parameters and configuration settings. Back then you could have called it “Search Engineering” and people would have understood what you meant. We’re in a similar spot today with large language models and the art of “prompt engineering”. While we’ve come a long way in just the last few months, and you don’t need to be a “prompt engineer” to use Olympia, there’s still a learning curve in knowing how to frame prompts effectively to get the most out of our AI team members.

Many of us old-timers used to know how to “search engineer” Google using its parameters

The need for prompt engineering will lessen as AI technology advances, and the “intermediate technology phase” will eventually end, but for now, effective prompting is an illuminating skill that can help you navigate and maximize your interactions with our AI team members at Olympia.

The rest of this post highlights some of the most important prompt engineering techniques that we’ve learned so far while using Olympia ourselves.

The Magic of Ellipsis…

Ever seen a movie where the character dramatically trails off their sentence and leaves you hanging on the edge of your seat? That’s pretty much what happens when our AI team members get cut off before they finish their response. What to do in this situation? Just type ‘…’ (yes, the magic ellipsis!) and voila! They pick right back up where they left off. It’s like a gentle nudge to say “Hey, weren’t you saying something?”

Off the Rails? No Problem!

Now, let’s talk about those moments when our AI team members, with all their enthusiasm and energy, might veer off track. It’s like they’re at a party, and they suddenly start talking about quantum physics. Fun, right? Well, not always. In these cases, you have the power to get them back on track. Simply delete the messages that took a weird turn and start again. It’s that easy!

In this example, I clarified Melissa’s misunderstanding, but it’s always a good practice to remove incorrect replies from the transcript so that they don’t add confusion to the mix.

But what about when you’re working on longer text content, like blog posts? In that case, it’s a good idea to place instructions below the block of text you’re working on, rather than above. Why? Because our AI team members read from top to bottom, and this ensures they consider your latest instruction.

Remember, clear and direct prompts are the GPS that keeps our AI from taking a detour.

Our AI doesn’t have infinite memory (Sadly, it’s not an elephant)

The Scrolling Context Window

Imagine having an engaging, in-depth chat with your AI team member. It’s all going great until, well, the conversation gets a little weird or it seems like your team member has forgotten previous details.

The problem is most likely the constraints of the underlying Large Language Model (LLM): we have to limit the portion of the transcript that the AI can read and consider to the most recent messages, roughly 5,000 words. Picture it as a texting window: as you keep typing, older messages drift up and out of sight, or in this case, out of memory. It’s somewhat like having a conversation that gradually forgets its own beginning.
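To make the mechanism concrete, here’s a minimal sketch of a “scrolling” context window in Python. The function name and the word-count budget are illustrative assumptions, not Olympia’s actual implementation (real LLMs budget tokens rather than words), but the behavior matches the description above: the newest messages are kept, and older ones drop out once the budget is exhausted.

```python
def scrolling_context(messages, word_budget=5000):
    """Return the most recent messages whose combined word count fits
    within the budget; older messages 'scroll' out of memory first."""
    window = []
    used = 0
    for message in reversed(messages):  # walk newest to oldest
        words = len(message.split())
        if used + words > word_budget:
            break  # this message and everything older is forgotten
        window.append(message)
        used += words
    window.reverse()  # restore chronological order
    return window
```

So if a transcript holds three messages totaling more words than the budget, only the most recent ones survive, which is why early details seem to vanish from long conversations.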

Our roadmap, soon to be unveiled to our valued customers, is heavily focused on refining short-term memory management. This means we’re actively improving how our AI team members handle important details from earlier parts of your conversation and preventing those details from scrolling out of the context window. So, brace yourself for smoother and more seamless conversations with your AI team members in the near future!

C’mon, you promised to clean up the kitchen!

When They Stall, Stand Firm

Every now and then, your AI team member might enter what we call ‘stall mode’, where they seem to be promising to get back to you later with the results of their work. Don’t be fooled, though. This is a bit like your dog promising to do the dishes — charming, but not going to happen.

An example of an AI team member “stalling”

Team members don’t stall because they’re being lazy. Stalling is a form of hallucination rooted in the way large language models work. If you assign work in a way that “feels” like your team member should go off, do it, and report back later, then that is exactly how they will respond.

That said, we have laid the groundwork for AI team members to work autonomously on tasks and get back to you. We feel that it’s a more natural way of working with teammates, but that functionality will not be generally available until later this year.

If your team member stalls, for now the best approach is to insist, firmly but politely, that they do the work right now. No procrastination, no delays. Note that it helps tremendously to delete stalling messages and make your prompts a bit more forceful each time until they eventually comply.

This little unassuming icon is one of the most powerful/dangerous features in Olympia

Pinning: A Double-edged Sword

Now, let’s talk about a feature that’s as exciting as it is tricky: pinning. It’s like the secret weapon in your Olympia arsenal. This feature lets you rewrite the last message in your conversation repeatedly, perfect for those perfectionists among us who love revising long text content.

But handle with care! Pinning is like a wild horse: powerful, but it can get out of control. Once you pin a message, there’s no ‘undo’ button to press if things go sideways. So, use this feature wisely and remember, with great power comes great responsibility.

Mastering the art of ‘prompt engineering’ opens doors to engaging with Olympia’s AI in ways that are not just productive, but also downright fun. And guess what? This is only the beginning! The conversational prowess of our mighty team members is set for an exciting upgrade as our tech whizzes at Olympia continue to fine-tune the perfect prompts to power natural dialogues.

So here’s to unlocking meaningful conversations with AI — one prompt at a time!

Request an invite for the Olympia beta program here.



Obie Fernandez

CEO of RCRDSHP, Published Author, and Software Engineer. Chief Consultant at MagmaLabs. Electronic Music Producer/DJ. Dad. ❤️‍🔥Mexico City ❤️‍🔥 LatinX (he/him)