All you need to know about prompt engineering

I recently recorded a podcast with Dan Shipper on Every. We covered a lot, but most interestingly we dug into prompt engineering from first principles.

Figured I would write out all the highlights in blog form.

The reports of prompt engineering's demise have been greatly exaggerated.

Watch the full episode here.

The Three Pillars of Prompt Engineering

The foundation of effective prompt engineering rests on three key primitives:

  • The Prompt: The actual instructions given to the model
  • The Evaluation: How we measure success
  • The Dataset: The examples and data used for testing

What's fascinating is that while you can potentially automate one of these elements, you can't eliminate all three. Each plays a crucial role in creating robust AI applications that deliver real value.
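To make the three primitives concrete, here is a minimal sketch in Python. The prompt template, the `must_mention` scoring rule, and the `generate` callable are all hypothetical placeholders for illustration, not a prescribed API.

```python
# Hypothetical sketch of the three primitives.

# 1. The Prompt: the actual instructions given to the model.
SUPPORT_PROMPT = "You are a support agent. Answer concisely:\n\n{question}"

# 2. The Dataset: the examples and data used for testing.
dataset = [
    {"question": "How do I reset my password?", "must_mention": "reset link"},
    {"question": "Do you offer refunds?", "must_mention": "30 days"},
]

# 3. The Evaluation: how we measure success.
def evaluate(generate, dataset):
    """Run the prompt over the dataset and score each output."""
    hits = 0
    for example in dataset:
        output = generate(SUPPORT_PROMPT.format(question=example["question"]))
        if example["must_mention"].lower() in output.lower():
            hits += 1
    return hits / len(dataset)

# `generate` wraps whatever model call you use, e.g.:
# score = evaluate(lambda prompt: call_llm(prompt), dataset)
```

Swap any one of the three pieces and the other two still have to exist somewhere; that is the point of the section above.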

DSPy and similar prompt automation tools are great. But they simply shift the focus from the prompt to the dataset.

The core of prompt engineering is domain context, and that cannot be automated.

The Rise of Non-Technical Prompt Engineers

I predict that one of the biggest step functions in the field of software development will be the prompt engineer. Specifically, the non-technical prompt engineer.

Rather than requiring deep technical knowledge, successful prompt engineering is about subject matter expertise.

You don't need to be an ML engineer for most interesting LLM use-cases. For example:

  • Teachers crafting educational AI responses
  • Legal experts fine-tuning legal assistance systems
  • Healthcare professionals designing medical chatbots

The Scientific Method vs. Research Papers 🔬

Contrary to the academic approach often seen in research papers about prompting strategies, effective prompt engineering is more about systematic iteration than theoretical frameworks. The key to success lies in:

  • Rapid testing and iteration
  • Clear evaluation frameworks
  • Strong feedback loops
  • Domain-specific optimization

Just do it. Just try things, and check the results. That's the key to writing good prompts. There is no silver bullet.

Best Practices for Modern Prompt Engineering

Below are a few categories of best practices that are top of mind for me.

The Router Approach

Instead of creating one massive prompt that handles everything, build specialized prompts for specific tasks and route between them.

Read more about modular prompting here.

The advantage isn't just reliability and the ability to mix and match models: modular prompting lets you actually collaborate with your team.
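As a sketch of what routing can look like (the prompt names and keyword-based classifier here are illustrative assumptions; in practice the router is often a small classifier prompt of its own):

```python
# Hypothetical sketch: specialized prompts plus a lightweight router.
PROMPTS = {
    "billing": "You are a billing specialist. Resolve this issue:\n\n{message}",
    "technical": "You are a support engineer. Debug this problem:\n\n{message}",
    "general": "You are a friendly assistant. Reply helpfully:\n\n{message}",
}

def route(message: str) -> str:
    """Pick which specialized prompt should handle the incoming message."""
    text = message.lower()
    if any(word in text for word in ("invoice", "charge", "refund")):
        return "billing"
    if any(word in text for word in ("error", "bug", "crash")):
        return "technical"
    return "general"

def handle(message: str, generate) -> str:
    """Route the message, then run the matching specialized prompt."""
    key = route(message)
    return generate(PROMPTS[key].format(message=message))
```

Each prompt in the dictionary can be owned, tested, and swapped independently, which is what makes the approach collaborative.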

Start with Simple Evals

Start simple. Most teams don't even ship their LLM product to production. As always, don't over-engineer.

Backtesting is the simplest way to prompt engineer. Just run the new prompt on historical data and see how the outputs change.
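A minimal backtest can look something like this (the history record format and the simple "did the output change" diff are assumptions for illustration):

```python
# Hypothetical backtesting sketch: replay historical requests against a
# new prompt and compare to what production actually returned.
def backtest(history, new_prompt, generate):
    """history: list of dicts like {"input": "...", "production_output": "..."}"""
    results = []
    for record in history:
        new_output = generate(new_prompt.format(input=record["input"]))
        results.append({
            "input": record["input"],
            "old_output": record["production_output"],
            "new_output": new_output,
            "changed": new_output.strip() != record["production_output"].strip(),
        })
    changed = sum(r["changed"] for r in results)
    print(f"{changed}/{len(results)} outputs changed with the new prompt")
    return results
```

The point is not the diff logic, it is the feedback loop: every prompt change gets checked against real historical traffic before it ships.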

For a concrete example, read the PromptLayer case study with Gorgias to see an effective use of backtesting in production.

Who will win the AI race?

The winner in the AI space won't be determined by who has the best machine learning engineers, but rather by who can best integrate domain expertise into their systems.

This means:

  • Empowering subject matter experts
  • Creating intuitive tools for non-technical users
  • Building flexible, modular systems
  • Maintaining human oversight and expertise

You are not going to build defensibility and a successful product by relying on ML engineers who don't have unique insight into the AI workflow being automated.

Practical Implementation Tips

Start Simple

  • Begin with a single prompt to understand the problem space
  • Iterate based on real feedback
  • Gradually increase complexity as needed

Focus on Modularity

  • Break down complex tasks into smaller, manageable components
  • Create specialized prompts for specific functions
  • Implement clear routing logic

Embrace Domain Expertise

  • Involve subject matter experts early
  • Create tools that empower non-technical users
  • Build feedback loops with domain experts

The myth of the perfect, general-purpose prompt is just that – a myth.

Success in AI application development comes from understanding that different contexts require different approaches, and the key to scaling is empowering domain experts to shape these systems.


PromptLayer is the most popular platform for prompt engineering, prompt management, and evaluation. Teams use PromptLayer to build AI applications with domain knowledge.

Made in NYC 🗽 Sign up for free at www.promptlayer.com 🍰