Prompt Engineering with Anthropic Claude: Tips on how to prompt Claude more effectively. Takeaways from a talk by Anthropic’s “Prompt Doctor” (Zack Witten).
You should be A/B testing your prompts. Ground truth is subjective, and the only reliable way to evaluate prompts is with real user metrics. A/B testing helps you safely iterate.
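For readers who want to see the idea concretely, here is a minimal sketch of prompt A/B testing: each user is bucketed deterministically into a variant, and the variant is logged alongside a feedback signal so the two prompts can be compared on real user metrics. The variant wording and the `assign_variant`/`log_outcome` helpers are illustrative assumptions, not a PromptLayer API.

```python
import hashlib
import random

# Two hypothetical prompt variants under test (wording is illustrative).
PROMPT_VARIANTS = {
    "A": "Summarize the customer's question in one sentence, then answer it.",
    "B": "Answer the customer's question directly, then add a one-line summary.",
}

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant A or B so the same user
    always sees the same prompt for the duration of the experiment."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def log_outcome(user_id: str, variant: str, thumbs_up: bool) -> None:
    """Stand-in for whatever metrics pipeline you already use
    (request logs, a warehouse table, etc.)."""
    print(f"user={user_id} variant={variant} thumbs_up={thumbs_up}")

if __name__ == "__main__":
    for uid in ["user-1", "user-2", "user-3"]:
        variant = assign_variant(uid)
        prompt = PROMPT_VARIANTS[variant]
        # ... send `prompt` to your model here and collect user feedback ...
        log_outcome(uid, variant, thumbs_up=random.random() > 0.5)
```

In a real deployment the logging hook would feed your existing metrics system, so each variant can be judged on real user outcomes rather than subjective review.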
Tool Calling with LLMs: How and when to use it? A look at LLM tool calling as an AI idiom, its benefits over JSON mode, and examples of how to use function calling in real projects.
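As a rough illustration of what that post covers, below is a minimal tool-calling sketch using the Anthropic Python SDK; the `get_weather` tool, its schema, and the model string are hypothetical examples chosen for this sketch, not taken from the post.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A single illustrative tool definition; name and schema are assumptions.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # use whichever Claude model you have access to
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
)

# If the model decides to call the tool, the response contains a tool_use block
# with structured arguments instead of free-form text you have to parse yourself.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # e.g. get_weather {'city': 'Boston'}
```

The advantage over plain JSON mode is that the model returns a structured `tool_use` block whose arguments are shaped by the declared schema, rather than free-form text you must parse and validate yourself.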
Speeding up iteration with PromptLayer’s CMS (tips for prompt management). This post was cross-posted with permission from Greg Baugues. You can find the original at https://www.haihai.ai/friction/
Gorgias Uses PromptLayer to Automate Customer Support at Scale. Gorgias uses PromptLayer every day to store and version control prompts, run evals on regression and backtest datasets, and review logs.
From Zero to 1.5 Million Requests: How PromptLayer Powered Meticulate’s Viral Launch. Meticulate Case Study — PromptLayer empowers an AI startup to debug complex agent LLM pipelines, rapidly build an MVP, and go viral.
How Speak Empowers Non-Technical Teams with Prompt Engineering and PromptLayer. Speak Case Study — PromptLayer empowered content, product & bizops teams to efficiently scale AI-driven workflows, fueling rapid growth.
How Ellipsis Uses PromptLayer to Debug LLM Agents. Ellipsis Case Study — PromptLayer slashes LLM agent debugging time by 75%, fueling 500K+ requests and 30 new customers in just 6 months.
How PromptLayer Enables Non-Technical Prompt Engineering at ParentLab. ParentLab Case Study — How non-technical prompt engineers use PromptLayer to build highly-personalized AI user interactions.