How Ellipsis uses PromptLayer to Debug LLM Agents

Ellipsis Case Study: PromptLayer cuts LLM agent debugging time by 90%, supporting 500K+ requests across 80+ customers in just 6 months.


The following is a case study of how Ellipsis uses PromptLayer.

Ellipsis is one of the most impressive AI startups around. Their product leverages LLMs to provide automated code reviews, pull request summaries, and easy-to-accept code suggestions. By integrating with tools like GitHub, Slack, and Linear, Ellipsis offers thoughtful, context-aware responses that consider style guides, product roadmaps, logs, and bug reports. Their agent-based infrastructure allows them to provide deep, LLM-powered code reviews on every commit, freeing up developers to make progress on other tasks.

Challenge

As Ellipsis rapidly scaled to hundreds of thousands of requests per day and a steady stream of newly onboarded customers, their engineering team faced the challenge of quickly triaging and debugging issues that arose in customer workflows. With many variables to isolate across a high volume of requests, the team needed an efficient way to identify the root cause when things went wrong.

For now, I think just like “great ML engineers spend a lot of time looking at their data”, AI engineers [have to] spend a lot of time reading agent logs and manually inspecting results.
Nick Bradford (Founder & CTO) in his blog post on LLM agents

Solution

Ellipsis turned to PromptLayer’s observability platform to streamline their debugging workflow:

All requests are logged to PromptLayer and enriched with metadata like execution IDs and user IDs. When an issue is reported, engineers simply paste the relevant workflow ID into PromptLayer as a metadata search filter. In just 3–4 clicks, they can pinpoint the exact prompt that caused the error.
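
The case study doesn't show Ellipsis's actual integration code, but a minimal sketch of this pattern with PromptLayer's Python SDK might look like the following. The model name, workflow and user identifiers, and prompt content are illustrative assumptions, not Ellipsis's real values.

```python
# Minimal sketch (not Ellipsis's actual code): log an LLM call through
# PromptLayer and tag it with metadata so it can later be found via
# the dashboard's metadata search filter.
from promptlayer import PromptLayer

promptlayer_client = PromptLayer(api_key="pl_...")  # your PromptLayer API key

# PromptLayer wraps the OpenAI client, so every request is logged automatically.
OpenAI = promptlayer_client.openai.OpenAI
client = OpenAI()

# return_pl_id=True also returns the PromptLayer request ID for this call.
response, pl_request_id = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": "Review this diff for bugs: ..."}],
    return_pl_id=True,
)

# Attach metadata (string values). Pasting this execution_id into the
# metadata search filter later surfaces exactly this request.
promptlayer_client.track.metadata(
    request_id=pl_request_id,
    metadata={
        "execution_id": "workflow-1234",  # hypothetical workflow/execution ID
        "user_id": "customer-42",         # hypothetical customer identifier
    },
)
```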

[Screenshot: Using advanced search capabilities in PromptLayer to jump to a specific user request.]

PromptLayer’s LLM-tailored observability UI lets engineers quickly view the full agent conversation history and identify where the agent may have gone off track. One more click opens the request in the Playground for interactive debugging and testing, enabling rapid iteration on prompts to resolve issues without having to copy and paste the prompt or re-run the workflow locally.

This tight integration between PromptLayer’s dense, informative UI, conversation history viewer, and Playground enables Ellipsis engineers to go from an error report to a functional code change in minutes. The alternative — building an in-house observability stack — would have diverted significant time and effort from their core product.

If I’m at my desk and see that somebody’s workflow went bad, it takes only 3 or 4 clicks. I go to PromptLayer, filter by the workflow ID, and I’m in. The information density means my time to being productive is really really good.
Nick Bradford (Founder & CTO)

Results

  • After adopting PromptLayer in October 2023, Ellipsis scaled from zero to over 500,000 requests and 80M+ daily tokens across 80+ customers within 6 months
  • PromptLayer enabled Ellipsis to reduce debugging time by 90%, making it 10x faster than their previous workflow
  • With PromptLayer, Ellipsis engineers can identify and resolve issues in just 3–4 clicks, a 90% reduction in the number of steps required compared to their previous workflow
  • Ellipsis avoided having to spend hundreds of engineering hours building out their own internal issue triaging solution

PromptLayer is the most popular platform for prompt engineering, management, and evaluation. Teams use PromptLayer to build AI applications with domain knowledge.

Made in NYC 🗽 Sign up for free at www.promptlayer.com 🍰