What is Test Time Compute?

“More compute!” is a common refrain in discussions about improving LLM performance and capability. Just scan a few recent headlines to see the resources major companies are willing to pour into acquiring it. In the broad sense, this refers to more and better hardware used in model training.

What Are Prompt Evaluations?

What makes one prompt more effective than another? And how can we quantify and document those differences? That's where prompt evaluations come in. Prompt evaluations are a way to assess and refine the inputs (prompts) you provide to an AI model, resulting in improved performance.
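As a minimal sketch of the idea, a prompt evaluation can be as simple as running each candidate prompt over a small set of test cases and scoring the outputs. All names and the scoring rule below are illustrative assumptions, not a specific library or API; the model call is stubbed out so the example is self-contained.

```python
# Minimal prompt-evaluation sketch (hypothetical names throughout).
# run_model is a stand-in for a real LLM API call.

def run_model(prompt: str, case_input: str) -> str:
    """Stub that mimics a model whose behavior depends on the prompt."""
    if "uppercase" in prompt.lower():
        return case_input.upper()
    return case_input

def evaluate(prompt: str, cases: list[tuple[str, str]]) -> float:
    """Score a prompt: fraction of test cases matching the expected output."""
    hits = sum(run_model(prompt, inp) == expected for inp, expected in cases)
    return hits / len(cases)

# Each case pairs an input with the output we expect from a good prompt.
cases = [("hello", "HELLO"), ("world", "WORLD")]
prompts = ["Repeat the input.", "Return the input in uppercase."]

scores = {p: evaluate(p, cases) for p in prompts}
best_prompt = max(scores, key=scores.get)
```

In practice the exact-match check would be replaced with whatever grading fits the task (string similarity, a rubric, or a model-graded comparison), but the loop stays the same: fixed test cases, candidate prompts, a score per prompt.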

AI Agents vs. Workflows

In the fast-evolving world of AI, the debate between agents and workflows is becoming increasingly relevant. Businesses and their teams of developers and prompt engineers are exploring how these two approaches can enhance their projects. But what exactly are agents and workflows, and which is right for your use case?
