Claude Code has changed how we do engineering

Prioritization is different. Our company has shipped far faster in the last two months, and multiple customers noticed. It helped us build a “Just Do It” culture and kill prioritization paralysis. Claude Code (or OpenAI Codex, Cursor Agents) is an AI coding tool so good it made us rethink…

Grok Code Fast 1: First Reactions

256k context and ~92 tokens/sec: xAI's coding model lands in GitHub Copilot public preview, aiming for speed at pennies per million tokens. This post covers what it is, why it's fast and cheap, best uses, where to try it, and what's next…

xAI's Prompt Engineering Guide for grok-code-fast-1

xAI recently released a prompt engineering guide for its new grok-code-fast-1 model. These guides are a great way to become a better day-to-day prompt engineer, not just with Grok. Below are my takeaways after reading it…

Agent Client Protocol: The LSP for AI Coding Agents

What if switching between AI coding assistants was as easy as changing text editors? That's the promise of the Agent Client Protocol (ACP), a new open standard that aims to do for AI agents what the Language Server Protocol did for programming languages. Just as LSP decoupled language…

LLM Idioms

An LLM idiom is a pattern or format that models understand implicitly: things their neural nets have built logic and world models around, without needing explanation. These are the native languages of AI systems. To me, this is one of the most important concepts in prompt engineering. I don't…

How I Automated Our Monthly Product Updates with Claude Code

From tedious manual work to comprehensive automated analysis in one afternoon. If you're like me, you probably dread writing those monthly product update emails. You know the ones: where you have to comb through dozens (or hundreds) of commits across multiple repositories, trying…

Why LLMs Get Distracted and How to Write Shorter Prompts

Context Rot: How modern LLMs quietly degrade with longer prompts, and what you can do about it. If you've been stuffing…
