Presented by:

Richard Crowley

PlanetScale

Richard is an engineer, engineering leader, and recovering manager. His work focuses on cloud infrastructure, tooling, databases, and distributed systems. He is an engineer at PlanetScale. Prior to joining PlanetScale, he founded a consulting practice called Source & Binary and led operations engineering at Slack. He tolerates computers, likes bicycles, isn't a very good open-source maintainer, and lives in San Francisco with his wife and two kids.


When delivering software via the Web, we monitor 99th percentile latency, often without thinking about exactly what it means. Informally, it's almost the slowest experience anyone has using our product. It's (close to) the "worst-case scenario." Too often, we use these informal definitions to excuse our 99th percentile latency. We take comfort in our median latency, which is certainly lower and may even be very impressive! This is a disservice to our users.
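To make the contrast concrete, here is a small Python sketch (the sample numbers are hypothetical, not from the talk) showing how a median can look excellent while the 99th percentile is dominated by a handful of slow requests:

```python
import statistics

# Hypothetical latency samples: 99 fast requests and one slow outlier (ms).
samples_ms = [15.0] * 99 + [900.0]

median_ms = statistics.median(samples_ms)
# quantiles(n=100) returns 99 cut points; index 98 is the 99th percentile.
p99_ms = statistics.quantiles(samples_ms, n=100)[98]

print(f"median: {median_ms:.0f} ms")  # looks great
print(f"p99:    {p99_ms:.0f} ms")     # dominated by the slow outlier
```

One slow request in a hundred barely moves the median, but it is almost exactly what the 99th percentile reports, which is why the two numbers tell such different stories about the same traffic.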

In this talk, we're going to examine the implications of measuring latency at various layers of the stack, why the metrics most useful to developers often lie the most about user experience, and how we can use latency data more wisely in the development and performance optimization of our software.

This has implications all the way down to Postgres database operations. We'll demonstrate how very good median query latency or even OK 99th percentile query latency can conspire to provide a frustrating user experience for not just the unlucky few but for practically everyone.
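The mechanism behind "practically everyone" is fan-out: a single page load often issues many queries, so even a 1-in-100 slow query gets hit on a large fraction of page loads. A back-of-the-envelope sketch (assuming, hypothetically, independent queries):

```python
# Probability that a page load includes at least one query slower than
# the per-query p99, assuming each of n queries independently has a 1%
# chance of landing in the slowest 1%.
def p_page_hits_p99(n_queries: int, pct: float = 0.99) -> float:
    return 1.0 - pct ** n_queries

for n in (1, 10, 50, 100):
    print(f"{n:3d} queries/page -> {p_page_hits_p99(n):.1%} of page loads "
          f"see at least one p99-slow query")
```

At 50 queries per page, roughly 40% of page loads include a p99-slow query; at 100, it's most of them. The per-query p99 stops being the "worst-case scenario" and becomes close to the typical experience.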

Duration:
20 min
Conference:
Postgres Conference: 2026
Track:
Ops
Difficulty:
Medium