Continuous Performance Improvement of HTTP API

In my previous post, I detailed a few code tricks to improve backend performance. How did I know where to focus and what to optimize, though? After all, bringing Cython and other low-level gizmos to the party should be backed by solid reasoning.

I work at Athenian. Athenian offers a SaaS that helps engineering leaders build a continuous improvement software development culture. We have pretty strict performance targets dictated by the UX. It’s hard to achieve great P95 response times without proper tooling, so we’ve surrounded ourselves with high-quality apps and services:

  • Sentry Distributed Tracing allows us to investigate why a particular API request executed slowly in production. This tool works at the Python level.
  • Prodfiler gives an independent zoom into the native CPU performance, including all the shared libraries.
  • py-spy is an excellent low-overhead Python profiler by Ben Frederickson.
  • Prometheus + Grafana help us monitor the current situation and trigger performance disaster recovery.
  • Google log-based metrics augment the previous toolchain by indicating an elevated frequency of important operational events.
  • Google Cloud SQL Insights is a must-have managed PostgreSQL performance monitor on the individual query level.
  • explain.tensor.ru is my favorite PostgreSQL execution plan visualizer. It offers many automated hints that are always relevant.

Today’s post illustrates the weekly routine to identify slow spots and speed up things. In particular, I will demonstrate the usefulness of Sentry traces, py-spy, and Prodfiler.


Tags: API HTTP