How three lines of configuration solved our gRPC scaling issues in Kubernetes

It all started with a question I asked our senior software engineer: "Set the speed of communication aside. Is it really better for you to build this communication on gRPC instead of REST?" The answer I did not want to hear came immediately: "Absolutely yes."

Before I asked that question, I had been watching our services behave strangely during rolling updates, and especially when scaling pods up. Most of our microservices have historically communicated over REST without any issues. We migrated some of these integrations to gRPC, mainly to get rid of the overhead that REST added. Lately, we have observed several issues that all pointed in the same direction: our gRPC communication. We had, of course, followed the recommended practices for running gRPC in Kubernetes without a service mesh, such as those described in this blog post (https://medium.com/swlh/balancing-grpc-traffic-in-k8s-without-a-service-mesh-7005be902ef3): a headless Service object on the server side, client-side "round-robin" load balancing with DNS discovery in gRPC, and so on. A sketch of that setup follows below.

Learn More: https://medium.com/jamf-engineering/how-three-lines-of-configuration-solved-our-grpc-scaling-issues-in-kubernetes-ca1ff13f7f06
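For context, here is a minimal sketch of that client-side load-balancing setup in Go. The service name, namespace, and port are placeholders, and it assumes the gRPC server is exposed through a headless Kubernetes Service (clusterIP: None) so that DNS resolution returns the individual pod IPs; it is not the exact code from our services.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Assumes the server sits behind a headless Service (clusterIP: None),
	// so the "dns:///" resolver returns every pod IP instead of a single
	// ClusterIP. Service name, namespace, and port are illustrative.
	target := "dns:///my-grpc-server.my-namespace.svc.cluster.local:50051"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, target,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Client-side round-robin load balancing across all resolved addresses.
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("failed to dial: %v", err)
	}
	defer conn.Close()

	// conn can now be used to create service stubs,
	// e.g. pb.NewMyServiceClient(conn).
}
```

The headless Service matters here because with a regular ClusterIP Service the gRPC client would open a single long-lived HTTP/2 connection to one pod, leaving the round-robin balancer nothing to spread traffic across.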
Tags: gRPC Scaling