How We Scaled to 100 Million Active Users Using Kafka and Golang — Eventual Consistency
<p>We have reached an era in which the most popular startups gain millions of users in less than a year. During my time as a software developer, when I had the privilege of working on a couple of them, I saw that the most common bottleneck in a backend service is I/O overhead. In this article, we will discuss the <strong>Eventual Consistency</strong> technique and how to overcome I/O bottlenecks at scale by utilizing Kafka.</p>
<h1>How simplicity allows Kafka to scale up to millions of messages per second</h1>
<p>A Kafka cluster consists of multiple servers, called brokers in the Kafka vocabulary, which together host one or more topics.</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:620/1*gQYItzs6sjU95oCHhnPwxg.png" style="height:478px; width:620px" /></p>
<p><strong>What is a topic in Kafka?</strong> A topic is a named stream of related messages. A message can be an event, a JSON document, or any payload serialized as bytes. For instance, we had topics for published posts, comments, and likes, as well as action logs used to study user behavior.</p>
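<p>Conceptually, a topic is just an append-only log of byte messages that producers write to and consumers read from by offset. The sketch below models that idea in plain Go; the <code>Topic</code> struct and <code>Publish</code> method are illustrative names, not part of any Kafka client library.</p>
<pre><code>package main

import (
	"encoding/json"
	"fmt"
)

// Topic models Kafka's core abstraction: a named, append-only
// stream of byte messages. Anything serializable — events, JSON
// documents, raw bytes — can be published to it. (Illustrative
// in-memory sketch, not the actual broker implementation.)
type Topic struct {
	Name     string
	Messages [][]byte
}

// Publish appends a message to the topic's log and returns its
// offset, the position consumers use to track their progress.
func (t *Topic) Publish(msg []byte) int {
	t.Messages = append(t.Messages, msg)
	return len(t.Messages) - 1
}

func main() {
	posts := &Topic{Name: "published-posts"}

	// Messages are just bytes; here we publish a JSON-encoded event.
	event, _ := json.Marshal(map[string]string{"postID": "42", "action": "publish"})
	offset := posts.Publish(event)

	fmt.Printf("topic=%s offset=%d payload=%s\n", posts.Name, offset, event)
}
</code></pre>
<p>In a real deployment, <code>Publish</code> would be a network call to a broker via a Kafka client, but the offset-per-message contract is the same.</p>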
<p><strong>How is a topic structured within a cluster? </strong>Topics have two main properties: the number of partitions and the replication factor. The replication factor determines how many copies of a topic's data are kept, increasing resiliency in case of broker failures. Before describing partitions, let&rsquo;s follow a message&rsquo;s journey from its publication to a topic until it is received and committed by a consumer.</p>
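<p>A key step in that journey is deciding which partition a message lands on. Kafka&rsquo;s default partitioner hashes the message key modulo the partition count, so all messages sharing a key stay ordered on one partition. The sketch below illustrates the same idea with Go&rsquo;s standard-library FNV-1a hash (Kafka itself uses murmur2; <code>partitionFor</code> is a hypothetical helper, not a Kafka client API).</p>
<pre><code>package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor picks a partition for a message key by hashing the
// key and taking it modulo the partition count. The same key always
// maps to the same partition, which preserves per-key ordering.
// (FNV-1a keeps this sketch dependency-free; Kafka's default
// partitioner uses murmur2, so the exact mapping differs.)
func partitionFor(key string, numPartitions int) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % uint32(numPartitions))
}

func main() {
	const partitions = 6
	for _, key := range []string{"user-1", "user-2", "user-1"} {
		fmt.Printf("key=%s -&gt; partition %d\n", key, partitionFor(key, partitions))
	}
}
</code></pre>
<p>Because routing depends only on the key, "user-1" hits the same partition on both publishes, while different keys spread load across the remaining partitions.</p>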
<p><strong><a href="https://itnext.io/how-we-scaled-to-100-million-active-users-using-kafka-and-golang-eventual-consistency-6241cfeba7e8">Read More</a></strong></p>