Start-up Data Engineering bible: Ingestion (Part 2)
<h1>About me</h1>
<p><em>Hello, I’m Hugo Lu, a Data Engineer who’s also worked in Finance, and now CEO @ </em><a href="https://getorchestra.io/?utm_campaign=6CGK8EBVP4" rel="noopener ugc nofollow" target="_blank">Orchestra</a><em>. Orchestra is a data release pipeline tool that helps Data Teams release data into production reliably and efficiently. I write about what good looks like in Data.</em></p>
<h1>Introduction</h1>
<p>In the <a href="https://medium.com/p/2ec5330900a6" rel="noopener">last part of Ingestion</a> I covered a few ways of thinking about how to structure your data ingestion. We saw there are two main factors to consider: speed / latency and throughput / volume. There are also considerations around the destination you’re using: sending data to a data lake is different from moving it to a data warehouse, for example. In this article, we’ll cover some of the technical approaches for implementing these patterns. We’ll set streaming aside for now, as it deserves an article of its own.</p>