Are Lakehouses a joke or is Databricks the endgame?

<h1>About me</h1>
<p><em>I&rsquo;m Hugo Lu &mdash; I started my career working in finance before moving to JUUL, a scale-up, and falling into data engineering. I headed up the Data function at London-based Fintech&nbsp;</em><a href="https://codat.io/?utm_campaign=hugolu" rel="noopener ugc nofollow" target="_blank"><em>Codat</em></a><em>. I&rsquo;m now CEO at&nbsp;</em><a href="https://getorchestra.io/?utm_campaign=2023_8_orchestra_medium_account_medium_social" rel="noopener ugc nofollow" target="_blank"><em>Orchestra</em></a><em>, a data release pipeline management platform that helps Data Teams release data into production reliably and efficiently.</em></p>
<h1>Introduction</h1>
<p>There&rsquo;s a notion in maths of a limit. For example, the sum of the series 1/2^n, taken from n = 0 upwards, tends to 2 as the number of terms tends to infinity. This is helpful when considering what the endgame is for data engineering tools and software.</p>
<p>You can apply this to batch processes, in particular the frequency with which you run them and their corresponding size or throughput. Like the partial sums of 1/2^n tending to 2, batch processes tend towards &ldquo;streaming&rdquo; use cases: treating data as streams and triggering operations as soon as new data becomes available. This is where the Lakehouse comes into play.</p>
<p><a href="https://medium.com/orchestras-data-release-pipeline-blog/are-lakehouses-a-joke-or-is-databricks-the-endgame-6d0e578ee0a4"><strong>Read More</strong></a></p>
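To make the limit analogy concrete, here is a minimal sketch (in Python, purely illustrative) of the partial sums of 1/2^n approaching 2 without ever reaching it:

```python
# Partial sums of the geometric series 1/2^n, starting at n = 0:
# 1 + 1/2 + 1/4 + ... Each extra term halves the remaining gap to 2.
def partial_sum(terms: int) -> float:
    return sum(0.5 ** n for n in range(terms))

for terms in (1, 2, 5, 10, 20):
    print(terms, partial_sum(terms))
# The sums get arbitrarily close to 2 but never exceed it -- just as
# ever-smaller, ever-more-frequent batches approach, without quite
# becoming, true streaming.
```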