Overcoming The Final Hurdle of Data Automation With Fewer Failures
<p>I’m the embodiment of the meme in which a developer spends hours automating a relatively simple task. In other words, while much of the world is increasingly apprehensive about replacing processes with AI, I’m still pro-automation.</p>
<p><img alt="Drake in a meme with accompanying text." src="https://miro.medium.com/v2/resize:fit:630/1*59tKIcZ5Te5brBNL1k_tGg.jpeg" style="height:700px; width:700px" /></p>
<p>Image courtesy of starecat.com.</p>
<p>And while I’ve developed some pipelines outside of work to serve my own needs or to help out a friend, I still struggled with one very important aspect of every ETL build.</p>
<p>If you’re reading this, I imagine you might struggle with the same issue.</p>
<p>Deployment.</p>
<p>To be clear, in nearly two years at work I’ve written thousands of lines of code, created roughly 50 pipelines, and built CI/CD processes ranging from cloud function deployment to Docker image updates.</p>
<p>But when I tried to replicate some of these processes in my own builds, I failed repeatedly.</p>
<p><a href="https://medium.com/pipeline-a-data-engineering-resource/overcoming-the-final-hurdle-of-data-automation-with-fewer-failures-1ff060dd2b37">Read More</a></p>