The Golden Age of Open Source in AI Is Coming to an End

I joined the Google Brain team in 2015, right as TensorFlow was open sourced. Contrary to popular belief, TensorFlow was **not** the secret sauce behind Google's success at that point in time. Only a handful of researchers had used it, and it took several years before it transformed Alphabet's most important properties in a material way.

However, TensorFlow's impact on the open source community was almost immediate. It ushered in an era of community-driven innovation that has directly contributed to the breakneck pace of AI advancements in the last couple of years. To be fair, TensorFlow was not the first open source deep learning library (Caffe, for example, was released in 2014), but it was the first backed by the credibility (and developer advocacy budget) of a company like Google.

But TensorFlow is just a library. Critically, you still need to provide your own data to actually train predictive **models**. To predict future housing prices, for instance, you need a dataset of historic housing prices and use TensorFlow to train a model on it; the model that comes out the other end encodes the aggregate knowledge of your data (a minimal code sketch of this idea appears below). A few years after open sourcing TensorFlow, Google took another fateful step that accelerated the path toward "free for all" AI. The decision to open source the BERT model in 2018 helped trigger an avalanche of large language models. Shortly thereafter, in 2019, OpenAI (still a non-profit at the time) open sourced their GPT-2 model. And just like that, open sourcing trained models became a thing.

[Read More](https://towardsdatascience.com/the-golden-age-of-open-source-in-ai-is-coming-to-an-end-7fd35a52b786)
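To make the "library plus your data" point concrete, here is a minimal sketch using the Keras API that ships with TensorFlow. The square-footage and price numbers are synthetic placeholders invented for illustration, not a real housing dataset, and the model is deliberately the simplest possible regression.

```python
# A minimal sketch of "library + your data = model".
# The housing numbers below are synthetic placeholders, not a real dataset.
import numpy as np
import tensorflow as tf

# Hypothetical historic data: square footage -> sale price (in $1000s).
sqft = np.array([[850.0], [1200.0], [1500.0], [2100.0], [2600.0]], dtype=np.float32)
price = np.array([[210.0], [290.0], [350.0], [480.0], [590.0]], dtype=np.float32)

# TensorFlow supplies the machinery; the knowledge comes from the data.
model = tf.keras.Sequential([
    tf.keras.layers.Normalization(axis=-1),  # scale the input feature
    tf.keras.layers.Dense(1),                # simple linear regression
])
model.layers[0].adapt(sqft)                  # learn mean/variance from the data
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(sqft, price, epochs=200, verbose=0)

# The trained model now encodes what the data "said" about prices.
print(model.predict(np.array([[1800.0]], dtype=np.float32)))  # estimate for 1800 sqft
```

The point of the sketch is that the library itself is generic: swap in a different dataset and the same few lines produce a completely different model, which is why open sourcing trained models (as with BERT and GPT-2) was a distinct and consequential step beyond open sourcing the library.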