Why More Is More (in Artificial Intelligence)
<p>Deep neural networks (DNNs) have profoundly transformed the landscape of machine learning, to the point of becoming almost synonymous with the broader fields of artificial intelligence and machine learning. Yet, their rise would have been unimaginable without their partner-in-crime: stochastic gradient descent (SGD).</p>
<p>SGD, along with the optimizers derived from it, forms the core of most modern learning algorithms. At its heart, the concept is straightforward: compute the task&rsquo;s loss on the training data, compute the gradients of this loss with respect to the network&rsquo;s parameters, and then adjust the parameters in the direction that decreases the loss.</p>
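<p>To make that recipe concrete, here is a minimal sketch of an SGD training loop for a toy linear model in plain NumPy. The data, learning rate, and batch size are illustrative choices, not taken from the article; in practice, deep learning frameworks compute step 2 for you via automatic differentiation.</p>
<pre>
import numpy as np

# Toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=256)

# Parameters of a one-feature linear model.
w, b = 0.0, 0.0
lr = 0.1          # learning rate
batch_size = 32

for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]

        # 1) Compute the loss on a mini-batch of training data (mean squared error).
        pred = w * xb + b
        err = pred - yb

        # 2) Compute the gradients of the loss with respect to the parameters.
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)

        # 3) Move the parameters against the gradient to reduce the loss.
        w -= lr * grad_w
        b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach w = 2, b = 1
</pre>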
<p>It sounds simple, but in practice it has proven immensely powerful: SGD can find solutions for all kinds of complex problems and training data, provided it is paired with a sufficiently expressive architecture. It&rsquo;s particularly good at finding parameter sets that make the network perform perfectly on the training data, something called the <strong>interpolation regime</strong>. But under which conditions are neural networks thought to <strong>generalize well</strong>, meaning that they perform well on unseen test data?</p>
<p><img alt="" src="https://miro.medium.com/v2/resize:fit:700/1*RRw0KXtmqiz-51FQsliliA.png" style="height:700px; width:700px" /></p>
<p>The quest to generalize lies at the heart of machine learning. Envisioned by DALL-E.</p>
<p>In some ways, it&rsquo;s almost too powerful: SGD&rsquo;s abilities aren&rsquo;t limited to training data that can be expected to lead to good generalization. It has been shown, e.g. <a href="https://arxiv.org/abs/1611.03530" rel="noopener ugc nofollow" target="_blank">in this influential paper</a>, that standard networks trained with SGD can fit training data perfectly even when the labels have been replaced with completely random ones.</p>
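<p>For illustration, here is a minimal sketch of that kind of randomization experiment, assuming a small PyTorch MLP and synthetic data. The architecture, sizes, and hyperparameters are hypothetical choices for the sketch, not those used in the paper.</p>
<pre>
import torch
from torch import nn

# Random inputs with labels that carry no signal at all,
# in the spirit of the randomization test in arXiv:1611.03530.
torch.manual_seed(0)
X = torch.randn(1024, 32)             # random "inputs"
y = torch.randint(0, 10, (1024,))     # completely random class labels

model = nn.Sequential(nn.Linear(32, 1024), nn.ReLU(), nn.Linear(1024, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Full-batch gradient steps for brevity; mini-batches would work the same way.
for step in range(3000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean()
# Training accuracy typically climbs far above the 10% chance level
# toward 100%, even though the labels are pure noise.
print(f"train accuracy on random labels: {acc:.2%}")
</pre>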
<p><a href="https://towardsdatascience.com/why-more-is-more-in-deep-learning-b28d7cedc9f5"><strong>Visit Now</strong></a></p>