Reinforcement Learning with AWS DeepRacer

<p>In March 2016, Lee Sedol, the greatest Go player of the past decade, was defeated 4&ndash;1 by <a href="https://deepmind.com/research/case-studies/alphago-the-story-so-far" rel="noopener ugc nofollow" target="_blank"><strong>AlphaGo</strong></a>. Computers had beaten the best humans at chess before, but Go is on another level of complexity entirely. Do you know what&rsquo;s even <em>crazier</em>? The machine had only been learning how to play for <em>8 hours</em> before the match.</p> <p>In January 2019, the Google DeepMind team took AlphaGo to another level with <a href="https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii" rel="noopener ugc nofollow" target="_blank"><strong>AlphaStar</strong></a>. StarCraft is one of the most complex video games out there, and it had <em>never</em> been mastered by a computer until now. AlphaStar took on two world-class players, beating them both 5&ndash;0.</p> <p><a href="https://towardsdatascience.com/reinforcement-learning-with-aws-deepracer-99b5dd2557c8"><strong>Learn More</strong></a></p>
Tags: AWS DeepRacer