Reinforcement Learning with AWS DeepRacer
<p>In March 2016, Lee Sedol, the greatest Go player of the past decade, was defeated 4–1 by <a href="https://deepmind.com/research/case-studies/alphago-the-story-so-far" rel="noopener ugc nofollow" target="_blank"><strong>AlphaGo</strong></a>. Computers had beaten the best humans at chess before, but Go is on another level of complexity. Do you know what’s even <em>crazier</em>? The machine had only been learning how to play for the past <em>8 hours</em> before the match.</p>
<p>In January 2019, the Google DeepMind team took things to another level with <a href="https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii" rel="noopener ugc nofollow" target="_blank"><strong>AlphaStar</strong></a>. StarCraft II is one of the most complex video games out there, and it had <em>never</em> been mastered by a computer until now. AlphaStar took on two world-class players and beat them both 5–0.</p>