Sneak peek of topics you better know before taking the Associate ML Certification exam — Part 2: HyperOpt for Distributed Hyperparameter Tuning
<p>The two common traditional methods of hyperparameter tuning, grid search and random search, share one limitation: both treat each combination of hyperparameters independently. This makes the tuning process easy to parallelize, but it throws away everything learned from earlier evaluations. Wouldn’t it be nice to assess how the model performed with one set of parameters and use that knowledge to choose the next set, steadily improving model performance?</p>
<p>This blog will walk you through how HyperOpt lets you optimize ML models by applying the idea of sequential model-based optimization (SMBO).</p>
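<p>As a concrete sketch of what this looks like in practice, the snippet below runs HyperOpt with its Tree-structured Parzen Estimator (TPE) algorithm, which proposes each new set of hyperparameters based on the results of earlier trials. The dataset, model, and search range are illustrative assumptions for this example, not taken from the article.</p>

```python
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative toy dataset and model (assumptions for this sketch).
X, y = load_breast_cancer(return_X_y=True)

def objective(params):
    # HyperOpt minimizes the returned loss, so report negative accuracy.
    model = LogisticRegression(C=params["C"], max_iter=5000)
    accuracy = cross_val_score(model, X, y, cv=3).mean()
    return {"loss": -accuracy, "status": STATUS_OK}

# Search space: log-uniform range for the regularization strength C.
search_space = {"C": hp.loguniform("C", -4, 2)}

trials = Trials()
best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,   # SMBO: each trial is informed by prior results
    max_evals=20,
    trials=trials,
)
print(best)
```

<p>For distributed tuning on a Spark cluster, HyperOpt’s SparkTrials class can be used in place of Trials so that individual trials run in parallel across worker nodes while TPE still learns from completed results.</p>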
<p><a href="https://medium.com/@mojganmazouchi/sneak-peek-of-topics-you-better-know-before-taking-the-associate-ml-certification-exam-part-2-701efd6a5834"><strong>Click Here</strong></a></p>