Hugging Face has written a new ML framework in Rust, now open-sourced!
<p>Recently, Hugging Face open-sourced Candle, a heavyweight ML framework that departs from the usual Python approach to machine learning: it is written in Rust, with a focus on performance (including GPU support) and ease of use.</p>
<p>According to Hugging Face, Candle’s core goal is to make serverless inference practical. Full machine learning frameworks like PyTorch are very large, which makes it slow to spin up instances on a cluster; Candle instead allows the deployment of lightweight standalone binaries. Candle also lets users remove Python from production workloads entirely: Python overhead can have a serious impact on performance, and the <a href="https://www.backblaze.com/blog/the-python-gil-past-present-and-future/" rel="noopener ugc nofollow" target="_blank">GIL</a> is a known headache.</p>
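<p>For a sense of what that looks like, here is a minimal sketch adapted from Candle’s published quick-start. It assumes the <code>candle-core</code> crate is added as a dependency; the exact crate version is an assumption, not something stated in this article.</p>

```rust
// Cargo.toml (assumed): candle-core = "0.3"
use candle_core::{Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run on CPU; with a CUDA-enabled build, Device::new_cuda(0)? selects a GPU.
    let device = Device::Cpu;

    // Two random tensors, normally distributed (mean 0, std 1).
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;

    // Matrix multiplication: (2x3) x (3x4) -> (2x4).
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

<p>The whole program compiles to a single self-contained binary, which is the point Hugging Face is making about serverless deployment: no Python interpreter or multi-gigabyte framework install needs to ship with it.</p>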
<h1>Is Rust really viable for ML?</h1>
<p>PyTorch exposes a Python-first API (its core is implemented in C++, but users interact with it almost entirely through Python), which makes getting started very fast. Python itself is a simple, easy-to-learn language, suitable for both beginners and professional developers.</p>
<p>However, the Python-based PyTorch workflow has obvious drawbacks. Compared with static-graph frameworks such as TensorFlow, Python-driven execution can cost performance in some cases; Python’s Global Interpreter Lock (GIL) limits multi-threaded performance, especially for CPU-intensive tasks; and Python’s interpreted nature introduces runtime overhead of its own. In addition, deploying Python-based PyTorch models to production may require extra steps that compiled languages avoid.</p>
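<p>The GIL point is where Rust differs most sharply: CPython allows only one thread to execute Python bytecode at a time, so CPU-bound threads effectively run serially. Rust threads are plain OS threads with no interpreter lock. As an illustration (plain standard-library Rust, not Candle code), here is a CPU-bound sum of squares split across threads; each chunk genuinely runs on its own core:</p>

```rust
use std::thread;

// Sum of squares computed across several OS threads. Rust threads run
// truly in parallel on multiple cores; there is no interpreter lock
// serializing CPU-bound work the way CPython's GIL does.
fn parallel_sum_of_squares(data: &[u64], chunk_size: usize) -> u64 {
    let handles: Vec<_> = data
        .chunks(chunk_size)
        .map(|chunk| {
            let chunk = chunk.to_vec(); // move an owned copy into the thread
            thread::spawn(move || chunk.iter().map(|x| x * x).sum::<u64>())
        })
        .collect();
    // Join every worker and add up the partial sums.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    // Four threads of 250 elements each; prints 333833500.
    println!("{}", parallel_sum_of_squares(&data, 250));
}
```

<p>The same structure written with Python’s <code>threading</code> module would gain nothing on CPU-bound work, which is why Python code typically falls back to multiprocessing or native extensions instead.</p>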
<p><a href="https://medium.com/@Aaron0928/hugging-face-has-written-a-new-ml-framework-in-rust-now-open-sourced-1afea2113410">Original article</a></p>