<h1>ONNXRuntime Integration with Ubuntu and CMake</h1>
<p>Machine learning and deep learning applications have surged in recent years. With frameworks ranging from scikit-learn to PyTorch, TensorFlow, and Caffe, the options for training models are abundant. At the same time, the range of deployment targets has expanded to include mobile devices, desktop CPUs, GPUs, TPUs, and more. This creates a challenge: choosing the right tool to deploy a model trained with one framework onto a particular target.</p>
<p>Moreover, team members within an organization often have varying levels of familiarity with these tools, so bridging the gap between Research and Development (R&D) and Engineering teams can be daunting. Choosing the right deployment tool is pivotal for performance, and it requires checking compatibility between the training framework and the target device. Deploying a Caffe model on an Android device, for instance, may require converting the model to a format the device's runtime can execute, or bundling a compatible inference engine with the app.</p>
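<p>One common answer to this framework/target mismatch is to export models to ONNX and run them everywhere with ONNX Runtime. As a minimal sketch of the CMake side on Ubuntu, the fragment below links a C++ program against a prebuilt ONNX Runtime release. The install path <code>/opt/onnxruntime</code> and the source file <code>main.cpp</code> are assumptions for illustration; adjust them to wherever you extracted the ONNX Runtime archive and whatever your entry point is called.</p>

```cmake
cmake_minimum_required(VERSION 3.16)
project(onnx_inference_demo CXX)

# Assumed install prefix for ONNX Runtime: point this at the directory
# where the prebuilt release (or your own build) was extracted.
set(ONNXRUNTIME_ROOT "/opt/onnxruntime" CACHE PATH "ONNX Runtime install dir")

add_executable(onnx_inference_demo main.cpp)

# Headers live under include/, the shared library under lib/ in the
# official Linux release archives.
target_include_directories(onnx_inference_demo PRIVATE
    ${ONNXRUNTIME_ROOT}/include)

target_link_libraries(onnx_inference_demo PRIVATE
    ${ONNXRUNTIME_ROOT}/lib/libonnxruntime.so)
```

<p>With this layout, <code>cmake -B build && cmake --build build</code> should produce the executable, provided the library path above matches your extracted archive.</p>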
<p><a href="https://medium.com/@massimilianoriva96/onnxruntime-integration-with-ubuntu-and-cmake-5d7af482136a">Website</a></p>