Semantic Segmentation with Open3D-ML, PyTorch Backend, and a Custom Dataset
<p>As part of my experimentation with Open3D-ML for Point Clouds, I wrote articles explaining how to install this library with <a href="https://medium.com/@kidargueta/testing-open3d-ml-for-3d-object-detection-and-segmentation-df125e7a8283?sk=ebf7a48cedb1499ddb13e2846e49358f" rel="noopener">TensorFlow</a> and <a href="https://medium.com/@kidargueta/installing-open3d-ml-for-3d-computer-vision-with-pytorch-d640a6862e19?sk=4d899c1dde4126ec011cee7273a106c2" rel="noopener">PyTorch</a> support. To test the installation, I explained how to run a simple Python script to visualize a labeled Semantic Segmentation dataset called <a href="http://www.semantic-kitti.org/" rel="noopener ugc nofollow" target="_blank">SemanticKITTI</a>. In this article, I go over the steps I followed to run inference on any Point Cloud, including the test split of SemanticKITTI as well as my own private dataset.</p>
<p>The rest of this article assumes that you have successfully installed and tested Open3D-ML with the PyTorch backend by following my previous article. Having done so also means you have downloaded the SemanticKITTI dataset. To run a Semantic Segmentation model on unlabeled data, you need to load an Open3D-ML pipeline. The pipeline consists of a Semantic Segmentation model, a dataset, and any pre- and post-processing steps the model requires. Open3D-ML comes with modules and configuration files to easily load and run popular pipelines, as sketched below.</p>
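<p>As a rough illustration, here is a minimal sketch of how such a pipeline can be assembled with the PyTorch backend, using the RandLA-Net configuration that ships with Open3D-ML. The config path, checkpoint file name, and dataset path are placeholders for whatever your local setup uses, and the checkpoint must be downloaded separately (for example from the Open3D-ML model zoo).</p>
<pre>
# A minimal sketch: load a Semantic Segmentation pipeline and run inference
# on one frame of the SemanticKITTI test split. Paths below are placeholders.
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d

# Configuration file shipped with Open3D-ML (path assumes a cloned repo)
cfg_file = "Open3D-ML/ml3d/configs/randlanet_semantickitti.yml"
cfg = _ml3d.utils.Config.load_from_file(cfg_file)

# Build the model and dataset from the configuration
model = ml3d.models.RandLANet(**cfg.model)
cfg.dataset["dataset_path"] = "/path/to/SemanticKITTI"  # placeholder path
dataset = ml3d.datasets.SemanticKITTI(cfg.dataset.pop("dataset_path", None),
                                      **cfg.dataset)

# Assemble the pipeline: model + dataset + pre/post-processing
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset,
                                               device="gpu", **cfg.pipeline)

# Load pretrained weights (download the checkpoint beforehand)
pipeline.load_ckpt(ckpt_path="randlanet_semantickitti.pth")

# Run inference on the first frame of the test split
test_split = dataset.get_split("test")
data = test_split.get_data(0)
result = pipeline.run_inference(data)
# result holds per-point predictions, e.g. result["predict_labels"]
</pre>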