Navigating the Future

<p>In my recent work on <a href="https://natecibik.medium.com/multiformer-51b81df826b7" rel="noopener">Multiformer</a>, I explored the power of lightweight <a href="https://natecibik.medium.com/the-rise-of-vision-transformers-f623c980419f" rel="noopener">hierarchical vision transformers</a> to efficiently perform simultaneous learning and inference on multiple computer vision tasks essential to robotic perception. This “shared trunk” design, in which a common backbone feeds features to multiple task heads, has become a popular approach to multi-task learning, particularly in autonomous robotics: it has repeatedly been demonstrated that learning a feature space useful for multiple tasks not only yields a single model that can perform all of those tasks from a single input, but also improves performance on each individual task by leveraging complementary knowledge from the others (a sketch of the pattern follows below).</p> <p><a href="https://towardsdatascience.com/navigating-the-future-62ea60f27046"><strong>Read More</strong></a></p>
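<p>As a rough illustration of the shared-trunk pattern (not the Multiformer architecture itself), the PyTorch sketch below uses a stand-in CNN backbone and two hypothetical task heads, one for semantic segmentation and one for depth; all names and shapes here are assumptions for demonstration only. Each head reads the same feature map from a single forward pass through the trunk, which is what allows the tasks to share learned representations.</p>
<pre>
import torch
import torch.nn as nn

class SharedTrunkModel(nn.Module):
    """Minimal "shared trunk" multi-task model: one backbone, several task heads.

    The backbone here is a small stand-in CNN; in practice it would be a
    hierarchical vision transformer producing features shared by all heads.
    """
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # Shared backbone: maps an RGB image to a downsampled feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Task heads consume the same shared features.
        self.seg_head = nn.Conv2d(128, num_classes, kernel_size=1)  # segmentation logits
        self.depth_head = nn.Conv2d(128, 1, kernel_size=1)          # monocular depth estimate

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        feats = self.backbone(x)  # one forward pass through the trunk
        return {
            "segmentation": self.seg_head(feats),
            "depth": self.depth_head(feats),
        }

model = SharedTrunkModel()
out = model(torch.randn(1, 3, 128, 128))
print({k: v.shape for k, v in out.items()})
</pre>
<p>During training, each head's loss would typically be computed against its own labels and the losses summed (possibly weighted) before backpropagation, so gradients from every task shape the shared feature space.</p>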