20. December 2024
With the release of the Segment Anything Model (SAM) by Meta AI Research last year, the lie of the land in Computer
Vision changed quite substantially: images could suddenly be segmented easily, with strong results even zero-shot.
With the release of SAM2 earlier this year, I wanted to get hands-on and experiment with these models myself.
This post walks you through how SAM2 can be used in practice, provides a mini analysis of segmentation results, and
comes with code so that you can explore further if you want to. The approach could be extended to interesting use
cases, such as facilitating object grasping in robotic systems, adding or removing branded products in marketing
images, or mapping changes in forested areas over time from satellite imagery for environmental monitoring.