panoptic-segment-anything: combining Segment Anything (SAM) with Grounding DINO for zero-shot object detection and CLIPSeg for zero-shot segmentation

Zero-shot panoptic segmentation using SAM

Open In Colab

This is a proof of concept for zero-shot panoptic segmentation using the Segment Anything Model (SAM).

SAM cannot immediately achieve panoptic segmentation on its own due to two limitations:

  * SAM is promptable: it needs input prompts (points or boxes) and cannot detect objects from category names by itself.
  * SAM's masks are class-agnostic: it does not assign semantic labels to the masks it produces.

To solve these challenges, we use the following additional models:

  * Grounding DINO, a zero-shot object detector, to produce boxes for the "thing" categories (categories with instances) from text prompts.
  * CLIPSeg, a zero-shot segmentation model, to produce rough masks for the "stuff" (background) categories.

You can try out the pipeline by running the notebook in Colab or through the Gradio demo on Hugging Face Spaces.

The notebook also shows how the predictions from this pipeline can be uploaded to Segments.ai as pre-labels, where you can adjust them to obtain perfect labels for fine-tuning your segmentation model.
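As a sketch of the upload step, the snippet below builds a pre-label attributes dictionary mapping instance ids to category ids and, if an API key is available, sends it with the Segments.ai Python client. The exact attributes shape and the `add_label` call are assumptions based on the `segments-ai` SDK, not code from this repo; the sample UUID is a placeholder.

```python
import os

def build_prelabel_attributes(instance_category_ids):
    """Build a Segments.ai-style segmentation label attributes dict
    (assumed format) mapping each instance id to its category id."""
    return {
        "format_version": "0.1",
        "annotations": [
            {"id": inst_id, "category_id": cat_id}
            for inst_id, cat_id in instance_category_ids.items()
        ],
    }

# Three predicted instances: two of category 3 ("thing"), one of category 7.
attrs = build_prelabel_attributes({1: 3, 2: 3, 3: 7})

# Upload as a pre-label so it can be corrected in the Segments.ai editor.
# The client call below is an assumption based on the segments-ai SDK.
if os.environ.get("SEGMENTS_API_KEY"):
    from segments import SegmentsClient

    client = SegmentsClient(os.environ["SEGMENTS_API_KEY"])
    client.add_label(
        "your-sample-uuid",          # placeholder sample UUID
        "ground-truth",              # labelset name
        attrs,
        label_status="PRELABELED",   # mark as a pre-label, not final
    )
```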

🖼️ Results


🏗️ Pipeline

Our Frankenstein-ish pipeline looks as follows:

  1. Use Grounding DINO to detect the "thing" categories (categories with instances).
  2. Get instance segmentation masks for the detected boxes using SAM.
  3. Use CLIPSeg to obtain rough segmentation masks of the "stuff" categories.
  4. Sample points in these rough segmentation masks and feed them to SAM to get fine segmentation masks.
  5. Combine the background "stuff" masks with the foreground "thing" masks to obtain a panoptic segmentation label.
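Steps 4 and 5 above can be sketched with NumPy. The function names, the number of sampled points, and the "later masks overwrite earlier ones" merge order are illustrative assumptions, not the repo's actual implementation:

```python
import numpy as np

def sample_points(rough_mask, num_points=8, seed=0):
    """Step 4: sample point prompts from a rough binary mask.
    The sampled (x, y) points would be fed to SAM as positive prompts."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(rough_mask)
    idx = rng.choice(len(ys), size=min(num_points, len(ys)), replace=False)
    return np.stack([xs[idx], ys[idx]], axis=1)  # rows of (x, y)

def combine_masks(stuff_masks, thing_masks):
    """Step 5: combine 'stuff' and 'thing' masks into one panoptic map.
    Masks are painted in order, so the foreground 'thing' instances
    (painted last) take precedence over background 'stuff' regions."""
    all_masks = stuff_masks + thing_masks
    h, w = all_masks[0].shape
    panoptic = np.zeros((h, w), dtype=np.int32)  # 0 = unlabeled
    for segment_id, mask in enumerate(all_masks, start=1):
        panoptic[mask.astype(bool)] = segment_id
    return panoptic

# Toy example: a full-image 'stuff' background and one 'thing' instance.
stuff = [np.ones((4, 4), dtype=bool)]
thing = [np.zeros((4, 4), dtype=bool)]
thing[0][1:3, 1:3] = True
panoptic = combine_masks(stuff, thing)  # 1 = stuff, 2 = thing instance
```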

💘 Acknowledgements