mmsegmentation: MMLab's semantic segmentation toolbox

        When semantic segmentation comes up, people often mention only the early classics, such as PSPNet, DeepLabv3, DeepLabv3+, and U-Net. Yet the field has developed rapidly over the past five or six years and produced many new algorithms. This post explores mmsegmentation, a toolbox that collects a large number of recent semantic segmentation algorithms; later I will walk through how to use it in detail. In short, if you follow the latest progress in semantic segmentation, this toolbox is worth having.
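For orientation, running inference with a pretrained model takes only a few lines in mmsegmentation 1.x. This is a hedged sketch: the API names below follow the 1.x documentation, but the config and checkpoint paths are placeholders you would substitute with a real pair from the model zoo.

```python
from mmseg.apis import init_model, inference_model, show_result_pyplot

# Placeholders: substitute a real config file and its matching checkpoint.
config_file = 'configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py'
checkpoint_file = 'pspnet_r50-d8_512x1024_40k_cityscapes.pth'

# Build the model and load weights (use device='cpu' if no GPU is available).
model = init_model(config_file, checkpoint_file, device='cuda:0')

# Run inference on one image; the result carries the predicted label map.
result = inference_model(model, 'demo.jpg')

# Render the prediction over the input and save it to disk.
show_result_pyplot(model, 'demo.jpg', result, show=False, out_file='seg.png')
```

Requires an installed mmsegmentation plus downloaded weights, so treat it as a usage fragment rather than a copy-paste recipe.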


1. Currently supported algorithms:


- [x] [SAN (CVPR'2023)](configs/san/)
- [x] [VPD (ICCV'2023)](configs/vpd)
- [x] [DDRNet (T-ITS'2022)](configs/ddrnet)
- [x] [PIDNet (ArXiv'2022)](configs/pidnet)
- [x] [Mask2Former (CVPR'2022)](configs/mask2former)
- [x] [MaskFormer (NeurIPS'2021)](configs/maskformer)
- [x] [K-Net (NeurIPS'2021)](configs/knet)
- [x] [SegFormer (NeurIPS'2021)](configs/segformer)
- [x] [Segmenter (ICCV'2021)](configs/segmenter)
- [x] [DPT (ArXiv'2021)](configs/dpt)
- [x] [SETR (CVPR'2021)](configs/setr)
- [x] [STDC (CVPR'2021)](configs/stdc)
- [x] [BiSeNetV2 (IJCV'2021)](configs/bisenetv2)
- [x] [CGNet (TIP'2020)](configs/cgnet)
- [x] [PointRend (CVPR'2020)](configs/point_rend)
- [x] [DNLNet (ECCV'2020)](configs/dnlnet)
- [x] [OCRNet (ECCV'2020)](configs/ocrnet)
- [x] [ISANet (ArXiv'2019/IJCV'2021)](configs/isanet)
- [x] [Fast-SCNN (ArXiv'2019)](configs/fastscnn)
- [x] [FastFCN (ArXiv'2019)](configs/fastfcn)
- [x] [GCNet (ICCVW'2019/TPAMI'2020)](configs/gcnet)
- [x] [ANN (ICCV'2019)](configs/ann)
- [x] [EMANet (ICCV'2019)](configs/emanet)
- [x] [CCNet (ICCV'2019)](configs/ccnet)
- [x] [DMNet (ICCV'2019)](configs/dmnet)
- [x] [Semantic FPN (CVPR'2019)](configs/sem_fpn)
- [x] [DANet (CVPR'2019)](configs/danet)
- [x] [APCNet (CVPR'2019)](configs/apcnet)
- [x] [NonLocal Net (CVPR'2018)](configs/nonlocal_net)
- [x] [EncNet (CVPR'2018)](configs/encnet)
- [x] [DeepLabV3+ (CVPR'2018)](configs/deeplabv3plus)
- [x] [UPerNet (ECCV'2018)](configs/upernet)
- [x] [ICNet (ECCV'2018)](configs/icnet)
- [x] [PSANet (ECCV'2018)](configs/psanet)
- [x] [BiSeNetV1 (ECCV'2018)](configs/bisenetv1)
- [x] [DeepLabV3 (ArXiv'2017)](configs/deeplabv3)
- [x] [PSPNet (CVPR'2017)](configs/pspnet)
- [x] [ERFNet (T-ITS'2017)](configs/erfnet)
- [x] [UNet (MICCAI'2016/Nat. Methods'2019)](configs/unet)
- [x] [FCN (CVPR'2015/TPAMI'2017)](configs/fcn)

The corresponding papers, by method and year:

| Method | Year | Title |
| --- | --- | --- |
| dsdl |  | Standard Description Language for DataSet |
| san | 2023 | Side Adapter Network for Open-Vocabulary Semantic Segmentation |
| unet | 2015 | U-Net: Convolutional Networks for Biomedical Image Segmentation |
| erfnet | 2017 | ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation |
| fcn | 2017 | Fully Convolutional Networks for Semantic Segmentation |
| pspnet | 2017 | Pyramid Scene Parsing Network |
| bisenetv1_r18-d32 | 2018 | BiSeNet: Bilateral Segmentation Network for Real-Time Semantic Segmentation |
| encnet | 2018 | Context Encoding for Semantic Segmentation |
| icnet_r50-d8 | 2018 | ICNet for Real-Time Semantic Segmentation on High-Resolution Images |
| nonlocal | 2018 | Non-local Neural Networks |
| psanet | 2018 | PSANet: Point-wise Spatial Attention Network for Scene Parsing |
| upernet | 2018 | Unified Perceptual Parsing for Scene Understanding |
| ann | 2019 | Asymmetric Non-local Neural Networks for Semantic Segmentation |
| apcnet | 2019 | Adaptive Pyramid Context Network for Semantic Segmentation |
| ccnet | 2019 | CCNet: Criss-Cross Attention for Semantic Segmentation |
| danet | 2019 | Dual Attention Network for Scene Segmentation |
| emanet_r50-d8 | 2019 | Expectation-Maximization Attention Networks for Semantic Segmentation |
| fastfcn | 2019 | FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation |
| fast_scnn | 2019 | Fast-SCNN: Fast Semantic Segmentation Network |
| hrnet | 2019 | Deep High-Resolution Representation Learning for Human Pose Estimation |
| gcnet | 2019 | GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond |
| sem_fpn | 2019 | Panoptic Feature Pyramid Networks |
| cgnet | 2020 | CGNet: A Light-weight Context Guided Network for Semantic Segmentation |
| dnlnet | 2020 | Disentangled Non-Local Neural Networks |
| ocrnet | 2020 | Object-Contextual Representations for Semantic Segmentation |
| pointrend | 2020 | PointRend: Image Segmentation as Rendering |
| setr | 2020 | Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers |
| bisenetv2 | 2021 | BiSeNet V2: Bilateral Network with Guided Aggregation for Real-Time Semantic Segmentation |
| dpt | 2021 | Vision Transformers for Dense Prediction |
| isanet_r50-d8 | 2021 | Interlaced Sparse Self-Attention for Semantic Segmentation |
| knet | 2021 | K-Net: Towards Unified Image Segmentation |
| mae | 2021 | Masked Autoencoders Are Scalable Vision Learners |
| mask2former | 2021 | Masked-Attention Mask Transformer for Universal Image Segmentation |
| maskformer | 2021 | Per-Pixel Classification Is Not All You Need for Semantic Segmentation |
| segformer | 2021 | SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers |
| segmenter | 2021 | Segmenter: Transformer for Semantic Segmentation |
| stdc | 2021 | Rethinking BiSeNet for Real-Time Semantic Segmentation |
| beit | 2022 | BEiT: BERT Pre-Training of Image Transformers |
| convnext | 2022 | A ConvNet for the 2020s |
| ddrnet | 2022 | Deep Dual-Resolution Networks for Real-Time and Accurate Semantic Segmentation of Traffic Scenes |
| pidnet | 2022 | PIDNet: A Real-time Semantic Segmentation Network Inspired from PID Controller |
| poolformer | 2022 | MetaFormer Is Actually What You Need for Vision |
| segnext | 2022 | SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation |
| vpd | 2023 | Unleashing Text-to-Image Diffusion Models for Visual Perception |

2. Supported backbones:

- [x] ResNet (CVPR'2016)
- [x] ResNeXt (CVPR'2017)
- [x] [HRNet (CVPR'2019)](configs/hrnet)
- [x] [ResNeSt (ArXiv'2020)](configs/resnest)
- [x] [MobileNetV2 (CVPR'2018)](configs/mobilenet_v2)
- [x] [MobileNetV3 (ICCV'2019)](configs/mobilenet_v3)
- [x] [Vision Transformer (ICLR'2021)](configs/vit)
- [x] [Swin Transformer (ICCV'2021)](configs/swin)
- [x] [Twins (NeurIPS'2021)](configs/twins)
- [x] [BEiT (ICLR'2022)](configs/beit)
- [x] [ConvNeXt (CVPR'2022)](configs/convnext)
- [x] [MAE (CVPR'2022)](configs/mae)
- [x] [PoolFormer (CVPR'2022)](configs/poolformer)
- [x] [SegNeXt (NeurIPS'2022)](configs/segnext)

3. Supported datasets:


- [x] [Cityscapes](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#cityscapes)
- [x] [PASCAL VOC](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#pascal-voc)
- [x] [ADE20K](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#ade20k)
- [x] [Pascal Context](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#pascal-context)
- [x] [COCO-Stuff 10k](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#coco-stuff-10k)
- [x] [COCO-Stuff 164k](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#coco-stuff-164k)
- [x] [CHASE_DB1](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#chase-db1)
- [x] [DRIVE](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#drive)
- [x] [HRF](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#hrf)
- [x] [STARE](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#stare)
- [x] [Dark Zurich](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#dark-zurich)
- [x] [Nighttime Driving](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#nighttime-driving)
- [x] [LoveDA](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#loveda)
- [x] [Potsdam](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#isprs-potsdam)
- [x] [Vaihingen](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#isprs-vaihingen)
- [x] [iSAID](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#isaid)
- [x] [Mapillary Vistas](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#mapillary-vistas-datasets)
- [x] [LEVIR-CD](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#levir-cd)
- [x] [BDD100K](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#bdd100K)
- [x] [NYU](https://github.com/open-mmlab/mmsegmentation/blob/main/docs/en/user_guides/2_dataset_prepare.md#nyu)
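Models trained on these datasets are almost always compared by mean intersection-over-union (mIoU). Below is a minimal, self-contained sketch of that metric in plain NumPy — not mmsegmentation's own evaluator, which additionally handles things like label remapping and accumulation across a whole dataset.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Per-class IoU averaged over classes that appear in pred or gt."""
    valid = gt != ignore_index          # drop pixels with the ignore label
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                   # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with three classes.
pred = np.array([[0, 0, 1], [1, 1, 2]])
gt   = np.array([[0, 0, 1], [1, 2, 2]])
print(round(mean_iou(pred, gt, num_classes=3), 3))  # → 0.722
```

Class 0 matches perfectly (IoU 1.0), class 1 has IoU 2/3, class 2 has IoU 1/2, hence a mean of about 0.722.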

4. Customizing your own tasks:

Of course, if none of the above meets your needs, the toolbox also provides detailed tutorials and convenient interfaces for building your own datasets and designing your own algorithms, backbones, loss functions, and more.
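As a sketch of what the custom-dataset interface looks like in mmsegmentation 1.x: you subclass the base dataset and register it, then refer to it by name from a config file. The class and module names below follow the 1.x docs, but the dataset itself ("MyDataset", its classes and suffixes) is a hypothetical example, not a drop-in recipe.

```python
# my_dataset.py — registering a hypothetical two-class dataset (mmseg 1.x style)
from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS

@DATASETS.register_module()
class MyDataset(BaseSegDataset):
    # Class names and display palette for the annotation maps.
    METAINFO = dict(
        classes=('background', 'crack'),
        palette=[[0, 0, 0], [255, 0, 0]])

    def __init__(self, **kwargs):
        # Images are .jpg files; labels are single-channel .png index maps.
        super().__init__(img_suffix='.jpg', seg_map_suffix='.png', **kwargs)
```

A training config would then point at it with `dataset_type = 'MyDataset'` and the usual `data_root`/pipeline settings; this fragment only runs inside an installed mmsegmentation.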

5. References:

  1. Welcome to MMSegmentation's documentation! — MMSegmentation 1.2.2 documentation
  2. open-mmlab/mmsegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark (GitHub)
