Deep Learning: RPN (Region Proposal Network)



Overview

The essence of the RPN is a sliding-window-based, class-agnostic object detector:

Where the RPN sits in the overall pipeline:

Note

  • Only at training time does cls+reg receive strong supervision (from the ground truth). The ground truth tells the cls+reg structure which samples are genuinely foreground, guiding it to learn to separate foreground from background correctly; at inference time, cls+reg has to fend for itself.
  • During training, about 2,000 proposals are produced, but only 256 samples are drawn from them to train the RPN's cls+reg structure (128 foreground samples train both cls and reg, while 128 background samples train cls only; a sampling sketch follows this list). At inference, the RPN simply outputs its 300 highest-scoring proposals. With no supervision available at that point, the RPN does not know whether these proposals are really foreground; it just pushes a batch of untagged proposals on to the Fast R-CNN stage behind it.
  • With the RPN, the extra overhead of region proposal is just one small two-layer network.
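
The 256-sample selection described above is easy to pin down in code. Here is a minimal NumPy sketch of that subsampling step (the function name and defaults are my own, not taken from the author's code); it assumes an anchor label array where 1 = foreground, 0 = background and -1 = ignored, which is also the labelling convention used by the AnchorTargetLayer in the source code further down.

import numpy as np

def sample_rpn_minibatch(labels, batch_size=256, fg_fraction=0.5, seed=0):
    """Subsample anchor labels for the RPN losses.

    labels: 1-D array, 1 = foreground, 0 = background, -1 = ignored.
    At most batch_size * fg_fraction foregrounds are kept; backgrounds fill
    the remaining quota. Everything else is set to -1 so it contributes to
    neither the cls loss nor the reg loss.
    """
    rng = np.random.default_rng(seed)
    labels = labels.copy()

    fg_inds = np.flatnonzero(labels == 1)
    num_fg = int(batch_size * fg_fraction)
    if len(fg_inds) > num_fg:
        drop = rng.choice(fg_inds, size=len(fg_inds) - num_fg, replace=False)
        labels[drop] = -1

    bg_inds = np.flatnonzero(labels == 0)
    num_bg = batch_size - int(np.sum(labels == 1))
    if len(bg_inds) > num_bg:
        drop = rng.choice(bg_inds, size=len(bg_inds) - num_bg, replace=False)
        labels[drop] = -1

    return labels

Only the samples still labelled 1 feed the regression loss; those labelled 0 contribute to the classification loss only, which is exactly the asymmetry noted above.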

Zoomed in, it looks like this:

Dissecting the RPN

The RPN consists of the following three parts:

  1. RPN head, which generates the anchors (really just a pile of numbered bounding boxes with coordinates) through the following structure (a small enumeration sketch appears after the figure note below):

This figure from the paper corresponds exactly to the RPN head.

(I once assumed this figure was the entire RPN, was left thoroughly puzzled as a result, and took quite a few detours...)
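
To make "a pile of numbered bounding boxes with coordinates" concrete, here is a minimal NumPy sketch of anchor enumeration (my own illustration, not the author's generation code); the 3 scales x 3 aspect ratios and the stride of 16 follow the paper's defaults.

import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate 9 anchors at every position of a feat_h x feat_w feature map.

    Each position is mapped back to the input image (stride pixels apart) and
    receives len(scales) * len(ratios) boxes centred on it. Returns a
    (feat_h * feat_w * 9, 4) array of (x1, y1, x2, y2) boxes.
    """
    base = []
    for r in ratios:
        for s in scales:
            w, h = s * np.sqrt(1.0 / r), s * np.sqrt(r)   # area s^2, aspect h/w = r
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)                                  # (9, 4)

    shift_x = (np.arange(feat_w) + 0.5) * stride
    shift_y = (np.arange(feat_h) + 0.5) * stride
    sx, sy = np.meshgrid(shift_x, shift_y)
    shifts = np.stack([sx, sy, sx, sy], axis=-1).reshape(-1, 1, 4)

    return (shifts + base).reshape(-1, 4)                  # (H*W*9, 4)

Every anchor here already has an index (its row number) and coordinates before any learning happens; that is all the head produces.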

  2. RPN middle, where the classification branch (cls) and the bounding-box regression branch (bbox reg) each run their own computations on this pile of anchors (a sketch of the two branches follows the note below):

Note: two-stage detectors perform another classification task and another bounding-box regression task after the RPN, to further improve detection accuracy.
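
The middle part is nothing more than a 3x3 convolution followed by two sibling 1x1 convolutions, which the prototxt in the Source Code section spells out as rpn_conv/3x3, rpn_cls_score and rpn_bbox_pred. Below is a hedged PyTorch sketch of the same shape bookkeeping; the 256 input channels assume a ZF-style conv5, as in that prototxt.

import torch
import torch.nn as nn

class RPNMiddle(nn.Module):
    """A 3x3 conv plus two sibling 1x1 convs: per feature-map position,
    2 scores (bg/fg) and 4 box deltas for each of the k = 9 anchors."""

    def __init__(self, in_channels=256, mid_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, mid_channels, 3, padding=1)  # rpn_conv/3x3
        self.cls_score = nn.Conv2d(mid_channels, 2 * num_anchors, 1)    # 18 channels
        self.bbox_pred = nn.Conv2d(mid_channels, 4 * num_anchors, 1)    # 36 channels

    def forward(self, feat):
        x = torch.relu(self.conv(feat))
        return self.cls_score(x), self.bbox_pred(x)

# a ~600x1000 input at stride 16 gives a feature map of roughly 38x63:
scores, deltas = RPNMiddle()(torch.zeros(1, 256, 38, 63))
# scores: (1, 18, 38, 63), deltas: (1, 36, 38, 63)

This pair of conv layers is also the "small two-layer network" mentioned in the first Note: one shared 3x3 conv and one pair of 1x1 conv outputs.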

  3. RPN tail, where the results of the two branches are pooled together to perform an initial filtering of the anchors (first discard anchors that cross the image boundary, then remove duplicates with NMS based on the cls scores) and an initial shift (based on the bbox reg results). Everything output at this point takes on a new identity and is now called a Proposal:
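
This is roughly what the ProposalLayer (rpn.proposal_layer) Python layer in the prototxt below does. A simplified NumPy sketch of the idea, my own code rather than the author's: it clips cross-boundary boxes instead of dropping them, and the 6000 / 300 / 0.7 cut-offs are commonly used defaults, not values quoted in this post.

import numpy as np

def proposals_from_anchors(anchors, scores, deltas, im_h, im_w,
                           pre_nms_top_n=6000, post_nms_top_n=300, nms_thresh=0.7):
    """Shift anchors by the predicted deltas, clip them to the image, then
    keep the highest-scoring boxes that survive greedy NMS.

    anchors, deltas: (N, 4) arrays; scores: (N,) foreground probabilities.
    """
    # decode (dx, dy, dw, dh) relative to anchor centre and size
    w = anchors[:, 2] - anchors[:, 0]
    h = anchors[:, 3] - anchors[:, 1]
    cx = anchors[:, 0] + 0.5 * w
    cy = anchors[:, 1] + 0.5 * h
    pcx, pcy = cx + deltas[:, 0] * w, cy + deltas[:, 1] * h
    pw, ph = w * np.exp(deltas[:, 2]), h * np.exp(deltas[:, 3])
    boxes = np.stack([pcx - 0.5 * pw, pcy - 0.5 * ph,
                      pcx + 0.5 * pw, pcy + 0.5 * ph], axis=1)

    # clip to the image and rank by cls score
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, im_w - 1)
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, im_h - 1)
    order = scores.argsort()[::-1][:pre_nms_top_n]
    boxes, scores = boxes[order], scores[order]

    # greedy NMS on the surviving boxes
    keep, idx = [], np.arange(len(boxes))
    while len(idx) and len(keep) < post_nms_top_n:
        i = idx[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[idx[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[idx[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[idx[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[idx[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[idx[1:], 2] - boxes[idx[1:], 0]) * \
                (boxes[idx[1:], 3] - boxes[idx[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-9)
        idx = idx[1:][iou <= nms_thresh]
    return boxes[keep], scores[keep]

Whatever comes out of this step is what the rest of the pipeline calls a proposal.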

Afterword

After the RPN, a proposal becomes an RoI (region of interest) and is fed into RoIPooling or RoIAlign to be normalized in size. Of course, these operations all happen after the RPN network and, strictly speaking, no longer belong to the RPN itself.

In the figure: the RPN inside the green box, the RoI inside the red circle, and its corresponding Pooling operation.
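
If you want to try the size normalization itself, torchvision ships RoIAlign as a ready-made op. A hypothetical usage example (not part of the original Caffe code): the output_size of 6x6 and spatial_scale of 1/16 match the pooled_w / pooled_h and spatial_scale of the ROIPooling layer in the prototxt below.

import torch
from torchvision.ops import roi_align

# a conv5-like feature map at stride 16 for one image, plus two proposals in
# image coordinates, each given as (batch_index, x1, y1, x2, y2)
feat = torch.randn(1, 256, 38, 63)
rois = torch.tensor([[0.,  20.,  30., 220., 180.],
                     [0., 100.,  50., 400., 300.]])

# every RoI, whatever its size, comes out as a fixed 6x6 grid of features
pooled = roi_align(feat, rois, output_size=(6, 6), spatial_scale=1.0 / 16)
print(pooled.shape)   # torch.Size([2, 256, 6, 6])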

Note

  • If the mapping back to the original image is done only on the last feature map, and the initially generated anchors have a lower bound on their size, then small objects below the smallest anchor size are still easy to miss in the later stages, even if an anchor does enclose them.
  • The arrival of FPN, however, greatly reduced the miss rate for small objects and gave the RPN a real boost.
  • As for the concrete structure of the RPN, I drew a diagram of my own. If you are interested, see: 论文阅读: Faster R-CNN (Paper Reading: Faster R-CNN).

Source Code

The author's source code (Caffe prototxt):

#========= RPN ============

layer {
  name: "rpn_conv/3x3"
  type: "Convolution"
  bottom: "conv5"
  top: "rpn/output"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  convolution_param {
    num_output: 256
    kernel_size: 3 pad: 1 stride: 1
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  name: "rpn_relu/3x3"
  type: "ReLU"
  bottom: "rpn/output"
  top: "rpn/output"
}
layer {
  name: "rpn_cls_score"
  type: "Convolution"
  bottom: "rpn/output"
  top: "rpn_cls_score"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  convolution_param {
    num_output: 18   # 2(bg/fg) * 9(anchors)
    kernel_size: 1 pad: 0 stride: 1
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  name: "rpn_bbox_pred"
  type: "Convolution"
  bottom: "rpn/output"
  top: "rpn_bbox_pred"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  convolution_param {
    num_output: 36   # 4 * 9(anchors)
    kernel_size: 1 pad: 0 stride: 1
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  bottom: "rpn_cls_score"
  top: "rpn_cls_score_reshape"
  name: "rpn_cls_score_reshape"
  type: "Reshape"
  reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } }
}
layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'
    layer: 'AnchorTargetLayer'
    param_str: "'feat_stride': 16"
  }
}
layer {
  name: "rpn_loss_cls"
  type: "SoftmaxWithLoss"
  bottom: "rpn_cls_score_reshape"
  bottom: "rpn_labels"
  propagate_down: 1
  propagate_down: 0
  top: "rpn_cls_loss"
  loss_weight: 1
  loss_param {
    ignore_label: -1
    normalize: true
  }
}
layer {
  name: "rpn_loss_bbox"
  type: "SmoothL1Loss"
  bottom: "rpn_bbox_pred"
  bottom: "rpn_bbox_targets"
  bottom: 'rpn_bbox_inside_weights'
  bottom: 'rpn_bbox_outside_weights'
  top: "rpn_loss_bbox"
  loss_weight: 1
  smooth_l1_loss_param { sigma: 3.0 }
}

#========= RoI Proposal ============

layer {
  name: "rpn_cls_prob"
  type: "Softmax"
  bottom: "rpn_cls_score_reshape"
  top: "rpn_cls_prob"
}
layer {
  name: 'rpn_cls_prob_reshape'
  type: 'Reshape'
  bottom: 'rpn_cls_prob'
  top: 'rpn_cls_prob_reshape'
  reshape_param { shape { dim: 0 dim: 18 dim: -1 dim: 0 } }
}
layer {
  name: 'proposal'
  type: 'Python'
  bottom: 'rpn_cls_prob_reshape'
  bottom: 'rpn_bbox_pred'
  bottom: 'im_info'
  top: 'rpn_rois'
# top: 'rpn_scores'
  python_param {
    module: 'rpn.proposal_layer'
    layer: 'ProposalLayer'
    param_str: "'feat_stride': 16"
  }
}
layer {
  name: 'roi-data'
  type: 'Python'
  bottom: 'rpn_rois'
  bottom: 'gt_boxes'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'rpn.proposal_target_layer'
    layer: 'ProposalTargetLayer'
    param_str: "'num_classes': 21"
  }
}

#========= RCNN ============

layer {
  name: "roi_pool_conv5"
  type: "ROIPooling"
  bottom: "conv5"
  bottom: "rois"
  top: "roi_pool_conv5"
  roi_pooling_param {
    pooled_w: 6
    pooled_h: 6
    spatial_scale: 0.0625 # 1/16
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "roi_pool_conv5"
  top: "fc6"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
    scale_train: false
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
    scale_train: false
  }
}
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  inner_product_param {
    num_output: 21
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  param { lr_mult: 1.0 }
  param { lr_mult: 2.0 }
  inner_product_param {
    num_output: 84
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "loss_cls"
  type: "SoftmaxWithLoss"
  bottom: "cls_score"
  bottom: "labels"
  propagate_down: 1
  propagate_down: 0
  top: "cls_loss"
  loss_weight: 1
  loss_param {
    ignore_label: -1
    normalize: true
  }
}
layer {
  name: "loss_bbox"
  type: "SmoothL1Loss"
  bottom: "bbox_pred"
  bottom: "bbox_targets"
  bottom: 'bbox_inside_weights'
  bottom: 'bbox_outside_weights'
  top: "bbox_loss"
  loss_weight: 1
}
