Keypoint Matching: Curated Recommendations

Keypoint Matching
Papers to read: SpyNet [31], PWC-Net [38], LiteFlowNet [14], SelFlow (self-supervised)
Occlusion-aware optical flow:
UnFlow: Unsupervised learning of optical flow with a bidirectional census loss
Occlusion aware unsupervised learning of optical flow
Unsupervised learning of multi-frame optical flow with occlusions
DDFlow: Learning optical flow with unlabeled data distillation
Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness
Unsupervised deep learning for optical flow estimation
Unsupervised monocular depth estimation with left-right consistency
Optical flow estimation with channel constancy, ECCV 2014
Image quality assessment: from error visibility to structural similarity, 2004
Learning dense correspondence via 3D-guided cycle consistency, CVPR 2016
Convolutional neural network architecture for geometric matching, CVPR 2017
Modern successors to SIFT+RANSAC:
Better feature descriptors: SuperPoint: Self-supervised interest point detection and description, 2018
Geometric image correspondence verification by dense pixel matching, 2019
GeoDesc: Learning local descriptors by integrating geometry constraints, 2018
Working hard to know your neighbor's margins: Local descriptor learning loss, NIPS 2017
R2D2: Reliable and repeatable detector and descriptor, NIPS 2019
L2-Net: Deep learning of discriminative patch descriptor in Euclidean space, CVPR 2017
Learned RANSAC alternatives: Learning two-view correspondences and geometry using order-aware network, ICCV 2019
Deep fundamental matrix estimation, ECCV 2018
PointNet: Deep learning on point sets for 3D classification and segmentation, CVPR 2017
Neural nearest neighbors networks, NIPS 2018
Optical flow:
Flow Fields: Dense correspondence fields for highly accurate large displacement optical flow estimation, ICCV 2015
Optical flow matching with local features: Large displacement optical flow, CVPR 2009
EpicFlow: Edge-preserving interpolation of correspondences for optical flow
Efficient coarse-to-fine PatchMatch for large displacement optical flow
Optical flow beyond visual similarity: SIFT Flow; FlowWeb: Joint image set alignment by weaving consistent, pixel-wise correspondences
Do Convnets Learn Correspondence?
Universal correspondence network
Neighbourhood consensus networks
Deep flow methods: GeoNet: Unsupervised learning of dense depth, optical flow and camera pose
Learning correspondence from the cycle-consistency of time, CVPR 2019
PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume, CVPR 2018
DGC-Net: Dense geometric correspondence network
Geometric image correspondence verification by dense pixel matching
Making convolutional networks shift-invariant again
1. Stereo matching algorithms fall into two broad classes: those based on local constraints and those based on global constraints.
(1) Global-constraint stereo matching: essentially optimization algorithms; stereo matching is cast as minimizing a global energy function. Representative algorithms include graph cuts, belief propagation, and cooperative optimization. Global methods achieve a low overall mismatch rate, but their high complexity makes real-time operation difficult, so they are hard to use in practical engineering.
(2) Local-constraint stereo matching: computes matches from local information around each candidate point. Because little information is involved, matching is fast, so these methods have received wide attention. Representative algorithms include SAD, SSD, and NCC; a minimal SAD sketch follows.
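As an illustration of the local family, here is a minimal sketch of SAD block matching on a pair of rectified grayscale images (my own example; the window size and disparity range are arbitrary assumptions, not values from any of the papers above):

import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    # left, right: rectified grayscale images, float32 arrays of shape (H, W)
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            # search candidate disparities along the same scanline
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

SSD replaces the absolute difference with a squared difference, and NCC normalizes the patches before correlating, which buys some robustness to brightness changes.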
1. SuperGlue: Learning Feature Matching with Graph Neural Networks (2020)
Network architecture:
Code:
(1) Keypoint encoder
This corresponds to the following line in the model:
self.kenc = KeypointEncoder(self.config['descriptor_dim'], self.config['keypoint_encoder'])
whose constructor signature is def __init__(self, feature_dim, layers).
Here MLP is a stack of 1-D convolutions with kernel_size=1 (similar to shared FC layers): the channel count starts at 3, i.e. (x, y, c), passes through [32, 64, 128, 256], and ends at descriptor_dim = 256.
layers.append(nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True))
The encoded position/score embedding is added to the visual descriptors:
desc0 = desc0 + self.kenc(kpts0, data['scores0'])
def forward(self, kpts, scores):
    inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)]
    return self.encoder(torch.cat(inputs, dim=1))
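For experimentation, here is a self-contained version of the encoder, following the public SuperGlue implementation (the toy shapes at the end are my own):

import torch
from torch import nn

def MLP(channels):
    # stack of 1x1 Conv1d layers, with BatchNorm + ReLU between hidden layers
    layers = []
    for i in range(1, len(channels)):
        layers.append(nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True))
        if i < len(channels) - 1:
            layers += [nn.BatchNorm1d(channels[i]), nn.ReLU()]
    return nn.Sequential(*layers)

class KeypointEncoder(nn.Module):
    def __init__(self, feature_dim, layers):
        super().__init__()
        self.encoder = MLP([3] + layers + [feature_dim])

    def forward(self, kpts, scores):
        # kpts: (B, N, 2) keypoint coordinates, scores: (B, N) confidences
        inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)]
        return self.encoder(torch.cat(inputs, dim=1))

kenc = KeypointEncoder(256, [32, 64, 128, 256])
kpts, scores = torch.rand(1, 100, 2), torch.rand(1, 100)
print(kenc(kpts, scores).shape)  # torch.Size([1, 256, 100])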
(2) Graph neural network with multiple self- and cross-attention layers
Defaults: self.config['descriptor_dim'] = 256 and self.config['GNN_layers'] = ['self', 'cross'] * 9.
self.gnn = AttentionalGNN(self.config['descriptor_dim'], self.config['GNN_layers'])
Inside AttentionalGNN:
self.layers = nn.ModuleList([AttentionalPropagation(feature_dim, 4) for _ in range(len(layer_names))])  # len(layer_names) = 18
Each AttentionalPropagation(feature_dim, 4) block builds a MultiHeadedAttention:
self.attn = MultiHeadedAttention(num_heads, feature_dim)  # MultiHeadedAttention(4, 256)
self.dim = d_model // num_heads  # 256 // 4 = 64
self.num_heads = num_heads  # 4
self.merge = nn.Conv1d(d_model, d_model, kernel_size=1)  # nn.Conv1d(256, 256, kernel_size=1)
self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)])  # one 1x1 projection each for query, key, value
In the forward pass:
def forward(self, query, key, value):
    batch_dim = query.size(0)
    query, key, value = [l(x).view(batch_dim, self.dim, self.num_heads, -1) for l, x in zip(self.proj, (query, key, value))]  # split into 4 heads of 64 channels each
The attended message is concatenated with the node feature and fused by an MLP:
self.mlp = MLP([feature_dim*2, feature_dim*2, feature_dim])  # MLP([256*2, 256*2, 256])
Finally the GNN is applied to both descriptor sets:
desc0, desc1 = self.gnn(desc0, desc1)
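For reference, the scaled dot-product attention that MultiHeadedAttention calls internally, and the forward loop of AttentionalGNN, look like this in the public implementation (condensed slightly; 'self' layers attend within an image, 'cross' layers attend to the other image, and each layer is applied residually):

def attention(query, key, value):
    # query/key/value: (batch, dim_per_head, num_heads, num_points)
    dim = query.shape[1]
    scores = torch.einsum('bdhn,bdhm->bhnm', query, key) / dim ** 0.5
    prob = torch.nn.functional.softmax(scores, dim=-1)
    return torch.einsum('bhnm,bdhm->bdhn', prob, value), prob

# AttentionalGNN.forward: alternate self- and cross-attention with residuals
for layer, name in zip(self.layers, self.names):
    src0, src1 = (desc1, desc0) if name == 'cross' else (desc0, desc1)
    delta0, delta1 = layer(desc0, src0), layer(desc1, src1)
    desc0, desc1 = desc0 + delta0, desc1 + delta1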
2. Flow2Stereo: Effective Self-Supervised Learning of Optical Flow and Stereo Matching
A self-supervised method that takes the four images from two time steps (t, t+1) and two viewpoints and jointly estimates the optical flow and the stereo disparity between them.
(1) First, derive the quadrilateral and triangle constraints that these four images must satisfy.
(2) Then use a two-stage teacher/student strategy. The teacher model is trained with a photometric loss (based on the brightness-constancy assumption for non-occluded pixels) plus the triangle and quadrilateral constraints; the student model is trained with a self-supervised loss built from the flow and confidence map predicted by the teacher.
Computation of the quadrilateral and triangle constraints (a loop-closure sketch follows):
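A rough illustration of the loop-closure idea behind the quadrilateral constraint (my own sketch, not code from the paper; the flow_* names are hypothetical, each stored as an (H, W, 2) displacement field): walking a non-occluded pixel from the left image at t to the right image at t+1 must give the same endpoint whether we go through the right view first or through time first.

import numpy as np

def quad_loop_residual(p, flow_lt_rt, flow_rt_rt1, flow_lt_lt1, flow_lt1_rt1):
    # p: (x, y) position in the left image at time t, as a float array
    def step(q, flow):
        x, y = int(round(q[0])), int(round(q[1]))  # nearest-neighbor lookup
        return q + flow[y, x]
    # path A: left_t -> right_t -> right_{t+1}
    pa = step(step(p.copy(), flow_lt_rt), flow_rt_rt1)
    # path B: left_t -> left_{t+1} -> right_{t+1}
    pb = step(step(p.copy(), flow_lt_lt1), flow_lt1_rt1)
    return np.linalg.norm(pa - pb)  # ~0 for non-occluded pixels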
The two-stage network:
3. DDFlow: Learning Optical Flow with Unlabeled Data Distillation
(1) In unsupervised learning it is reasonable to build the loss from brightness constancy, but this rule does not apply to occluded pixels.
(2) Therefore a teacher model is trained first. During teacher training, occluded pixels are identified from the inconsistency between the forward and the backward flow (an assumption that does not always hold). The teacher is trained with the brightness-constancy loss only.
(3) After the teacher is trained, a student model is trained. In occluded regions, the teacher's predictions are used as labels for the loss; in non-occluded regions, the brightness-constancy loss is used. A sketch of this masked loss split follows.
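A minimal sketch of the resulting masked loss (my own illustration; occ is a 0/1 occlusion mask from the forward-backward check, 1 marking occluded pixels, and img2_warped is the second image warped to the first by the student flow):

import torch

def ddflow_style_loss(flow_s, flow_t, img1, img2_warped, occ):
    # occluded pixels: distill from the (frozen) teacher flow
    distill = (occ * (flow_s - flow_t.detach()).abs()).sum() / (occ.sum() + 1e-6)
    # non-occluded pixels: photometric (brightness-constancy) loss
    noc = 1.0 - occ
    photo = (noc * (img1 - img2_warped).abs()).sum() / (noc.sum() + 1e-6)
    return photo + distill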
5. UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss
(1) Defining the occluded regions: occluded regions violate the brightness-constancy assumption, so they are handled separately. For non-occluded regions we assume the forward flow (I1->I2) is the inverse of the backward flow (I2->I1); the occluded regions are precisely where this assumption breaks down.
(2) For non-occluded regions, the self-supervised loss has three parts: a photometric (brightness-constancy) term E_D, a flow smoothness term E_S, and a forward-backward consistency term E_C. For occluded regions, only the smoothness loss is computed.
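A sketch of the forward-backward occlusion check that this (and DDFlow above) relies on; the thresholds alpha1 = 0.01 and alpha2 = 0.5 follow values commonly used in these papers and should be treated as assumptions:

import torch
import torch.nn.functional as F

def fb_occlusion_mask(flow_fw, flow_bw, alpha1=0.01, alpha2=0.5):
    # flow_fw, flow_bw: (B, 2, H, W). A pixel is occluded when the forward
    # flow and the back-warped backward flow fail to cancel out.
    _, _, h, w = flow_fw.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float().to(flow_fw.device)
    coords = grid.unsqueeze(0) + flow_fw          # x + w_f(x)
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0       # normalize for grid_sample
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    samp = torch.stack((gx, gy), dim=-1)          # (B, H, W, 2)
    bw_warped = F.grid_sample(flow_bw, samp, align_corners=True)
    diff2 = ((flow_fw + bw_warped) ** 2).sum(1)
    mag2 = (flow_fw ** 2).sum(1) + (bw_warped ** 2).sum(1)
    return (diff2 > alpha1 * mag2 + alpha2).float()  # 1 = occluded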
