DL-FWI内训Day3
Published 202378 | Licensed under CC BY-NC-SA 4.0

InversionNet: Building the Network

InversionNet, proposed in 2019, is a fully end-to-end network algorithm. Among the many end-to-end networks, InversionNet is comparatively simple, because it is built from the most basic CNN components.

InversionNet constructs an encoder-decoder convolutional neural network to model the correspondence between seismic data and the subsurface velocity structure.

InversionNet Network Structure

Structure Analysis

Convolutional Layers

In this network, each convolutional layer consists of three parts: the convolution operation, batch normalization, and an activation function.

  • The convolution acts on the input signal and serves as a filter that extracts meaningful features.
  • Batch Normalization (BN) rests on the observation that a deep network converges faster when its inputs have zero mean, unit variance, and are decorrelated. The BN layer therefore normalizes the batch of data fed into each intermediate layer before passing it on.
  • The LeakyReLU activation addresses the "dying neuron" problem of ReLU and works well in this network.
```python
import torch.nn as nn


class ConvBlock(nn.Module):
    def __init__(
        self,
        in_fea,
        out_fea,
        kernel_size=3,
        stride=1,
        padding=1,
        norm=nn.BatchNorm2d,
        relu_slop=0.2,
        dropout=None,
    ):
        """
        Standard convolution operation
        [Affiliated with InversionNet]

        :param in_fea: Number of channels of input
        :param out_fea: Number of channels of output
        :param kernel_size: Size of the convolution kernel
        :param stride: Step size of the convolution
        :param padding: Zero-fill width
        :param norm: The means of normalization
        :param relu_slop: Negative slope of LeakyReLU
        :param dropout: Whether to apply dropout
        """
        super(ConvBlock, self).__init__()
        layers = [
            nn.Conv2d(
                in_channels=in_fea,
                out_channels=out_fea,
                kernel_size=kernel_size,
                stride=stride,
                padding=padding,
            )
        ]
        layers.append(norm(out_fea))
        layers.append(nn.LeakyReLU(relu_slop, inplace=True))
        if dropout:
            layers.append(nn.Dropout2d(0.8))
        self.layers = nn.Sequential(*layers)

    def forward(self, X):
        return self.layers(X)
```

Encoder

The encoder is built mainly from convolutional layers; it extracts features from the seismic data and compresses them into a single high-dimensional vector. Because the time dimension T of the seismic data is much larger than the spatial dimension R, the network first applies non-square kernels (sizes such as 7×1 and 3×1) with stride (2, 1) to the seismic waveforms. After these convolutions, the seismic data matrix is much closer to the square shape used in conventional neural networks, which eases the subsequent feature extraction.

Once the seismic data has been made square in this way, 3×3 and 8×8 kernels are used for the remaining convolutions, ultimately encoding seismic data of shape (None, 5, 1000, 32) into a high-dimensional vector of shape (None, 512, 1, 1). This is considered a reasonable form of compression, since it is generally assumed that the temporal and spatial correlations need not be preserved.
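The shape chain of the encoder can be checked without running PyTorch at all: the standard output-size formula for a strided convolution, out = ⌊(in + 2·padding − kernel) / stride⌋ + 1, applied per axis to the (kernel, stride, padding) settings transcribed from the layer definitions, reproduces the shape comments in the forward pass. A minimal sketch:

```python
def conv_out(n, k, s, p):
    """Output length of a 1-D convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1


# (kernel, stride, padding) along (time, space) for each encoder conv,
# transcribed from the InversionNet layer definitions.
encoder = [
    ((7, 1), (2, 1), (3, 0)),  # convblock1
    ((3, 1), (2, 1), (1, 0)),  # convblock2_1
    ((3, 1), (1, 1), (1, 0)),  # convblock2_2
    ((3, 1), (2, 1), (1, 0)),  # convblock3_1
    ((3, 1), (1, 1), (1, 0)),  # convblock3_2
    ((3, 1), (2, 1), (1, 0)),  # convblock4_1
    ((3, 1), (1, 1), (1, 0)),  # convblock4_2
    ((3, 1), (2, 1), (1, 0)),  # convblock5_1
    ((3, 1), (1, 1), (1, 0)),  # convblock5_2
    ((3, 3), (2, 2), (1, 1)),  # convblock6_1
    ((3, 3), (1, 1), (1, 1)),  # convblock6_2
    ((3, 3), (2, 2), (1, 1)),  # convblock7_1
    ((3, 3), (1, 1), (1, 1)),  # convblock7_2
    ((8, 8), (1, 1), (0, 0)),  # convblock8
]

t, r = 1000, 32  # time steps x receivers
shapes = []
for (kt, kr), (st, sr), (pt, pr) in encoder:
    t, r = conv_out(t, kt, st, pt), conv_out(r, kr, sr, pr)
    shapes.append((t, r))

print(shapes[0], shapes[-1])  # (500, 32) (1, 1)
```

Note how the (3, 1) kernels halve only the time axis until the feature map reaches 32×32, after which the square 3×3 and 8×8 kernels shrink both axes down to the 1×1 latent vector.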

Decoder

The step-by-step upsampling of the high-dimensional vector into the final velocity model image is carried out by several deconvolution (transposed convolution) operations. Unpooling can achieve a similar upsampling effect, but for this seismic inversion task deconvolution performs better.

Deconvolution: a special kind of convolution. The input is first enlarged via padding; then, as in a forward convolution, the kernel is rotated 180 degrees and convolved with the input.

Unpooling: the inverse of pooling. It cannot recover all of the original data, because pooling keeps only the dominant information and discards the rest. Reconstructing the full input from this retained information necessarily leaves gaps, which can only be filled in to approximate the original as closely as possible. Variants include max unpooling and average unpooling.
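For a transposed convolution the size relation runs in reverse: out = (in − 1)·stride − 2·padding + kernel. Applying it to the decoder's (kernel, stride, padding) settings, transcribed from the layer definitions, shows how the 1×1 latent vector grows back to 80×80 before the final crop to 75×75:

```python
def deconv_out(n, k, s, p):
    """Output length of a 1-D transposed convolution: (n - 1) * s - 2p + k."""
    return (n - 1) * s - 2 * p + k


# (kernel, stride, padding) for each DeconvBlock in the decoder.
decoder = [(5, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]

n = 1  # spatial size of the (None, 512, 1, 1) latent vector
sizes = []
for k, s, p in decoder:
    n = deconv_out(n, k, s, p)
    sizes.append(n)

print(sizes)       # [5, 10, 20, 40, 80]
print(80 - 2 - 3)  # 75: the negative F.pad crops 2 + 3 pixels per axis
```

The kernel-4, stride-2, padding-1 combination exactly doubles the spatial size at each step, which is why it appears four times in a row; the negative padding in `F.pad` then trims 80×80 down to the 75×75 velocity model.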

```python
import torch.nn as nn
import torch.nn.functional as F


class InversionNet(nn.Module):
    def __init__(self, dim1=32, dim2=64, dim3=128, dim4=256, dim5=512, **kwargs):
        """
        Network architecture of InversionNet

        :param dim1: Number of channels in the 1st layer
        :param dim2: Number of channels in the 2nd layer
        :param dim3: Number of channels in the 3rd layer
        :param dim4: Number of channels in the 4th layer
        :param dim5: Number of channels in the 5th layer
        :param sample_spatial: Scale parameter for sampling in space
        """
        super(InversionNet, self).__init__()
        # Encoder: non-square kernels compress the time axis first
        self.convblock1 = ConvBlock(5, dim1, kernel_size=(7, 1), stride=(2, 1), padding=(3, 0))
        self.convblock2_1 = ConvBlock(dim1, dim2, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0))
        self.convblock2_2 = ConvBlock(dim2, dim2, kernel_size=(3, 1), stride=1, padding=(1, 0))
        self.convblock3_1 = ConvBlock(dim2, dim2, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0))
        self.convblock3_2 = ConvBlock(dim2, dim2, kernel_size=(3, 1), stride=1, padding=(1, 0))
        self.convblock4_1 = ConvBlock(dim2, dim3, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0))
        self.convblock4_2 = ConvBlock(dim3, dim3, kernel_size=(3, 1), stride=1, padding=(1, 0))
        self.convblock5_1 = ConvBlock(dim3, dim3, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0))
        self.convblock5_2 = ConvBlock(dim3, dim3, kernel_size=(3, 1), stride=1, padding=(1, 0))
        self.convblock6_1 = ConvBlock(dim3, dim4, kernel_size=(3, 3), stride=2, padding=1)
        self.convblock6_2 = ConvBlock(dim4, dim4, kernel_size=(3, 3), stride=1, padding=1)
        self.convblock7_1 = ConvBlock(dim4, dim4, kernel_size=(3, 3), stride=2, padding=1)
        self.convblock7_2 = ConvBlock(dim4, dim4, kernel_size=(3, 3), stride=1, padding=1)
        self.convblock8 = ConvBlock(dim4, dim5, kernel_size=8, padding=0)

        # Decoder: transposed convolutions upsample back to the velocity model
        self.deconv1_1 = DeconvBlock(dim5, dim5, kernel_size=5, stride=1, padding=0)
        self.deconv1_2 = ConvBlock(dim5, dim5, kernel_size=3, stride=1)
        self.deconv2_1 = DeconvBlock(dim5, dim4, kernel_size=4, stride=2, padding=1)
        self.deconv2_2 = ConvBlock(dim4, dim4, kernel_size=3, stride=1)
        self.deconv3_1 = DeconvBlock(dim4, dim3, kernel_size=4, stride=2, padding=1)
        self.deconv3_2 = ConvBlock(dim3, dim3, kernel_size=3, stride=1)
        self.deconv4_1 = DeconvBlock(dim3, dim2, kernel_size=4, stride=2, padding=1)
        self.deconv4_2 = ConvBlock(dim2, dim2, kernel_size=3, stride=1)
        self.deconv5_1 = DeconvBlock(dim2, dim1, kernel_size=4, stride=2, padding=1)
        self.deconv5_2 = ConvBlock(dim1, dim1, kernel_size=3, stride=1)
        self.deconv6 = ConvBlock_Tanh(dim1, 1)

    def forward(self, x):
        # Encoder Part
        x = self.convblock1(x)    # (None, 32, 500, 32)
        x = self.convblock2_1(x)  # (None, 64, 250, 32)
        x = self.convblock2_2(x)  # (None, 64, 250, 32)
        x = self.convblock3_1(x)  # (None, 64, 125, 32)
        x = self.convblock3_2(x)  # (None, 64, 125, 32)
        x = self.convblock4_1(x)  # (None, 128, 63, 32)
        x = self.convblock4_2(x)  # (None, 128, 63, 32)
        x = self.convblock5_1(x)  # (None, 128, 32, 32)
        x = self.convblock5_2(x)  # (None, 128, 32, 32)
        x = self.convblock6_1(x)  # (None, 256, 16, 16)
        x = self.convblock6_2(x)  # (None, 256, 16, 16)
        x = self.convblock7_1(x)  # (None, 256, 8, 8)
        x = self.convblock7_2(x)  # (None, 256, 8, 8)
        x = self.convblock8(x)    # (None, 512, 1, 1)

        # Decoder Part
        x = self.deconv1_1(x)  # (None, 512, 5, 5)
        x = self.deconv1_2(x)  # (None, 512, 5, 5)
        x = self.deconv2_1(x)  # (None, 256, 10, 10)
        x = self.deconv2_2(x)  # (None, 256, 10, 10)
        x = self.deconv3_1(x)  # (None, 128, 20, 20)
        x = self.deconv3_2(x)  # (None, 128, 20, 20)
        x = self.deconv4_1(x)  # (None, 64, 40, 40)
        x = self.deconv4_2(x)  # (None, 64, 40, 40)
        x = self.deconv5_1(x)  # (None, 32, 80, 80)
        x = self.deconv5_2(x)  # (None, 32, 80, 80)
        # Negative padding crops the 80x80 map down to the 75x75 velocity model
        x = F.pad(x, [-2, -3, -2, -3], mode="constant", value=0)
        # (None, 32, 75, 75)
        x = self.deconv6(x)  # (None, 1, 75, 75)
        return x
```
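The network above references two helper modules that are not shown here: DeconvBlock and ConvBlock_Tanh. The following are sketches of what they plausibly look like, mirroring ConvBlock but using nn.ConvTranspose2d for upsampling and a Tanh output head respectively; the exact definitions in the accompanying implementation may differ:

```python
import torch
import torch.nn as nn


class DeconvBlock(nn.Module):
    """Transposed convolution + normalization + LeakyReLU (sketch)."""

    def __init__(self, in_fea, out_fea, kernel_size=2, stride=2, padding=0,
                 norm=nn.BatchNorm2d):
        super(DeconvBlock, self).__init__()
        self.layers = nn.Sequential(
            nn.ConvTranspose2d(in_fea, out_fea, kernel_size=kernel_size,
                               stride=stride, padding=padding),
            norm(out_fea),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, X):
        return self.layers(X)


class ConvBlock_Tanh(nn.Module):
    """Convolution + normalization + Tanh output head (sketch)."""

    def __init__(self, in_fea, out_fea, kernel_size=3, stride=1, padding=1,
                 norm=nn.BatchNorm2d):
        super(ConvBlock_Tanh, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_fea, out_fea, kernel_size=kernel_size,
                      stride=stride, padding=padding),
            norm(out_fea),
            nn.Tanh(),  # squashes output into [-1, 1] to match normalized velocity models
        )

    def forward(self, X):
        return self.layers(X)
```

With these in place, `DeconvBlock(512, 512, kernel_size=5, stride=1, padding=0)` maps a (N, 512, 1, 1) latent vector to (N, 512, 5, 5), matching the first decoder shape comment, and `ConvBlock_Tanh(32, 1)` produces the single-channel 75×75 output.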