Graphics Templates Commonly Used in Deep Learning
Document Summary

ML Visuals by dair.ai

Basic ML Visuals
• Operation boxes: Softmax, Convolve, Sharpen
• Transformer architecture: Input/Output Embedding, Positional Encoding, Multi-Head Attention, Masked Multi-Head Attention, Add & Norm, Feed Forward, Linear, Softmax, Outputs (shifted right)
• Tokenization: "I love coding and writing" → Tokenize
• Fully connected network: Input Layer, Hidden Layers, Output Layer, with X = A[0], activations a[l]_1 … a[l]_n, and output ŷ
• CONV operation: N×N×3 input, two filters with biases b1 and b2, ReLU, M×M×2 output
• Striding in CONV: S = 1 vs. S = 2
• Inception module: parallel 1×1 Same, 3×3 Same, 5×5 Same, and MaxPool Same branches (s = 1) on an N×N×192 input
• Network expansion: Retraining w/o expansion, No-Retraining w/ expansion, Partial Retraining w/ expansion (t−1 → t)
• How does a NN work (inspired by Coursera): Size, #bed, ZIP, Wealth → Family?, Walk?, School → PRICE
• Logistic regression and the basic neuron model (ŷ = 0 / ŷ = 1)
• Linear regression (Size vs. $) and ReLU(x)
• Encoder–Decoder: 128×128×1 input and output with CONV1–CONV7
• Why does deep learning work? Performance vs. amount of data for Large NN, Medium NN, Small NN, and SVM/LR etc.
• One hidden layer neural network

Neural network templates
• Train / Valid / Test and Train-Dev-Test splits vs. model fitting: Underfitting, Good fit, Overfitting
• Dropout
• Normalization: cost contours of J over (w1, w2) before and after normalization
• Early stopping: Dev and Train error vs. iterations
• Deep neural networks: weights w[1], w[2], …, w[L]
• Understanding Precision & Recall: TP, FP, TN, FN
• SGD vs. BGD, Batch vs. Mini-batch Gradient Descent, and Batch Gradient Descent vs. SGD
• Softmax prediction with 2 outputs: p[1], p[2]

Miscellaneous
• U-Net-style segmentation network: Convolution 3×3, Max Pooling 2×2, Convolution 1×1, Skip connection, UpSampling 2×2, copied blocks, Dropout 0.1/0.2/0.3
• VGG-style ConvNet tables: Conv3-32/Conv3-64/Conv3-128 blocks, Max-Pool, FC-512, Softmax, laid out layer by layer
• Inception module detail: previous layer → 1×1, 3×3, and 5×5 convolutions plus 3×3 max pooling → filter concatenation; factorized variants with 1×3, 3×1, 1×5, and 1×7 convolutions and matching padding
• GoogLeNet-style network with auxiliary classifiers: Avg-Pool, Conv, FC, Softmax
• Receptive-field illustration: regions R1, R2, R3
• Residual block: stacked layers computing y = F(x) vs. y = F(x) + x with an identity shortcut
• DenseNet: Dense Blocks 1–3 joined by transition layers (Conv, Avg-Pool), then FC and Softmax
• NAS-style cells: 3×3 and 5×5 convolutions, 3×3 average pooling, identity branches, add, and filter concatenation

Other assets
• Abstract backgrounds and DAIR.AI Gradient Backgrounds
• Community Contributions
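The Softmax boxes in the templates turn a vector of scores into probabilities; a minimal NumPy sketch (the function name is illustrative, not from the templates):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract max(z) before exponentiating."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))  # probabilities summing to 1
```

The max-subtraction does not change the result but prevents overflow for large logits.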
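The CONV-operation and "Striding in CONV" templates follow the standard output-size rule out = ⌊(n + 2p − f)/s⌋ + 1; a small helper makes it concrete ("Same" padding with s = 1 preserves the input size):

```python
def conv_output_size(n, f, p=0, s=1):
    """Spatial output size of an f x f convolution on an n x n input
    with padding p and stride s: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# 'Same' padding for odd filter sizes is p = (f - 1) // 2
print(conv_output_size(28, 3, p=1, s=1))  # stride 1 keeps 28
print(conv_output_size(28, 3, p=1, s=2))  # stride 2 roughly halves to 14
```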
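The residual-block template contrasts plain stacked layers, y = F(x), with the identity shortcut, y = F(x) + x. A NumPy sketch under assumed shapes (weights and function names are hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Two stacked linear+ReLU layers compute F(x); the identity
    shortcut adds x back before the final nonlinearity."""
    f_x = w2 @ relu(w1 @ x)   # F(x)
    return relu(f_x + x)      # y = relu(F(x) + x)
```

With F(x) = 0 the block passes x straight through, which is why deep residual stacks remain trainable.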
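The Precision & Recall template partitions predictions into TP, FP, TN, and FN; the two metrics then read directly off those counts (a minimal sketch):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

p, r = precision_recall(tp=8, fp=2, fn=4)  # p = 0.8, r = 8/12
```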
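The Dropout template zeroes random units at train time; the usual "inverted" formulation also rescales the survivors so the expected activation is unchanged (keep probability and names here are illustrative):

```python
import numpy as np

def inverted_dropout(a, keep_prob=0.8, rng=None):
    """Keep each unit with probability keep_prob, zero the rest, and
    divide by keep_prob so the expected output equals the input."""
    rng = rng or np.random.default_rng()
    mask = rng.random(a.shape) < keep_prob
    return a * mask / keep_prob
```

At test time dropout is simply disabled (equivalently keep_prob = 1).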
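The Batch vs. Mini-batch Gradient Descent panels differ only in how much data each update sees. A least-squares sketch (learning rate, batch size, and data are illustrative):

```python
import numpy as np

def minibatch_sgd(X, y, w, lr=0.05, batch_size=2, epochs=500, seed=0):
    """Mini-batch SGD on mean squared error: shuffle each epoch,
    then step on one small slice of the data at a time."""
    rng = np.random.default_rng(seed)
    n = len(y)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = order[start:start + batch_size]
            grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w = w - lr * grad
    return w
```

Setting batch_size = n recovers batch gradient descent; batch_size = 1 is plain SGD.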
