
Reading caffemodel files with Python

http://www.cnblogs.com/zjutzz/p/6185452.html

A caffemodel is a binary protobuf file. Using protobuf's Python interface, you can read it and parse out whatever you need.

Many algorithms fine-tune a pre-trained model on their own data: they load a caffemodel as the network's initial weights and update from there. The usual workflow is to supply both the solver's prototxt file and the caffemodel weights file, create the network from the solver, and initialize the network weights from the caffemodel. But can we skip the solver prototxt, load only the caffemodel, and see what is inside it?

Using protobuf's Python interface (the C++ interface works too, but writing and building the code is more cumbersome), we can read the caffemodel's contents. The tutorial to follow is, of course, the examples on the official protobuf site.

Stage 1: follow the official protobuf example verbatim

Here is the most noob usage: reading a caffemodel file with protobuf's Python interface. Combined with a notebook started by the jupyter-notebook command, you get tab completion, which is convenient:

# coding:utf-8
# First make sure caffe's Python interface is built, and that the build
# output directory <caffe_root>/python is on the PYTHONPATH environment
# variable (or append it to sys.path in code).
import caffe.proto.caffe_pb2 as caffe_pb2   # the module generated from caffe.proto

# Load the model
caffemodel_filename = '/home/chris/py-faster-rcnn/imagenet_models/ZF.v2.caffemodel'
ZFmodel = caffe_pb2.NetParameter()   # why NetParameter and not some other class? not fully clear yet; found by trial
f = open(caffemodel_filename, 'rb')
ZFmodel.ParseFromString(f.read())
f.close()

# noob stage: all I know is print
print ZFmodel.name
print ZFmodel.input

Stage 2: read fields from the caffemodel according to caffe.proto

This stage extracts a lot of information from the caffemodel. First, treat the caffemodel as an object of the NetParameter class and parse out its name and its layers (layer). Then parse each layer. How do we know that layer holds all the layers and can be iterated? Consult caffe.proto, where layer is defined as:

repeated LayerParameter layer = 100;

Seeing the repeated keyword, we can be sure that layer is an "array". By drilling into the fields of caffe.proto like this, field by field, everything can be parsed.
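As an aside, because a caffemodel is plain protobuf wire format, its top-level fields can even be enumerated without caffe_pb2 at all: each record is a varint tag (field number and wire type) followed by its payload, and `layer` records carry field number 100 as in the proto definition above. The sketch below is my own illustration (the helper names `read_varint` and `top_level_fields` are made up, and the sample message is hand-built rather than a real caffemodel):

```python
def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not (b & 0x80):
            return result, pos
        shift += 7

def top_level_fields(buf):
    """Yield (field_number, wire_type, payload) for each top-level record."""
    pos = 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        field, wire = tag >> 3, tag & 0x07
        if wire == 2:                 # length-delimited: strings, sub-messages
            length, pos = read_varint(buf, pos)
            payload = buf[pos:pos + length]
            pos += length
        elif wire == 0:               # varint scalar
            payload, pos = read_varint(buf, pos)
        else:
            raise ValueError('wire type %d not handled in this sketch' % wire)
        yield field, wire, payload

# A hand-built message mimicking NetParameter: field 1 = name "LeNet",
# field 100 = one (empty) LayerParameter sub-message.
msg = bytes([0x0A, 0x05]) + b'LeNet' + bytes([0xA2, 0x06, 0x00])
fields = list(top_level_fields(msg))
```

Running this on a real caffemodel would yield one (100, 2, payload) tuple per layer, which is exactly what `repeated LayerParameter layer = 100;` promises.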

Can we parse the information out of a caffemodel file and emit it as the train.prototxt used for network training? Clearly yes. Here we parse the caffemodel produced by training mnist for 10000 iterations and stitch the extracted information into the lenet_train.prototxt used for training (written to stdout). The implementation is rather naive, enumerating fields one by one; it can be improved later:

# coding:utf-8
# author:ChrisZZ
# description: parse network training info from a caffemodel file and print
#              it to the screen in train.prototxt style
import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2

caffemodel_filename = '/home/chris/work/py-faster-rcnn/caffe-fast-rcnn/examples/mnist/lenet_iter_10000.caffemodel'
model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()

layers = model.layer
print 'name: "%s"' % model.name
layer_id = -1
for layer in layers:
    layer_id = layer_id + 1
    print 'layer {'
    print '  name: "%s"' % layer.name
    print '  type: "%s"' % layer.type

    tops = layer.top
    for top in tops:
        print '  top: "%s"' % top

    bottoms = layer.bottom
    for bottom in bottoms:
        print '  bottom: "%s"' % bottom

    if len(layer.include) > 0:
        print '  include {'
        includes = layer.include
        phase_mapper = {
            '0': 'TRAIN',
            '1': 'TEST'
        }
        for include in includes:
            if include.phase is not None:
                print '    phase: ', phase_mapper[str(include.phase)]
        print '  }'

    if layer.transform_param is not None and layer.transform_param.scale is not None and layer.transform_param.scale != 1:
        print '  transform_param {'
        print '    scale: %s' % layer.transform_param.scale
        print '  }'

    if layer.data_param is not None and (layer.data_param.source != "" or layer.data_param.batch_size != 0 or layer.data_param.backend != 0):
        print '  data_param: {'
        if layer.data_param.source is not None:
            print '    source: "%s"' % layer.data_param.source
        if layer.data_param.batch_size is not None:
            print '    batch_size: %d' % layer.data_param.batch_size
        if layer.data_param.backend is not None:
            print '    backend: %s' % layer.data_param.backend
        print '  }'

    if layer.param is not None:
        params = layer.param
        for param in params:
            print '  param {'
            if param.lr_mult is not None:
                print '    lr_mult: %s' % param.lr_mult
            print '  }'

    if layer.convolution_param is not None:
        print '  convolution_param {'
        conv_param = layer.convolution_param
        if conv_param.num_output is not None:
            print '    num_output: %d' % conv_param.num_output
        if len(conv_param.kernel_size) > 0:
            for kernel_size in conv_param.kernel_size:
                print '    kernel_size: ', kernel_size
        if len(conv_param.stride) > 0:
            for stride in conv_param.stride:
                print '    stride: ', stride
        if conv_param.weight_filler is not None:
            print '    weight_filler {'
            print '      type: "%s"' % conv_param.weight_filler.type
            print '    }'
        if conv_param.bias_filler is not None:
            print '    bias_filler {'
            print '      type: "%s"' % conv_param.bias_filler.type
            print '    }'
        print '  }'
    print '}'

The resulting output is:

name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase:  TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param: {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: 1
  }
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  top: "conv1"
  bottom: "data"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 20
    kernel_size:  5
    stride:  1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  top: "pool1"
  bottom: "conv1"
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  top: "conv2"
  bottom: "pool1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 50
    kernel_size:  5
    stride:  1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  top: "pool2"
  bottom: "conv2"
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  top: "ip1"
  bottom: "pool2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  top: "ip1"
  bottom: "ip1"
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  top: "ip2"
  bottom: "ip1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  top: "loss"
  bottom: "ip2"
  bottom: "label"
  convolution_param {
    num_output: 0
    weight_filler {
      type: "constant"
    }
    bias_filler {
      type: "constant"
    }
  }
}

Stage 3: read all fields of the caffemodel

Stage 2 hand-picked the fields to print: you look them up one by one in caffe.proto, recursing into nested messages, which is tedious. Can we dump all the fields of the caffemodel in one go? Yes, using __str__, for example:

# coding:utf-8
import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2

caffemodel_filename = '/home/chris/work/py-faster-rcnn/caffe-fast-rcnn/examples/mnist/lenet_iter_10000.caffemodel'
model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()
# note: this prints the bound method's repr, which wraps the message text
# in '<bound method NetParameter.__str__ of ... >'; str(model) would give the bare text
print model.__str__

The output is almost exactly the train.prototxt used for training, except that the blobs fields get printed too. They contain far too much content: the convolution kernels and biases learned over many iterations. These fields should be skipped. Also, the head and tail of the printed output carry unwanted characters (the '<bound method NetParameter.__str__ of ... >' wrapper around the text), so we write the output to a file and delete them with sed. Besides filtering out the blobs content, we also drop the unnecessary "phase: TRAIN" line, then write the result back to the same file. The code is as follows (again using the caffemodel from training lenet for 10000 iterations):
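The blobs filter below checks for a literal two-space `'  }'`, which works for the flat legacy blob layout but would break on nested sub-blocks. A brace-depth-aware variant (stdlib only; `strip_blocks` is my own helper name, and the sample text is a made-up fragment, not real caffe output) is more robust:

```python
def strip_blocks(prototxt_text, field_name='blobs'):
    """Remove every `field_name { ... }` block from prototxt-style text,
    tracking brace depth so nested sub-blocks (e.g. shape { dim: ... })
    inside a blob are removed along with it."""
    out, depth = [], 0
    for line in prototxt_text.splitlines():
        stripped = line.strip()
        if depth == 0 and stripped == field_name + ' {':
            depth = 1          # entering a block we want to drop
            continue
        if depth > 0:
            depth += stripped.count('{') - stripped.count('}')
            continue           # still inside (or just closing) the block
        out.append(line)
    return '\n'.join(out)

sample = """layer {
  name: "conv1"
  blobs {
    shape {
      dim: 20
    }
    data: 0.5
  }
}"""
cleaned = strip_blocks(sample)
```

The same function could be pointed at other fields, e.g. `strip_blocks(text, 'include')`, since only the field name is hard-coded per call.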

# coding:utf-8
import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2

caffemodel_filename = '/home/chris/work/py-faster-rcnn/caffe-fast-rcnn/examples/mnist/lenet_iter_10000.caffemodel'
model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()

# Redirect stdout to a file and dump the model as text.
import sys
old = sys.stdout
save_filename = 'lenet_from_caffemodel.prototxt'
sys.stdout = open(save_filename, 'w')
print model.__str__   # prints the bound method's repr: the text wrapped in '<bound method NetParameter.__str__ of ... >'
sys.stdout = old

import os
cmd_1 = 'sed -i "1s/^.\{38\}//" ' + save_filename   # delete the first 38 characters of line 1 ('<bound method NetParameter.__str__ of ')
cmd_2 = "sed -i '$d' " + save_filename              # delete the last line (the trailing '>')
os.system(cmd_1)
os.system(cmd_2)

# Reopen the saved file and rewrite its contents, filtering out the
# "blobs {...}" blocks and the "phase: TRAIN" line.
f = open(save_filename, 'r')
lines = f.readlines()
f.close()

wr = open(save_filename, 'w')
now_have_blobs = False
for line in lines:
    content = line.strip('\n')
    if content == '  blobs {':
        now_have_blobs = True
    elif content == '  }' and now_have_blobs == True:
        now_have_blobs = False
        continue
    if content == '  phase: TRAIN':
        continue
    if now_have_blobs:
        continue
    wr.write(content + '\n')
wr.close()

Now look at the contents of the resulting lenet_from_caffemodel.prototxt, i.e. the fields parsed from the caffemodel after filtering:

name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
  loss_weight: 1.0
}

This lenet_from_caffemodel.prototxt is, for all practical purposes, the configuration file used to train the network.
One problem remains: the caffemodel -> __str__ -> file pipeline produces a file far larger than the caffemodel itself, because the blobs data takes up so much space. Swapping lenet_iter_10000.caffemodel for ZF.v2.caffemodel, the ZFnet weights trained on imagenet (over 200 MB, versus under 2 MB for lenet), and rerunning this stage's code to recover the network structure fails with an out-of-memory error. Evidently this parsing method still needs improvement.
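One way to tame that blow-up, assuming the parse itself succeeds and only the giant text dump is the problem: clear each layer's blobs before stringifying. ClearField() is standard protobuf message API, so this should work on a parsed caffe_pb2.NetParameter; the sketch below uses my own helper name strip_blobs_in_place and stand-in classes that merely mimic the protobuf API, so the pattern is visible without caffe installed:

```python
def strip_blobs_in_place(net):
    """Drop the learned weights from every layer of a NetParameter-like
    message, so str(net) afterwards contains only network structure."""
    for layer in net.layer:
        layer.ClearField('blobs')   # protobuf: reset the repeated field to empty
    return net

# Stand-in objects mimicking the protobuf API, for illustration only.
class FakeLayer(object):
    def __init__(self):
        self.blobs = [1.0, 2.0]     # pretend these are learned weights
    def ClearField(self, name):
        setattr(self, name, [])

class FakeNet(object):
    def __init__(self):
        self.layer = [FakeLayer(), FakeLayer()]

net = strip_blobs_in_place(FakeNet())
```

With a real model the call would be `strip_blobs_in_place(model)` right after ParseFromString, before printing; the parse still has to hold the blobs briefly, but the text dump no longer does.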

Stage 4: an imperfect parse, but certainly good enough

Since the stage-3 attempt failed, we go back to the stage-2 approach: hand-pick the fields to parse, fetch their contents, and print them, consulting caffe.proto so as to filter out parameters still at their default values, as well as the blobs.
Here we parse ZFnet (paper: Visualizing and Understanding Convolutional Networks), which is more complex than lenet5. py-faster-rcnn uses this network, and its configuration adds layers such as RPN and ROIPooling; to see exactly which layers were added and which parameters changed, it helps to look at which layers the original ZFnet uses:

# coding:utf-8
# author:ChrisZZ
# description: parse network training info from a caffemodel file and print
#              it to the screen in train.prototxt style
import _init_paths
import caffe.proto.caffe_pb2 as caffe_pb2

#caffemodel_filename = '/home/chris/work/fuckubuntu/caffe-fast-rcnn/examples/mnist/lenet_iter_10000.caffemodel'
caffemodel_filename = '/home/chris/work/py-faster-rcnn/data/imagenet_models/ZF.v2.caffemodel'

model = caffe_pb2.NetParameter()
f = open(caffemodel_filename, 'rb')
model.ParseFromString(f.read())
f.close()

layers = model.layer
print 'name: ' + model.name
layer_id = -1
for layer in layers:
    layer_id = layer_id + 1

    res = list()

    # name
    res.append('layer {')
    res.append('  name: "%s"' % layer.name)

    # type
    res.append('  type: "%s"' % layer.type)

    # bottom
    for bottom in layer.bottom:
        res.append('  bottom: "%s"' % bottom)

    # top
    for top in layer.top:
        res.append('  top: "%s"' % top)

    # loss_weight
    for loss_weight in layer.loss_weight:
        res.append('  loss_weight: %s' % loss_weight)   # fixed: was a string + float concatenation

    # param
    for param in layer.param:
        param_res = list()
        if param.lr_mult is not None:
            param_res.append('    lr_mult: %s' % param.lr_mult)
        if param.decay_mult != 1:
            param_res.append('    decay_mult: %s' % param.decay_mult)
        if len(param_res) > 0:
            res.append('  param{')
            res.extend(param_res)
            res.append('  }')

    # lrn_param
    if layer.lrn_param is not None:
        lrn_res = list()
        if layer.lrn_param.local_size != 5:
            lrn_res.append('    local_size: %d' % layer.lrn_param.local_size)
        if layer.lrn_param.alpha != 1:
            lrn_res.append('    alpha: %f' % layer.lrn_param.alpha)
        if layer.lrn_param.beta != 0.75:
            lrn_res.append('    beta: %f' % layer.lrn_param.beta)
        NormRegionMapper = {'0': 'ACROSS_CHANNELS', '1': 'WITHIN_CHANNEL'}
        if layer.lrn_param.norm_region != 0:
            lrn_res.append('    norm_region: %s' % NormRegionMapper[str(layer.lrn_param.norm_region)])
        EngineMapper = {'0': 'DEFAULT', '1': 'CAFFE', '2': 'CUDNN'}
        if layer.lrn_param.engine != 0:
            lrn_res.append('    engine: %s' % EngineMapper[str(layer.lrn_param.engine)])
        if len(lrn_res) > 0:
            res.append('  lrn_param{')
            res.extend(lrn_res)
            res.append('  }')

    # include
    if len(layer.include) > 0:
        include_res = list()
        includes = layer.include
        phase_mapper = {
            '0': 'TRAIN',
            '1': 'TEST'
        }
        for include in includes:
            if include.phase is not None:
                include_res.append('    phase: %s' % phase_mapper[str(include.phase)])   # fixed: append() takes one argument
        if len(include_res) > 0:
            res.append('  include {')
            res.extend(include_res)
            res.append('  }')

    # transform_param
    if layer.transform_param is not None:
        transform_param_res = list()
        if layer.transform_param.scale != 1:
            transform_param_res.append('    scale: %s' % layer.transform_param.scale)
        if layer.transform_param.mirror != False:
            transform_param_res.append('    mirror: %s' % layer.transform_param.mirror)   # fixed: typo (transform_param.res) and bool concatenation
        if len(transform_param_res) > 0:
            res.append('  transform_param {')
            res.extend(transform_param_res)
            res.append('  }')   # fixed: was res.res.append

    # data_param
    if layer.data_param is not None and (layer.data_param.source != "" or layer.data_param.batch_size != 0 or layer.data_param.backend != 0):
        data_param_res = list()
        if layer.data_param.source is not None:
            data_param_res.append('    source: "%s"' % layer.data_param.source)
        if layer.data_param.batch_size is not None:
            data_param_res.append('    batch_size: %d' % layer.data_param.batch_size)
        if layer.data_param.backend is not None:
            data_param_res.append('    backend: %s' % layer.data_param.backend)
        if len(data_param_res) > 0:
            res.append('  data_param: {')
            res.extend(data_param_res)
            res.append('  }')

    # convolution_param
    if layer.convolution_param is not None:
        convolution_param_res = list()
        conv_param = layer.convolution_param
        if conv_param.num_output != 0:
            convolution_param_res.append('    num_output: %d' % conv_param.num_output)
        if len(conv_param.kernel_size) > 0:
            for kernel_size in conv_param.kernel_size:
                convolution_param_res.append('    kernel_size: %d' % kernel_size)
        if len(conv_param.pad) > 0:
            for pad in conv_param.pad:
                convolution_param_res.append('    pad: %d' % pad)
        if len(conv_param.stride) > 0:
            for stride in conv_param.stride:
                convolution_param_res.append('    stride: %d' % stride)
        if conv_param.weight_filler is not None and conv_param.weight_filler.type != 'constant':
            convolution_param_res.append('    weight_filler {')
            convolution_param_res.append('      type: "%s"' % conv_param.weight_filler.type)
            convolution_param_res.append('    }')
        if conv_param.bias_filler is not None and conv_param.bias_filler.type != 'constant':
            convolution_param_res.append('    bias_filler {')
            convolution_param_res.append('      type: "%s"' % conv_param.bias_filler.type)
            convolution_param_res.append('    }')
        if len(convolution_param_res) > 0:
            res.append('  convolution_param {')
            res.extend(convolution_param_res)
            res.append('  }')

    # pooling_param
    if layer.pooling_param is not None:
        pooling_param_res = list()
        if layer.pooling_param.kernel_size > 0:
            pooling_param_res.append('    kernel_size: %d' % layer.pooling_param.kernel_size)
            pooling_param_res.append('    stride: %d' % layer.pooling_param.stride)
            pooling_param_res.append('    pad: %d' % layer.pooling_param.pad)
            PoolMethodMapper = {'0': 'MAX', '1': 'AVE', '2': 'STOCHASTIC'}
            pooling_param_res.append('    pool: %s' % PoolMethodMapper[str(layer.pooling_param.pool)])
        if len(pooling_param_res) > 0:
            res.append('  pooling_param {')
            res.extend(pooling_param_res)
            res.append('  }')

    # inner_product_param
    if layer.inner_product_param is not None:
        inner_product_param_res = list()
        if layer.inner_product_param.num_output != 0:
            inner_product_param_res.append('    num_output: %d' % layer.inner_product_param.num_output)
        if len(inner_product_param_res) > 0:
            res.append('  inner_product_param {')
            res.extend(inner_product_param_res)
            res.append('  }')

    # dropout_param
    if layer.dropout_param is not None:
        dropout_param_res = list()
        if layer.dropout_param.dropout_ratio != 0.5 or layer.dropout_param.scale_train != True:
            dropout_param_res.append('    dropout_ratio: %f' % layer.dropout_param.dropout_ratio)
            dropout_param_res.append('    scale_train: ' + str(layer.dropout_param.scale_train))
        if len(dropout_param_res) > 0:
            res.append('  dropout_param {')
            res.extend(dropout_param_res)
            res.append('  }')

    res.append('}')

    for line in res:
        print line

Here is the prototxt description file of the original ZFnet:

name: "ImageNet_Zeiler_spm"
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 96
    kernel_size: 7
    pad: 1
    stride: 2
    weight_filler {
      type: "gaussian"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param{
    local_size: 3
    alpha: 0.000050
    norm_region: WITHIN_CHANNEL
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param {
    kernel_size: 3
    stride: 2
    pad: 0
    pool: MAX
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 256
    kernel_size: 5
    pad: 0
    stride: 2
    weight_filler {
      type: "gaussian"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param{
    local_size: 3
    alpha: 0.000050
    norm_region: WITHIN_CHANNEL
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param {
    kernel_size: 3
    stride: 2
    pad: 0
    pool: MAX
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 384
    kernel_size: 3
    pad: 1
    stride: 1
    weight_filler {
      type: "gaussian"
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 384
    kernel_size: 3
    pad: 1
    stride: 1
    weight_filler {
      type: "gaussian"
    }
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    stride: 1
    weight_filler {
      type: "gaussian"
    }
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5_spm6"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5_spm6"
  pooling_param {
    kernel_size: 3
    stride: 2
    pad: 0
    pool: MAX
  }
}
layer {
  name: "pool5_spm6_flatten"
  type: "Flatten"
  bottom: "pool5_spm6"
  top: "pool5_spm6_flatten"
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5_spm6_flatten"
  top: "fc6"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param{
    lr_mult: 1.0
  }
  param{
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 1000
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}

From the resulting prototxt file it is easy to draw the network structure diagram of the original ZFnet (see this post: http://www.cnblogs.com/zjutzz/p/5955218.html).
