Fine-tuning the VGG-Face Model with Caffe on Windows
2016-8-29
This article describes how to fine-tune the VGG face-recognition deep neural network model with Caffe on Windows. Environment: Windows 7 + VS2013 + MATLAB 2015a + Caffe (Microsoft Windows branch).
1. Code and file preparation
Caffe code: github/BVLC/caffe/tree/windows. It is ready to use once built with VS2013; follow the build instructions on that page. VGG-Face model: ac.uk/~vgg/software/vgg_face/. That page provides Caffe, MatConvNet, and Torch versions; download the Caffe version.
2. Data preparation
Collect face images and their corresponding labels. VGG takes 224×224 input, so resize any image that does not match. Then use Caffe's convert_imageset tool to convert the face images and labels into a LevelDB database, and use Caffe's compute_image_mean tool to compute the mean file face_mean.binaryproto over the faces in that database.
Place the training and validation databases at the following locations:
J:/caffe-windows/vggface_mycmd/vggface_train_leveldb
J:/caffe-windows/vggface_mycmd/vggface_val_leveldb
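The preparation steps above can be sketched as follows: a small Python helper writes the "image_path label" listing that convert_imageset consumes, and the two tool invocations are shown in comments. All file names, paths, and labels here are illustrative, not the article's actual data.

```python
def write_listfile(pairs, path):
    """Write the 'image_path label' listing that Caffe's
    convert_imageset tool expects, one sample per line."""
    with open(path, "w") as f:
        for img, label in pairs:
            f.write("%s %d\n" % (img, label))

# Hypothetical samples; substitute your own image paths and identity labels.
train_pairs = [("faces/alice/001.jpg", 0), ("faces/bob/001.jpg", 1)]
write_listfile(train_pairs, "train.txt")

# The listing is then fed to the Caffe tools; the --resize_* flags take
# care of the 224x224 size requirement:
#   convert_imageset --backend=leveldb --resize_height=224 --resize_width=224 \
#       ./ train.txt J:/caffe-windows/vggface_mycmd/vggface_train_leveldb
#   compute_image_mean --backend=leveldb \
#       J:/caffe-windows/vggface_mycmd/vggface_train_leveldb face_mean.binaryproto
```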
3. Training the network
Now we can train the network. First download the VGG-Face Caffe model from the VGG-Face page (MatConvNet and Torch models are also provided; I tried fine-tuning the MatConvNet one and it was too slow, and did not try Torch). After downloading you have two files: VGG_FACE_deploy.prototxt and VGG_FACE.caffemodel (the trained network weights).
Create a .prototxt file named vggface_train_test.prototxt (the network definition file for training), copy the entire contents of VGG_FACE_deploy.prototxt into it, and then add the data layers. VGG_FACE.caffemodel (the network weights) must match vggface_train_test.prototxt (the network definition): Caffe copies weights into the layers whose names match.
(1). vggface_train_test.prototxt
name: "VGG_ILSVRC_16_layers"
layers {
name: "data"
type: DATA
include {
phase: TRAIN
}
transform_param {
crop_size: 224
mean_value: 104
mean_value: 117
mean_value: 123
mirror: true
}
data_param {
source: "J:/caffe-windows/vggface_mycmd/vggface_train_leveldb"
batch_size: 3
backend: LEVELDB
}
top: "data"
top: "label"
}
layers {
name: "data"
type: DATA
include {
phase: TEST
}
transform_param {
crop_size: 224
mean_value: 104
mean_value: 117
mean_value: 123
mirror: false
}
data_param {
source: "J:/caffe-windows/vggface_mycmd/vggface_val_leveldb"
batch_size: 3
backend: LEVELDB
}
top: "data"
top: "label"
}
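For reference, the TRAIN-phase transform_param above amounts to: take a random 224×224 crop, subtract the per-channel BGR means (104, 117, 123), and mirror horizontally at random. A rough numpy sketch of that preprocessing (illustrative only, not Caffe's actual implementation):

```python
import numpy as np

# BGR channel means from the transform_param above.
MEAN_BGR = np.array([104.0, 117.0, 123.0])

def transform(img_bgr, crop_size=224, mirror=True, rng=None):
    """Roughly what Caffe's TRAIN-phase transform_param does:
    random crop to crop_size, per-channel mean subtraction,
    random horizontal mirror. img_bgr is an H x W x 3 array."""
    rng = rng or np.random.default_rng()
    h, w, _ = img_bgr.shape
    y = rng.integers(0, h - crop_size + 1)
    x = rng.integers(0, w - crop_size + 1)
    out = img_bgr[y:y + crop_size, x:x + crop_size].astype(np.float64) - MEAN_BGR
    if mirror and rng.integers(0, 2):  # flip with probability 0.5
        out = out[:, ::-1]
    return out
```

At TEST time the same code would run with mirror=False and a center crop instead of a random one.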
layers {
bottom: "data"
top: "conv1_1"
name: "conv1_1"
type: CONVOLUTION
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv1_1"
top: "conv1_1"
name: "relu1_1"
type: RELU
}
layers {
bottom: "conv1_1"
top: "conv1_2"
name: "conv1_2"
type: CONVOLUTION
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv1_2"
top: "conv1_2"
name: "relu1_2"
type: RELU
}
layers {
bottom: "conv1_2"
top: "pool1"
name: "pool1"
type: POOLING
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layers {
bottom: "pool1"
top: "conv2_1"
name: "conv2_1"
type: CONVOLUTION
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv2_1"
top: "conv2_1"
name: "relu2_1"
type: RELU
}
layers {
bottom: "conv2_1"
top: "conv2_2"
name: "conv2_2"
type: CONVOLUTION
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv2_2"
top: "conv2_2"
name: "relu2_2"
type: RELU
}
layers {
bottom: "conv2_2"
top: "pool2"
name: "pool2"
type: POOLING
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layers {
bottom: "pool2"
top: "conv3_1"
name: "conv3_1"
type: CONVOLUTION
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv3_1"
top: "conv3_1"
name: "relu3_1"
type: RELU
}
layers {
bottom: "conv3_1"
top: "conv3_2"
name: "conv3_2"
type: CONVOLUTION
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}
layers {
bottom: "conv3_2"
top: "conv3_2"
name: "relu3_2"
type: RELU
}
layers {
bottom: "conv3_2"
top: "conv3_3"
name: "conv3_3"
type: CONVOLUTION
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
}
}