Deep Learning in Practice --- Cats vs. Dogs (PyTorch Implementation)
Data Preparation
Microsoft's dataset is already organized by class, so it can be used directly.
Data Splitting
We move the cat and dog images into a training set and a validation set: 90% of the images go to the training set and 10% to the validation set, using shutil.move() to move the files.
Create two folders, train and test, put the full dataset into train, then use the code below to move 10% of the images into test.
File-moving code:
import os
import shutil

source_path = r"E:\猫狗大战数据集\PetImages"
train_dir = os.path.join(source_path, "train")
test_dir = os.path.join(source_path, "test")
train_dir_list = os.listdir(train_dir)
for dir_name in train_dir_list:
    category_dir_path = os.path.join(train_dir, dir_name)
    image_file_list = os.listdir(category_dir_path)
    num = int(0.1 * len(image_file_list))
    dest_dir = os.path.join(test_dir, dir_name)
    os.makedirs(dest_dir, exist_ok=True)
    # Move 10% of the files into the matching test subfolder
    for i in range(num):
        shutil.move(os.path.join(category_dir_path, image_file_list[i]),
                    os.path.join(dest_dir, image_file_list[i]))
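The same 90/10 move logic can be sanity-checked on a throwaway directory before touching the real dataset (a minimal sketch using `tempfile`; the file names and counts are made up for illustration):

```python
import os
import shutil
import tempfile

# Build a throwaway "train/Dog" folder with 10 dummy image files
root = tempfile.mkdtemp()
train_cat = os.path.join(root, "train", "Dog")
test_cat = os.path.join(root, "test", "Dog")
os.makedirs(train_cat)
os.makedirs(test_cat)
for i in range(10):
    open(os.path.join(train_cat, f"{i}.jpg"), "w").close()

# Move 10% (here: 1 file) from train to test, as in the split above
files = os.listdir(train_cat)
num = int(0.1 * len(files))
for name in files[:num]:
    shutil.move(os.path.join(train_cat, name), os.path.join(test_cat, name))

print(len(os.listdir(train_cat)), len(os.listdir(test_cat)))  # 9 1
```

Note that `os.listdir` returns files in arbitrary order, so this split is not deterministic; shuffling the list with a fixed seed first would make it reproducible.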
After moving (resulting directory structure):
Data Visualization
import os
import matplotlib.pyplot as plt
from PIL import Image  # image-loading module

source_path = r"E:\猫狗大战数据集\PetImages"
# Pick 10 images each from the Dog and Cat folders to display
train_Dog_dir = os.path.join(source_path, "train", "Dog")
train_Cat_dir = os.path.join(source_path, "train", "Cat")
Dog_image_list = os.listdir(train_Dog_dir)
Cat_image_list = os.listdir(train_Cat_dir)
show_image = [os.path.join(train_Dog_dir, Dog_image_list[i]) for i in range(10)]
show_image.extend([os.path.join(train_Cat_dir, Cat_image_list[i]) for i in range(10)])
for i in show_image:
    print(i)
plt.figure()
for i in range(1, 21):  # 20 images on a 4x5 grid
    plt.subplot(4, 5, i)
    img = Image.open(show_image[i - 1])
    plt.imshow(img)
plt.show()
Result (figure):
As the figure shows, the images come in different sizes, so they must be resized during preprocessing.
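The fixed-size requirement can be illustrated directly with PIL (a minimal sketch; the two input sizes are arbitrary stand-ins for real dataset images):

```python
from PIL import Image

# Two synthetic images with different sizes, as in the raw dataset
img_a = Image.new("RGB", (500, 375))
img_b = Image.new("RGB", (320, 240))

# Resize both to the 128x128 input the network expects
resized = [im.resize((128, 128)) for im in (img_a, img_b)]
print([im.size for im in resized])  # [(128, 128), (128, 128)]
```

In the training script below, `transforms.Resize(128)` instead scales the shorter side to 128 and `CenterCrop(128)` trims to a square, which distorts the aspect ratio less than a direct resize.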
Training with a Pretrained Model (ResNet)
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.models as models
from torchvision import datasets, transforms
from visdom import Visdom

if __name__ == '__main__':
    # Data preprocessing
    data_transform = transforms.Compose([
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
    ])
    train_dataset = datasets.ImageFolder(root=r'E:/猫狗大战数据集/PetImages/train/', transform=data_transform)
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=4, shuffle=True, num_workers=4)
    test_dataset = datasets.ImageFolder(root=r'E:/猫狗大战数据集/PetImages/test/', transform=data_transform)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=4, shuffle=True, num_workers=4)
    # Loss function
    criteon = nn.CrossEntropyLoss()
    # Load the pretrained model and replace its final layer with a 2-class head
    transfer_model = models.resnet18(pretrained=True)
    dim_in = transfer_model.fc.in_features
    transfer_model.fc = nn.Linear(dim_in, 2)
    # Adam optimizer
    optimizer = optim.Adam(transfer_model.parameters(), lr=0.01)
    # Move the model to the GPU
    transfer_model = transfer_model.cuda()
    viz = Visdom()
    viz.line([[0.0, 0.0]], [0.], win='train', opts=dict(title="train_loss&&acc", legend=['loss', 'acc']))
    viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title="test loss&&acc.", legend=['loss', 'acc']))
    global_step = 0
    # Model training
    transfer_model.train()
    for epoch in range(10):
        train_acc_num = 0
        test_acc_num = 0
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.cuda(), target.cuda()
            # Feed the batch through the model to get predictions
            logits = transfer_model(data)
            _, pred = torch.max(logits.data, 1)
            loss = criteon(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Accuracy computation
            train_acc_num += pred.eq(target).float().sum().item()
            train_acc = train_acc_num / ((batch_idx + 1) * len(data))
            global_step += 1
            viz.line([[loss.item(), train_acc]], [global_step], win='train', update='append')
            if batch_idx % 200 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f},acc:{}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item(), train_acc))
        test_loss = 0
        for data, target in test_loader:
            data, target = data.cuda(), target.cuda()
            logits = transfer_model(data)
            test_loss += criteon(logits, target).item()
            _, pred = torch.max(logits.data, 1)
            # Accuracy computation
            test_acc_num += pred.eq(target).float().sum().item()
        viz.line([[test_loss / len(test_loader.dataset), test_acc_num / len(test_loader.dataset)]],
                 [global_step], win='test', update='append')
        test_acc = test_acc_num / len(test_loader.dataset)
        viz.images(data.view(-1, 3, 128, 128), win='x')
        viz.text(str(pred.detach().cpu().numpy()), win='pred', opts=dict(title='pred'))
        test_loss /= len(test_loader.dataset)
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, test_acc_num, len(test_loader.dataset), 100. * test_acc))
Sample training output:
Train Epoch: 0 [0/22498 (0%)]    Loss: 1.061759,acc:0.25
Train Epoch: 0 [800/22498 (4%)]    Loss: 0.708053,acc:0.5174129353233831
Train Epoch: 0 [1600/22498 (7%)]    Loss: 0.403057,acc:0.5155860349127181
Train Epoch: 0 [2400/22498 (11%)]    Loss: 0.721054,acc:0.5033277870216306
Train Epoch: 0 [3200/22498 (14%)]    Loss: 0.629318,acc:0.5037453183520599
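The running accuracy printed above is simply the correct-prediction count divided by the number of samples seen so far; a torch-free sketch of the same bookkeeping (the prediction and label lists are hypothetical):

```python
def running_accuracy(preds, targets):
    # Equivalent of pred.eq(target).float().sum() / samples_seen in the loop
    correct = sum(1 for p, t in zip(preds, targets) if p == t)
    return correct / len(targets)

print(running_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

Early in epoch 0 this hovers near 0.5, which is exactly chance level for two balanced classes, before climbing as the model fits.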
Here, Visdom is the module used to visualize training curves and predictions.
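Visdom needs its server running before the training script starts, otherwise the `viz.line` calls cannot connect; it is launched from a separate terminal (default port 8097):

```shell
# Start the Visdom server, then open http://localhost:8097 in a browser
python -m visdom.server
```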

Published: 2024-09-22 07:21:41

Original link: https://www.17tex.com/tex/3/215949.html
