
51.2 Training MNIST

Next, let's train on MNIST. The code is shown below (the import statements are omitted).

steps/step51.py

max_epoch = 5
batch_size = 100
hidden_size = 1000

train_set = dezero.datasets.MNIST(train=True)
test_set = dezero.datasets.MNIST(train=False)
train_loader = DataLoader(train_set, batch_size)
test_loader = DataLoader(test_set, batch_size, shuffle=False)

model = MLP((hidden_size, 10))
optimizer = optimizers.SGD().setup(model)

for epoch in range(max_epoch):
    sum_loss, sum_acc = 0, 0

    for x, t in train_loader:
        y = model(x)
        loss = F.softmax_cross_entropy(y, t)
        acc = F.accuracy(y, t)
        model.cleargrads()
        loss.backward()
        optimizer.update()
        # accumulate loss/accuracy weighted by batch size
        sum_loss += float(loss.data) * len(t)
        sum_acc += float(acc.data) * len(t)

    print('epoch: {}'.format(epoch + 1))
    print('train loss: {:.4f}, accuracy: {:.4f}'.format(
        sum_loss / len(train_set), sum_acc / len(train_set)))

    sum_loss, sum_acc = 0, 0
    with dezero.no_grad():  # no gradients needed during evaluation
        for x, t in test_loader:
            y = model(x)
            loss = F.softmax_cross_entropy(y, t)
            acc = F.accuracy(y, t)
            sum_loss += float(loss.data) * len(t)
            sum_acc += float(acc.data) * len(t)

    print('test loss: {:.4f}, accuracy: {:.4f}'.format(
        sum_loss / len(test_set), sum_acc / len(test_set)))
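Two details of this loop are worth noting. First, the loss and accuracy are accumulated per batch weighted by the batch size len(t), so dividing by the dataset size at the end gives the correct average even if the final batch is smaller. Second, F.accuracy only measures the fraction of correct predictions; it is not differentiable and is never used for backpropagation. The following is a minimal NumPy sketch of the argmax-based computation it performs (accuracy_sketch is an illustrative name, not DeZero's API):

import numpy as np

def accuracy_sketch(y, t):
    # y: (N, 10) array of class scores, t: (N,) array of integer labels
    pred = y.argmax(axis=1)    # predicted class for each sample
    return (pred == t).mean()  # fraction of correct predictions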

Output

epoch: 1  
train loss: 1.9103, accuracy: 0.5553  
test loss: 1.5413, accuracy: 0.6751  
epoch: 2  
train loss: 1.2765, accuracy: 0.7774  
test loss: 1.0366, accuracy: 0.8035  
epoch: 3  
train loss: 0.9195, accuracy: 0.8218  
test loss: 0.7891, accuracy: 0.8345  
epoch: 4  
train loss: 0.7363, accuracy: 0.8414  
test loss: 0.6542, accuracy: 0.8558  
epoch: 5  
train loss: 0.6324, accuracy: 0.8542  
test loss: 0.5739, accuracy: 0.8668

Compared with the previous step, the only changes are that we now use the MNIST dataset and that the hyperparameter values have been adjusted. These modifications alone are enough to train on MNIST. As the output shows, the recognition accuracy on the test dataset reaches about 86%. Training for more epochs would raise the accuracy further, but there is another way to improve it more fundamentally. At the end of this step, we will build a model with higher accuracy.
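As a preview, the improvement amounts to two small changes to the script above: swap the default sigmoid activation for ReLU (adding one more hidden layer at the same time), and replace plain SGD with an adaptive optimizer such as Adam. A sketch, assuming DeZero's MLP accepts an activation keyword argument and that optimizers.Adam is available:

model = MLP((hidden_size, hidden_size, 10), activation=F.relu)  # deeper MLP with ReLU
optimizer = optimizers.Adam().setup(model)  # Adam in place of SGD

The rest of the training loop stays exactly the same.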