Implementing SGD and Linear Regression with PyTorch

Finally starting to learn PyTorch! At first I still need to get used to Python's quirky coding style (probably because I'm not yet comfortable with the object-oriented way of programming, emmm).

Every great goal has a humble beginning.

I. Implementing Gradient Descent with PyTorch

import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = torch.Tensor([1.0])
w.requires_grad = True  # any tensor we need gradients for must have requires_grad set to True

def forward(x):
    return x * w

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2

print("predict (before training)", 4, forward(4).item())
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        l = loss(x, y)
        l.backward()
        print('\tgrad:', x, y, w.grad.item())
        w.data = w.data - 0.01 * w.grad.data  # must operate on .data, otherwise a new computational graph is built; note that .grad is itself a tensor
        w.grad.data.zero_()  # zero the gradient, otherwise gradients accumulate across steps
    print("progress:", epoch, l.item())
print("predict (after training)", 4, forward(4).item())

(Figure: construction of the computational graph)

II. Implementing Linear Regression with PyTorch

1. Prepare the dataset

import torch

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
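Note the nested brackets (the original was missing them): each inner list is one sample, so both tensors have shape (3, 1), i.e. three samples with one feature each, which is exactly what the nn.Linear(1, 1) layer below expects. A quick check:

print(x_data.shape)  # torch.Size([3, 1])
print(y_data.shape)  # torch.Size([3, 1])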

2. Design the model as a class inheriting from nn.Module


class LinearModel(torch.nn.Module):  # inherit from nn.Module, the base class for all neural network models
    def __init__(self):
        super(LinearModel, self).__init__()  # call the parent class's __init__()
        self.linear = torch.nn.Linear(1, 1)  # nn.Linear holds two Tensor members: the weight and the bias
    def forward(self, x):  # __call__() invokes forward(), so forward() must be implemented
        y_pred = self.linear(x)  # self.linear is a callable object; this performs one y = wx + b pass
        return y_pred
model = LinearModel()  # model is callable: model(x)

class torch.nn.Linear(in_features, out_features, bias = True)

  • Applies a linear transformation to the incoming data: y = Ax + b.
    In the input tensor, the row index corresponds to the sample
    and the column index to the feature (see the shape sketch after this list).
  • in_features: size of EACH input sample
  • out_features: size of EACH output sample
  • bias: if set to False, the layer will not learn an additive bias.
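A small sketch of this shape convention. The layer here is freshly constructed, so its weight and bias are random (illustration only):

import torch

layer = torch.nn.Linear(in_features=1, out_features=1)
x = torch.Tensor([[1.0], [2.0], [3.0]])  # (3, 1): 3 samples, 1 feature each
y = layer(x)                             # (3, 1): 3 samples, 1 output each
print(y.shape)                               # torch.Size([3, 1])
print(layer.weight.shape, layer.bias.shape)  # torch.Size([1, 1]) torch.Size([1])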

Callable classes:

#-------------------------------------
def func(*args, **kwargs):
    print(args)
    print(kwargs)
func(1, 2, 4, 3, x = 3, y = 5)
# (1, 2, 4, 3)
# {'x' : 3, 'y' : 5}
#this is how a variable number of arguments is handled
#-------------------------------------
class Foobar:
    def __init__(self):
        pass
    def __call__(self, *args, **kwargs):
        print('Hello' + str(args[0]))
foobar = Foobar()
foobar(1, 2, 3)
# Hello1
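torch.nn.Module relies on exactly this mechanism: calling model(x) goes through Module.__call__, which dispatches to your forward(). A deliberately simplified sketch of the idea, not the real Module implementation:

class MiniModule:
    def __call__(self, *args, **kwargs):
        # the real torch.nn.Module.__call__ also runs hooks around this
        return self.forward(*args, **kwargs)

class Doubler(MiniModule):
    def forward(self, x):
        return 2 * x

print(Doubler()(21))  # 42 -- calling the instance runs forward()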

3. Build the loss function and the optimizer with the PyTorch API

criterion = torch.nn.MSELoss(size_average = False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # model.parameters() returns the trainable parameters; lr is the learning rate
class torch.nn.MSELoss(size_average = True, reduce = True)
  1. Creates a criterion that measures the mean squared error between n elements in the input x and target y

  2. The loss can be described as:
    $\ell(x, y) = L = \{l_1, \dots, l_N\}^T, \quad l_n = (x_n - y_n)^2$
    where $N$ is the batch size
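A quick numeric check of this formula. With size_average=False the per-element losses l_n are summed rather than averaged (in current PyTorch the same option is spelled reduction='sum'):

import torch

criterion = torch.nn.MSELoss(reduction='sum')  # modern spelling of size_average=False
x = torch.Tensor([1.0, 2.0, 3.0])
y = torch.Tensor([2.0, 4.0, 6.0])
print(criterion(x, y))       # tensor(14.) = 1^2 + 2^2 + 3^2
print(((x - y) ** 2).sum())  # same value, computed by hand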

class torch.optim.SGD(params, lr, momentum = 0, dampening = 0, weight_decay = 0, nesterov = False)

Implements stochastic gradient descent (optionally with momentum).
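For plain SGD (momentum = 0), optimizer.step() performs exactly the manual update from Part I, w ← w − lr · grad. A minimal sketch verifying this on a single scalar parameter:

import torch

w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w - 3.0) ** 2  # dloss/dw = 2 * (w - 3) = -4 at w = 1
loss.backward()
opt.step()
print(w)  # tensor([1.4000], requires_grad=True), since 1.0 - 0.1 * (-4.0) = 1.4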

4. The training loop

for epoch in range(100):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss)  # loss is a tensor object; printing it does not build a computational graph
    optimizer.zero_grad()  # zero the gradients, otherwise they accumulate
    loss.backward()  # back propagation
    optimizer.step()  # update the parameters
# Output weight and bias
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())
# Test the model
x_test = torch.Tensor([4.0])
y_test = model(x_test)
print('y_pred = ', y_test.data)
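Since the data lies exactly on y = 2x, a successful run should end with w close to 2.0, b close to 0.0, and y_pred close to 8.0. At test time it is also common to disable gradient tracking; a small sketch:

with torch.no_grad():  # no computational graph is needed for inference
    y_test = model(torch.Tensor([[4.0]]))
print('y_pred = ', y_test.item())  # should be close to 8.0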

Several different optimizers are available:

  • torch.optim.Adagrad
  • torch.optim.Adam
  • torch.optim.Adamax
  • torch.optim.ASGD
  • torch.optim.LBFGS
  • torch.optim.RMSprop
  • torch.optim.Rprop
  • torch.optim.SGD
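All of these share the same construction interface, so trying another optimizer only changes one line; for example (same model and learning rate assumed):

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # drop-in replacement for SGD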
