PolyRegression with PyTorch


Deep Learning Task 2

Task description:

Study the material in "Linear Units and Gradient Descent" and fit the quadratic function $y = 2x^{2} + 1$. Input: randomly generate a series of x values and compute the corresponding y from $y = 2x^{2} + 1$, adding a small random perturbation to each y so that it deviates slightly from its original position. Treat all of the resulting $(x, y)$ pairs as the sample points for the fit. Then try to fit these points with a linear unit, or with the linear layer from the PyTorch framework, to obtain a fitted curve. Output: a screenshot showing all sample points together with the fitted curve. Reference: https://www.zybuluo.com/hanbingtao/note/448086
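Besides the nn.Linear version given under "Code" below, the hand-written "linear unit plus gradient descent" route mentioned above can be sketched in a few lines: a weight vector over the features [x, x^2] and a bias are updated directly from their gradients. This is a minimal sketch only; the sample count (100), noise scale (0.1), learning rate (0.1) and step count (2000) are illustrative assumptions, not values fixed by the task.

    # A hand-written linear unit over the features [x, x^2], trained by plain
    # gradient descent.  Sample count, noise scale, learning rate and step
    # count are illustrative choices only.
    import torch

    x = torch.rand(100) * 2 - 1                   # 100 random x values in [-1, 1)
    y = 2 * x ** 2 + 1 + torch.randn(100) * 0.1   # y = 2x^2 + 1 plus small noise

    features = torch.stack([x, x ** 2], dim=1)    # each sample becomes [x, x^2]
    w = torch.zeros(2, requires_grad=True)        # weights of the linear unit
    b = torch.zeros(1, requires_grad=True)        # bias of the linear unit

    for step in range(2000):
        pred = features @ w + b                   # linear unit: w0*x + w1*x^2 + b
        loss = ((pred - y) ** 2).mean()           # mean squared error
        loss.backward()
        with torch.no_grad():                     # plain gradient-descent update
            w -= 0.1 * w.grad
            b -= 0.1 * b.grad
            w.grad.zero_()
            b.grad.zero_()

    print(w.tolist(), b.item())                   # w[1] should end up near 2, b near 1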

Task requirements:

Use at least 100 sample points.
Try to complete the task with PyTorch.
Write up a project document and include screenshots of the program running.
If interested, try different combinations of learning rate and optimizer and observe the fitting process (a variation along these lines is sketched after the program below).

Code:

"""多项式回归代码实现""" import torch from torch.autograd import Variable import torch.nn as nn import torch.optim as optim import matplotlib.pyplot as plt import numpy as np def make_features(x): """Builds features i.e. a matrix with columns [x, x^2, x^3].""" x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1) return torch.cat([x ** i for i in range(1, 3)], 1) def f(x): """Approximated function.""" return x.mm(W_target) + b_target[0]+torch.rand(100, 1)*0.4 def get_batch(batch_size=20): """Builds a batch i.e. (x, f(x)) pair.""" random = torch.randn(batch_size) random = np.sort(random) random = torch.Tensor(random) x = make_features(random) y = f(x) if torch.cuda.is_available(): return Variable(x).cuda(), Variable(y).cuda() else: return Variable(x), Variable(y) # Define model class poly_model(nn.Module): def __init__(self): super(poly_model, self).__init__() self.poly = nn.Linear(2, 1) def forward(self, x): out = self.poly(x) return out if __name__ == '__main__': W_target = torch.FloatTensor([0, 2]).unsqueeze(1) b_target = torch.FloatTensor([1]) if torch.cuda.is_available(): model = poly_model().cuda() else: model = poly_model() criterion = nn.MSELoss() optimizer = optim.SGD(model.parameters(), lr=1e-2) epoch = 0 while True: # Get data batch_x, batch_y = get_batch() # Forward pass output = model(batch_x) loss = criterion(output, batch_y) print_loss = loss.item() # Reset gradients optimizer.zero_grad() # Backward pass loss.backward() # update parameters optimizer.step() epoch += 1 if print_loss < 1e-2: break print("Loss: {:.6f} after {} batches".format(loss.item(), epoch)) print( "==> Learned function: y = {:.2f} + {:.2f}*x^2 ".format(model.poly.bias[0], model.poly.weight[0][1] )) print("==> Actual function: y = {:.2f} + {:.2f}*x^2 ".format(b_target[0], W_target[1][0])) predict = model(batch_x) batch_x = batch_x.cpu() batch_y = batch_y.cpu() x = batch_x.numpy()[:, 0] plt.plot(x, batch_y.numpy(), 'ro') predict = predict.cpu() predict = predict.data.numpy() plt.plot(x, predict, 'b') plt.show() 输出结果,如下: Loss: 0.009942 after 2345 batches ==> Learned function: y = 1.21 + 1.96*x^2 ==> Actual function: y = 1.00 + 2.00*x^2

The sample points (red dots) and the fitted curve (blue line) are plotted in the figure below:
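For the optional requirement about learning rates and optimizers, one possible experiment is sketched below: retrain a fresh model for each (optimizer, learning rate) pair and record how many batches it needs to reach the same loss threshold as the main loop. The sketch reuses poly_model and get_batch from the program above; the particular pairs and the 20000-batch cap are arbitrary illustrative choices.

    # Sketch of the optional experiment: compare optimizer / learning-rate pairs.
    # Reuses poly_model and get_batch from the program above; the pairs listed
    # here and the 20000-batch cap are arbitrary illustrative choices.
    import torch
    import torch.nn as nn
    import torch.optim as optim

    def batches_to_converge(make_optimizer, threshold=1e-2, max_steps=20000):
        """Trains a fresh model and returns how many batches reach the loss threshold."""
        model = poly_model()
        if torch.cuda.is_available():
            model = model.cuda()
        criterion = nn.MSELoss()
        optimizer = make_optimizer(model.parameters())
        for step in range(1, max_steps + 1):
            batch_x, batch_y = get_batch()
            loss = criterion(model(batch_x), batch_y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < threshold:
                return step
        return max_steps

    for name, make_opt in [
        ("SGD lr=1e-2", lambda p: optim.SGD(p, lr=1e-2)),
        ("SGD lr=1e-1", lambda p: optim.SGD(p, lr=1e-1)),
        ("Adam lr=1e-2", lambda p: optim.Adam(p, lr=1e-2)),
    ]:
        print(name, "->", batches_to_converge(make_opt), "batches")

Because each call to get_batch() draws fresh noise, the exact batch counts vary from run to run, so averaging over a few runs gives a fairer comparison.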

