How to configure a PyTorch optimizer?
Published on Aug. 22, 2023, 12:18 p.m.
To configure a PyTorch optimizer, you create an instance of an optimizer class from the torch.optim module and pass it the parameters of your model that should be updated during training. Here is an example of how to configure the Stochastic Gradient Descent (SGD) optimizer:
import torch.nn as nn
import torch.optim as optim

# A simple two-layer feed-forward network
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(10, 20)
        self.fc2 = nn.Linear(20, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

model = MyModel()

# SGD over the model's parameters, with a learning rate of 0.001 and momentum of 0.9
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
In this code, we create an instance of the SGD optimizer by passing it the model's parameters and a learning rate. We also specify a momentum value of 0.9, which controls how strongly past gradients influence each parameter update. PyTorch provides other optimizers as well, such as Adam, RMSprop, and Adagrad, and they are configured in the same way.
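For instance, here is a short sketch of configuring Adam for the same model (the learning rate and betas shown are illustrative values, not part of the original example):

# Adam over the same model parameters; hyperparameter values are illustrative
optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

# RMSprop and Adagrad follow the same pattern
# optimizer = optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99)
# optimizer = optim.Adagrad(model.parameters(), lr=0.01)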
Once you have defined and configured your optimizer, you can use it to update the model parameters in a training loop:
for epoch in range(num_epochs):
    for input_data, target in dataset:
        output = model(input_data)      # forward pass
        loss = loss_fn(output, target)  # compute the loss
        optimizer.zero_grad()           # reset gradients from the previous step
        loss.backward()                 # backpropagate to compute new gradients
        optimizer.step()                # update the model parameters
In this code, we call the optimizer's zero_grad method to reset the gradients to zero, compute new gradients by calling backward on the loss, and update the model parameters with the optimizer's step method.
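The loop above assumes that num_epochs, dataset, and loss_fn are already defined. Here is a minimal sketch of those pieces using randomly generated dummy data and a mean squared error loss (these choices are illustrative assumptions, not part of the original example):

import torch

# Dummy dataset: 100 samples with 10 features each, matching the model's input size,
# and one target value per sample, matching the model's output size
inputs = torch.randn(100, 10)
targets = torch.randn(100, 1)
dataset = list(zip(inputs, targets))

loss_fn = nn.MSELoss()  # mean squared error loss for this regression-style example
num_epochs = 5          # illustrative number of passes over the data

With these definitions in place, the training loop above runs end to end; in a real project you would typically wrap your data in a torch.utils.data.DataLoader instead of a plain list.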