
Conversation

davidkyle (Member)

The main motivation for this change is to enable round-trip testing of the Java and C++ processes with a small TorchScript model, since downloading a large BERT model in the integration tests is not practical. Additionally, this represents another use case, which is beneficial for evolving the design.

PyTorch tensors have a data type: BERT uses int64 whereas this model expects float32, so to distinguish between the two a new field has been added to the input JSON. The JSON document is used only for internal communication between Java and C++ and can be changed freely provided both sides are upgraded together; for this reason I have gone for a practical rather than elegant structure.

test_run.json describes the input and output
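
For illustration only, a request document carrying the new type field might look something like the sketch below; the field names are hypothetical, not the actual schema from test_run.json:

# Hypothetical request document (field names are illustrative, not the real schema):
request = {
    "request_id": "1",
    "input_type": "float32",  # assumed new field: distinguishes this model's float32 inputs from BERT's int64
    "inputs": [[0.5, 0.25, 0.125]],  # one row of (x, x^2, x^3) features
}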

The app knows little about the model it is evaluating; some introspection is possible but not much. In future a more complex command document may be required to express how the inputs and outputs should be processed.

cc @dimitris-athanasiou

davidkyle (Member, Author)

The model is trained to find the weights of the polynomial w1*x + w2*x^2 + w3*x^3 that approximates the sin function, and is taken from https://pytorch.org/tutorials/beginner/pytorch_with_examples.html.

PyTorch linear layers perform the transform y = xA^T + b, where A^T is the transpose of the weight matrix A.
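
A quick way to see this transform concretely (a minimal sketch for illustration, not part of the PR):

import torch

# A Linear(3, 1) layer stores a weight matrix A of shape (1, 3) and a bias of shape (1,).
layer = torch.nn.Linear(3, 1)
x = torch.randn(5, 3)

# The forward pass is equivalent to x @ A^T + b.
manual = x @ layer.weight.T + layer.bias
assert torch.allclose(layer(x), manual)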

Unfortunately, the trained model is a very loose approximation, and the model parameters differ each time the model is trained, so wide margins are required when testing the expected result.

import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')


traced_model = torch.jit.trace(model, (xx))
torch.jit.save(traced_model, "simplemodel.pt")
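
As a usage sketch (not from the PR), the saved model can be loaded back and checked against torch.sin with deliberately wide margins; the tolerance value here is an assumption:

import math
import torch

loaded = torch.jit.load("simplemodel.pt")
loaded.eval()

# Evaluate at a few points; build the (x, x^2, x^3) features the model expects.
x = torch.tensor([-1.0, 0.0, 1.0])
xx = x.unsqueeze(-1).pow(torch.tensor([1, 2, 3]))

with torch.no_grad():
    y_pred = loaded(xx)

# The cubic fit is loose, so allow a generous absolute tolerance (assumed value).
assert torch.allclose(y_pred, torch.sin(x), atol=0.2)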

dimitris-athanasiou (Contributor) left a comment

LGTM

davidkyle merged commit 2a44ffb into elastic:feature/pytorch-inference on Mar 23, 2021
davidkyle deleted the add-simple-model branch on March 23, 2021 at 11:43