What's a good mojito recipe? I don't like my mojitos too sweet, so keep the sugar to a minimum.
Here's a simple mojito recipe that you can adjust to your liking:

Ingredients:
- 2 oz white rum
- 1 oz lime juice
- 1 tsp granulated sugar (or less, to taste)
- 6-8 fresh mint leaves
- Club soda
- Ice

Directions:
1. In a cocktail shaker, muddle the mint leaves with the sugar and lime juice until the leaves are bruised and the sugar has dissolved.
2. Add the rum and some ice to the shaker and shake well.
3. Fill a glass with ice and strain the contents of the shaker into the glass.
4. Top the glass with club soda and give it a stir.
5. Garnish with a sprig of mint and a lime wedge, if desired.

Note: Since you prefer less sweetness, feel free to cut the sugar further or skip it altogether. You can also adjust the amount of rum and lime juice to taste. Enjoy!
Can you write me a complete LSTM neural network from scratch while using the PyTorch module?
Sure! Here's an example implementation of a simple LSTM neural network using PyTorch:

```py
import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True means inputs have shape (batch, seq_len, input_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        # Fully connected layer maps the final hidden state to the output size
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initial hidden and cell states: (num_layers, batch, hidden_size)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)

        # out has shape (batch, seq_len, hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # Keep only the output of the last timestep: (batch, hidden_size)
        out = out[:, -1, :]
        # Project to the output size: (batch, output_size)
        out = self.fc(out)

        return out
```
This implementation defines an `LSTM` class that inherits from `nn.Module`. The constructor takes four arguments: `input_size`, `hidden_size`, `num_layers`, and `output_size`. These determine the number of input features at each timestep, the size of the hidden state, the number of stacked LSTM layers, and the number of output neurons, respectively.

The `__init__` method sets up the LSTM layer using `nn.LSTM` and a fully connected layer using `nn.Linear`. The `forward` method takes the input tensor `x`, passes it through the LSTM layer, keeps only the last timestep's output, and passes that through the fully connected layer to produce the output tensor.

To use this LSTM, you can instantiate an object of the `LSTM` class and pass input data to it as follows:

```py
input_size = 10
hidden_size = 20
num_layers = 2
output_size = 1

model = LSTM(input_size, hidden_size, num_layers, output_size)
x = torch.randn(1, 3, 10) # input tensor of shape (batch_size, sequence_length, input_size)
output = model(x) # output tensor of shape (batch_size, output_size)
```
This will create an LSTM with 2 layers, 10 input features, 20 hidden units, and 1 output neuron. The input tensor `x` has shape `(1, 3, 10)`: one sample with three timesteps and 10 input values at each timestep. The output tensor `output` has shape `(1, 1)`: one sample with one output value.
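
Because `forward` sizes the initial hidden and cell states from `x.size(0)`, the same model also accepts other batch sizes and sequence lengths without any changes. A quick sketch (the shapes here are just illustrative):

```py
x_batch = torch.randn(4, 7, 10)  # 4 samples, 7 timesteps, 10 features each
print(model(x_batch).shape)      # torch.Size([4, 1])
```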

Note that this implementation is a simple example and may need to be modified to suit your specific use case.
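
For example, here's a minimal sketch of a training loop for this model, assuming a regression setup with mean-squared-error loss and the Adam optimizer; the data below is random and purely illustrative:

```py
import torch.optim as optim

# Illustrative dummy data: 32 sequences of length 3, 10 features per timestep,
# with one regression target per sequence.
inputs = torch.randn(32, 3, 10)
targets = torch.randn(32, 1)

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()                   # clear accumulated gradients
    predictions = model(inputs)             # forward pass: shape (32, 1)
    loss = criterion(predictions, targets)
    loss.backward()                         # backpropagate
    optimizer.step()                        # update parameters
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```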