how to use nn.embedding
The `nn.Embedding` module in PyTorch is a lookup table that maps integer word indices to dense, continuous-valued vectors (word embeddings). Functionally it is equivalent to multiplying a one-hot vector by a weight matrix, but it performs a direct row lookup instead of ever materializing the sparse one-hot vectors. Here's how you can use it:

1. Initialize the `nn.Embedding` layer with the vocabulary size (`num_embeddings`) and the size of each embedding vector (`embedding_dim`):
```python
import torch
import torch.nn as nn

num_embeddings = 10_000  # vocabulary size (number of distinct indices)
embedding_dim = 128      # dimensionality of each embedding vector

embedding = nn.Embedding(num_embeddings, embedding_dim)
```
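Under the hood, the layer stores a single `(num_embeddings, embedding_dim)` weight matrix with one row per index, initialized from a standard normal distribution by default. A minimal sketch, using made-up sizes (a vocabulary of 1,000 tokens and 64-dimensional vectors) to inspect it:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=64)

# The learnable lookup table: one 64-dimensional row per index.
print(embedding.weight.shape)  # torch.Size([1000, 64])

# Looking up index i returns row i of the weight matrix.
row = embedding(torch.tensor([3], dtype=torch.long))
print(torch.equal(row[0], embedding.weight[3]))  # True
```

Because `embedding.weight` is an `nn.Parameter`, these rows are updated by the optimizer during training like any other layer weights.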
2. The `nn.Embedding` layer takes a tensor of integer indices as input, where each index identifies a word, and maps each index to its corresponding embedding vector.
```python
inputs = torch.tensor([1, 2, 3, 4, 5], dtype=torch.long)
embedded = embedding(inputs)
print(embedded.shape)
# Output: torch.Size([5, embedding_dim])
```
Note that the input tensor must contain integer indices (dtype `torch.long` or `torch.int`) in the range `[0, num_embeddings)`. The values are indices into the embedding table, not the words themselves; out-of-range indices raise an error.
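In practice, inputs are usually batched: a `(batch, seq_len)` tensor of indices maps to a `(batch, seq_len, embedding_dim)` tensor. A sketch with made-up sizes, also showing the optional `padding_idx` argument, which pins that index's embedding to the zero vector and excludes it from gradient updates:

```python
import torch
import torch.nn as nn

# Index 0 is reserved for padding; its embedding stays all zeros.
embedding = nn.Embedding(num_embeddings=1000, embedding_dim=64, padding_idx=0)

# A batch of 2 sequences, each padded to length 4 with index 0.
batch = torch.tensor([[5, 9, 2, 0],
                      [7, 1, 0, 0]], dtype=torch.long)

out = embedding(batch)
print(out.shape)                 # torch.Size([2, 4, 64])
print(out[0, 3].abs().sum())     # tensor(0.) -- the padding row is all zeros
```

Reserving a dedicated padding index this way is the usual pattern when sequences of different lengths are batched together.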