Note
This project is strictly for personal and educational use. Evolutionary Scale AI's ESM3 terms of service prohibit the commercial production of an API that uses ESM3. Please review their terms and conditions before using this code.
This guide walks you through setting up an ESM3 inference server that lets you interact with the ESM3 model through a REST API. It covers the following steps:

- Getting a Hugging Face Transformers access token for the model.
- Serving the endpoint with Modal.
- Running the `.ipynb` file to send a request to the server.
ESM3 is a protein language model developed by Evolutionary Scale AI. It can reason over protein structures and is trained on a combination of structure, function, and sequence data. As a generative masked model, it accepts partial sequences: present a sequence with a set of blanks and it will fill in the rest. The general model architecture is presented below.
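To make the masked-prompting idea concrete, here is a minimal sketch of how a partial-sequence prompt might be formed. The `_` blank convention and the helper function below are illustrative assumptions, not the actual ESM3 API.

```python
# Illustrative sketch of masked prompting (not the real ESM3 API):
# "_" marks the positions the model is asked to fill in.
def make_masked_prompt(sequence: str, mask_positions: list[int]) -> str:
    """Replace the given 0-indexed positions with "_" blanks."""
    chars = list(sequence)
    for i in mask_positions:
        chars[i] = "_"
    return "".join(chars)

print(make_masked_prompt("MKTAYIAKQR", [2, 3, 4]))  # MK___IAKQR
```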
Ensure you have `uv` installed as your package manager, then install the required packages:

```shell
uv pip install -r requirements.txt
```
Then run the following command to activate your environment:

```shell
source .venv/bin/activate
```
Hugging Face hosts the ESM3 model. You will need to create an access token and accept the terms of service laid out on Hugging Face for this model.
You will then need to store your Hugging Face token as a secret on the Modal dashboard.
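Inside the Modal function, the secret is exposed as environment variables. As a sketch, assuming the secret defines a variable named `HF_TOKEN` (the name is an assumption; match whatever key you set on the Modal dashboard), the endpoint code might read it like this:

```python
import os

def get_hf_token() -> str:
    # HF_TOKEN is an assumed variable name -- match the key you set
    # in your Modal secret on the dashboard.
    token = os.environ.get("HF_TOKEN")
    if token is None:
        raise RuntimeError(
            "HF_TOKEN not set; attach the Modal secret to the function."
        )
    return token
```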
To run the Modal server ephemerally, use the following command:

```shell
modal serve src/esm3_hacking/endpoint.py
```
You are now able to make inference requests to the ephemeral server; Modal prints the endpoint URL when the server starts.
Finally, run the `.ipynb` notebook to send a request to the server.
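As a sketch, the notebook's request body might look like the following. The JSON field names here are assumptions; check the endpoint defined in `src/esm3_hacking/endpoint.py` for the actual route and schema. The payload is built but not sent, since that requires the server to be running.

```python
import json

# Hypothetical request body for the inference server; the field names
# are assumptions -- match the endpoint's actual schema.
payload = {"sequence": "MK___IAKQR", "num_steps": 8}
body = json.dumps(payload)
print(body)
# With the ephemeral server running, you would POST this, e.g.:
#   requests.post(f"{server_url}/generate", json=payload)
```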