Error while loading the ml model initially using lru_cache #11166
Example Code

from functools import lru_cache
import io
import os

import cv2
import numpy as np
import torch
import uvicorn
from basicsr.archs.rrdbnet_arch import RRDBNet
from fastapi import Depends, FastAPI, File, UploadFile, Response
from gfpgan import GFPGANer
from PIL import Image
from realesrgan import RealESRGANer


@lru_cache()
def loading_model():
    real_esrgan_model_path = "D:/Image Super Resolution/Models/Real-ESRGAN/weights/RealESRGAN_x4plus.pth"
    gfpgan_model_path = "D:/Image Super Resolution/Models/Real-ESRGAN/env/Lib/site-packages/gfpgan/weights/GFPGANv1.3.pth"
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
    netscale = 4
    upsampler = RealESRGANer(scale=netscale, model_path=real_esrgan_model_path, dni_weight=0.5, model=model,
                             tile=0, tile_pad=10, pre_pad=0, half=False)
    face_enhancer = GFPGANer(model_path=gfpgan_model_path, upscale=4, arch='clean', channel_multiplier=2,
                             bg_upsampler=upsampler)
    return face_enhancer


app = FastAPI(title="Image Restoration", version="0.0.1", debug=True)


def hd_process(file):
    # Save the upload to disk so cv2 can read it.
    filename = file.filename.split('.')[0]
    save_path = os.path.join("temp_images", f"{filename}.jpg")
    content = file.file.read()
    with open(save_path, 'wb') as image_file:
        image_file.write(content)
    img_array = cv2.imread(save_path, cv2.IMREAD_UNCHANGED)
    # Returns the single model instance cached by lru_cache.
    face_enhancer = loading_model()
    with torch.no_grad():
        _, _, output = face_enhancer.enhance(img_array, has_aligned=False, only_center_face=False, paste_back=True)
    output_rgb = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
    del face_enhancer
    torch.cuda.empty_cache()
    return output_rgb


@app.post("/image-restoration")
def process_image(file: UploadFile = File(...)):
    output_rgb = hd_process(file)
    pil_image = Image.fromarray(np.uint8(output_rgb))
    img_byte_io = io.BytesIO()
    pil_image.save(img_byte_io, format="JPEG")
    hd_image = img_byte_io.getvalue()
    return Response(content=hd_image, media_type="image/jpeg")

Description

I have created an API for Real-ESRGAN using FastAPI, and it works properly for multiple user requests. However, since I load the models (Real-ESRGAN and GFPGAN) up front using functools.lru_cache to reduce inference time, I am encountering the following two problems during execution:

1. Sometimes the faces from one user's request get mixed up with another user's request.
2. In some requests, I get the following error.

What is the problem introduced by this initial model loading? I believe something is shared among the multiple user requests, and this is causing the error.

Operating System: Windows
Operating System Details: No response
FastAPI Version: 0.109.2
Pydantic Version: 2.6.1
Python Version: 3.10.6
Additional Context: No response
Replies: 1 comment
FastAPI executes endpoints defined with plain def (without async) in a thread pool, so several requests can run your handler at the same time. I guess the library you are using is not thread-safe, and because lru_cache hands every request the same cached model instance, concurrent requests end up sharing it, which leads to problems like these.

As a solution, you may consider using a locking mechanism to ensure only one request uses the model at a time.
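
A minimal sketch of that locking approach, assuming the lru_cache'd loading_model() from the question above; the lock name, the save_upload_to_disk helper, and where exactly the lock is taken are illustrative choices, not anything prescribed by FastAPI or the Real-ESRGAN/GFPGAN libraries:

import threading

import cv2
import torch

# One process-wide lock guarding the single model instance cached by lru_cache.
model_lock = threading.Lock()

def hd_process(file):
    # Every request receives the same cached face_enhancer object.
    face_enhancer = loading_model()
    # save_upload_to_disk is a hypothetical helper that writes the upload and returns its path.
    img_array = cv2.imread(save_upload_to_disk(file), cv2.IMREAD_UNCHANGED)
    # Serialize inference: only one thread-pool worker touches the model at a time.
    with model_lock, torch.no_grad():
        _, _, output = face_enhancer.enhance(img_array, has_aligned=False, only_center_face=False, paste_back=True)
    return cv2.cvtColor(output, cv2.COLOR_BGR2RGB)

Serializing inference this way trades some throughput for correctness, but if a single GPU can only run one enhancement at a time anyway, the practical cost is usually small.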