In bigcodebench/provider/openai.py, every model processes prompts sequentially: the function `_codegen_batch_via_concurrency` generates the n samples for a single prompt in parallel, but still iterates over the prompts one at a time.
This looks weird, and it significantly slows down code generation.
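One possible direction is to also fan out across prompts, not just across the n samples per prompt. The sketch below is only an illustration of that idea, not the actual BigCodeBench code: `fake_generate` is a hypothetical stand-in for the per-prompt API call, and the worker count is an assumed parameter.

```python
from concurrent.futures import ThreadPoolExecutor


def fake_generate(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for one provider call that returns n samples.
    return [f"{prompt}-sample-{i}" for i in range(n)]


def codegen_all_prompts(prompts: list[str], n: int, max_workers: int = 8) -> list[list[str]]:
    # Submit every prompt to the pool up front, so prompts are generated
    # concurrently instead of being looped over one at a time.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fake_generate, p, n) for p in prompts]
        return [f.result() for f in futures]


results = codegen_all_prompts(["p0", "p1", "p2"], n=2)
print(results[1])  # ['p1-sample-0', 'p1-sample-1']
```

Results are collected in submission order, so the output still lines up with the input prompt list.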