
rdagent fin_quant builds the mount path as /tmp/full:D:\workspace\qlib_workspace\workspace_cache:rw, causing a Docker "too many colons" error. #990

Open
@wdannw

Description


On Windows 10 with Docker Desktop, rdagent fin_quant assembles the mount path as /tmp/full:D:\workspace\qlib_workspace\workspace_cache:rw, which makes Docker fail with "too many colons". I have verified that docker run -v E:/workspace/qlib_workspace/workspace_cache:/test ubuntu ls /test works fine, and the paths in .env are written as absolute, Linux-style paths, but the error persists. Please fix the path-concatenation logic for Windows.
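For context, Docker parses a Linux-style bind as host_path:container_path:mode, so a Windows drive letter such as E: injects an extra colon into the string. A minimal workaround sketch (the to_docker_path helper below is hypothetical, not rdagent's actual code, and whether Docker Desktop resolves the /e/... form can depend on the backend and version):

```python
from pathlib import PureWindowsPath

def to_docker_path(path: str) -> str:
    """Rewrite 'E:\\workspace\\cache' as the colon-free '/e/workspace/cache'
    so the assembled host:container:mode bind has exactly two colons.
    POSIX paths pass through with backslashes normalized."""
    p = PureWindowsPath(path)
    if p.drive:  # e.g. 'E:' -- the embedded colon is what trips the parser
        return "/" + "/".join([p.drive.rstrip(":").lower(), *p.parts[1:]])
    return path.replace("\\", "/")

host = to_docker_path(r"E:\workspace\qlib_workspace\workspace_cache")
bind = f"{host}:/tmp/full:rw"
# -> /e/workspace/qlib_workspace/workspace_cache:/tmp/full:rw
```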

```
(rdagent) E:>rdagent fin_quant
2025-06-26 14:14:42.610 | INFO | rdagent.oai.backend.litellm::41 - backend='rdagent.oai.backend.LiteLLMAPIBackend' chat_model='deepseek/deepseek-chat' embedding_model='litellm_proxy/BAAI/bge-large-en-v1.5' reasoning_effort=None reasoning_think_rm=True log_llm_chat_content=True use_azure=False chat_use_azure=False embedding_use_azure=False chat_use_azure_token_provider=False embedding_use_azure_token_provider=False managed_identity_client_id=None max_retry=10 retry_wait_seconds=20 dump_chat_cache=False use_chat_cache=False dump_embedding_cache=False use_embedding_cache=False prompt_cache_path='E:\prompt_cache.db' max_past_message_include=10 timeout_fail_limit=10 violation_fail_limit=1 use_auto_chat_cache_seed_gen=False init_chat_cache_seed=42 openai_api_key='' chat_openai_api_key=None chat_openai_base_url=None chat_azure_api_base='' chat_azure_api_version='' chat_max_tokens=None chat_temperature=0.5 chat_stream=True chat_seed=None chat_frequency_penalty=0.0 chat_presence_penalty=0.0 chat_token_limit=100000 default_system_prompt="You are an AI assistant who helps to answer user's questions." system_prompt_role='system' embedding_openai_api_key='' embedding_openai_base_url='' embedding_azure_api_base='' embedding_azure_api_version='' embedding_max_str_num=50 use_llama2=False llama2_ckpt_dir='Llama-2-7b-chat' llama2_tokenizer_path='Llama-2-7b-chat/tokenizer.model' llams2_max_batch_size=8 use_gcr_endpoint=False gcr_endpoint_type='llama2_70b' llama2_70b_endpoint='' llama2_70b_endpoint_key='' llama2_70b_endpoint_deployment='' llama3_70b_endpoint='' llama3_70b_endpoint_key='' llama3_70b_endpoint_deployment='' phi2_endpoint='' phi2_endpoint_key='' phi2_endpoint_deployment='' phi3_4k_endpoint='' phi3_4k_endpoint_key='' phi3_4k_endpoint_deployment='' phi3_128k_endpoint='' phi3_128k_endpoint_key='' phi3_128k_endpoint_deployment='' gcr_endpoint_temperature=0.7 gcr_endpoint_top_p=0.9 gcr_endpoint_do_sample=False gcr_endpoint_max_token=100 chat_use_azure_deepseek=False chat_azure_deepseek_endpoint='' chat_azure_deepseek_key='' chat_model_map={}
2025-06-26 14:14:43.840 | INFO | rdagent.utils.env:prepare:714 - Building the image from dockerfile: D:\Anaconda\envs\rdagent\lib\site-packages\rdagent\scenarios\qlib\docker
Successfully tagged local_qlib:latest

2025-06-26 14:14:43.888 | INFO | rdagent.utils.env:prepare:732 - Finished building the image from dockerfile: D:\Anaconda\envs\rdagent\lib\site-packages\rdagent\scenarios\qlib\docker
2025-06-26 14:14:43.894 | INFO | rdagent.utils.env:prepare:897 - We are downloading!
2025-06-26 14:14:44.459 | INFO | rdagent.utils.env:_f:791 - GPU Devices are available.
2025-06-26 14:14:44.465 | WARNING | rdagent.utils.env:__run_ret_code_with_retry:197 - Error while running the container: Error while running the container: 500 Server Error for http+docker://localnpipe/v1.49/containers/create: Internal Server Error ("mount denied:
the source path "/tmp/full:E:\workspace\qlib_workspace\workspace_cache:rw"
too many colons"), current try index: 1, 4 retries left.
```
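The http+docker://localnpipe URL in the warning indicates the container is created through the Docker SDK for Python (docker-py). Assuming that, the CLI test above translates to the following sketch, which can be used to check whether a given host/container mapping is accepted before rdagent assembles its bind string:

```python
import docker

client = docker.from_env()
# Mirrors: docker run -v E:/workspace/qlib_workspace/workspace_cache:/test ubuntu ls /test
output = client.containers.run(
    "ubuntu",
    "ls /test",
    volumes={"E:/workspace/qlib_workspace/workspace_cache": {"bind": "/test", "mode": "rw"}},
    remove=True,
)
print(output.decode())
```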

Labels: bug (Something isn't working)