Intel Arc Pro B70 Not Detected When Using the Intel GPU Image #9540

@arbrick

Description

LocalAI version:

  • image: localai/localai:master-gpu-intel
  • id: bf7689146833

Environment, CPU architecture, OS, and Version:

The docker container is running within an LXC container on a proxmox host with the latest 6.17.13 kernel.

HW specs:

  • CPU: AMD Ryzen 5 5600 6-Core Processor
  • GFX0: Intel Corporation DG2 [Arc A380] (rev 05)
  • GFX1: Intel Corporation Battlemage G21 [Intel Graphics]

OS Info:

Proxmox host

  • uname: Linux shadesmar 6.17.13-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.13-2 (2026-03-13T08:06Z) x86_64 GNU/Linux
  • os-release: Debian GNU/Linux 13 (trixie)

LXC container

  • uname: Linux localai 6.17.13-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.13-2 (2026-03-13T08:06Z) x86_64 GNU/Linux
  • os-release: Debian GNU/Linux 13 (trixie)

Docker container

  • uname: Linux b9d5c5b40f01 6.17.13-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.13-2 (2026-03-13T08:06Z) x86_64 x86_64 x86_64 GNU/Linux
  • os-release: Ubuntu 24.04.3 LTS

Describe the bug

The Intel Arc Pro B70 is not recognized by the Intel driver stack inside the Docker container. The A380 is detected and works fine, but the drivers cannot see the B70; sycl-ls does not list it at all.

Installing the compute packages as described in these Intel docs fixes it:
https://dgpu-docs.intel.com/driver/client/overview.html

To Reproduce

  • Install a B70
  • Install Docker + Compose
  • Start the compose file
  • Exec into the container
  • Observe that sycl-ls does not show the B70:

      root@a16191091266:/# sycl-ls
      [opencl:cpu][opencl:0] Intel(R) OpenCL, AMD Ryzen 5 5600 6-Core Processor OpenCL 3.0 (Build 0) [2025.20.10.0.10_160000]

  • Install the new drivers
  • Observe that sycl-ls now shows the B70:

      root@a16191091266:/# sycl-ls
      [level_zero:gpu][level_zero:0] Intel(R) oneAPI Unified Runtime over Level-Zero V2, Intel(R) Graphics [0xe223] 20.2.0 [1.14.37020+3]
      [opencl:cpu][opencl:0] Intel(R) OpenCL, AMD Ryzen 5 5600 6-Core Processor OpenCL 3.0 (Build 0) [2025.20.10.0.10_160000]
      [opencl:gpu][opencl:1] Intel(R) OpenCL Graphics, Intel(R) Graphics [0xe223] OpenCL 3.0 NEO [26.05.37020.3]
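After a driver change, a quick one-liner can re-check detection from inside the container (sycl-ls ships in the LocalAI Intel image; the fallback message is just for illustration):

```shell
# Print any GPU platforms visible to the oneAPI runtime, or a fallback
# message if sycl-ls is missing or reports no GPU entries at all.
sycl-ls 2>/dev/null | grep -i gpu || echo "no GPU platforms detected"
```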

docker compose file

services:
  localai:
    image: localai/localai:master-gpu-intel
    container_name: localai

    # ── Intel GPU passthrough ─────────────────────────────────────────────────
    # /dev/dri exposes all DRM render nodes (card0, renderD128, etc.).
    # Passing the whole directory covers both GPUs; narrow it to a single
    # device (e.g. /dev/dri/renderD128) if you know your exact render node.
    devices:
      - /dev/dri:/dev/dri
    # group_add grants the container access to the host's render group
    # (GID 992 here) without running as full root.
    group_add:
      - 992

    depends_on:
      postgres:
        condition: service_healthy

    ports:
      - "8080:8080"

    environment:
      # ── General ────────────────────────────────────────────────────────────
      MODELS_PATH: /models
      LOCALAI_DATA_PATH: /data   # agent state, knowledge base, config

      # ── Intel / SYCL ───────────────────────────────────────────────────────
      # Force the SYCL/oneAPI backend for all llama.cpp model loads.
      LOCALAI_SINGLE_ACTIVE_BACKEND: "true"   # one backend at a time → saves VRAM
      THREADS: "1"               # SYCL full-GPU offload works best with 1 CPU thread

      # ── Performance / context ──────────────────────────────────────────────
      CONTEXT_SIZE: "4096"

      # ── VRAM watchdog (prevents stale models eating GPU memory) ────────────
      LOCALAI_WATCHDOG_IDLE: "true"
      LOCALAI_WATCHDOG_IDLE_TIMEOUT: "15m"
      LOCALAI_WATCHDOG_BUSY: "true"
      LOCALAI_WATCHDOG_BUSY_TIMEOUT: "5m"

      # ── Agent pool defaults ────────────────────────────────────────────────
      # These set pool-level defaults; individual agents can override them.
      LOCALAI_AGENT_POOL_DEFAULT_MODEL: "hermes-3-llama3.1-8b"
      LOCALAI_AGENT_POOL_EMBEDDING_MODEL: "granite-embedding-107m-multilingual"
      LOCALAI_AGENT_POOL_ENABLE_SKILLS: "true"
      LOCALAI_AGENT_POOL_ENABLE_LOGS: "true"

      # ── PostgreSQL-backed vector store ─────────────────────────────────────
      LOCALAI_AGENT_POOL_VECTOR_ENGINE: "postgres"
      LOCALAI_AGENT_POOL_DATABASE_URL: "postgresql://localrecall:localrecall@postgres:5432/localrecall?sslmode=disable"

    volumes:
      # Downloaded model weights — survives container upgrades
      - localai_models:/models
      # Agent definitions, knowledge base (chromem vectors), skills, logs
      - localai_data:/data
      # LocalAI config files (backends, galleries, etc.)
      - localai_config:/etc/localai
      # Bind-mount a local ./model-configs/ folder to inject custom YAML
      # configs without rebuilding the image. Create the folder first.
      - ./model-configs:/models/configs:ro

    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 120s    # allow extra time for first-run model downloads

    restart: unless-stopped

  postgres:
    image: quay.io/mudler/localrecall:v0.5.2-postgresql
    container_name: localai-postgres
    environment:
      POSTGRES_USER: localrecall
      POSTGRES_PASSWORD: localrecall
      POSTGRES_DB: localrecall
    volumes:
      - localai_pgdata:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U localrecall"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

# =============================================================================
# Named volumes — Docker manages these; data persists across `docker compose down`
# Use `docker compose down -v` only if you want to wipe everything.
# =============================================================================
volumes:
  localai_models:
    driver: local
  localai_data:
    driver: local
  localai_config:
    driver: local
  localai_pgdata:
    driver: local
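The `group_add: 992` value above is host-specific: it has to match the GID that owns the render node on the LXC host. A sketch for looking it up (the node path is an assumption; run `ls /dev/dri` to find the right one on your system):

```shell
# Print the numeric GID owning the render node, so group_add can match it.
# RENDER_NODE is a guess; list /dev/dri to pick the correct node.
RENDER_NODE=/dev/dri/renderD128
if [ -e "$RENDER_NODE" ]; then
  stat -c '%g' "$RENDER_NODE"
else
  echo "no render node at $RENDER_NODE"
fi
```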

Expected behavior

  • b70 detected and usable

Logs

Additional context

Installing the drivers like this fixed it:

apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:kobuk-team/intel-graphics
apt-get install -y libze-intel-gpu1 libze1 intel-metrics-discovery intel-opencl-icd clinfo intel-gsc
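Until the image ships a compute runtime that recognizes Battlemage, the workaround above can be baked into a derived image so it survives container recreation. A minimal sketch (package names taken from the commands above; the base tag is the one from this report):

```dockerfile
# Derived image layering the newer Intel compute runtime from the
# kobuk-team PPA on top of the LocalAI Intel image (workaround sketch).
FROM localai/localai:master-gpu-intel

RUN apt-get update && \
    apt-get install -y --no-install-recommends software-properties-common && \
    add-apt-repository -y ppa:kobuk-team/intel-graphics && \
    apt-get install -y --no-install-recommends \
        libze-intel-gpu1 libze1 intel-metrics-discovery \
        intel-opencl-icd clinfo intel-gsc && \
    rm -rf /var/lib/apt/lists/*
```

Point the compose file's `image:` at the resulting tag to use it.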
