This project wraps plugin development for a Hyperfy client that lets Eliza connect to a Hyperfy world like a regular user. In the future, the Hyperfy portion may be broken out and published as a separate plugin; for now, it's included here.
Clone this repository using:

```bash
git clone --recurse-submodules https://github.com/elizaOS/eliza-3d-hyperfy-starter.git
```
Copy the example environment file and rename it:

```bash
cp .env.example .env
```
Edit the `.env` file and fill in the necessary values.
- `WS_URL`: WebSocket URL for connecting to a Hyperfy world.
  - Default: `wss://chill.hyperfy.xyz/ws` (our public world)
  - To connect to your own local world:
    - Clone and run Hyperfy: https://github.com/hyperfy-xyz/hyperfy
    - If it runs on port `3000`, set `WS_URL=ws://localhost:3000/ws`
- `SERVER_PORT`: The port this app will run on (e.g., `3001`, `4000`, etc.)
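For example, a minimal `.env` pointing at the public world might look like this (the values are illustrative; substitute your own world URL and port):

```env
# Hyperfy world to join (use ws://localhost:3000/ws for a local world)
WS_URL=wss://chill.hyperfy.xyz/ws

# Port this app will run on
SERVER_PORT=3001
```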
Then install dependencies, build, and run the project:

```bash
bun install
bun install # run twice to ensure postinstall scripts run correctly
bun run build
bun run dev
```
You can optionally enable voice chat support via one of the following methods:

ElevenLabs:
- Set the ElevenLabs-related variables in `.env`:
  - `ELEVENLABS_XI_API_KEY`
  - `ELEVENLABS_MODEL_ID`
  - `ELEVENLABS_VOICE_ID`
  - etc.
- Add the ElevenLabs plugin to your character settings file.

OpenAI:
- Set the `OPENAI_API_KEY` in `.env`.
- Configure your character to use OpenAI's voice features.
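For reference, the relevant `.env` entries for either option might look like the following sketch; all values are placeholders, and only the variables for the method you chose are needed:

```env
# Option 1: ElevenLabs voice
ELEVENLABS_XI_API_KEY=your-elevenlabs-api-key
ELEVENLABS_MODEL_ID=your-model-id
ELEVENLABS_VOICE_ID=your-voice-id

# Option 2: OpenAI voice
OPENAI_API_KEY=your-openai-api-key
```

With the ElevenLabs option, remember to also add the ElevenLabs plugin to your character settings file as noted above.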
To enable screen perception (where Eliza can describe what's visible in the 3D scene), you must use an LLM plugin that supports image description.
- Include an LLM plugin in your character config that supports vision/image inputs.
  - For example, a plugin wrapping OpenAI's GPT-4o or Gemini with vision support.
- Ensure your `.env` is configured with the appropriate API key and any required parameters for that LLM.
This will allow Eliza to "see" the scene and respond based on visual content from the rendered environment.
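As a rough sketch, a character config using a vision-capable OpenAI plugin might include an entry like the one below. The plugin package name (`@elizaos/plugin-openai`) is an assumption here; use whichever vision-capable LLM plugin you actually install.

```json
{
  "name": "Eliza",
  "plugins": ["@elizaos/plugin-openai"]
}
```

The matching API key for that plugin (e.g., `OPENAI_API_KEY`) then goes in your `.env`.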
Feel free to open issues or contribute if you're building something cool with Eliza and Hyperfy!