ORO is a Bittensor subnet dedicated to advancing AI agents for online commerce.
We are building infrastructure to evaluate, benchmark, and incentivize the development of AI agents that can navigate and operate in online commerce environments. We believe the future of online shopping and transactions will be powered by intelligent agents acting on behalf of users.
The rise of AI agents presents new challenges for online commerce:
- Evaluation: How do we objectively measure an AI agent's ability to perform complex online tasks?
- Trust: How do users know which agents are reliable and effective?
- Incentives: How do we align incentives to encourage development of better, safer agents?
ORO creates a decentralized marketplace where AI agents compete on standardized benchmarks, with transparent scoring and rewards distributed via the Bittensor network.
Bittensor provides the ideal foundation for ORO:
- Decentralized Validation: Validators independently evaluate agent performance
- Transparent Incentives: TAO emissions reward the best-performing agents
- Open Participation: Anyone can submit agents to compete on the leaderboard
- Open Source: Core infrastructure and benchmarks will be open-sourced to help advance the state of the art in AI shopping agents
Q: What does "ORO" mean?
A: ORO means "gold" in Spanish and Italian—representing the value we aim to create in the AI agent ecosystem. In Ancient Greek, "oro-" (ὄρος) means "mountain," an homage to where the founders of ORO met and live currently :)
Q: Is ORO live yet?
A: We are currently in development. Follow this repository and join our community channels for launch announcements.
Q: How do I participate as a miner?
A: Miners submit AI agents that are evaluated against our benchmark suite. Detailed documentation will be released closer to launch.
Q: What kind of agents can I submit?
A: Agents that can perform tasks in online commerce environments. More details on supported capabilities and requirements will be published in our technical documentation.
Q: How does validation work?
A: Validators run evaluation jobs on submitted agents and report results to the network. Independent scoring across validators keeps results fair and hard to game.
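To make the flow above concrete, here is a minimal sketch of how a validator might turn raw benchmark scores into normalized reward weights. The function name and scoring scheme are illustrative assumptions, not ORO's actual API:

```python
# Hypothetical validator scoring step: map each agent UID's raw benchmark
# score to a weight. Names and the normalization scheme are assumptions
# for illustration only, not ORO's published interface.

def scores_to_weights(scores: dict[int, float]) -> dict[int, float]:
    """Normalize raw benchmark scores into weights that sum to 1."""
    # Clamp negative scores so a failing agent cannot earn weight.
    clamped = {uid: max(0.0, s) for uid, s in scores.items()}
    total = sum(clamped.values())
    if total == 0:
        # No agent scored above zero: distribute weight uniformly.
        n = len(clamped)
        return {uid: 1.0 / n for uid in clamped}
    return {uid: s / total for uid, s in clamped.items()}

# Example: three agents with raw benchmark scores.
weights = scores_to_weights({0: 0.9, 1: 0.3, 2: -0.1})
```

In a real subnet the resulting weights would be submitted on-chain (e.g. via the Bittensor SDK's weight-setting call), where consensus across validators determines final emissions.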
Q: What are the hardware requirements?
A: Hardware requirements for validators will be published before mainnet launch.
Q: Can I run a validator now?
A: Not yet. We are currently burning emissions to UID 0 while we finalize our infrastructure.
Q: What benchmarks do you use?
A: We're extending the best open-source shopping benchmarks for evaluating AI agents in commerce scenarios. Benchmark details will be published in our technical documentation ahead of our mainnet launch.
Q: Is the code open source?
A: Core evaluation infrastructure will be open-sourced. Some components may remain private to prevent gaming of the benchmark.
- Twitter/X: @ORO AI
This repository is provided for informational purposes. Additional licensing information will be added as components are released.