idyl.inference.
idyl.inference.subnet.idyl.network
Operated by Idyl Labs
Distributed compute for the idyl.inference endpoint. Open-source models served across the network.
The idyl.inference subnet is the home for open-source model inference on the network. Operated by Idyl Labs, it hosts the idyl.inference product — the OpenAI-compatible endpoint at api.inference.idyl.dev.
Hardware is contributed by a mix of independent operators and Idyl Labs itself. Models are pre-deployed across providers; routing handles distribution underneath the public API. Deployers never touch the subnet directly — they call the endpoint and the network handles the rest.
This subnet is the substrate behind every request to idyl.inference.
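Since deployers only ever see the OpenAI-compatible endpoint, a request looks like any OpenAI-style chat completion call pointed at api.inference.idyl.dev. The sketch below assembles such a request with the standard library only; the `/v1/chat/completions` path follows the OpenAI API convention, and the model name and API key are hypothetical placeholders, not confirmed by this page.

```python
import json

# Base URL of the idyl.inference endpoint (from this page).
BASE_URL = "https://api.inference.idyl.dev"


def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for an OpenAI-style
    chat completion request. The /v1/chat/completions path is assumed
    from the OpenAI API convention, not stated on this page."""
    return {
        "url": f"{BASE_URL}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # placeholder key scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
        }),
    }


# Example: the caller never addresses the subnet, only the endpoint.
req = build_chat_request("llama-3.1-8b-instruct", "Hello", "sk-example")
```

Any HTTP client (or an OpenAI SDK with its base URL overridden) can then send this request; routing across providers happens entirely behind the endpoint.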
How the subnet operates.
The rules are explicit. The operator publishes them, enforces them, and updates them with notice.
What runs here.
Idyl Labs.
Idyl Labs operates this subnet. Below is the operator profile and any other subnets they run on Idyl.
Idyl Labs operates the foundational subnets on the network. The idyl.inference subnet is the public, operator-managed home for open-source inference, served through a single OpenAI-compatible endpoint.
List a subnet.
If you already run compute infrastructure with a defined audience and policy, you can apply to list it here. Applications are reviewed for mission clarity, capacity, and operating maturity.
Listing criteria.
Subnets should already be operating, with a defined audience, rules for participation, and a real operator behind them.
- 01 Operating with active providers
- 02 Mission and policy clearly stated
- 03 Capacity available (or controlled invite)
- 04 Operator is reachable and responsive