Run AI inference, batch jobs, media processing, simulations, and more, on a global network of underutilized hardware.
Simple plans starting at free. No capacity planning.
How it works
You describe the workload. Idyl matches it to available hardware across the network and runs it. No region selection, no instance types, no queue.
Workloads run inside subnets — purpose-driven networks built around a mission, a product, or a team. You choose the subnet. The subnet shapes the compute.
As more people join a subnet and contribute machines, available compute grows. Your plan stretches further as the community behind it gets bigger.
Deploy in under a minute
$ idyl login --subnet inference
Authenticated. Connected to subnet inference.
$ idyl deploy inference/llama-70b:latest --replicas 256
→ Matched 256 providers (160× A100, 96× H100)
→ Deployed
→ Status: running
→ Dashboard: https://console.idyl.cloud/inference/llama-70b
That's it. Your workload is running on distributed hardware. You didn't provision anything.
Workloads
Integration
Your frontend, API, database, auth — all stay where they are. Nothing changes about your product surface.
AI inference, media processing, batch jobs, simulations — the compute that costs real money moves to idyl.
As your workloads grow, providers join your subnet. More demand means more capacity and lower cost — automatically.
Your product sits on top. The community — operators, providers, developers — powers the subnet underneath. idyl handles the rest.
Pricing
Pick a plan that fits. See exactly what you're using. Top up when you choose to — not because you have to.
Simple tiers starting at free. Each plan includes compute. Need more? Buy add-ons on your terms. Your dashboard shows usage in real time — you always know where you stand.
In traditional cloud, you reserve capacity and pay whether you use it or not. On idyl, your workloads run on hardware the community already contributes. You pay for a plan to access compute, not for idle machines.
As more people join your subnet, available compute grows. Your plan buys more over time, not less. The longer you're on idyl, the further your money goes.
The network
Developers deploy workloads. Providers contribute hardware. Operators shape the subnet. Pick your role — or be all three.
Build
Deploy workloads to any subnet. Run inference, batch jobs, media processing — anything that needs compute. Point at a subnet, deploy, done.
Learn more →
Earn
Contribute the machines you already have. Earn from real workloads, not speculation. Any GPU, CPU, or accelerator.
Learn more →
Control
Launch a subnet around a mission. Define who provides compute, who deploys, and how the network grows. The mission attracts the people. The people bring the machines.
Learn more →
Network effect
Idyl is a network that improves with use. As more providers contribute hardware to a subnet, available compute increases. As compute increases, your plan buys more. The community makes it better for everyone — developers get more capacity, providers earn more, and operators build something that sustains itself.
Products running on the idyl network today — from AI inference to media processing. Open to anyone.
Getting Started
Create an account. No credit card, no commitment.
Install the CLI. Run idyl login to connect.
Push a workload. The network handles the rest.
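The three steps above look like this in a terminal, reusing the commands from the deploy example earlier (the subnet name, image, and replica count are illustrative, not prescriptive):

$ idyl login --subnet inference
Authenticated. Connected to subnet inference.
$ idyl deploy inference/llama-70b:latest --replicas 4
→ Deployed
→ Status: running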
Free tier. No credit card. Deploy in under a minute.