
Subnet 9 IOTA

Bittensor can build the best models

In August 2024, Bittensor’s Subnet 9 (SN9) demonstrated that a distributed network of incentivized, permissionless actors could each pretrain large language models (LLMs) ranging from 700 million to 14 billion parameters, surpassing established baselines. While that work validated blockchain-based decentralized pretraining as viable, it had two core issues: every miner had to fit an entire model locally, and “winner-takes-all” rewards encouraged model hoarding.

Here we introduce IOTA (Incentivised Orchestrated Training Architecture), which addresses these limitations by transforming SN9’s previously isolated competitors into a single cooperating unit that can scale arbitrarily while still rewarding each contributor fairly. IOTA is a data- and pipeline-parallel training algorithm designed to operate on a network of heterogeneous, unreliable devices in adversarial and trustless environments. The result is a permissionless system that can pretrain frontier-scale models without per-node GPU bloat, tolerates unreliable devices, and aligns participants through transparent token economics.
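
To make the pipeline-parallel idea concrete, the sketch below splits a toy model into stages held by different parties, so no single node needs to store the full set of weights. This is a minimal, single-process illustration in plain PyTorch under assumed names (`stages`, `pipeline_forward`), not IOTA’s actual orchestration code — in IOTA each stage lives on a different miner and activations are handed off over the network.

```python
# Minimal conceptual sketch of pipeline parallelism (illustrative only,
# not IOTA's implementation): the model is split into stages, each held
# by a different "miner", so no single node fits the whole model.
import torch
import torch.nn as nn

VOCAB, DIM = 50_000, 512

# Hypothetical split: each miner holds one contiguous slice of layers.
stages = [
    nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, DIM)),  # miner A
    nn.Sequential(nn.ReLU(), nn.Linear(DIM, DIM)),                 # miner B
    nn.Sequential(nn.ReLU(), nn.Linear(DIM, VOCAB)),               # miner C
]

def pipeline_forward(token_ids: torch.Tensor) -> torch.Tensor:
    """Pass activations from stage to stage; in a real deployment the
    hand-off happens over the network rather than in-process."""
    x = token_ids
    for stage in stages:
        x = stage(x)
    return x

logits = pipeline_forward(torch.randint(0, VOCAB, (8, 16)))
print(logits.shape)  # torch.Size([8, 16, 50000])
```

Data parallelism then layers on top of this: multiple miners can hold copies of the same stage and process different batches, which is what lets the network scale with the number of participants.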

Various solutions attempt to solve the key technical hurdles of distributed training but lack an incentive model, while others provide economic incentives but have yet to match the training performance of a coordinated cluster. IOTA bridges this gap with novel techniques that jointly tackle all three challenges: distributed-training mechanics, incentive design, and cluster-level performance.

Other related resources

The technical primer, Incentivised Orchestrated Training Architecture (IOTA), provides a detailed view of our pre-training efforts.

Have a look at the Miners Dashboard to get updates on the training process.

For more details on how to contribute, have a look at the mining instructions and validating instructions.

If you have any questions or require support, please message us in the subnet 9 channel on the Bittensor Discord, or on our own Macrocosmos Discord server.

[Figure: Centralised vs decentralised LLM training]