Subnet 37 Finetuning

Bittensor can train the best models

The drive behind subnet 37 is to bring the entire AI development pipeline into Bittensor, with finetuning as a critical step on the path towards this goal. We believe it will strongly support Bittensor's founding vision: that AI can be built in an economical, safe, and decentralised way.

Finetuning is costly, time-consuming, and highly limited by expertise. It demands hundreds of GPU hours, typically on SOTA hardware. But perhaps most importantly, it requires expert engineers, who are often scarce.

Subnet 37 addresses these challenges by outsourcing the procurement of computational resources and incentivising the best AI developers in the world to monetise their skills by competing to produce top models. This work is carried out in collaboration with subnet 9, Pre-training.

Our vision is to build an open-source catalog of models, each optimised for specialised tasks such as chatbots, math solvers, programming assistants, recommendation bots, and more. Models in the catalog are already available to download on HuggingFace and will soon power several apps.
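
Because the catalog's models are hosted on HuggingFace, they can be loaded with standard tooling. Below is a minimal sketch using the transformers library; the repository ID shown is a hypothetical placeholder, so substitute a real model ID from the catalog.

```python
# Minimal sketch: loading a subnet 37 catalog model from Hugging Face.
# The repo ID below is hypothetical — browse the catalog for real model IDs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macrocosm-os/example-finetuned-model"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a quick generation to sanity-check the downloaded model.
prompt = "Explain what finetuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```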

We aim to integrate the models we create into subnet 1 as base models for future agentic assistants. We see subnets 37 and 1 evolving in tandem to produce ever-improving AI assistants. This will provide the additional benefit of user feedback through subnet 1's chat application, which will be used to continuously refine the models and their capabilities.

For more details about subnet 37's R&D work, take a look at our Substack articles, listed under Related resources below.

Related resources

Subnet 1
Fine-tuning, finely tuned: How SN37 is delivering SOTA fine-tuning on Bittensor
Fine-tuning, harmonized: Taoverse and Macrocosmos team up on SN37
Website
Dashboard
GitHub
Substack
Bittensor Discord
Macrocosmos Discord
Cosmonauts - Macrocosmos Telegram
Macrocosmos X