Subnet 37 Incentive Mechanism

Subnet 37 incentive overview


Last updated 2 months ago

Subnet 37 rewards miners for producing finetuned models according to the competition’s defined parameters. It acts like a continuous benchmark where miners are rewarded for scoring the best on the evaluation criteria of the competition.

The reward mechanism works as follows:

  1. Miners train and periodically publish competition-specific models to HuggingFace and commit the metadata for that model to the Bittensor chain.

  2. Validators download each miner's model from HuggingFace, based on the metadata committed to the Bittensor chain, and continuously evaluate them. For each competition, only the top-scoring model receives incentive. Validators also log results to wandb.
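The winner-take-all allocation in step 2 can be sketched as follows. This is an illustrative simplification, not the subnet's actual code; the `Submission` type and `allocate_incentive` function are assumed names.

```python
# Hypothetical sketch of the per-competition, winner-take-all reward step.
from dataclasses import dataclass

@dataclass
class Submission:
    miner_uid: int
    score: float  # higher is better on the competition's evaluation criteria

def allocate_incentive(submissions: list[Submission],
                       competition_weight: float) -> dict[int, float]:
    """Give the competition's full emission share to the single top-scoring model."""
    if not submissions:
        return {}
    winner = max(submissions, key=lambda s: s.score)
    return {
        s.miner_uid: (competition_weight if s.miner_uid == winner.miner_uid else 0.0)
        for s in submissions
    }
```

Because only the winner earns the competition's emission share, miners are pushed toward continuously improving on the benchmark rather than settling near the top.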

Note that competitions are specified independently, each with a defined split of the subnet's emissions. Each competition has unique parameters that define the model type(s), tokenizer(s), size(s), and sequence length(s) against which miners are evaluated. Validators can use their bandwidth to fund any competition of their choosing, including both Macrocosmos 'public good' competitions and user-defined competitions.
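A competition definition of the kind described above might look like the following sketch. The field names and example values here are assumptions for illustration, not the subnet's actual schema.

```python
# Illustrative-only sketch of a competition's parameter set.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompetitionParams:
    competition_id: str
    emission_split: float          # fraction of subnet emissions for this competition
    allowed_model_types: tuple     # model architectures miners may submit
    tokenizer: str                 # required tokenizer, if the competition locks it
    max_model_parameters: int      # size limit on submissions
    sequence_length: int           # context length used during evaluation

# Hypothetical example: a chat-model competition receiving 60% of emissions.
chat_competition = CompetitionParams(
    competition_id="example-chat",
    emission_split=0.6,
    allowed_model_types=("LlamaForCausalLM",),
    tokenizer="example/tokenizer",
    max_model_parameters=7_000_000_000,
    sequence_length=4096,
)
```

Keeping each competition's parameters self-contained like this is what allows emissions to be split across independently defined competitions.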

Additionally, each competition can define one or more Evaluation Tasks, each specifying a data source along with the method used to evaluate the resulting data. Normalization and weighting of each task are also supported, so the contribution of each task toward the competition's incentive can be precisely tuned.
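The normalization-and-weighting scheme could be combined into a single score roughly as below. This is a hedged sketch of the idea, with assumed names; the real evaluation code may differ.

```python
# Hypothetical sketch: combine per-task raw metrics into one competition score.
def combined_score(task_scores: dict[str, float],
                   task_weights: dict[str, float],
                   normalizers: dict) -> float:
    """Normalize each task's raw metric onto a common scale, then take the
    weight-averaged sum so each task's influence on incentive is tunable."""
    total = 0.0
    weight_sum = sum(task_weights.values())
    for task, raw in task_scores.items():
        normalized = normalizers[task](raw)   # e.g. map a loss or accuracy to [0, 1]
        total += task_weights[task] * normalized
    return total / weight_sum

# Usage: task "a" counts twice as much as task "b".
score = combined_score(
    task_scores={"a": 2.0, "b": 0.5},
    task_weights={"a": 2.0, "b": 1.0},
    normalizers={"a": lambda x: x / 4.0, "b": lambda x: x},
)
```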

The subnet launched with a competition to produce the best chatbot by finetuning the top pre-trained model from subnet 9. That evaluation was performed on fresh, authenticated, synthetic data generated by subnet 18. We have since moved to using subnet 1 as the source of our evaluation data, as we find its synthetic data to be of consistently higher quality. We will expand to support additional data sources and other competitions, for example ones that allow an unlocked tokenizer.

We have also introduced logic to better synchronize evaluation data across validators, using the hash of a recent block on the chain as a seed for the random selection and generation of data. A delay before new submissions are picked up ensures this seed can't be abused by submitting a model crafted with foreknowledge of it.
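The block-hash-as-seed idea can be sketched like this: every validator that derives its RNG from the same recent block hash samples identical evaluation data. The helper name and hashing choice here are assumptions, not the subnet's actual implementation.

```python
# Sketch: deterministic data selection seeded by a recent block hash,
# so all validators evaluating at the same block agree on the sample.
import hashlib
import random

def seeded_sample(block_hash: str, pool: list, k: int) -> list:
    """Derive a deterministic RNG seed from the block hash, then sample k items."""
    digest = hashlib.sha256(block_hash.encode()).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = random.Random(seed)          # local RNG; doesn't disturb global random state
    return rng.sample(pool, k)
```

Because the seed only becomes known when the block is produced, delaying the pickup of new submissions prevents a miner from tailoring a model to the upcoming evaluation data.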

See here for more details.