Subnet 37 Competitions

Subnet 37's competition system



Competition B7_MULTICHOICE

Goal

The purpose of this competition is to finetune the top models from our pretraining subnet to produce a chatbot.

Evaluation

Models submitted here are evaluated on a set of tasks, where each is worth a sub-portion of the overall score. The current evaluations are:

  1. SYNTHETIC_MMLU: In this task, the model is evaluated on a synthetic MMLU-like dataset from subnet 1. This is a multiple choice dataset with a large array of questions, spanning a range of topics and difficulty levels, akin to MMLU. Currently, the dataset is generated using Wikipedia as the source of truth, though this will be expanded over time to include more domain-focused sources.

  2. WORD_SORTING: In this task, the model is given a list of words and is required to sort them alphabetically. See the code here.

  3. FINEWEB: In this task, the model's cross-entropy loss is computed on a small sample of the FineWeb dataset. See here for details.

  4. IF_EVAL: In this task, the model is evaluated on a synthetic version of the IFEval dataset. The prompt contains a list of rules the response must follow. The full list of possible rules is listed in rule.py.
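The way per-task scores combine into an overall score can be sketched as follows. The task weights, scoring functions, and the mapping from loss to a bounded score are illustrative assumptions for this sketch, not the subnet's actual implementation:

```python
# Hypothetical sketch of combining per-task scores into an overall
# competition score. Weights and scoring rules are assumptions.
import math

def word_sorting_score(words, model_answer):
    # WORD_SORTING: full credit only if the model's output matches the
    # alphabetically sorted word list exactly (an assumed grading rule).
    return 1.0 if model_answer.split() == sorted(words) else 0.0

def cross_entropy(token_probs):
    # FINEWEB-style loss: average negative log-likelihood over the
    # probabilities the model assigned to the reference tokens.
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# Assumed per-task weights ("sub-portions" of the overall score).
WEIGHTS = {"SYNTHETIC_MMLU": 0.4, "WORD_SORTING": 0.2,
           "FINEWEB": 0.2, "IF_EVAL": 0.2}

def overall_score(task_scores):
    # Weighted sum of normalized per-task scores (higher is better).
    return sum(WEIGHTS[task] * score for task, score in task_scores.items())

scores = {
    "SYNTHETIC_MMLU": 0.75,  # multiple-choice accuracy
    "WORD_SORTING": word_sorting_score(["pear", "apple"], "apple pear"),
    # Map a cross-entropy loss into (0, 1] so lower loss scores higher.
    "FINEWEB": 1.0 / (1.0 + cross_entropy([0.5, 0.25])),
    "IF_EVAL": 0.5,          # fraction of prompt rules followed
}
print(round(overall_score(scores), 3))  # → 0.698
```

A real evaluator would additionally normalize scores across miners and tie the result into the incentive mechanism described in the previous section.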

Definitions

See here for more information on definitions.

Competition INSTRUCT_8B

The goal of this competition is to train a SOTA instruct 8B model. This competition provides more freedom to miners than others: there are no restrictions on the tokenizer used and miners are allowed to use a wider range of architectures.

The evaluation tasks are the same as in the B7_MULTICHOICE competition. See the code for more information.

Deprecated Competitions

Competition 1: SN9_MODEL

This was the first competition run on the finetuning subnet.

Its purpose was to finetune the top models from subnet 9 to produce a chatbot.

Models submitted to this competition were evaluated using a synthetic Q&A dataset from the cortex subnet. Specifically, models were evaluated based on the average loss of their generated answers.
