
Subnet 37 Mining Setup Guide

Mining on subnet 37


Last updated 2 months ago

Miner

Miners train locally and periodically publish their best model to HuggingFace. They then commit the metadata for that model to the Bittensor chain.

Miners can only have one model associated with them on the chain for evaluation by validators at a time.

The communication between the miner and validator happens asynchronously, and therefore miners don't need to be running continuously. Validators will use whichever metadata was most recently published by the miner to know which model to download from HuggingFace.
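The publish-when-better loop described above can be sketched as follows. Note that `train_one_round` and `publish` are hypothetical stand-ins for your own training code and the repo's upload utilities, not actual API names:

```python
# Sketch of the miner's train/publish cycle. The helpers below are
# illustrative stubs -- replace them with your own training logic and the
# repo's HuggingFace upload + chain-commit code.
def train_one_round(model):
    """Run one round of local training; return (model, avg_loss)."""
    return model, 0.5  # stub value for illustration

def publish(model):
    """Upload the model to HuggingFace and commit its metadata on-chain."""
    pass  # stub

def mining_loop(model, avg_loss_upload_threshold, rounds=3):
    best_loss = float("inf")
    for _ in range(rounds):
        model, avg_loss = train_one_round(model)
        # Only publish when the model beats both its own best and the
        # upload threshold; validators evaluate whichever metadata was
        # committed most recently.
        if avg_loss < min(best_loss, avg_loss_upload_threshold):
            best_loss = avg_loss
            publish(model)
    return best_loss
```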

System Requirements

Miners will need enough disk space to store their model. Each uploaded model cannot exceed 15 GB, although we recommend having at least 50 GB of disk space.

Miners will need enough processing power to train their model. We recommend using a large GPU with at least 48 GB of VRAM. To be competitive, you'll likely need clusters of GPUs.
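A quick way to check the disk-space recommendation before you start training, using only the standard library:

```python
import shutil

def enough_disk(path=".", required_gb=50):
    """Return True if `path` has at least `required_gb` GB free.

    50 GB mirrors the guide's recommendation; uploaded models
    themselves are capped at 15 GB.
    """
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb
```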

Getting started

Prerequisites

  1. Get a HuggingFace Account:

Miners and validators use HuggingFace to share model state information. Miners upload their models to HuggingFace and therefore must have an account, along with a user access token, which can be created by following these instructions.

Make sure any repo you create for uploading is public so that the validators can download from it for evaluation.

  2. Get a Wandb Account:

Miners and validators use Wandb to download data from subnet 1. You will also need a user access token, which can be found once you are logged in.

  3. Clone the repo:

git clone https://github.com/macrocosm-os/finetuning.git
  4. Set up your Python virtual environment or Conda environment:

  5. Install the requirements:

cd finetuning
python -m pip install -e .

Note: We require Python 3.9 or higher.
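You can fail fast on an unsupported interpreter before installing:

```python
import sys

# The repo requires Python 3.9 or higher; check before running pip install.
assert sys.version_info >= (3, 9), (
    f"Python 3.9+ required, found {sys.version.split()[0]}"
)
```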

  6. Make sure you've created a wallet and registered a hotkey:

  7. (Optional) Run a Subtensor instance:

Your node will run better if it connects to a local Bittensor chain entrypoint rather than Opentensor's public one. We recommend running a local node and passing the --subtensor.network local flag to your running miners/validators. To install and run a local subtensor node, run the commands below (Docker and Docker Compose must already be installed):

git clone https://github.com/opentensor/subtensor.git
cd subtensor
docker compose up --detach

Running the Miner

The mining script provides a shell that performs some initial setup, but the training logic itself is not implemented. You will need to implement your own training logic before running it.

Env File

Create a .env file in the finetuning directory and add the following to it:

HF_ACCESS_TOKEN="YOUR_HF_ACCESS_TOKEN"
WANDB_ACCESS_TOKEN="YOUR_WANDB_ACCESS_TOKEN"
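The file uses plain KEY="VALUE" lines. Below is a minimal stdlib loader illustrating the format; the repo itself may load the file differently (for example via python-dotenv):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: sets KEY=VALUE pairs into os.environ,
    skipping blanks and comments, without overriding existing values."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```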

Starting the Miner

To start your miner, the most basic command is:

python neurons/miner.py --wallet.name coldkey --wallet.hotkey hotkey --hf_repo_id my-username/my-project --avg_loss_upload_threshold YOUR_THRESHOLD
  • --wallet.name: should be the name of the coldkey that contains the hotkey your miner is registered with.

  • --wallet.hotkey: should be the name of the hotkey that your miner is registered with.

  • --hf_repo_id: should be the namespace/model_name that matches the HuggingFace repo you want to upload to. It must be public so the validators can download from it.

  • --avg_loss_upload_threshold: should be the average loss or deviation threshold your model must reach before your miner uploads it.

  • --competition_id: the competition you wish to mine for; run with --list_competitions to see the available options.
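If you want to sanity-check the --hf_repo_id value before launching, a rough pattern check for the namespace/model_name shape could look like this (an illustrative pattern only, not HuggingFace's official validation rules):

```python
import re

def looks_like_hf_repo_id(repo_id):
    """Rough check that repo_id has the namespace/model_name shape
    expected by --hf_repo_id (e.g. "my-username/my-project")."""
    return re.fullmatch(r"[\w.-]+/[\w.-]+", repo_id) is not None
```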

Flags

The Miner offers some flags to customize properties, such as how to train the model and which HuggingFace repo to upload to.

You can view the full set of flags by running:

python ./neurons/miner.py -h

Some flags you may find useful:

  • --offline: when this is set, you can run the miner without being registered and it won't attempt to upload the model.

  • --wandb_entity + --wandb_project: when both flags are set, the miner will log its training to the provided wandb project.

  • --device: by default the miner will use your GPU; if you have multiple GPUs, use this flag to specify which one.

Training from pre-existing models

  • --load_best: when this is set, you will download and train the model from the current best miner on the network.

  • --load_uid: when passing a UID you will download and train the model from the matching miner on the network.

  • --load_model_dir: the path to a local model directory (saved via the HuggingFace API).


Manually uploading a model

In some cases, you may have failed to upload a model or wish to upload a model without further training.

Due to rate limiting by the Bittensor chain you may only upload a model every 20 minutes.
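A simple client-side guard against that rate limit, assuming you record a timestamp after each successful upload:

```python
import time

# The chain rate-limits model commits to one per 20 minutes.
CHAIN_UPLOAD_INTERVAL = 20 * 60  # seconds

def can_upload(last_upload_ts, now=None):
    """Return True if enough time has passed since the last on-chain
    commit. `last_upload_ts` is a Unix timestamp you record after each
    successful upload; checking it locally avoids a rejected commit."""
    now = time.time() if now is None else now
    return now - last_upload_ts >= CHAIN_UPLOAD_INTERVAL
```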

You can manually upload with the following command:

python scripts/upload_model.py --load_model_dir <path to model> --hf_repo_id my-username/my-project --wallet.name coldkey --wallet.hotkey hotkey


As of Oct 1st, 2024, the subnet works with models matching the subnet 9 outputs and evaluates them against synthetic data from subnet 1.

The specific requirements for each competition can be found in Subnet 37 Competitions.

The finetune/mining.py file has several methods that you may find useful. See the examples Jupyter notebook for ideas.

See the Validator Pseudocode for more information on how the evaluation occurs.

The miner requires a .env file containing your HuggingFace access token (to upload models) and a Wandb access token (to download training data from subnet 1).
