Subnet 1: Base Miner Setup
A walkthrough for setting up and running the SN1 base miner from the macrocosm-os/prompting repository. It is intended for educational purposes and should not be used on mainnet.
⚠️ Disclaimer
Do not run this miner on mainnet.
The base miner is solely for educational and testing purposes. Running this miner on mainnet will not yield any rewards. Any expenses incurred during registration or infrastructure setup will not be reimbursed.
🖥️ Compute Requirements
VRAM: None
vCPU: 8 cores
RAM: 8 GB
Storage: 80 GB
Installation
Clone the Repository:
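The repository can be cloned from GitHub; the URL below is inferred from the organization and repository name given above:

```shell
# Clone the SN1 prompting repository and enter it
git clone https://github.com/macrocosm-os/prompting.git
cd prompting
```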
Run the Installation Script:
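The installer's exact name is an assumption here; check the repository root for the actual script before running it:

```shell
# Run the repository's installation script (name assumed; such scripts
# typically install Python dependencies and any system packages).
bash install.sh
```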
Before running the miner, you need to set up the miner's environment variables.
Configure `.env.miner` file
Create a `.env.miner` File:

Edit `.env.miner` with Appropriate Values:
Fill in the appropriate details, e.g., wallet name, hotkey, and port (so that validators can connect). Ensure that the wallet and hotkey are properly registered on the testnet.
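A minimal `.env.miner` sketch covering the details mentioned above (wallet name, hotkey, port). The variable names and values below are illustrative assumptions; consult the repository's example environment file for the exact keys it expects:

```shell
# Illustrative .env.miner -- key names are assumptions, not confirmed.
NETUID=61                    # subnet uid on testnet (verify before use)
SUBTENSOR_NETWORK=test       # run against testnet, never mainnet
WALLET_NAME=my_coldkey       # your registered coldkey wallet name
HOTKEY=my_hotkey             # your registered hotkey
AXON_PORT=8091               # open port so validators can reach the miner
```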
⚙️ Running the Miner
After configuring your environment variables, start the miner using the following command:
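Assuming the entry point is the miner module referenced later on this page, starting it might look like the following; the exact launch command (plain Python, poetry, or pm2) depends on the repository's tooling:

```shell
# Start the base miner (entry point inferred from the implementation path
# mentioned elsewhere on this page; verify against the repository's docs).
python neurons/miners/epistula_miner/miner.py
```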
Base Miner Functionalities
The SN1 base miner is designed to handle two primary tasks: Web Retrieval and Inference.
1. Web Retrieval (`stream_web_retrieval`)
The miner receives a query from validators, such as "What is the biggest event in 2025?" The miner's responsibility is to search the web for relevant information that answers this query.
The process involves:
- Searching for websites that contain information pertinent to the query.
- Extracting the content and identifying the most relevant section that answers the question.
- Formatting the results into a structured response.
The implementation is located in `neurons/miners/epistula_miner/web_retrieval.py`. The function returns a list of dictionaries containing:

- `url`: The website URL.
- `content`: The full text content of the page.
- `relevant`: A concise excerpt that directly answers the query.
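A sketch of the response shape just described: a list of dictionaries with `url`, `content`, and `relevant` keys. The values are invented placeholders, not real miner output:

```shell
# Emit an example web-retrieval result; the structure mirrors the three
# keys listed above, the values are placeholders.
cat <<'EOF'
[
  {
    "url": "https://example.com/events-2025",
    "content": "Full text of the page ...",
    "relevant": "The excerpt that directly answers the query."
  }
]
EOF
```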
2. Inference Task (`run_inference`)
The Inference task involves using a smaller LLaMA model (`casperhansen/llama-3.2-3b-instruct-awq`) to perform language model inference.
The process includes:
- Receiving tasks directed to the `/v1/chat/completions` endpoint.
- Determining the task type (e.g., inference or web retrieval).
- Invoking the appropriate method based on the task.
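A request to that endpoint might look like the following. The host and port are assumptions, and validator traffic is signed via the Epistula protocol, so a bare `curl` like this may be rejected by a real miner; it only illustrates the OpenAI-style payload:

```shell
# Illustrative request to the miner's OpenAI-compatible endpoint.
# localhost:8091 is an assumed address; real validator requests are signed.
curl -s http://localhost:8091/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "casperhansen/llama-3.2-3b-instruct-awq",
        "messages": [{"role": "user", "content": "What is the biggest event in 2025?"}]
      }'
```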
The implementation is located in `neurons/miners/epistula_miner/miner.py`. The `self.llm` attribute loads the LLaMA 3B model.
Note: The 3B model is suitable only for the testnet, not for mainnet, where state-of-the-art models such as `mrfakename/mistral-small-3.1-24b-instruct-2503-hf` and `hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4` are prevalent.
Miner Availability Check
Validators also assess miner availability for tasks. Miners indicate how suited they are by setting task availability flags:
If a miner determines it is unsuitable for a task, it sets the corresponding flag to `False`. This ensures that tasks are assigned to miners best equipped to handle them.
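Conceptually, the availability flags described above amount to a small payload like the one below. The field names are hypothetical, chosen only to illustrate the two task types discussed on this page, and are not the repository's actual schema:

```shell
# Hypothetical availability payload: this miner accepts inference tasks
# but declines web retrieval. Field names are illustrative only.
cat <<'EOF'
{"task_availabilities": {"inference": true, "web_retrieval": false}}
EOF
```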