Subnet 37 - Finetuning
Bittensor can train the best models
The drive behind subnet 37 is to bring the entire AI development pipeline into Bittensor, with finetuning as a critical step in the path towards this. We believe it will strongly support Bittensor’s founding vision - that AI can be built in an economical, safe and decentralized way.
Finetuning is costly, time-consuming and highly limited by expertise. It demands hundreds of GPU hours, typically on state-of-the-art hardware. But perhaps most importantly, it requires expert engineers, who are often scarce.
Subnet 37 addresses these challenges by outsourcing the procurement of computational resources and incentivising the best AI developers in the world to monetise their skills by competing to produce top models. This work is also done in collaboration with subnet 9.
Our vision is to build an open-sourced catalog of models, each optimised for specialised tasks such as chatbots, math-solvers, programming assistants, recommendation bots, and more. Models in the catalog are already available to download on HuggingFace and will soon power several apps.
We aim to integrate the models we create into subnet 1 as base models for future agentic assistants. We see subnets 37 and 1 evolving in tandem to produce ever-improving AI assistants. This will provide the additional benefit of user feedback through subnet 1’s chat application, which will be used to continuously refine the models and their capabilities.
For more details about the subnet 37 R&D work, take a look at our Substack articles:
Related resources