
Example usage

On Demand API
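The snippet below authenticates with your SN13 API key, posts a query to the /api/v1/on_demand_data_request endpoint, and prints the returned posts.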


import requests
import json

# Configuration
MAINNET_API_URL = "https://sn13.api.macrocosmos.ai"
MAINNET_API_KEY = ""  # input your API key

# Headers for API access; the server verifies the key passed in the X-API-KEY header
headers = {
    'Content-Type': 'application/json',
    'X-API-KEY': MAINNET_API_KEY
}

# API endpoint for on-demand data requests
url = f'{MAINNET_API_URL}/api/v1/on_demand_data_request'


def get_data(data: dict):
    try:
        response = requests.post(url, headers=headers, json=data)
    
        # Check if request was successful
        response.raise_for_status()
    
        # Parse and print the response
        result = response.json()
        print("Success!")
        #print("Data:", result.get("data"))
        #print("Meta:", result.get("meta"))
        return result
    
    except requests.exceptions.HTTPError as e:
        print(f"HTTP Error: {e}")
        print(f"Response content: {e.response.text}")
    except requests.exceptions.RequestException as e:
        print(f"Error: {e}")


# Build the query
data = {
    "source": "x",  # data source, as defined by the DataSource enum
    "usernames": ["@DecentralDev_"],  # input username(s)
    "keywords": ["Macrocosmos"],
    "limit": 1000,  # default is 100, max is 1000
    "start_date": "2025-01-10T00:00:00Z"  # datetime in ISO 8601 format
}

result = get_data(data)

# Print the response
if result and result.get("data"):
    posts = result["data"]
    print(f"Retrieved {len(posts)} posts since 2025-01-10")
    # Print the first post
    print(posts[0])
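If you want to keep the returned posts for later analysis, you can write the response to disk. This is a minimal sketch that reuses result from the snippet above and the json module already imported; the output filename is just an example.

# Save the full response to a local JSON file
# (reuses `result` from get_data() above; the filename is arbitrary)
output_path = "on_demand_results.json"

if result:
    with open(output_path, "w", encoding="utf-8") as f:
        json.dump(result, f, indent=2, ensure_ascii=False)
    print(f"Saved response to {output_path}")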

List Hugging Face Repositories
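The snippet below calls the /api/v1/list_repo_names endpoint and prints the Hugging Face repository names it returns.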


import requests

# Config
MAINNET_API_URL = "https://sn13.api.macrocosmos.ai"
MAINNET_API_KEY = ""  # input your API key

# Headers
headers = {
    'Content-Type': 'application/json',
    'X-API-KEY': MAINNET_API_KEY
}

# Endpoint
url = f"{MAINNET_API_URL}/api/v1/list_repo_names"

# Function to fetch repo names
def list_repo_names():
    try:
        response = requests.get(url, headers=headers, timeout=30)
        response.raise_for_status()
        result = response.json()

        # Expecting a dictionary with 'count' and 'repo_names'
        count = result.get("count", 0)
        repos = result.get("repo_names", [])

        print(f"✅ Retrieved {count} repositories.\n")

        for name in repos:
            print("-", name)

        return result

    except requests.exceptions.HTTPError as e:
        print(f"❌ HTTP Error: {e}")
        print(f"Response content: {e.response.text}")
    except requests.exceptions.RequestException as e:
        print(f"❌ Request Error: {e}")

# Run it
list_repo_names()
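Once you have the repository names, you can inspect one of them with the huggingface_hub client. This is a minimal sketch, not part of the SN13 API: it assumes the returned names are public Hugging Face dataset repo IDs and that huggingface_hub is installed.

from huggingface_hub import list_repo_files

# Reuse the helper above and pick one of the returned repo names
result = list_repo_names()
repo_names = result.get("repo_names", []) if result else []

if repo_names:
    repo_id = repo_names[0]
    # Assumption: the repo is a public Hugging Face dataset repository
    files = list_repo_files(repo_id, repo_type="dataset")
    print(f"{repo_id} contains {len(files)} files")
    for name in files[:10]:
        print("-", name)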
