Using Macrocosmos MCP with Claude Desktop or Cursor
Macrocosmos MCP (Model Context Protocol) lets you integrate Data Universe APIs directly into Claude for Desktop, Cursor, or your custom LLM pipeline. Query X (Twitter) and Reddit data on demand from your AI environment!
Prerequisites
Python 3.10+
uv package manager
Claude Desktop or Cursor installed
Install UV Package Manager
curl -LsSf https://astral.sh/uv/install.sh | sh
Or via pip:
pip3 install uv
Quickstart
Get your API key from Macrocosmos. There is a free tier with $5 of credits to start.
Install uv using the command above or see the uv repo for additional install methods.
Configure Claude Desktop
Run the following command to open your Claude configuration file:
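On macOS, the configuration file lives at `~/Library/Application Support/Claude/claude_desktop_config.json` (on Windows, `%APPDATA%\Claude\claude_desktop_config.json`). For example, assuming VS Code as your editor:

```bash
code ~/Library/Application\ Support/Claude/claude_desktop_config.json
```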
Update with this configuration:
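A minimal sketch of the server entry to add. The server name, launch command, arguments, and the `MACROCOSMOS_API_KEY` variable name are illustrative assumptions; substitute the exact values from the Macrocosmos docs, your local path, and your own API key:

```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uv",
      "args": ["--directory", "/path/to/macrocosmos-mcp", "run", "mcp_server.py"],
      "env": {
        "MACROCOSMOS_API_KEY": "your-api-key-here"
      }
    }
  }
}
```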
Open Claude Desktop and look for the hammer icon — this confirms your MCP server is running. You'll now have SN13 tools available inside Claude.
Configure Cursor
Option 1: Via UI (Recommended)
Go to Cursor Settings
Navigate to MCP settings and select Add New Global MCP Server
Enter the configuration details
Option 2: Manual JSON
Add the same configuration:
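The server entry is the same one used for Claude Desktop above. Cursor's global MCP config typically lives at `~/.cursor/mcp.json`; the path and entry shape are assumptions, so verify them against Cursor's documentation:

```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uv",
      "args": ["--directory", "/path/to/macrocosmos-mcp", "run", "mcp_server.py"],
      "env": {
        "MACROCOSMOS_API_KEY": "your-api-key-here"
      }
    }
  }
}
```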
⚠️ Note: In some cases, manually editing this file doesn't activate the MCP server in Cursor. If this happens, use the UI method above for best results.
Use Agent Mode
In Cursor, make sure you're using Agent Mode in the chat. Agents have the ability to use any MCP tool — including custom ones and those from SN13.
Available Tools
Quick Query Tool
query_on_demand_data - Real-time Social Media Queries
Fetch real-time data from X (Twitter) and Reddit. Best for quick queries up to 1,000 results.
| Parameter | Type | Description |
| --- | --- | --- |
| source | string | REQUIRED. Platform: 'X' or 'REDDIT' (case-sensitive) |
| usernames | list | Up to 5 usernames. For X: @ is optional. Not available for Reddit |
| keywords | list | Up to 5 keywords/hashtags. For Reddit: subreddit names (e.g., 'r/MachineLearning') |
| start_date | string | ISO format (e.g., '2024-01-01T00:00:00Z'). Defaults to 24h ago |
| end_date | string | ISO format. Defaults to now |
| limit | int | Max results 1-1000. Default: 10 |
| keyword_mode | string | 'any' (default) or 'all' for strict matching |
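For reference, a call assembled from the parameters above might look like this (the values are illustrative):

```json
{
  "source": "X",
  "usernames": ["elonmusk"],
  "keywords": ["#ai"],
  "start_date": "2024-01-01T00:00:00Z",
  "end_date": "2024-01-08T00:00:00Z",
  "limit": 50,
  "keyword_mode": "any"
}
```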
Example prompts:
"What has @elonmusk been posting about today?"
"Get me the latest posts from r/bittensor about dTAO"
"Fetch 50 tweets about #AI from the last week"
Large-Scale Collection Tools (Gravity)
Use the Gravity tools when you need large datasets (more than 1,000 results) collected over a 7-day period.
create_gravity_task - Start 7-Day Data Collection
| Parameter | Type | Description |
| --- | --- | --- |
| tasks | list | REQUIRED. List of task objects |
| name | string | Optional name for the task |
| email | string | Email for notification when complete |
Task object structure:
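A sketch of a single task object. The `platform` and `topic` field names are assumptions based on the parameters described here; check the Gravity API reference for the exact schema:

```json
{
  "platform": "x",
  "topic": "#ai"
}
```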
⚠️ Important: For X (Twitter), topics MUST start with # or $ (e.g., #ai, $BTC). Plain keywords are rejected!
User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1,000 results instantly
User: "I need to collect a week's worth of #AI tweets for analysis"
1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset (stops crawler)
4. get_dataset_status → Get download URL for Parquet file
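Step 1 might receive arguments like these, an illustrative sketch combining the create_gravity_task parameters and the task object structure above:

```json
{
  "name": "ai-week-collection",
  "email": "you@example.com",
  "tasks": [
    { "platform": "x", "topic": "#ai" },
    { "platform": "reddit", "topic": "r/MachineLearning" }
  ]
}
```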