Subnet 13 Gravity API
Gravity is a decentralized data collection platform powered by SN13 (Data Universe) on the Bittensor network.
Quickstart
Choose GravityClient for sync tasks, or AsyncGravityClient if async fits better.
See examples/gravity_workflow_example.py for a complete working example of a data collection CLI that you can use for your next project or plug into your favorite data product.
📎 Supported Platforms
Reddit
Twitter (X)
YouTube (coming soon)
More platforms will be supported as subnet capabilities expand.
pip install macrocosmos
The Macrocosmos SDK should be version 1.1.1 or later. To upgrade, run:
pip install macrocosmos==1.1.1
Gravity API Endpoints
Create a task for Data Collection
Each task is registered on the network and miners begin work right away. The task stays live for 7 days; after that, the dataset is built automatically and you'll receive an email with a download link. You can use any email address you like.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Create a new gravity task
const task = await client.createGravityTask({
  gravityTasks: [
    { platform: 'x', topic: '#ai' },
    { platform: 'reddit', topic: 'r/MachineLearning' }
  ],
  name: 'My Data Collection Task',
  notificationRequests: [
    { type: 'email', address: '[email protected]', redirectUrl: 'https://example.com/datasets' }
  ]
});
Body
gravityTasks (list of GravityTask objects): List of task objects. Each must include a topic and a platform (x, reddit, etc.).
name (string): Optional name for the Gravity task. Helpful for organizing jobs.
notificationRequests (list of NotificationRequest objects): List of notification configs. Supports type, address, and redirect_url.
Response
{
  "gravityTaskId": "multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49"
}
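The returned gravityTaskId is what you pass to the status and cancel endpoints below. A one-line sketch, assuming the camelCase field shown in the response above:

// Keep the returned ID for the status and cancel calls later on this page
const { gravityTaskId } = task;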
Get status of task
If you wish to get further information about the crawlers, you can use the include_crawlers flag or make separate GetCrawler() calls, since returning in bulk can be slow.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// List all gravity tasks
const tasks = await client.getGravityTasks({
  includeCrawlers: true
});

// Get a specific crawler
const crawler = await client.getCrawler({
  crawlerId: 'crawler-id'
});
Body
gravity_task_id (string): The unique identifier of the Gravity task you want to inspect.
include_crawlers (bool): Whether to include details of the associated crawler jobs. Defaults to False.
Response
{
  "gravityTaskStates": [
    {
      "gravityTaskId": "multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49",
      "name": "My First Gravity Task",
      "status": "Running",
      "startTime": "2025-05-30T15:56:20.201500586Z",
      "crawlerIds": [
        "crawler-0-multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49",
        "crawler-1-multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49"
      ]
    }
  ]
}
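If you'd rather not check by hand, a small loop can wrap the call above. This is an illustrative sketch, not an SDK helper: it assumes the response shape shown in the example JSON, and the one-minute interval is arbitrary.

// Illustrative helper (not part of the SDK): wait until a task leaves
// the "Running" state, using the response fields shown above.
async function waitForTask(client: GravityClient, gravityTaskId: string) {
  for (;;) {
    const { gravityTaskStates } = await client.getGravityTasks({ includeCrawlers: false });
    const state = gravityTaskStates.find((s: any) => s.gravityTaskId === gravityTaskId);
    if (!state || state.status !== 'Running') return state;
    await new Promise((resolve) => setTimeout(resolve, 60_000)); // check once a minute
  }
}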
Build dataset
You don't need to wait the full 7 days; you can request your dataset early. Add a notification to get alerted when it's ready.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Build a dataset from a crawler
const dataset = await client.buildDataset({
  crawlerId: 'your-crawler-id',
  notificationRequests: [
    {
      type: 'email',
      address: '[email protected]',
      redirectUrl: 'https://app.macrocosmos.ai/'
    }
  ],
  maxRows: 100
});
Body
crawlerId (string): The ID of the completed crawler job you want to convert into a dataset.
notificationRequests (list of NotificationRequest objects): A list of notification objects (e.g., email or webhook). Includes type, address, and redirect_url.
maxRows (int): The maximum number of rows to include in the dataset.
Response
{
  "datasetId": "dataset-71e97cfa-xxxx-xxxx-xxxx-33cd91be9028",
  "dataset": {
    "crawlerWorkflowId": "crawler-0-multicrawler-b56179b1-xxxx-xxxx-xxxx-0ffd616ad830",
    "status": "Running",
    "statusMessage": "Initializing",
    "totalSteps": "10"
  }
}
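Hold on to the returned datasetId; it's what GetDataset() and CancelDataset() take below. A one-line sketch, assuming the field shown in the response above:

// Keep the build ID for status checks and cancellation
const { datasetId } = dataset;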
Get status of a build
Watch your dataset build with GetDataset(). Once built, the task gets de-registered.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Get a dataset
const datasetStatus = await client.getDataset({
  datasetId: 'your-dataset-id'
});
Body
datasetId (string): The ID of the dataset.
Response
{
  "dataset": {
    "crawlerWorkflowId": "crawler-0-multicrawler-b56179b1-xxxx-xxxx-xxxx-0ffd616ad830",
    "createDate": "2025-06-04T10:31:38.747918Z",
    "expireDate": "2025-07-04T10:31:38.747933Z",
    "files": [
      {
        "fileName": "x_ai_0.parquet",
        "fileSizeBytes": "261100",
        "lastModified": "2025-06-04T10:31:28.770Z",
        "numRows": "478",
        "s3Key": "example-s3-key",
        "url": "example-url"
      }
    ],
    "status": "Completed",
    "statusMessage": "Dataset ready for download",
    "steps": [
      { "progress": 1, "step": "1", "stepName": "Registering dataset" },
      { "progress": 1, "step": "2", "stepName": "Collecting crawler information" },
      { "progress": 1, "step": "3", "stepName": "Collecting available data sources" },
      { "progress": 1, "step": "4", "stepName": "Validating data sources" },
      { "progress": 1, "step": "5", "stepName": "Collating data" },
      { "progress": 1, "step": "6", "stepName": "Creating dataset path" },
      { "progress": 1, "step": "7", "stepName": "Extracting data" },
      { "progress": 1, "step": "8", "stepName": "Consolidate dataset" },
      { "progress": 1, "step": "9", "stepName": "Publish dataset" },
      { "progress": 1, "step": "10", "stepName": "Cleaning up" }
    ],
    "totalSteps": "10",
    "nebula": {
      "fileSizeBytes": "795061",
      "url": "example-url"
    }
  }
}
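The status and files fields make builds easy to monitor programmatically. Below is an illustrative sketch, not an SDK helper: the assumption that any status other than "Running" or "Completed" is terminal is ours, and the 30-second interval is arbitrary.

// Illustrative helper (not part of the SDK): poll a build until it finishes,
// then print the download URLs from the "files" array shown above.
async function pollDataset(client: GravityClient, datasetId: string) {
  for (;;) {
    const { dataset } = await client.getDataset({ datasetId });
    console.log(`${dataset.status}: ${dataset.statusMessage}`);
    if (dataset.status === 'Completed') {
      for (const file of dataset.files) {
        console.log(`${file.fileName} (${file.numRows} rows): ${file.url}`);
      }
      return dataset;
    }
    if (dataset.status !== 'Running') return dataset; // assumed: anything else is terminal
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // check every 30s
  }
}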
Cancel requests
Use CancelGravityTask() to de-register a running task, or CancelDataset() to stop a dataset build. If the build has already completed, CancelDataset() purges the dataset.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Cancel a gravity task
const cancelResult = await client.cancelGravityTask({
  gravityTaskId: 'your-gravity-task-id'
});

// Cancel a dataset build
// const cancelDataset = await client.cancelDataset({
//   datasetId: 'your-dataset-id'
// });
Body
gravityTaskId (string): The ID of the Gravity task to cancel. For cancelDataset(), pass datasetId instead.
Response
{
  "message": "success"
}
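Since a cancelled or unknown ID may be rejected, it can be worth wrapping the call. A minimal sketch, using the message field from the response above:

// Minimal guard around cancellation; "message" comes from the response above
try {
  const { message } = await client.cancelGravityTask({ gravityTaskId: 'your-gravity-task-id' });
  console.log(`Cancel result: ${message}`);
} catch (err) {
  console.error('Cancel failed:', err);
}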
Streaming API
Run precise, real-time queries with the synchronous Sn13Client, fetching historical or current data by user, keyword, and time range on platforms like X (Twitter) and Reddit.
import { Sn13Client } from 'macrocosmos';

// Initialize the client
const client = new Sn13Client({ apiKey: 'your-api-key' });

// Get the onDemandData response
const response = await client.onDemandData({
  source: 'X', // or 'Reddit'
  usernames: ['nasa', 'spacex'], // Optional, up to 10 usernames
  keywords: ['photo', 'space', 'mars'], // Optional, up to 5 keywords
  startDate: '2024-04-01', // Defaults to a 24h range if not specified
  endDate: '2025-04-25', // Defaults to the current time if not specified
  limit: 3 // Optional, up to 1000 results
});
Body
source (string): Data source (X or Reddit).
usernames (array of strings, default: [], max 10 items): List of usernames to fetch data from. If left at the default, random usernames are selected.
keywords (array of strings, default: [], max 5 items): List of keywords to search for. If left at the default, random keywords are selected.
startDate (string, optional): Start date (ISO format).
endDate (string, optional): End date (ISO format).
limit (integer, optional, default: 100, range: 1-1000): Maximum number of items to return.
Response
{
  "status": "success",
  "data": [
    {
      "content": "Falcon 9 launches the Bandwagon-3 rideshare mission to orbit from Florida",
      "datetime": "2025-04-22T03:00:38+00:00",
      "label": null,
      "media": [
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuBagAADw92.jpg" },
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuDa4AEr1RV.jpg" },
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuBbUAAU7Rd.jpg" },
        { "type": "video", "url": "https://pbs.twimg.com/amplify_video_thumb/1914512114154409984/img/lD1axdjW7cRnRol6.jpg" }
      ],
      "source": "X",
      "tweet": {
        "conversation_id": "1914514653763584254",
        "hashtags": [],
        "id": "1914514653763584254",
        "is_quote": false,
        "is_reply": false,
        "is_retweet": false,
        "like_count": 10689,
        "quote_count": 76,
        "reply_count": 677,
        "retweet_count": 2058
      },
      "uri": "https://x.com/SpaceX/status/1914514653763584254",
      "user": {
        "display_name": "SpaceX",
        "followers_count": 39073448,
        "following_count": 121,
        "id": "34743251",
        "username": "@SpaceX",
        "verified": false
      }
    },
    {
      "content": "Falcon 9 launches NROL-145 from California, completing our first of the new national security missions awarded in October 2024",
      "datetime": "2025-04-20T17:05:09+00:00",
      "label": null,
      "media": [
        { "type": "photo", "url": "https://pbs.twimg.com/media/Go_mYDJbIAA9mbK.jpg" },
        { "type": "video", "url": "https://pbs.twimg.com/amplify_video_thumb/1914001831661084672/img/ydKPVd7KoS6B6U_l.jpg" }
      ],
      "source": "X",
      "tweet": {
        "conversation_id": "1914002408545615936",
        "hashtags": [],
        "id": "1914002408545615936",
        "is_quote": false,
        "is_reply": false,
        "is_retweet": false,
        "like_count": 8190,
        "quote_count": 71,
        "reply_count": 495,
        "retweet_count": 1802
      },
      "uri": "https://x.com/SpaceX/status/1914002408545615936",
      "user": {
        "display_name": "SpaceX",
        "followers_count": 39073448,
        "following_count": 121,
        "id": "34743251",
        "username": "@SpaceX",
        "verified": false
      }
    }
  ],
  "meta": {
    "consistent_miners": 2,
    "inconsistent_miners": 0,
    "items_returned": 2,
    "miner_hotkey": "5CacbhmQxhAVGWgrYvCypqhR3n3mNmmWEA8JYzAVghmTDYZy",
    "miner_uid": 179,
    "miners_queried": 5,
    "miners_responded": 5,
    "source": "consistent",
    "validated_miners": 0
  }
}
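As a post-processing sketch (field names are taken from the response above; nothing here is SDK-specific), you might flatten each item to its author, text, and photo URLs:

// Flatten the response shown above into { author, text, photos } records
const posts = response.data.map((item: any) => ({
  author: item.user.username,
  text: item.content,
  photos: item.media
    .filter((m: any) => m.type === 'photo')
    .map((m: any) => m.url)
}));
console.log(`${response.meta.items_returned} items returned by ${response.meta.miners_queried} miners queried`);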