Subnet 13 Gravity API
Gravity is a decentralized data collection platform powered by SN13 (Data Universe) on the Bittensor network.
Quickstart
Choose GravityClient for sync tasks. Use AsyncGravityClient if async fits better.
See examples/gravity_workflow_example.py for a complete working example of a data collection CLI that you can use for your next project or plug directly into an existing data product.
📎 Supported Platforms
reddit
twitter (X)
YouTube - coming soon
More platforms will be supported as subnet capabilities expand.
pip install macrocosmos
The Macrocosmos SDK must be version 1.1.3 or later. To upgrade, run:
pip install macrocosmos==1.1.3
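For example, a minimal Python setup (a sketch; it assumes both clients accept the same api_key constructor argument as the Sn13Client shown in the Request Examples below):

import macrocosmos as mc

# Synchronous client for blocking workflows
client = mc.GravityClient(api_key="your-api-key")

# Async variant for use inside an event loop (assumed to mirror the sync constructor)
async_client = mc.AsyncGravityClient(api_key="your-api-key")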
Demo Video
Gravity API Endpoints
Create a task for data collection
After launch, the task is registered on the network within 20 minutes. Miners begin collecting and delivering data from the moment the task is registered on the blockchain. The task stays live for 7 days to maximize the amount of data collected; after that, the dataset is built automatically. If you provided an email address, you'll receive a notification with a download link.
To check the task's status and the amount of data collected at any time, use the Get status of task endpoint. To start building the dataset before the 7-day window completes, use the Build dataset endpoint.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Create a new gravity task
const task = await client.createGravityTask({
  gravityTasks: [
    { platform: 'x', topic: '#ai' },
    { platform: 'reddit', topic: 'r/MachineLearning' }
  ],
  name: 'My Data Collection Task',
  notificationRequests: [
    { type: 'email', address: '[email protected]', redirectUrl: 'https://example.com/datasets' }
  ]
});
Body
gravityTasks (list of GravityTask objects): List of task objects. Each must include a topic and a platform (x, reddit, etc.).

name (string): Optional name for the Gravity task. Helpful for organizing jobs.

notificationRequests (list of NotificationRequest objects): List of notification configs. Supports type, address, and redirect_url.
Response
{
  "gravityTaskId": "multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49"
}
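If you are working from Python rather than TypeScript, the equivalent call might look like the sketch below. The client.gravity.CreateGravityTask method and its snake_case parameters are assumptions inferred from the client.sn13.OnDemandData pattern in the Request Examples; consult the SDK reference for the exact names.

import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Hypothetical Python mirror of the TypeScript createGravityTask call above
response = client.gravity.CreateGravityTask(
    gravity_tasks=[
        {"platform": "x", "topic": "#ai"},
        {"platform": "reddit", "topic": "r/MachineLearning"},
    ],
    name="My Data Collection Task",
    notification_requests=[
        {
            "type": "email",
            "address": "[email protected]",
            "redirect_url": "https://example.com/datasets",
        }
    ],
)
print(response)  # should contain the new gravityTaskId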
Get status of task
Check the status of a task and the amount of data collected at any time.
If you wish to get further information about the crawlers, you can use the include_crawlers flag or make separate GetCrawler() calls, since returning crawler details in bulk can be slow.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// List all gravity tasks
const tasks = await client.getGravityTasks({
  includeCrawlers: true
});

// Get a specific crawler
const crawler = await client.getCrawler({
  crawlerId: 'crawler-id'
});
Body
gravity_task_id (string): The unique identifier of the Gravity task you want to inspect.

include_crawlers (bool): Whether to include details of the associated crawler jobs. Defaults to False.
Response
{
  "gravityTaskStates": [
    {
      "gravityTaskId": "multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49",
      "name": "My First Gravity Task",
      "status": "Running",
      "startTime": "2025-05-30T15:56:20.201500586Z",
      "crawlerIds": [
        "crawler-0-multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49",
        "crawler-1-multicrawler-9f518ae4-xxxx-xxxx-xxxx-8b73d7cd4c49"
      ]
    }
  ]
}
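Since registration can take up to 20 minutes, a simple polling loop is a common way to wait for a task to come online. A Python sketch (the client.gravity.GetGravityTasks method and the response field names are assumptions mirroring the TypeScript getGravityTasks call and the JSON above):

import time

import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Poll every 60 seconds until at least one task reports "Running"
# (hypothetical method and field names)
while True:
    tasks = client.gravity.GetGravityTasks(include_crawlers=False)
    states = getattr(tasks, "gravity_task_states", [])
    if any(s.status == "Running" for s in states):
        break
    time.sleep(60)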
Build dataset
You don't need to wait the full 7 days for the task to complete. If you have already collected enough data, you can request your dataset early. Add a notification request to be alerted when the dataset is built. Once the dataset is built, the task is marked complete and de-registered.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Build a dataset from a crawler
const dataset = await client.buildDataset({
  crawlerId: 'your-crawler-id',
  notificationRequests: [
    {
      type: 'email',
      address: '[email protected]',
      redirectUrl: 'https://app.macrocosmos.ai/'
    }
  ],
  maxRows: 100
});
Body
crawlerId (string): The ID of the completed crawler job you want to convert into a dataset.

notificationRequests (list of NotificationRequest objects): A list of notification objects (e.g., email or webhook). Includes type, address, and redirect_url.

maxRows (int): The maximum number of rows to include in the dataset.
Response
{
  "datasetId": "dataset-71e97cfa-xxxx-xxxx-xxxx-33cd91be9028",
  "dataset": {
    "crawlerWorkflowId": "crawler-0-multicrawler-b56179b1-xxxx-xxxx-xxxx-0ffd616ad830",
    "status": "Running",
    "statusMessage": "Initializing",
    "totalSteps": "10"
  }
}
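A hedged Python version of the same request (the client.gravity.BuildDataset method name and its snake_case parameters are assumptions based on the SDK's naming pattern, not confirmed by this page):

import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Hypothetical Python mirror of the TypeScript buildDataset call above
dataset = client.gravity.BuildDataset(
    crawler_id="your-crawler-id",
    notification_requests=[
        {
            "type": "email",
            "address": "[email protected]",
            "redirect_url": "https://app.macrocosmos.ai/",
        }
    ],
    max_rows=100,
)
print(dataset)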
Get status of a build
Watch your dataset build with GetDataset(). Once the dataset is built, the task is marked complete and de-registered.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Get a dataset
const datasetStatus = await client.getDataset({
  datasetId: 'your-dataset-id'
});
Body
datasetId (string): The ID of the dataset to inspect.
Response
{
  "dataset": {
    "crawlerWorkflowId": "crawler-0-multicrawler-b56179b1-xxxx-xxxx-xxxx-0ffd616ad830",
    "createDate": "2025-06-04T10:31:38.747918Z",
    "expireDate": "2025-07-04T10:31:38.747933Z",
    "files": [
      {
        "fileName": "x_ai_0.parquet",
        "fileSizeBytes": "261100",
        "lastModified": "2025-06-04T10:31:28.770Z",
        "numRows": "478",
        "s3Key": "example-s3-key",
        "url": "example-url"
      }
    ],
    "status": "Completed",
    "statusMessage": "Dataset ready for download",
    "steps": [
      { "progress": 1, "step": "1", "stepName": "Registering dataset" },
      { "progress": 1, "step": "2", "stepName": "Collecting crawler information" },
      { "progress": 1, "step": "3", "stepName": "Collecting available data sources" },
      { "progress": 1, "step": "4", "stepName": "Validating data sources" },
      { "progress": 1, "step": "5", "stepName": "Collating data" },
      { "progress": 1, "step": "6", "stepName": "Creating dataset path" },
      { "progress": 1, "step": "7", "stepName": "Extracting data" },
      { "progress": 1, "step": "8", "stepName": "Consolidate dataset" },
      { "progress": 1, "step": "9", "stepName": "Publish dataset" },
      { "progress": 1, "step": "10", "stepName": "Cleaning up" }
    ],
    "totalSteps": "10",
    "nebula": {
      "fileSizeBytes": "795061",
      "url": "example-url"
    }
  }
}
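Once status reads Completed, each entry in files carries a presigned url you can download directly. A short Python sketch using the ordinary requests and pandas packages (these are standard PyPI packages, not part of the macrocosmos SDK):

import io

import pandas as pd
import requests

# Replace with files[0]["url"] from the GetDataset response above
url = "example-url"

resp = requests.get(url, timeout=60)
resp.raise_for_status()

# Each file is a parquet shard; load it into a DataFrame
df = pd.read_parquet(io.BytesIO(resp.content))
print(df.shape)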
Cancel requests
Use CancelGravityTask() to cancel a running task, or CancelDataset() to stop a dataset build. If the build has already finished, CancelDataset() will purge the dataset.
import { GravityClient } from 'macrocosmos';

// Initialize the client
const client = new GravityClient({ apiKey: 'your-api-key' });

// Cancel a gravity task
const cancelResult = await client.cancelGravityTask({
  gravityTaskId: 'your-gravity-task-id'
});

// Cancel a dataset build
// const cancelDataset = await client.cancelDataset({
//   datasetId: 'your-dataset-id'
// });
Body
gravityTaskId or datasetId (string): The ID of the Gravity task (for cancelGravityTask) or the dataset (for cancelDataset) to cancel.
Response
{
  "message": "success"
}
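In Python, the same cancellations might look like this sketch (the client.gravity.CancelGravityTask and client.gravity.CancelDataset method names are assumptions based on the SDK's naming pattern):

import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Cancel a gravity task (hypothetical method name)
result = client.gravity.CancelGravityTask(gravity_task_id="your-gravity-task-id")

# Cancel a dataset build (hypothetical method name)
# result = client.gravity.CancelDataset(dataset_id="your-dataset-id")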
Streaming API (On Demand Data API)
Run precise, real-time queries with the synchronous Sn13Client to fetch historical or current data by user, keyword, and time range on platforms like X (Twitter), Reddit, and YouTube.
The Streaming API is limited to 1000 posts per request.
As of data-universe release v1.9.8:
All keywords in the OnDemandData request will be present in the returned post/comment data.
For Reddit requests, the first keyword in the list corresponds to the requested subreddit, and subsequent keywords are treated as normal keywords.
For YouTube requests, only one username should be supplied - corresponding to the channel name - while keywords are ignored (empty list).
import { Sn13Client } from 'macrocosmos';

// Initialize the client
const client = new Sn13Client({ apiKey: 'your-api-key' });

// Get the onDemandData response
const response = await client.onDemandData({
  source: 'X', // or 'Reddit', 'YouTube'
  usernames: ['nasa', 'spacex'], // Optional, up to 5 users
  keywords: ['photo', 'space', 'mars'], // Optional, up to 5 keywords
  startDate: '2024-04-01', // Defaults to 24h range if not specified
  endDate: '2025-04-25', // Defaults to current time if not specified
  limit: 3 // Optional, up to 1000 results
});
Body
source (string): Data source (X, Reddit, or YouTube).

usernames (array of strings; default: []; up to 10 items): List of usernames to fetch data from. Searches for posts from any of the given usernames. If usernames are not included, the search is not constrained by username. For YouTube, items in the usernames field should correspond to the YouTube channel name.

keywords (array of strings; default: []; up to 5 items): List of keywords to search for. Searches for posts where all given keywords are present. If keywords are not included in the query, the search is not constrained by keyword. For Reddit, the first keyword indicates the subreddit (r/all for cross-subreddit queries), and subsequent keywords are text matches. For YouTube, keyword queries are not currently accepted; channel names should be placed in the usernames field.

startDate (string, optional): Start date (ISO format). Defaults to 24 hours prior to the request time if not specified. Datetimes without time information default to midnight (00:00:00); datetimes without timezone information default to UTC.

endDate (string, optional): End date (ISO format). Defaults to the request time if not specified. Datetimes without time information default to midnight (00:00:00); datetimes without timezone information default to UTC.

limit (integer, optional; default: 100; range: 1-1000): Maximum number of items to return.
Response
{
  "status": "success",
  "data": [
    {
      "content": "Falcon 9 launches the Bandwagon-3 rideshare mission to orbit from Florida",
      "datetime": "2025-04-22T03:00:38+00:00",
      "label": null,
      "media": [
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuBagAADw92.jpg" },
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuDa4AEr1RV.jpg" },
        { "type": "photo", "url": "https://pbs.twimg.com/media/GpG2kuBbUAAU7Rd.jpg" },
        { "type": "video", "url": "https://pbs.twimg.com/amplify_video_thumb/1914512114154409984/img/lD1axdjW7cRnRol6.jpg" }
      ],
      "source": "X",
      "tweet": {
        "conversation_id": "1914514653763584254",
        "hashtags": [],
        "id": "1914514653763584254",
        "is_quote": false,
        "is_reply": false,
        "is_retweet": false,
        "like_count": 10689,
        "quote_count": 76,
        "reply_count": 677,
        "retweet_count": 2058
      },
      "uri": "https://x.com/SpaceX/status/1914514653763584254",
      "user": {
        "display_name": "SpaceX",
        "followers_count": 39073448,
        "following_count": 121,
        "id": "34743251",
        "username": "@SpaceX",
        "verified": false
      }
    },
    {
      "content": "Falcon 9 launches NROL-145 from California, completing our first of the new national security missions awarded in October 2024",
      "datetime": "2025-04-20T17:05:09+00:00",
      "label": null,
      "media": [
        { "type": "photo", "url": "https://pbs.twimg.com/media/Go_mYDJbIAA9mbK.jpg" },
        { "type": "video", "url": "https://pbs.twimg.com/amplify_video_thumb/1914001831661084672/img/ydKPVd7KoS6B6U_l.jpg" }
      ],
      "source": "X",
      "tweet": {
        "conversation_id": "1914002408545615936",
        "hashtags": [],
        "id": "1914002408545615936",
        "is_quote": false,
        "is_reply": false,
        "is_retweet": false,
        "like_count": 8190,
        "quote_count": 71,
        "reply_count": 495,
        "retweet_count": 1802
      },
      "uri": "https://x.com/SpaceX/status/1914002408545615936",
      "user": {
        "display_name": "SpaceX",
        "followers_count": 39073448,
        "following_count": 121,
        "id": "34743251",
        "username": "@SpaceX",
        "verified": false
      }
    }
  ],
  "meta": {
    "consistent_miners": 2,
    "inconsistent_miners": 0,
    "items_returned": 2,
    "miner_hotkey": "5CacbhmQxhAVGWgrYvCypqhR3n3mNmmWEA8JYzAVghmTDYZy",
    "miner_uid": 179,
    "miners_queried": 5,
    "miners_responded": 5,
    "source": "consistent",
    "validated_miners": 0
  }
}
Request Examples
import macrocosmos as mc

client = mc.Sn13Client(api_key="your-api-key")

response = client.sn13.OnDemandData(
    source='YouTube',         # Searches YouTube
    usernames=["mrbeast"],    # For videos from the MrBeast channel
                              # (YouTube ignores keywords; see the v1.9.8 note above)
    start_date='2024-08-01',  # From midnight 2024-08-01 UTC
                              # to the time this request was made
    limit=10                  # For 10 items maximum
)
print(response)
import macrocosmos as mc

client = mc.Sn13Client(api_key="your-api-key")

response = client.sn13.OnDemandData(
    source='Reddit',                    # Searches Reddit
    keywords=["r/astronomy", "space"],  # For posts/comments mentioning 'space' in the r/astronomy subreddit,
                                        # in the default time range of the past 24 hours
    limit=50                            # For 50 items maximum
)
print(response)
import macrocosmos as mc

client = mc.Sn13Client(api_key="your-api-key")

response = client.sn13.OnDemandData(
    source='Reddit',              # Searches Reddit
    keywords=["r/all", "space"],  # For posts/comments mentioning 'space', across all subreddits
    start_date='2025-04-01',      # From midnight 2025-04-01 UTC
    end_date='2025-04-02',        # To midnight 2025-04-02 UTC
    limit=50                      # For 50 items maximum
)
print(response)
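Because every keyword in the request is guaranteed to be present in each returned item (per the v1.9.8 note above), a quick client-side sanity check is easy to write. A sketch, assuming the response deserializes to the dict structure shown in the Response section; adapt the field access if your SDK version returns objects instead:

import macrocosmos as mc

client = mc.Sn13Client(api_key="your-api-key")

keywords = ["photo", "space"]
response = client.sn13.OnDemandData(
    source='X',
    keywords=keywords,
    limit=10
)

# Flag any item whose content is missing a requested keyword
for item in response["data"]:
    content = item["content"].lower()
    missing = [k for k in keywords if k.lower() not in content]
    if missing:
        print(f"{item['uri']}: missing {missing}")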