Subnet 13 Gravity API
Gravity is a decentralized data collection platform powered by SN13 (Data Universe) on the Bittensor network.
Choose GravityClient for sync tasks. Use AsyncGravityClient if async fits better.
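As a minimal sketch (assuming the SDK is imported as macrocosmos and that both clients accept an api_key argument), the two clients are instantiated like this:

```python
import asyncio

import macrocosmos as mc

# Synchronous client: blocking calls, the simplest choice for scripts and CLIs.
sync_client = mc.GravityClient(api_key="your-api-key")


async def main() -> None:
    # Asynchronous client: same surface, but calls are awaited inside an event loop.
    async_client = mc.AsyncGravityClient(api_key="your-api-key")
    # e.g. tasks = await async_client.gravity.GetGravityTasks(...)


asyncio.run(main())
```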
A complete working example of a data collection CLI is available; use it for your next big project or plug it straight into your favorite data product.
📎 Supported Platforms
Reddit
X (Twitter)
YouTube (coming soon)
More platforms will be supported as subnet capabilities expand.
The Macrocosmos SDK should be version 1.0.5 or later. To upgrade, run pip install --upgrade macrocosmos.
Each task is registered on the network and miners begin working on it right away. The task stays live for 7 days, after which the dataset is built automatically and you'll receive an email with a download link. You can use any email address you like.
Body
gravityTasks (List of GravityTask objects): List of task objects. Each must include a topic and a platform (x, reddit, etc.).
name (string): Optional name for the Gravity task. Helpful for organizing jobs.
notificationRequests (List of NotificationRequest objects): List of notification configs. Supports type, address, and redirect_url.
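A request sketch using these fields is shown below. The method name CreateGravityTask and the snake_case keyword names (gravity_tasks, notification_requests) are assumptions mirroring the Body fields above; check the SDK reference for the exact signature.

```python
import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# One GravityTask per platform/topic pair you want miners to crawl.
gravity_tasks = [
    {"platform": "x", "topic": "#bittensor"},
    {"platform": "reddit", "topic": "r/MachineLearning"},
]

# Optional: get notified when the dataset is ready to download.
notifications = [
    {
        "type": "email",
        "address": "you@example.com",
        "redirect_url": "https://example.com/after-download",
    }
]

response = client.gravity.CreateGravityTask(
    gravity_tasks=gravity_tasks,
    name="ai-sentiment-crawl",
    notification_requests=notifications,
)
print(response)  # the response carries the new task's ID for the calls below
```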
Response
If you wish to get further information about the crawlers, you can use the include_crawlers flag or make separate GetCrawler() calls, since returning in bulk can be slow.
Body
gravity_task_id (string): The unique identifier of the Gravity task you want to inspect.
include_crawlers (bool): Whether to include details of the associated crawler jobs. Defaults to False.
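A sketch of inspecting a task. The method name GetGravityTasks is an assumption (the note above only names GetCrawler()); the parameters mirror the Body fields.

```python
import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Fetch one task and its crawler jobs; including crawlers makes the call slower.
task_info = client.gravity.GetGravityTasks(
    gravity_task_id="your-gravity-task-id",
    include_crawlers=True,
)
print(task_info)

# Alternatively, keep the bulk call light and fetch crawlers one at a time.
# crawler = client.gravity.GetCrawler(crawler_id="your-crawler-id")
```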
Response
You don't need to wait the full 7 days: you can request your dataset early, and add a notification to get alerted when it's ready.
Body
crawlerId (string): The ID of the completed crawler job you want to convert into a dataset.
notificationRequests (List of NotificationRequest objects): A list of notification objects (e.g., email or webhook). Includes type, address, and redirect_url.
maxRows (int): The maximum number of rows to include in the dataset.
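A build-request sketch. The method name BuildDataset and the snake_case parameters (crawler_id, notification_requests, max_rows) are assumptions mirroring the Body fields above.

```python
import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

response = client.gravity.BuildDataset(
    crawler_id="your-crawler-id",
    notification_requests=[
        {
            "type": "email",
            "address": "you@example.com",
            "redirect_url": "https://example.com/after-download",
        }
    ],
    max_rows=10_000,  # cap the number of rows in the built dataset
)
print(response)  # the response should include the dataset ID used below
```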
Response
Watch your dataset build with GetDataset(). Once built, the task gets de-registered.
datasetId (string): The ID of the dataset.
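A polling sketch with GetDataset(). The dataset_id keyword and the response shape are assumptions; inspect the returned object to see which build-state fields your SDK version exposes.

```python
import time

import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Call periodically to watch the build progress.
for _ in range(10):
    dataset = client.gravity.GetDataset(dataset_id="your-dataset-id")
    print(dataset)
    time.sleep(30)
```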
Use CancelDataset() to stop a build. If it's done, that call will purge the dataset.
gravityTaskId (datasetId) (string): The Gravity task (dataset) ID.
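A cancellation sketch. The dataset_id keyword is an assumption; per the field above, it takes the Gravity task (dataset) ID.

```python
import macrocosmos as mc

client = mc.GravityClient(api_key="your-api-key")

# Stops an in-progress build; if the build has already finished,
# the same call purges the built dataset.
client.gravity.CancelDataset(dataset_id="your-gravity-task-id")
```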
Use the synchronous Sn13Client to run precise, real-time queries over historical or current data, filtered by users, keywords, and time range, on platforms like X (Twitter) and Reddit.
source (string): Data source (X or Reddit).
usernames (array of strings, default: [], max 10 items): List of usernames to fetch data from. If left at the default, random usernames are selected.
keywords (array of strings, default: [], max 5 items): List of keywords to search for. If left at the default, random keywords are selected.
startDate (string, optional): Start date (ISO format).
endDate (string, optional): End date (ISO format).
limit (integer, optional, default: 100, range: 1-1000): Maximum number of items to return.
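An on-demand query sketch with the synchronous Sn13Client. The method name OnDemandData and the snake_case parameters are assumptions mirroring the fields above; check the SDK reference for the exact call.

```python
import macrocosmos as mc

client = mc.Sn13Client(api_key="your-api-key")

# Query recent X posts from specific users, filtered by keywords and date range.
response = client.sn13.OnDemandData(
    source="X",
    usernames=["nasa", "spacex"],   # up to 10 usernames
    keywords=["mars", "launch"],    # up to 5 keywords
    start_date="2024-04-01",
    end_date="2024-04-25",
    limit=100,                      # 1-1000 items
)
print(response)
```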