Subnet 13 Gravity API

Gravity is a decentralized data collection platform powered by SN13 (Data Universe) on the Bittensor network.

Quickstart

Get started guide

Use GravityClient for synchronous workflows, or AsyncGravityClient if your application runs asynchronously. See examples/gravity_workflow_example.py for a complete, working example of a data collection CLI that you can adapt for your own projects or plug into an existing data product.
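A minimal setup sketch, assuming the API key is read from a MACROCOSMOS_API_KEY environment variable and that both clients accept an api_key argument:

```python
import os

import macrocosmos as mc

# Synchronous client for Gravity (SN13) workflows.
client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

# Async variant, if your application already runs an event loop.
async_client = mc.AsyncGravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])
```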

📎 Supported Platforms

  • Reddit

  • X (Twitter)

  • YouTube

More platforms will be supported as subnet capabilities expand.

pip install macrocosmos

The Macrocosmos SDK should be at least version 2.1.0. To upgrade, pin the desired release explicitly, for example:

pip install macrocosmos==3.0.0

Demo Video

Gravity API Endpoints

Create a task for Data Collection

After launch, the task is registered on the network within 20 minutes. Miners begin collecting and delivering data from the moment the task is registered on the blockchain. The task stays live for 7 days so that as much data as possible can be collected; after that, the dataset is built automatically. If you provided an email, you will receive a notification with a download link.

To check the status of the task and the amount of data collected at any time, use the Get status of the task endpoint. To start building the dataset before the 7-day period completes, use the Build dataset endpoint.
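A hedged request sketch, assuming the SDK exposes this endpoint as client.gravity.CreateGravityTask with snake_case keyword arguments mirroring the body fields below; the topics, name, and contact details are placeholders:

```python
import os

import macrocosmos as mc

client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

# One task per platform/topic pair; values below are placeholders.
task = client.gravity.CreateGravityTask(
    gravity_tasks=[
        {"platform": "x", "topic": "#ai"},
        {"platform": "reddit", "topic": "r/MachineLearning"},
    ],
    name="ai-sentiment-crawl",          # optional, helps organize jobs
    notification_requests=[
        {
            "type": "email",
            "address": "you@example.com",
            "redirect_url": "https://example.com/your-redirect",
        }
    ],
)
print(task)  # keep the returned task ID to poll status or build the dataset early
```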

Body

  • gravityTasks (list of GravityTask objects): List of task objects. Each must include a topic and a platform (x, reddit, etc.).

  • name (string): Optional name for the Gravity task. Helpful for organizing jobs.

  • notificationRequests (list of NotificationRequest objects): List of notification configs. Supports type, address, and redirect_url.

Response

Get status of task

Use this endpoint to check the status of a task and the amount of data collected at any time.

If you need further information about the crawlers, use the include_crawlers flag or make separate GetCrawler() calls, since returning crawler details in bulk can be slow.
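A status-check sketch; the method name client.gravity.GetGravityTasks is an assumption, while the body fields match the table below:

```python
import os

import macrocosmos as mc

client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

status = client.gravity.GetGravityTasks(
    gravity_task_id="your-gravity-task-id",  # placeholder ID returned at task creation
    include_crawlers=False,                  # fetch crawler details separately via GetCrawler()
)
print(status)
```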

Body

  • gravity_task_id (string): The unique identifier of the Gravity task you want to inspect.

  • include_crawlers (bool): Whether to include details of the associated crawler jobs. Defaults to False.

Response

Build dataset

There is no need to wait the full 7 days for the task to complete. If you have already collected enough data, you can request your dataset early. Add a notification to be alerted when the dataset is built. Once built, the task is completed and de-registered.
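A build-request sketch; the method name client.gravity.BuildDataset and its snake_case mirrors of crawlerId, notificationRequests, and maxRows are assumptions:

```python
import os

import macrocosmos as mc

client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

build = client.gravity.BuildDataset(
    crawler_id="your-crawler-id",        # placeholder; taken from the task's crawler list
    notification_requests=[
        {
            "type": "email",
            "address": "you@example.com",
            "redirect_url": "https://example.com/your-redirect",
        }
    ],
    max_rows=100_000,                    # cap the dataset size
)
print(build)
```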

Body

  • crawlerId (string): The ID of the completed crawler job you want to convert into a dataset.

  • notificationRequests (list of NotificationRequest objects): A list of notification objects (e.g., email or webhook). Includes type, address, and redirect_url.

  • maxRows (int): The maximum number of rows to include in the dataset.

Response

Get status of a build

Track the progress of your dataset build with GetDataset(). Once built, the task is completed and de-registered.
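A polling sketch using GetDataset(); the snake_case dataset_id keyword is an assumption based on the documented datasetId field:

```python
import os

import macrocosmos as mc

client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

dataset = client.gravity.GetDataset(dataset_id="your-dataset-id")  # placeholder ID
print(dataset)  # build progress, and download details once the build has finished
```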

Body

  • datasetId (string): The ID of the dataset.

Response

Cancel requests

Use CancelDataset() to stop a build in progress. If the build has already finished, the call purges the dataset instead.
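A cancellation sketch using CancelDataset(); the snake_case dataset_id keyword is an assumption based on the documented gravityTaskId (datasetId) field:

```python
import os

import macrocosmos as mc

client = mc.GravityClient(api_key=os.environ["MACROCOSMOS_API_KEY"])

cancelled = client.gravity.CancelDataset(dataset_id="your-dataset-id")  # placeholder ID
print(cancelled)
```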

Body

  • gravityTaskId (datasetId) (string): The ID of the Gravity task (dataset) to cancel.

Response

Streaming API (On Demand Data API)

Run precise, real-time queries with the synchronous Sn13Client to fetch historical or current data based on users, keywords, and time range on platforms like X (Twitter), Reddit, and YouTube.

The Streaming API is limited to 1000 posts per request.

As of the latest data-universe release:

  • Users may select one of two post-filtering modes via the keyword_mode parameter:

    • "any" : Returns posts that contain any combination of the listed keywords.

    • "all" : Returns posts that contain all of the keywords (default, if field omitted).

  • For Reddit requests, the first keyword in the list corresponds to the requested subreddit, and subsequent keywords are treated as normal keyword filters.

  • For YouTube requests, the username field value must correspond to the YouTube channel name.

  • URL mode is mutually exclusive with usernames and keywords fields. If url is provided, usernames and keywords must be empty.

Body

  • source (string): Data source (X or Reddit).

  • usernames (array of strings, default: [], up to 10 items): List of usernames to fetch data from. Searches for posts from any of the given usernames. If usernames are not included, they are not constrained in the search parameters. For YouTube: each item in the usernames field should correspond to a YouTube channel name. Only one keyword (URL) OR one username (channel name) is allowed per request, not both.

  • keywords (array of strings, default: [], up to 5 items): List of keywords to search for. Searches for posts where all given keywords are present. If keywords are not included in the query, they are not constrained in the search parameters. For Reddit: the first keyword indicates the subreddit (r/all for cross-subreddit queries), and subsequent keywords are text matches. For YouTube: enter a video URL as a keyword to request the transcript of that video. Only one keyword (URL) OR one username (channel name) is allowed per request, not both.

  • startDate (string, optional): Start date or datetime (ISO format). Defaults to 24 hours prior to the request time if not specified. Datetimes without time information default to midnight (00:00:00); datetimes without timezone information default to UTC.

  • endDate (string, optional): End date or datetime (ISO format). Defaults to the request time if not specified. Datetimes without time information default to midnight (00:00:00); datetimes without timezone information default to UTC.

  • limit (integer, optional, default: 100, range: 1-1000): Maximum number of items to return.

  • keywordMode (string, optional, default: all, options: all, any): Selects the post-filtering mode. "any" returns posts that contain any combination of the listed keywords; "all" returns posts that contain all of the keywords.

  • url (string, optional): Single URL for URL search mode (X or YouTube). If url is provided, usernames and keywords must be empty or omitted.

Response

Request Examples
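A hedged on-demand query sketch, assuming the Sn13Client exposes the endpoint as client.sn13.OnDemandData with snake_case mirrors of the body fields above; the keywords and dates are placeholders:

```python
import os

import macrocosmos as mc

client = mc.Sn13Client(api_key=os.environ["MACROCOSMOS_API_KEY"])

response = client.sn13.OnDemandData(
    source="x",                                  # "x" or "reddit" (YouTube also supported)
    usernames=[],                                # up to 10; empty leaves usernames unconstrained
    keywords=["bittensor", "decentralized ai"],  # up to 5 keywords
    start_date="2025-01-01",                     # ISO format; see the defaults in the table above
    end_date="2025-01-07",
    limit=500,                                   # 1..1000, default 100
    keyword_mode="any",                          # "all" (default) or "any"
)
print(response)
```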
