daiko_pump_fun_substreams@v0.2.3
Total Downloads
341
Published
1 week ago
Publisher
posaune0423

Readme

Pumpfun Substreams

A Substreams package that extracts events and instructions from both the Pump.fun Bonding Curve and Pump AMM programs in real time.

flowchart LR
    subgraph Solana["Solana Mainnet"]
        PUMP["fa:fa-rocket pump.fun<br/>Bonding Curve"]
        AMM["fa:fa-exchange-alt Pump AMM"]
    end

    subgraph Substreams["Substreams"]
        MAP["fa:fa-filter map_db_out"]
    end

    subgraph Sink["substreams-sink-sql"]
        PG["fa:fa-database PostgreSQL"]
        PGWEB["fa:fa-globe pgweb"]
    end

    PUMP --> MAP
    AMM --> MAP
    MAP --> PG
    PG --> PGWEB

Overview

Program                  Program ID                                    Description
pump.fun Bonding Curve   6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P   Token creation and initial trading (bonding curve)
Pump AMM                 pAMMBay6oceH9fJKBRHGP5D4bD4sWpmSwMn52FMfXEA   AMM trading after graduation

Code Generation Architecture

This project uses two code generation mechanisms:

1. Protobuf → Rust (proto/ → src/pb/)

User-defined protobuf messages in proto/ directory are compiled to Rust code using substreams protogen.

proto/
├── program.proto      # Domain-specific messages (TradeEvent, TokenCreated, etc.)
└── database.proto     # DatabaseChanges for SQL sink

↓ substreams protogen

src/pb/
├── mod.rs
├── substreams.v1.program.rs        # Generated from program.proto
├── sf.substreams.sink.database.v1.rs  # Generated from database.proto
└── ... (other generated files)

Regenerate proto files:

substreams protogen substreams.yaml --exclude-paths=sf/substreams/rpc,sf/substreams/v1,google

2. Anchor IDL → Rust (idls/ → src/idl/)

Program IDL (Interface Definition Language) files from Anchor-based Solana programs are used to generate type-safe Rust bindings via the declare_program!() macro.

idls/
├── pump.json         # pump.fun Bonding Curve program IDL
└── pump_amm.json     # Pump AMM program IDL

↓ declare_program!() macro (compile-time)

src/idl/
└── mod.rs            # Declares pump and pump_amm modules

How it works:

The declare_program!() macro from anchor-lang automatically reads the IDL JSON files at compile time and generates:

  • events module: Type-safe event structs (e.g., CreateEvent, TradeEvent)
  • types module: Custom types defined in the program
  • accounts module: Account data types
  • program module: Program ID constant
// src/idl/mod.rs
pub mod pump {
    use anchor_lang::declare_program;
    declare_program!(pump);  // Reads idls/pump.json
    pub use self::pump::*;
}

pub mod pump_amm {
    use anchor_lang::declare_program;
    declare_program!(pump_amm);  // Reads idls/pump_amm.json
    pub use self::pump_amm::*;
}

Usage in code:

use crate::idl::pump::events::CreateEvent;
use crate::idl::pump_amm::events::BuyEvent;

// Decode events from instruction data
let create_event = CreateEvent::try_from_slice(&event_data)?;

Reference: Anchor declare_program documentation

Architecture Principles

This project follows Substreams best practices:

Block → map (facts extraction)
      → store (minimal state)
      → map (normalize → DatabaseChanges)
      → SQL sink
      → DB aggregates
      → Rule engine (outside Substreams)

Key principles:

  • map modules: Stateless, pure functions for event extraction (see the sketch after this list)
  • store modules: Minimal state only (e.g., holder_count, known_mints)
  • Substreams extracts facts only: No business logic, no alert rules
  • Database handles aggregation: 24h volume, price history, etc.
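
As a rough illustration of the first two roles, here is a minimal sketch of a stateless map handler and a minimal store handler. The module names match this package, but the bodies and the TokenCreateds field names are illustrative assumptions, not the actual implementations under src/modules/ and src/programs/.

use substreams::errors::Error;
use substreams::store::{StoreSetIfNotExists, StoreSetIfNotExistsString};
use substreams_solana::pb::sf::solana::r#type::v1::Block;

// Generated from proto/program.proto; the exact module path may differ.
use crate::pb::substreams::v1::program::TokenCreateds;

// Map module: a pure function from a block to extracted facts.
#[substreams::handlers::map]
fn map_pump_create(block: Block) -> Result<TokenCreateds, Error> {
    let out = TokenCreateds::default();
    for _tx in block.transactions() {
        // Decode pump.fun CreateEvent from the transaction and push a
        // normalized TokenCreated onto `out` (decoding omitted in this sketch).
    }
    Ok(out)
}

// Store module: minimal state only, e.g. remember which mints we have seen.
#[substreams::handlers::store]
fn store_known_mints(creations: TokenCreateds, store: StoreSetIfNotExistsString) {
    // `tokens` and `mint_address` are hypothetical field names.
    for t in creations.tokens {
        store.set_if_not_exists(0, &t.mint_address, &String::new());
    }
}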

Available Events

Pump.fun Bonding Curve (from idls/pump.json)

Event            Description
CreateEvent      New token creation
TradeEvent       Buy/sell transactions
CompleteEvent    Bonding curve completion (Graduate)
SetParamsEvent   Parameter settings

Pump AMM (from idls/pump_amm.json)

Event                        Description
BuyEvent                     AMM purchase
SellEvent                    AMM sale
CreatePoolEvent              Pool creation
DepositEvent                 LP addition
WithdrawEvent                LP removal
CollectCoinCreatorFeeEvent   Creator fee collection

Normalized Output (from proto/program.proto)

Message                     Description
TokenCreated                Normalized token creation event
TradeEvent                  Unified trade event (Bonding + AMM)
TokenAccountBalanceChange   Balance changes for holder_count
WalletLabel                 Detection facts (sniper, bundler, insider)

Module Dependency Graph

flowchart TD
    subgraph Input["Solana Blocks"]
        BLOCKS["solana:blocks_without_votes"]
    end

    subgraph Maps["Map Modules"]
        CREATE["map_pump_create"]
        BONDING["map_pump_bonding_trades"]
        AMM["map_pump_amm_trades"]
        TAC["map_token_account_changes"]
    end

    subgraph Stores["Store Modules"]
        MINTS["store_known_mints"]
        TIME["store_token_created_time"]
        CREATOR["store_token_creator"]
        HOLDER["store_holder_count"]
    end

    subgraph Detect["Detection Modules"]
        SNIPER["map_detect_sniper"]
        BUNDLER["map_detect_bundler"]
        INSIDER["map_detect_insider"]
    end

    subgraph Sink["Sink Module"]
        DB["map_db_out"]
    end

    BLOCKS --> CREATE
    BLOCKS --> BONDING
    BLOCKS --> AMM
    BLOCKS --> TAC

    CREATE --> MINTS
    CREATE --> TIME
    CREATE --> CREATOR
    MINTS --> TAC
    TAC --> HOLDER

    BONDING --> SNIPER
    TIME --> SNIPER
    BONDING --> BUNDLER
    TAC --> INSIDER
    CREATOR --> INSIDER

    CREATE --> DB
    BONDING --> DB
    AMM --> DB
    TAC --> DB
    SNIPER --> DB
    BUNDLER --> DB
    INSIDER --> DB
    HOLDER --> DB
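
In this graph, each detection map combines a map output with one or more store lookups (for example, map_detect_sniper reads map_pump_bonding_trades together with store_token_created_time). The sketch below only illustrates that wiring: the window constant, the heuristic, and the message/field names (TradeEvents, WalletLabels, slot, and so on) are assumptions, not the actual detection logic in src/modules/detect/.

use substreams::errors::Error;
use substreams::store::{StoreGet, StoreGetInt64};

// Illustrative message shapes; the real ones are defined in proto/program.proto.
use crate::pb::substreams::v1::program::{TradeEvents, WalletLabel, WalletLabels};

// Hypothetical window: a buy this close to token creation counts as "sniping".
const SNIPER_WINDOW_SLOTS: u64 = 5;

#[substreams::handlers::map]
fn map_detect_sniper(
    trades: TradeEvents,        // output of map_pump_bonding_trades
    created_at: StoreGetInt64,  // store_token_created_time, keyed by mint
) -> Result<WalletLabels, Error> {
    let mut labels = WalletLabels::default();
    for t in trades.trades {
        // Look up when this mint was created (slot or timestamp, depending on
        // what the store actually records).
        if let Some(created) = created_at.get_last(&t.mint_address) {
            if t.slot.saturating_sub(created as u64) <= SNIPER_WINDOW_SLOTS {
                labels.labels.push(WalletLabel {
                    mint_address: t.mint_address.clone(),
                    wallet_address: t.wallet_address.clone(),
                    label_kind: "sniper".to_string(),
                    ..Default::default()
                });
            }
        }
    }
    Ok(labels)
}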

DAG Overview (Substreams Pipeline)

The complete data flow from Solana blocks to PostgreSQL:

flowchart LR
  subgraph A["Solana"]
    B["Blocks (without votes)"]
  end

  subgraph S["Substreams"]
    M1["map_pump_create"]
    M2["map_pump_bonding_trades"]
    M3["map_pump_amm_trades"]
    M4["map_token_account_changes"]
    ST1["store_known_mints"]
    ST2["store_token_created_time"]
    ST3["store_token_creator"]
    ST4["store_holder_count"]
    D1["map_detect_sniper"]
    D2["map_detect_insider"]
    D3["map_detect_bundler"]
    OUT["map_db_out (DatabaseChanges)"]
  end

  subgraph DB["PostgreSQL (sink)"]
    T["tables"]
  end

  B --> M1
  B --> M2
  B --> M3
  B --> M4

  M1 --> ST1
  M1 --> ST2
  M1 --> ST3
  M4 --> ST4

  M2 --> D1
  ST2 --> D1
  M4 --> D2
  ST3 --> D2
  M2 --> D3

  M1 --> OUT
  M2 --> OUT
  M3 --> OUT
  M4 --> OUT
  D1 --> OUT
  D2 --> OUT
  D3 --> OUT
  ST4 --> OUT

  OUT --> T

Quick Start

Prerequisites

# Install Substreams CLI
brew install streamingfast/tap/substreams

# Install substreams-sink-sql
brew install streamingfast/tap/substreams-sink-sql

# Install buf (development only)
brew install bufbuild/buf/buf

1. Authentication Setup

# Substreams registry authentication (interactive)
substreams registry login

# Optional (non-interactive): provide the token via env or a local .substreams.env file
# echo 'SUBSTREAMS_API_TOKEN=...' > .substreams.env

2. Build

# Build WASM
cargo build --target wasm32-unknown-unknown --release

# Create package
substreams pack substreams.yaml

3. Test (GUI)

source .substreams.env

# Test the main sink module (DatabaseChanges output)
substreams gui substreams.yaml map_db_out

# Or test individual modules
substreams gui substreams.yaml map_pump_create          # Token creations
substreams gui substreams.yaml map_pump_bonding_trades  # Bonding curve trades
substreams gui substreams.yaml map_pump_amm_trades      # AMM trades
substreams gui substreams.yaml map_detect_sniper        # Sniper detection

Data Synchronization to PostgreSQL

This repo ships a DatabaseChanges sink module (map_db_out) and writes to Postgres via substreams-sink-sql.

Local (Docker) setup

Services:

  • Postgres: localhost:5432
  • pgweb: http://localhost:8081
# 1) Start Postgres + pgweb
docker compose up -d

# 2) Apply schema
# WARNING: schema.sql DROPS the `public` schema (reset script). Safe for local/dev only.
docker exec -i daiko-pumpfun-substreams-postgres psql -U dev -d main < schema.sql

# 3) Provide Substreams API token (or ensure you are logged in via `substreams registry login`)
source .substreams.env

# 4) Create sink internal tables (cursor/history)
substreams-sink-sql setup \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history

Local smoke test (short range)

If you just want to confirm "events are flowing and rows are written", run a short range and flush more frequently:

Note: substreams-sink-sql run does not accept a module name as a positional argument. The optional 3rd argument is a block range in the form <start>:<stop>. The sink output module (this repo uses map_db_out) is inferred from the package.

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  387793194:387794000 \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --batch-block-flush-interval 50 \
  --batch-row-flush-interval 5000

Local smoke test near head (auto range)

If you want to validate against the latest chain head without manually picking slots, use the Makefile helper:

# Start 50 blocks behind head and stop 200 blocks ahead of head (bounded run)
make sink-smoke

This resolves the current Solana head slot via solana slot and runs an absolute range. Note: this uses the same cursor tables as other targets. If you run it on the same database, it can advance your cursor.

Start from head and keep streaming (realtime mode)

If you want to start from the current chain head (or slightly behind head) and keep streaming:

# Start 50 blocks behind head and keep streaming (no stop block)
make sink-head HEAD_FROM_HEAD=-50

Notes:

  • This is realtime-oriented and does not backfill historical state.
  • Stores like store_known_mints / store_holder_count will only reflect what they see from the chosen start block onward. If you need fully accurate historical state, use make sink (backfill) instead.
  • substreams-sink-sql does not support an "infinite stop" in the <start>:<stop> range syntax.
    make sink-head uses a very large stop (HEAD_TO_HEAD, default: 1_000_000_000 blocks ahead of head) to behave like a long-running stream without falling back to block 0 on a fresh cursor table.

Local live streaming (no stop)

If you want to watch data in pgweb while validating pump.fun in the browser, run without a stop block.

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry

Notes:

  • You are in true "live mode" once the sink logs show live: true in stream stats.
  • If you omit the block range entirely (as above), the sink will backfill from the manifest initialBlock (or resume from _cursors if present) before it becomes live.
  • Flush behavior differs between catch-up and live:
    • Catch-up (backfill): controlled by --batch-block-flush-interval (default 1000) and --batch-row-flush-interval (default 100000).
    • Live (near head): controlled by --live-block-flush-interval (default 1).
    • If you check the database before the first catch-up flush, it can look like "0 rows". Lower the batch intervals for faster visibility.

Realtime flush tuning (best practice)

Real-time ingestion is a trade-off: lower latency requires more frequent transactions (more DB load).

Recommended starting point for Postgres (single node) if latency matters:

  • Live: --live-block-flush-interval 5 (flush every 5 blocks; typically a few seconds on Solana)
  • Catch-up: --batch-block-flush-interval 200 and --batch-row-flush-interval 20000

Example:

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --batch-block-flush-interval 200 \
  --batch-row-flush-interval 20000 \
  --live-block-flush-interval 5 \
  --infinite-retry

Tuning tips:

  • If DB CPU or WAL grows too much: increase --live-block-flush-interval (e.g. 10, 20) first.
  • If catch-up takes too long: increase batch intervals (e.g. 500/50000) to improve throughput.
  • If you need reorg safety with minimal latency, keep --undo-buffer-size 0 (default) and use a modest live flush interval.

Production (bootstrap + long-running)

On a brand-new database, run once:

# WARNING: schema.sql DROPS the `public` schema (reset script).
psql "<POSTGRES_DSN>" -f schema.sql

substreams-sink-sql setup \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history

Then start the long-running sink (backfills from the manifest initialBlock and then continues streaming in live mode):

export SUBSTREAMS_API_TOKEN="<YOUR_TOKEN>"   # set via secret manager
export DLOG="info"

substreams-sink-sql run \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry

Run it under a process supervisor (systemd, Docker, Kubernetes). If you just need a quick background run:

nohup substreams-sink-sql run \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry \
  > sink.log 2>&1 &

Notes:

  • During initial backfill, the sink flushes in batches (defaults: 1000 blocks or 100000 rows). If you check before the first flush (or stop early), it can look like "0 rows". Lower --batch-* intervals for faster visibility.
  • If you want finalized-only data (higher latency), add --final-blocks-only.

Data Verification

pgweb (Web UI)

Open http://localhost:8081 in your browser

psql Command

# List tables
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c "\dt"

# Trade count
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c "SELECT COUNT(*) FROM trades"

# Latest trades (unified bonding curve + AMM)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT mint_address, sol_amount, token_amount, side, trade_source, wallet_address, block_timestamp
   FROM trades
   ORDER BY block DESC
   LIMIT 10"

# New token creations
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT symbol, mint_address, creator_address, bonding_curve_address, created_timestamp
   FROM tokens
   ORDER BY created_block DESC
   LIMIT 10"

# Token account balance changes
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT mint_address, owner, pre_balance, post_balance, change_amount, block_timestamp
   FROM token_account_changes
   ORDER BY block DESC
   LIMIT 10"

# Wallet labels (sniper, bundler, insider detection)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT mint_address, wallet_address, label_kind, detected_timestamp
   FROM wallet_labels
   ORDER BY detected_block DESC
   LIMIT 10"

# Cursor position (progress)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT block_num, block_id FROM _cursors"

Schema (PostgreSQL)

Schema Management (IMPORTANT)

⚠️ This project does NOT own the schema definition.

The PostgreSQL schema for substreams.* tables is owned and managed by agentic-terminal.

How to get the latest schema

# Fetch schema from agentic-terminal
make fetch-schema

# This will:
# 1. cd ../agentic-terminal
# 2. bun run export-substreams-schema
# 3. Generate schema.sql in this directory

The generated schema.sql is gitignored because it's a build artifact.

Schema update workflow

  1. Schema changes are made in agentic-terminal
  2. Run make fetch-schema in this project to get the latest schema
  3. Update the Rust code (DatabaseChanges output) if needed, as sketched after this list
  4. Test with the new schema
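
A hedged sketch of what step 3 can look like, assuming the substreams-database-change helper crate (Tables) and illustrative TokenCreateds field names; the actual map_db_out in this repo may construct the generated sf.substreams.sink.database.v1 types differently:

use substreams::errors::Error;
use substreams_database_change::pb::database::DatabaseChanges;
use substreams_database_change::tables::Tables;

use crate::pb::substreams::v1::program::TokenCreateds;

#[substreams::handlers::map]
fn map_db_out(creations: TokenCreateds /* plus the other module outputs */) -> Result<DatabaseChanges, Error> {
    let mut tables = Tables::new();
    // Column names must match the schema exported from agentic-terminal (schema.sql).
    for t in creations.tokens {
        tables
            .create_row("tokens", t.mint_address.clone()) // PK: mint_address
            .set("symbol", t.symbol)
            .set("creator_address", t.creator_address)
            .set("created_timestamp", t.created_timestamp);
    }
    Ok(tables.to_database_changes())
}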

Why this architecture?

  • Type Safety: Schema is defined in TypeScript with Drizzle ORM
  • Single Source of Truth: No schema drift between projects
  • Separation of Concerns:
    • agentic-terminal = Schema definition + Query + Type generation
    • daiko_pump_fun_substreams = Data writing only

Related Projects

  • agentic-terminal: owns the PostgreSQL schema (see Schema Management above) and consumes the data written by this package

Schema Overview

For the authoritative schema definition, see agentic-terminal/packages/db/src/schema/substreams-schema.ts.

Core tables (and primary keys)

  • tokens: PK (mint_address)
  • token_metrics: PK (mint_address)
    • price_sol, mc_sol, ath_price_sol, ath_mc_sol: numeric(38,18)
    • ATH fields are maintained by DB trigger trg_token_metrics_ath_update
  • trades: PK (signature, ix_index, inner_ix_index, event_index); a composite-key sketch follows this list
    • trade_source: bonding_curve or amm
    • price_sol: numeric(38,18)
    • pool_address is populated for AMM trades
  • token_account_changes: PK (signature, account_index)
  • wallets: PK (address)
  • wallet_labels: PK (mint_address, wallet_address, label_kind)
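
For a composite primary key like trades, the writer side passes the key as column/value pairs when creating the row. A hedged sketch, again assuming the substreams-database-change Tables helper and illustrative trade field names:

use substreams_database_change::tables::Tables;

use crate::pb::substreams::v1::program::TradeEvent;

// `trade` stands for one decoded trade; its field names are illustrative.
fn write_trade_row(tables: &mut Tables, trade: &TradeEvent) {
    tables
        .create_row(
            "trades",
            [
                ("signature", trade.signature.to_string()),
                ("ix_index", trade.ix_index.to_string()),
                ("inner_ix_index", trade.inner_ix_index.to_string()),
                ("event_index", trade.event_index.to_string()),
            ],
        )
        .set("mint_address", trade.mint_address.to_string())
        .set("sol_amount", trade.sol_amount)
        .set("side", trade.side.to_string())
        .set("trade_source", trade.trade_source.to_string());
}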

Sink internal tables

These tables are used by substreams-sink-sql for cursor tracking and are now schema-qualified:

  • substreams._cursors - Stores the latest processed cursor for resume
  • substreams._substreams_history - Stores processing history for reorg/undo support

Note: When running the sink, use schema-qualified table names:

substreams-sink-sql run ... \
  --cursors-table "substreams._cursors" \
  --history-table "substreams._substreams_history"

Publishing to Substreams Registry

Automatic Publishing (CI/CD)

This project automatically publishes to the Substreams Registry on every push to main that modifies source code.

The GitHub Actions workflow (.github/workflows/publish-spkg.yml) will:

  1. Auto-increment the patch version in substreams.yaml
  2. Build the WASM module
  3. Pack and publish to the Substreams Registry
  4. Commit the version bump back to main

Required GitHub Secrets:

Manual Publishing

If you need to publish manually:

source .substreams.env

# Build and pack first
cargo build --target wasm32-unknown-unknown --release
substreams pack substreams.yaml

# Publish (requires registry auth)
# - For local use: run `substreams registry login` once, or set SUBSTREAMS_API_TOKEN in .substreams.env
substreams publish daiko_pump_fun_substreams-v0.2.0.spkg --yes

Using the Published Package

Consumers (like agentic-terminal) can reference the published package directly:

# In substreams-sink-sql
substreams-sink-sql run \
  "$DATABASE_URL" \
  "spkg.io/daikolabs/daiko_pump_fun_substreams-v0.2.0" \
  --endpoint mainnet.sol.streamingfast.io:443

# In Docker (agentic-terminal)
SPKG=spkg.io/daikolabs/daiko_pump_fun_substreams-v0.2.0 docker compose up -d

Neither a local .spkg file nor this repository is needed to consume the package.

Development

Project Structure

.
├── Cargo.toml                    # Rust dependencies & build configuration
├── substreams.yaml               # Substreams module definitions
├── docker-compose.yaml           # PostgreSQL + pgweb for local development
├── Makefile                      # Common development commands
│
├── idls/                         # Anchor IDL files (INPUT - do not edit)
│   ├── pump.json                 # pump.fun Bonding Curve program IDL
│   └── pump_amm.json             # Pump AMM program IDL
│
├── proto/                        # User-defined Protobuf messages (INPUT)
│   ├── program.proto             # Domain events (TradeEvent, TokenCreated, etc.)
│   └── database.proto            # DatabaseChanges for SQL sink
│
└── src/
    ├── lib.rs                    # Substreams module entry points
    │
    ├── idl/                      # IDL module declarations
    │   └── mod.rs                # declare_program!() → generates types at compile time
    │
    ├── pb/                       # Generated Protobuf code (OUTPUT - do not edit)
    │   ├── mod.rs                # Module exports
    │   ├── substreams.v1.program.rs    # Generated from proto/program.proto
    │   └── sf.substreams.sink.database.v1.rs  # Generated from proto/database.proto
    │
    ├── programs/                 # Program-specific logic
    │   ├── pumpfun/              # pump.fun Bonding Curve handlers
    │   │   ├── events.rs         # Event parsing using idl::pump::events
    │   │   ├── map_create.rs     # Token creation extraction
    │   │   └── map_trades.rs     # Trade event extraction
    │   └── pump_amm/             # Pump AMM handlers
    │       ├── events.rs         # Event parsing using idl::pump_amm::events
    │       └── map_trades.rs     # AMM trade extraction
    │
    ├── modules/                  # Substreams module implementations
    │   ├── map/                  # Map modules (stateless fact extraction)
    │   ├── stores/               # Store modules (minimal state)
    │   ├── detect/               # Detection modules (sniper, bundler, insider)
    │   └── sinks/                # Sink modules (DatabaseChanges output)
    │
    └── utils/                    # Shared utilities
        ├── meta.rs               # ChainMeta construction
        ├── token_balance.rs      # Token balance change detection
        └── tx.rs                 # Transaction parsing helpers

Code Generation Commands

Regenerate Protobuf Rust code

When you modify files in proto/, regenerate Rust code:

substreams protogen substreams.yaml --exclude-paths=sf/substreams/rpc,sf/substreams/v1,google

Note: The generated files in src/pb/ should not be edited manually.

IDL (Anchor) code generation

IDL code is generated automatically at compile time by the declare_program!() macro. No manual step is required.

If you update the IDL files in idls/, simply rebuild:

cargo build --target wasm32-unknown-unknown --release

Code Formatting and Linting

Formatting

# Format code
cargo fmt

# Check formatting (no changes)
cargo fmt -- --check

# Or use Makefile
make fmt          # Format
make fmt-check    # Format check

Linting (Clippy)

# Run lint
cargo clippy

# Run lint (treat warnings as errors)
cargo clippy -- -D warnings

# Fix auto-fixable issues
cargo clippy --fix --allow-dirty --allow-staged

# Or use Makefile
make lint         # Run lint
make lint-fix     # Auto-fix
make check        # Format check + lint

Container Management

# Start
docker compose up -d

# Stop
docker compose down

# Restart (keep data)
docker compose down
docker compose up -d

# Hard reset (wipe Postgres data) + restart
# NOTE: Postgres data is bind-mounted to ./data/postgres in docker-compose.yaml
docker compose down --volumes
rm -rf ./data/postgres
docker compose up -d

# Complete removal (including volumes)
docker compose down -v

License

MIT

Troubleshooting / FAQ

_cursors / _substreams_history tables are created automatically. What are they?

Those are internal tables created by substreams-sink-sql (the SQL sink), not tables defined by this project. They exist to support:

  • Resume / checkpoints: _cursors stores the latest processed cursor so the sink can restart from the correct position.
  • History / undo: _substreams_history stores processing history, used for reorg/undo support.

Should I delete them?

In general, no. Deleting them loses progress tracking and can cause unexpected reprocessing or duplication. If you want a clean re-sync from scratch, prefer resetting the database explicitly (e.g. docker compose down -v to drop the volume and re-initialize).

Documentation

Modules

Maps

map
map_pump_create

dbf7bd41b5e71afa3c878412c10f03c161ae1a0f
map map_pump_create (
solana:blocks_without_votes: sf.solana.type.v1.Block
)  -> substreams.v1.program.TokenCreateds
substreams gui daiko-pump-fun-substreams@v0.2.3 map_pump_create

map
map_pump_graduations

8c894bbdc2fb686332f59f9e04c4c9d1fd5519c5
substreams gui daiko-pump-fun-substreams@v0.2.3 map_pump_graduations

map
map_pump_bonding_trades

ee3d80ece3fbbae967f450c6d115d17feb6e6056
substreams gui daiko-pump-fun-substreams@v0.2.3 map_pump_bonding_trades

map
map_pump_amm_trades

3380b030f3f9c4a9ef54fefc5b33f3053068eadc
substreams gui daiko-pump-fun-substreams@v0.2.3 map_pump_amm_trades

map
map_token_account_changes

4f75c61c87af539790f6f02122700221f8fb5333
substreams gui daiko-pump-fun-substreams@v0.2.3 map_token_account_changes

map
map_token_transfers

1972cd3ae88a681a333c575bb6effd2ee4658512
substreams gui daiko-pump-fun-substreams@v0.2.3 map_token_transfers

map
map_pump_amm_pools

2c7ca124f836fc31c34681ce6b123cba80d9cc9c
substreams gui daiko-pump-fun-substreams@v0.2.3 map_pump_amm_pools

map
map_detect_sniper

d0b038b49d414b80bb816719e8df4f18e3eb9ab9
substreams gui daiko-pump-fun-substreams@v0.2.3 map_detect_sniper

map
map_detect_bundler

d72339ccc79b1e4fabb40b08aa8b2d5ff6f36f51
substreams gui daiko-pump-fun-substreams@v0.2.3 map_detect_bundler

map
map_detect_insider

fde18a2a4c1c0be5d3bb7024a62433f7eaf100ae
substreams gui daiko-pump-fun-substreams@v0.2.3 map_detect_insider

map
solana:blocks_without_votes

1e7b653af0a4d6dc0c5bebd4741fe0cc3c1a006b
map solana:blocks_without_votes (
)  -> sf.solana.type.v1.Block
substreams gui daiko-pump-fun-substreams@v0.2.3 solana:blocks_without_votes
Stores

store
store_known_mints

e48d3851c963a5db772c882878b07b60dbf8ed1f
substreams gui daiko-pump-fun-substreams@v0.2.3 store_known_mints

store
store_token_created_time

11c9820960c3784ce5cedd436cc9a483620ad0a1
substreams gui daiko-pump-fun-substreams@v0.2.3 store_token_created_time

store
store_token_creator

edf9afe1c1687d74d2d6006219727e291e6fe1ae
substreams gui daiko-pump-fun-substreams@v0.2.3 store_token_creator

store
store_holder_count

a925ee403e80b03f07e06a0148a83ea2ce227ade
substreams gui daiko-pump-fun-substreams@v0.2.3 store_holder_count

store
store_pump_amm_pool_mints

a127db9f65046256e5a080c2f95030327df8dde5
store <set,string> store_pump_amm_pool_mints (
)
substreams gui daiko-pump-fun-substreams@v0.2.3 store_pump_amm_pool_mints
Protobuf


sf.solana.type.v1
sol.instructions.v1
sol.transactions.v1
substreams.v1.program