Pumpfun Substreams

A Substreams package that retrieves events and instructions from both the Pump.fun Bonding Curve and Pump AMM programs in real time.

flowchart LR
    subgraph Solana["Solana Mainnet"]
        PUMP["fa:fa-rocket pump.fun<br/>Bonding Curve"]
        AMM["fa:fa-exchange-alt Pump AMM"]
    end

    subgraph Substreams["Substreams"]
        MAP["fa:fa-filter map_db_out"]
    end

    subgraph Sink["substreams-sink-sql"]
        PG["fa:fa-database PostgreSQL"]
        PGWEB["fa:fa-globe pgweb"]
    end

    PUMP --> MAP
    AMM --> MAP
    MAP --> PG
    PG --> PGWEB

Overview

Program                  Program ID                                    Description
pump.fun Bonding Curve   6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P   Token creation and initial trading (bonding curve)
Pump AMM                 pAMMBay6oceH9fJKBRHGP5D4bD4sWpmSwMn52FMfXEA   AMM trading after graduation

Code Generation Architecture

This project uses two code generation mechanisms:

1. Protobuf → Rust (proto/ → src/pb/)

User-defined protobuf messages in the proto/ directory are compiled to Rust code using substreams protogen.

proto/
├── program.proto      # Domain-specific messages (TradeEvent, TokenCreated, etc.)
└── database.proto     # DatabaseChanges for SQL sink

↓ substreams protogen

src/pb/
├── mod.rs
├── substreams.v1.program.rs        # Generated from program.proto
├── sf.substreams.sink.database.v1.rs  # Generated from database.proto
└── ... (other generated files)
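
For orientation, a prost-generated message in src/pb/ looks roughly like the sketch below. The field names are illustrative assumptions; the real layout is whatever proto/program.proto defines.

// Hedged sketch of what substreams protogen emits for a message such as
// TokenCreated; the fields shown here are assumptions for illustration only.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct TokenCreated {
    #[prost(string, tag = "1")]
    pub mint_address: ::prost::alloc::string::String,
    #[prost(string, tag = "2")]
    pub creator_address: ::prost::alloc::string::String,
    #[prost(uint64, tag = "3")]
    pub created_block: u64,
}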

Regenerate proto files:

substreams protogen substreams.yaml --exclude-paths=sf/substreams/rpc,sf/substreams/v1,google

2. Anchor IDL → Rust (idls/ → src/idl/)

Program IDL (Interface Definition Language) files from Anchor-based Solana programs are used to generate type-safe Rust bindings via the declare_program!() macro.

idls/
├── pump.json         # pump.fun Bonding Curve program IDL
└── pump_amm.json     # Pump AMM program IDL

↓ declare_program!() macro (compile-time)

src/idl/
└── mod.rs            # Declares pump and pump_amm modules

How it works:

The declare_program!() macro from anchor-lang automatically reads the IDL JSON files at compile time and generates:

  • events module: Type-safe event structs (e.g., CreateEvent, TradeEvent)
  • types module: Custom types defined in the program
  • accounts module: Account data types
  • program module: Program ID constant
// src/idl/mod.rs
pub mod pump {
    use anchor_lang::declare_program;
    declare_program!(pump);  // Reads idls/pump.json
    pub use self::pump::*;
}

pub mod pump_amm {
    use anchor_lang::declare_program;
    declare_program!(pump_amm);  // Reads idls/pump_amm.json
    pub use self::pump_amm::*;
}

Usage in code:

use crate::idl::pump::events::CreateEvent;
use crate::idl::pump_amm::events::BuyEvent;

// Decode events from instruction data
let create_event = CreateEvent::try_from_slice(&event_data)?;

Reference: Anchor declare_program documentation

Architecture Principles

This project follows Substreams best practices:

Block → map (facts extraction)
      → store (minimal state)
      → map (normalize → DatabaseChanges)
      → SQL sink
      → DB aggregates
      → Rule engine (outside Substreams)

Key principles:

  • map modules: Stateless, pure functions for event extraction (see the sketches after this list)
  • store modules: Minimal state only (e.g., holder_count, known_mints)
  • Substreams extracts facts only: No business logic, no alert rules
  • Database handles aggregation: 24h volume, price history, etc.
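
As a concrete illustration of the first two principles, here is a minimal sketch of a stateless map module in the style of map_pump_create. The pb module path and the TokenCreateds fields are assumptions; only the handler shape (a pure function from a Block to a protobuf message) is the point.

// Hedged sketch of a stateless map module (map_pump_create style).
use substreams::errors::Error;
use substreams_solana::pb::sf::solana::r#type::v1::Block;

use crate::pb::substreams::v1::program::TokenCreateds;

#[substreams::handlers::map]
fn map_pump_create(block: Block) -> Result<TokenCreateds, Error> {
    let mut out = TokenCreateds::default();
    for _tx in &block.transactions {
        // Decode pump.fun CreateEvent instances here and push normalized
        // TokenCreated facts into `out`; no business logic, no alert rules.
    }
    Ok(out)
}

And a store module keeping only minimal state, sketched in the spirit of store_holder_count. The input message, its fields, and the zero-crossing rule are assumptions; the idea is that only a per-mint counter is kept, not a balance history.

// Hedged sketch of a minimal-state store module (store_holder_count style).
use substreams::store::{StoreAdd, StoreAddInt64};

use crate::pb::substreams::v1::program::TokenAccountBalanceChanges;

#[substreams::handlers::store]
fn store_holder_count(changes: TokenAccountBalanceChanges, store: StoreAddInt64) {
    for c in changes.changes {
        // +1 when an account crosses from zero to a positive balance, -1 on the reverse.
        if c.pre_amount == 0 && c.post_amount > 0 {
            store.add(0, &c.mint_address, 1);
        } else if c.pre_amount > 0 && c.post_amount == 0 {
            store.add(0, &c.mint_address, -1);
        }
    }
}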

Available Events

Pump.fun Bonding Curve (from idls/pump.json)

Event            Description
CreateEvent      New token creation
TradeEvent       Buy/sell transactions
CompleteEvent    Bonding curve completion (Graduate)
SetParamsEvent   Parameter settings

Pump AMM (from idls/pump_amm.json)

Event                        Description
BuyEvent                     AMM purchase
SellEvent                    AMM sale
CreatePoolEvent              Pool creation
DepositEvent                 LP addition
WithdrawEvent                LP removal
CollectCoinCreatorFeeEvent   Creator fee collection

Normalized Output (from proto/program.proto)

Message                     Description
TokenCreated                Normalized token creation event
TradeEvent                  Unified trade event (Bonding + AMM)
TokenAccountBalanceChange   Balance changes for holder_count
WalletLabel                 Detection facts (sniper, bundler, insider)
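
The detection modules that produce WalletLabel follow the same fact-extraction pattern: they read upstream maps and stores and emit labels, leaving scoring and alerting to the database and rule engine. A rough sketch in the spirit of map_detect_sniper follows; the message layouts, field names, and the heuristic itself are illustrative assumptions, not this repo's actual rule.

// Hedged sketch of a detection map (map_detect_sniper style). Inputs follow
// the dependency graph (bonding-curve trades + store_known_mints); everything
// about the message fields and the heuristic is an assumption.
use substreams::errors::Error;
use substreams::store::{StoreGet, StoreGetString};

use crate::pb::substreams::v1::program::{TradeEvents, WalletLabel, WalletLabels};

#[substreams::handlers::map]
fn map_detect_sniper(
    trades: TradeEvents,
    known_mints: StoreGetString,
) -> Result<WalletLabels, Error> {
    let mut out = WalletLabels::default();
    for t in trades.trades {
        // Emit a fact only ("this wallet bought a mint we already track");
        // thresholds and scoring stay outside Substreams.
        if t.side == "buy" && known_mints.get_last(&t.mint_address).is_some() {
            out.labels.push(WalletLabel {
                mint_address: t.mint_address.clone(),
                wallet_address: t.wallet_address.clone(),
                label_kind: "sniper".to_string(),
                ..Default::default()
            });
        }
    }
    Ok(out)
}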

Module Dependency Graph

flowchart TD
    subgraph Input["Solana Blocks"]
        BLOCKS["solana:blocks_without_votes"]
    end

    subgraph Maps["Map Modules"]
        CREATE["map_pump_create"]
        BONDING["map_pump_bonding_trades"]
        AMM["map_pump_amm_trades"]
        TAC["map_token_account_changes"]
    end

    subgraph Stores["Store Modules"]
        MINTS["store_known_mints"]
        TRADE["store_mint_has_trade"]
        CREATOR["store_token_creator"]
        HOLDER["store_holder_count"]
    end

    subgraph Detect["Detection Modules"]
        SNIPER["map_detect_sniper"]
        BUNDLER["map_detect_bundler"]
        INSIDER["map_detect_insider"]
    end

    subgraph Sink["Sink Module"]
        DB["map_db_out"]
    end

    BLOCKS --> CREATE
    BLOCKS --> BONDING
    BLOCKS --> AMM
    BLOCKS --> TAC

    CREATE --> MINTS
    CREATE --> CREATOR
    BONDING --> TRADE
    AMM --> TRADE
    MINTS --> TRADE
    MINTS --> TAC
    TRADE --> TAC
    TAC --> HOLDER

    BONDING --> SNIPER
    MINTS --> SNIPER
    BONDING --> BUNDLER
    TAC --> INSIDER
    CREATOR --> INSIDER

    CREATE --> DB
    BONDING --> DB
    AMM --> DB
    TAC --> DB
    SNIPER --> DB
    BUNDLER --> DB
    INSIDER --> DB
    HOLDER --> DB

DAG Overview (Substreams Pipeline)

The complete data flow from Solana blocks to PostgreSQL:

flowchart LR
  subgraph A["Solana"]
    B["Blocks (without votes)"]
  end

  subgraph S["Substreams"]
    M1["map_pump_create"]
    M2["map_pump_bonding_trades"]
    M3["map_pump_amm_trades"]
    M4["map_token_account_changes"]
    ST1["store_known_mints"]
    ST2["store_mint_has_trade"]
    ST3["store_token_creator"]
    ST4["store_holder_count"]
    D1["map_detect_sniper"]
    D2["map_detect_insider"]
    D3["map_detect_bundler"]
    OUT["map_db_out (DatabaseChanges)"]
  end

  subgraph DB["PostgreSQL (sink)"]
    T["tables"]
  end

  B --> M1
  B --> M2
  B --> M3
  B --> M4

  M1 --> ST1
  M1 --> ST3
  M2 --> ST2
  M3 --> ST2
  ST1 --> ST2
  M4 --> ST4

  M2 --> D1
  ST1 --> D1
  M4 --> D2
  ST3 --> D2
  M2 --> D3

  M1 --> OUT
  M2 --> OUT
  M3 --> OUT
  M4 --> OUT
  D1 --> OUT
  D2 --> OUT
  D3 --> OUT
  ST4 --> OUT

  OUT --> T

Quick Start

Prerequisites

# Install Substreams CLI
brew install streamingfast/tap/substreams

# Install substreams-sink-sql
brew install streamingfast/tap/substreams-sink-sql

# Install buf (development only)
brew install bufbuild/buf/buf

1. Authentication Setup

# Substreams registry authentication (interactive)
substreams registry login

# Optional (non-interactive): provide the token via env or a local .substreams.env file
# echo 'SUBSTREAMS_API_TOKEN=...' > .substreams.env

2. Build

# Build WASM
cargo build --target wasm32-unknown-unknown --release

# Create package
substreams pack substreams.yaml

3. Test (GUI)

source .substreams.env

# Test the main sink module (DatabaseChanges output)
substreams gui substreams.yaml map_db_out

# Or test individual modules
substreams gui substreams.yaml map_pump_create          # Token creations
substreams gui substreams.yaml map_pump_bonding_trades  # Bonding curve trades
substreams gui substreams.yaml map_pump_amm_trades      # AMM trades
substreams gui substreams.yaml map_detect_sniper        # Sniper detection

Data Synchronization to PostgreSQL

This repo ships a DatabaseChanges sink module (map_db_out) and writes to Postgres via substreams-sink-sql.
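
For reference, a DatabaseChanges-producing module built with the substreams-database-change helper crate looks roughly like the sketch below. The input message, column names, and the single-string key are simplifying assumptions (the real map_db_out merges several inputs, and the documented trades primary key is composite).

// Hedged sketch of a DatabaseChanges sink module (map_db_out style).
use substreams::errors::Error;
use substreams_database_change::pb::database::DatabaseChanges;
use substreams_database_change::tables::Tables;

use crate::pb::substreams::v1::program::TradeEvents;

#[substreams::handlers::map]
fn map_db_out(trades: TradeEvents) -> Result<DatabaseChanges, Error> {
    let mut tables = Tables::new();
    for t in trades.trades {
        // One row per trade event, keyed by the event's position in the block.
        let key = format!("{}:{}:{}:{}", t.signature, t.ix_index, t.inner_ix_index, t.event_index);
        tables
            .create_row("trades", key)
            .set("mint_address", &t.mint_address)
            .set("sol_amount", t.sol_amount)
            .set("token_amount", t.token_amount)
            .set("side", &t.side);
    }
    Ok(tables.to_database_changes())
}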

Local (Docker) setup

Services:

  • Postgres: localhost:5432
  • pgweb: http://localhost:8081
# 1) Start Postgres + pgweb
docker compose up -d

# 2) Apply schema
# WARNING: schema.sql DROPS the `public` schema (reset script). Safe for local/dev only.
docker exec -i daiko-pumpfun-substreams-postgres psql -U dev -d main < schema.sql

# 3) Provide Substreams API token (or ensure you are logged in via `substreams registry login`)
source .substreams.env

# 4) Create sink internal tables (cursor/history)
substreams-sink-sql setup \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history

Local smoke test (short range)

If you just want to confirm "events are flowing and rows are written", run a short range and flush more frequently:

Note: substreams-sink-sql run does not accept a module name as a positional argument. The optional 3rd argument is a block range in the form <start>:<stop>. The sink output module (this repo uses map_db_out) is inferred from the package.

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  387793194:387794000 \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --batch-block-flush-interval 50 \
  --batch-row-flush-interval 5000

Local smoke test near head (auto range)

If you want to validate against the latest chain head without manually picking slots, use the Makefile helper:

# Start 50 blocks behind head and stop 200 blocks ahead of head (bounded run)
make sink-smoke

This resolves the current Solana head slot via solana slot and runs an absolute range. Note: this uses the same cursor tables as other targets. If you run it on the same database, it can advance your cursor.

Start from head and keep streaming (realtime mode)

If you want to start from the current chain head (or slightly behind head) and keep streaming:

# Start 50 blocks behind head and keep streaming (no stop block)
make sink-head HEAD_FROM_HEAD=-50

Notes:

  • This is realtime-oriented and does not backfill historical state.
  • Stores like store_known_mints / store_holder_count will only reflect what they see from the chosen start block onward. If you need fully accurate historical state, use make sink (backfill) instead.
  • substreams-sink-sql does not support an "infinite stop" in the <start>:<stop> range syntax.
    make sink-head uses a very large stop (HEAD_TO_HEAD, default: 1_000_000_000 blocks ahead of head) to behave like a long-running stream without falling back to block 0 on a fresh cursor table.

Local live streaming (no stop)

If you want to watch data in pgweb while validating pump.fun in the browser, run without a stop block.

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry

Notes:

  • You are in true "live mode" once the sink logs show live: true in stream stats.
  • If you omit the block range entirely (as above), the sink will backfill from the manifest initialBlock (or resume from _cursors if present) before it becomes live.
  • Flush behavior differs between catch-up and live:
    • Catch-up (backfill): controlled by --batch-block-flush-interval (default 1000) and --batch-row-flush-interval (default 100000).
    • Live (near head): controlled by --live-block-flush-interval (default 1).
    • If you check the database before the first catch-up flush, it can look like "0 rows". Lower the batch intervals for faster visibility.

Realtime flush tuning (best practice)

Real-time ingestion is a trade-off: lower latency requires more frequent transactions (more DB load).

Recommended starting point for Postgres (single node) if latency matters:

  • Live: --live-block-flush-interval 5 (flush every 5 blocks; at roughly 400 ms per Solana slot, that is about 2 seconds of added latency)
  • Catch-up: --batch-block-flush-interval 200 and --batch-row-flush-interval 20000

Example:

source .substreams.env

substreams-sink-sql run \
  "postgres://dev:insecure@localhost:5432/main?sslmode=disable" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --batch-block-flush-interval 200 \
  --batch-row-flush-interval 20000 \
  --live-block-flush-interval 5 \
  --infinite-retry

Tuning tips:

  • If DB CPU or WAL grows too much: increase --live-block-flush-interval (e.g. 10, 20) first.
  • If catch-up takes too long: increase batch intervals (e.g. 500/50000) to improve throughput.
  • If you need reorg safety with minimal latency, keep --undo-buffer-size 0 (default) and use a modest live flush interval.

Production (bootstrap + long-running)

On a brand-new database, run once:

# WARNING: schema.sql DROPS the `public` schema (reset script).
psql "<POSTGRES_DSN>" -f schema.sql

substreams-sink-sql setup \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history

Then start the long-running sink (backfills from the manifest initialBlock and then continues streaming in live mode):

export SUBSTREAMS_API_TOKEN="<YOUR_TOKEN>"   # set via secret manager
export DLOG="info"

substreams-sink-sql run \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry

Run it under a process supervisor (systemd, Docker, Kubernetes). If you just need a quick background run:

nohup substreams-sink-sql run \
  "<POSTGRES_DSN>" \
  ./daiko_pump_fun_substreams-v0.2.0.spkg \
  --cursors-table _cursors \
  --history-table _substreams_history \
  --endpoint mainnet.sol.streamingfast.io:443 \
  --infinite-retry \
  > sink.log 2>&1 &

Notes:

  • During initial backfill, the sink flushes in batches (defaults: 1000 blocks or 100000 rows). If you check before the first flush (or stop early), it can look like "0 rows". Lower --batch-* intervals for faster visibility.
  • If you want finalized-only data (higher latency), add --final-blocks-only.

Data Verification

pgweb (Web UI)

Open http://localhost:8081 in your browser

psql Command

# List tables
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c "\dt"

# Trade count
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c "SELECT COUNT(*) FROM trades"

# Latest trades (unified bonding curve + AMM)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT mint_address, sol_amount, token_amount, side, trade_source, wallet_address, block_timestamp
   FROM trades
   ORDER BY block DESC
   LIMIT 10"

# New token creations
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT symbol, mint_address, creator_address, bonding_curve_address, created_timestamp
   FROM tokens
   ORDER BY created_block DESC
   LIMIT 10"

# Wallet labels (sniper, bundler, insider detection)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT mint_address, wallet_address, label_kind, detected_timestamp
   FROM wallet_labels
   ORDER BY detected_block DESC
   LIMIT 10"

# Cursor position (progress)
docker exec daiko-pumpfun-substreams-postgres psql -U dev -d main -c \
  "SELECT block_num, block_id FROM _cursors"

Schema (PostgreSQL)

Schema Management (IMPORTANT)

⚠️ This project does NOT own the schema definition.

The PostgreSQL schema for substreams.* tables is owned and managed by agentic-terminal.

How to get the latest schema

# Fetch schema from agentic-terminal
make fetch-schema

# This will:
# 1. cd ../agentic-terminal
# 2. bun run export-substreams-schema
# 3. Generate schema.sql in this directory

The generated schema.sql is gitignored because it's a build artifact.

Schema update workflow

  1. Schema changes are made in agentic-terminal
  2. Run make fetch-schema in this project to get the latest schema
  3. Update Rust code (DatabaseChanges) if needed
  4. Test with the new schema

Why this architecture?

  • Type Safety: Schema is defined in TypeScript with Drizzle ORM
  • Single Source of Truth: No schema drift between projects
  • Separation of Concerns:
    • agentic-terminal = Schema definition + Query + Type generation
    • daiko_pump_fun_substreams = Data writing only

Schema Overview

For the authoritative schema definition, see agentic-terminal/packages/db/src/schema/substreams-schema.ts.

Core tables (and primary keys)

  • tokens: PK (mint_address)
  • token_metrics: PK (mint_address)
    • price_sol, mc_sol, ath_price_sol, ath_mc_sol: numeric(38,18)
    • ATH fields are maintained by DB trigger trg_token_metrics_ath_update
  • trades: PK (signature, ix_index, inner_ix_index, event_index)
    • trade_source: bonding_curve or amm
    • price_sol: numeric(38,18)
    • pool_address is populated for AMM trades
  • wallets: PK (address)

  • wallet_labels: PK (mint_address, wallet_address, label_kind)

Note: the token_account_changes table has been deprecated. Balance changes are processed internally for the holder_count computation but are not persisted to the database.

Sink internal tables

These tables are used by substreams-sink-sql for cursor tracking and are now schema-qualified:

  • substreams._cursors - Stores the latest processed cursor for resume
  • substreams._substreams_history - Stores processing history for reorg/undo support

Note: When running the sink, use schema-qualified table names:

substreams-sink-sql run ... \
  --cursors-table "substreams._cursors" \
  --history-table "substreams._substreams_history"

Publishing to Substreams Registry

Automatic Publishing (CI/CD)

This project automatically publishes to the Substreams Registry on every push to main that modifies source code.

The GitHub Actions workflow (.github/workflows/publish-spkg.yml) will:

  1. Auto-increment the patch version in substreams.yaml
  2. Build the WASM module
  3. Pack and publish to the Substreams Registry
  4. Commit the version bump back to main

Required GitHub Secrets:

Manual Publishing

If you need to publish manually:

source .substreams.env

# Build and pack first
cargo build --target wasm32-unknown-unknown --release
substreams pack substreams.yaml

# Publish (requires registry auth)
# - For local use: run `substreams registry login` once, or set SUBSTREAMS_API_TOKEN in .substreams.env
substreams publish daiko_pump_fun_substreams-v0.2.0.spkg --yes

Using the Published Package

Consumers (like agentic-terminal) can reference the published package directly:

# In substreams-sink-sql
substreams-sink-sql run \
  "$DATABASE_URL" \
  "spkg.io/daikolabs/daiko_pump_fun_substreams-v0.2.0" \
  --endpoint mainnet.sol.streamingfast.io:443

# In Docker (agentic-terminal)
SPKG=spkg.io/daikolabs/daiko_pump_fun_substreams-v0.2.0 docker compose up -d

Neither a local .spkg file nor this repository is needed to consume the package.

Development

Project Structure

.
├── Cargo.toml                    # Rust dependencies & build configuration
├── substreams.yaml               # Substreams module definitions
├── docker-compose.yaml           # PostgreSQL + pgweb for local development
├── Makefile                      # Common development commands
│
├── idls/                         # Anchor IDL files (INPUT - do not edit)
│   ├── pump.json                 # pump.fun Bonding Curve program IDL
│   └── pump_amm.json             # Pump AMM program IDL
│
├── proto/                        # User-defined Protobuf messages (INPUT)
│   ├── program.proto             # Domain events (TradeEvent, TokenCreated, etc.)
│   └── database.proto            # DatabaseChanges for SQL sink
│
└── src/
    ├── lib.rs                    # Substreams module entry points
    │
    ├── idl/                      # IDL module declarations
    │   └── mod.rs                # declare_program!() → generates types at compile time
    │
    ├── pb/                       # Generated Protobuf code (OUTPUT - do not edit)
    │   ├── mod.rs                # Module exports
    │   ├── substreams.v1.program.rs    # Generated from proto/program.proto
    │   └── sf.substreams.sink.database.v1.rs  # Generated from proto/database.proto
    │
    ├── programs/                 # Program-specific logic
    │   ├── pumpfun/              # pump.fun Bonding Curve handlers
    │   │   ├── events.rs         # Event parsing using idl::pump::events
    │   │   ├── map_create.rs     # Token creation extraction
    │   │   └── map_trades.rs     # Trade event extraction
    │   └── pump_amm/             # Pump AMM handlers
    │       ├── events.rs         # Event parsing using idl::pump_amm::events
    │       └── map_trades.rs     # AMM trade extraction
    │
    ├── modules/                  # Substreams module implementations
    │   ├── map/                  # Map modules (stateless fact extraction)
    │   ├── stores/               # Store modules (minimal state)
    │   ├── detect/               # Detection modules (sniper, bundler, insider)
    │   └── sinks/                # Sink modules (DatabaseChanges output)
    │
    └── utils/                    # Shared utilities
        ├── meta.rs               # ChainMeta construction
        ├── token_balance.rs      # Token balance change detection
        └── tx.rs                 # Transaction parsing helpers

Code Generation Commands

Regenerate Protobuf Rust code

When you modify files in proto/, regenerate Rust code:

substreams protogen substreams.yaml --exclude-paths=sf/substreams/rpc,sf/substreams/v1,google

Note: The generated files in src/pb/ should not be edited manually.

IDL (Anchor) code generation

IDL code is generated automatically at compile time by the declare_program!() macro. No manual step is required.

If you update the IDL files in idls/, simply rebuild:

cargo build --target wasm32-unknown-unknown --release

Code Formatting and Linting

Formatting

# Format code
cargo fmt

# Check formatting (no changes)
cargo fmt -- --check

# Or use Makefile
make fmt          # Format
make fmt-check    # Format check

Linting (Clippy)

# Run lint
cargo clippy

# Run lint (treat warnings as errors)
cargo clippy -- -D warnings

# Fix auto-fixable issues
cargo clippy --fix --allow-dirty --allow-staged

# Or use Makefile
make lint         # Run lint
make lint-fix     # Auto-fix
make check        # Format check + lint

Container Management

# Start
docker compose up -d

# Stop
docker compose down

# Restart (keep data)
docker compose down
docker compose up -d

# Hard reset (wipe Postgres data) + restart
# NOTE: Postgres data is bind-mounted to ./data/postgres in docker-compose.yaml
docker compose down --volumes
rm -rf ./data/postgres
docker compose up -d

# Complete removal (including volumes)
docker compose down -v

License

MIT

Troubleshooting / FAQ

_cursors / _substreams_history tables are created automatically. What are they?

Those are internal tables created by substreams-sink-sql (the SQL sink), not tables defined by this project. They exist to support:

  • Resume / checkpoints: _cursors stores the latest processed cursor so the sink can restart from the correct position.
  • History / undo: _substreams_history stores processing history, used for reorg/undo support.

Should I delete them?

In general, no. Deleting them loses progress tracking and can cause unexpected reprocessing or duplication. If you want a clean re-sync from scratch, prefer resetting the database explicitly (e.g. docker compose down -v to drop the volume and re-initialize).

Modules

Maps

  • map_pump_create (fba62821253db770320b185e964d5184a8e1acaa)
    map (solana:blocks_without_votes: sf.solana.type.v1.Block) -> substreams.v1.program.TokenCreateds
  • map_pump_graduations (231c0569692c5198ca4b4e3fe0a64f5af6de7755)
  • map_pump_bonding_trades (5a45c8eddb6229b471a6cd243133c2e9560e8ff9)
  • map_pump_amm_trades (f4878542d57d59f1cb1253dfb347373b8b647455)
  • map_token_account_changes (8a0d9755ba47d36e30ec689a59ad2f0bd44bf065)
  • map_token_transfers (2b9cef6afb39c7d14efc379f141acbeb4b42d9e5)
  • map_pump_amm_pools (85bf59ce69fa6a5e468ffe0144e4c7e202efab59)
  • map_detect_sniper (6ec0ea37a4fcd438d6a8275647321ce91545cfa0)
  • map_detect_bundler (8a689a491936c3ae9ad07e63a62e0053e6d19cbf)
  • map_detect_insider (2dafd6ad9d0310ae653f7c23bd97cda350c24e71)
  • solana:blocks_without_votes (3c0f6b9ed18876ccf0b28f53e5092b5aa82f75e1)
    map () -> sf.solana.type.v1.Block

Stores

  • store_known_mints (83436c207f06a1e9c05cbab70f73aef4d9740aec)
  • store_token_creator (ad7b7c9cc8a89b9e6088df3bda0672161f88ad09)
  • store_holder_count (d103238c82bbae1add4856d2b094259d6ea74cc5)
  • store_mint_has_trade (e8090a7a82e617c257750f1cdbabbf5e63d3ebb7)
  • store_pump_amm_pool_mints (a4b859fa41476c0205cb08d3b109782dd4296d7e)
    store <set, string>

Explore any module interactively with the published package:

substreams gui daiko-pump-fun-substreams@v0.2.6 <module_name>

Protobuf

Protobuf packages bundled in the .spkg:

  • sol.transactions.v1
  • sol.instructions.v1
  • substreams.v1.program
  • sf.solana.type.v1