SANDRAIL Core: Community Rewards Standard and Distribution Toolkit for The Sandbox

Summary

Creators and community organizers repeatedly need a reliable way to run community rewards (e.g., event prizes, experience challenges) without rebuilding the same plumbing each time. Today, many reward programs still rely on manual steps (collect wallets, validate entries, distribute rewards), which creates delays, inconsistencies, and disputes.

SANDRAIL Core is an open-source standard + reference toolkit that makes community reward distribution more consistent and auditable without requiring private Sandbox APIs. It provides:

  • a small Reward Manifest Standard (RMS), a JSON “receipt” describing what was rewarded and how,
  • a Proof Intake & Review module (optional) to collect and review completion evidence,
  • a Distribution Pack Generator (Merkle root + proofs + checksums) for Polygon distributions,
  • and integration adapters (SDK + webhooks + embed widget) so other ecosystem tools can reuse this work.

SANDRAIL Core is not a new quest platform, and it does not present itself as an official Sandbox claims system.

Problem

The Sandbox already shows that rewards are a core participation loop. Official rewards are claimed through a structured flow, on Polygon, with KYC requirements.

But creators and community organizers still lack reliable, reusable primitives for running community rewards. That forces incentives into improvised workflows instead of standard infrastructure.

Why this gap is a real blocker

  1. Expectation mismatch
    Players experience structured official rewards. Community programs often cannot match that consistency, which weakens repeat participation.

  2. Creators are explicitly asking for the missing pieces
    Creators keep requesting creator-accessible distribution and claim tooling, leaderboards, and APIs because the gap is still there.

  3. API access is not a safe dependency
    Public discussions show that key data and gameplay signals are not broadly available, and some are not on the short-term roadmap, so community tooling that depends on official APIs is fragile.

  4. Rewards fall back to manual pipelines
    Organizers end up with wallet collection, spreadsheets, manual validation, and manual distribution steps, increasing delays and disputes.

  5. Trust erodes
    Scattered forms and one-off links force players to judge legitimacy on their own. Community members already warn against trusting random links.

Why this needs action now

Manual reward operations and uncertain API paths keep repeating the same pain. This blocks scalable community campaigns and undermines the creator growth loop the ecosystem wants.

Goals and non-goals

Goals

  • Standardize community reward campaigns

    • RMS schema covers: campaign info, eligibility source, review decisions, winners list, distribution artifacts, integrity hashes
  • Make rewards repeatable and auditable

    • Same inputs produce the same outputs, reducing disputes and rework
  • Support proof-based campaigns without private APIs

    • Optional proof intake and review flow that outputs an auditable approved winners list
  • Make integration simple for existing tools

    • TypeScript SDK, signed webhooks, embed widget

Non-goals

  • Not a quest or mission platform

  • Not a leaderboard engine

    • TOP reward mode imports rankings from elsewhere
  • Not an official Sandbox Claims replacement

    • No KYC, no official branding
  • Not dependent on private Sandbox APIs

Proposed solution

RMS v1: Reward Manifest Standard

  • A small versioned JSON schema that records:

    • Campaign: name, organizer label, start and end window, distribution mode
    • Eligibility: allowlist import or export from reviewed submissions
    • Artifacts: token or NFT identifiers, Merkle root, proofs bundle hash, winners list hash
    • Integrity: schema version, timestamps, file hashes
  • Purpose: any tool can generate and validate the same format and publish a transparency bundle
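
For illustration, here is a minimal sketch of what an RMS v1 manifest could look like, expressed as a TypeScript type plus example values. All field names and placeholders are assumptions about the intent above, not the frozen schema (that is a Milestone 1 deliverable):

```typescript
// Sketch of an RMS v1 manifest as a TypeScript type. Field names are
// illustrative assumptions; the final schema is pinned down in Milestone 1.

type DistributionMode = "ALL" | "FCFS" | "RAFFLE" | "TOP";

interface RewardManifest {
  schemaVersion: string;                    // e.g. "rms-1.0"
  campaign: {
    name: string;
    organizer: string;                      // display label, not an official identity
    window: { start: string; end: string }; // ISO 8601 timestamps
    mode: DistributionMode;
  };
  eligibility: {
    source: "allowlist-import" | "reviewed-submissions";
  };
  artifacts: {
    token: string;                          // token or NFT identifier on Polygon
    merkleRoot: string;                     // 0x-prefixed root over the winners list
    proofsBundleHash: string;               // sha256 of the proofs bundle
    winnersListHash: string;                // sha256 of winners.csv
  };
  integrity: {
    generatedAt: string;                    // ISO 8601 generation timestamp
  };
}

const example: RewardManifest = {
  schemaVersion: "rms-1.0",
  campaign: {
    name: "Spring Build Challenge",
    organizer: "Example Community Guild",
    window: { start: "2025-03-01T00:00:00Z", end: "2025-03-15T00:00:00Z" },
    mode: "RAFFLE",
  },
  eligibility: { source: "reviewed-submissions" },
  artifacts: {
    token: "polygon:0x…",        // placeholder identifier
    merkleRoot: "0x…",           // placeholder root
    proofsBundleHash: "sha256:…",
    winnersListHash: "sha256:…",
  },
  integrity: { generatedAt: "2025-03-16T12:00:00Z" },
};
```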

Proof Intake and Review: optional module

  • Submission: evidence plus wallet signature that binds submission to wallet
  • Review: approve, reject, flag
  • Reasons: reason codes plus audit log
  • Export: approved winners to CSV for distribution
  • Evidence v1: structured screenshot receipt plus optional event code
  • Helper checks: optional OCR extraction as reviewer aid only
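
A minimal sketch of the wallet-signature binding, assuming an EIP-191 personal signature (ethers v6) over a canonical submission message; the payload shape and field names are hypothetical:

```typescript
// Sketch: bind a proof submission to a wallet with an EIP-191 personal
// signature, then verify it server-side. Uses ethers v6; the payload
// shape is illustrative, not a fixed API.
import { verifyMessage } from "ethers";

interface ProofSubmission {
  campaignId: string;
  wallet: string;        // claimed submitter address
  evidenceHash: string;  // sha256 of the uploaded screenshot receipt
  eventCode?: string;    // optional organizer-issued event code
  signature: string;     // personal signature over the canonical message
}

// Canonical message the wallet signs; any change invalidates the signature.
function canonicalMessage(s: Omit<ProofSubmission, "signature">): string {
  return `SANDRAIL proof submission\ncampaign:${s.campaignId}\nevidence:${s.evidenceHash}\nwallet:${s.wallet.toLowerCase()}`;
}

// Server-side check: the recovered signer must match the claimed wallet.
function verifySubmission(s: ProofSubmission): boolean {
  const recovered = verifyMessage(canonicalMessage(s), s.signature);
  return recovered.toLowerCase() === s.wallet.toLowerCase();
}
```

Because the signature covers the campaign ID and evidence hash, a submission cannot be replayed into another campaign or attached to someone else's evidence.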

Distribution Pack Generator

  • Inputs: winners.csv plus campaign config
  • Outputs: Merkle root, proofs bundle, RMS manifest, transparency hashes
  • Interface: CLI plus TypeScript SDK
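
A minimal sketch of the winners.csv → Merkle root step, assuming the common merkle-distributor leaf convention (keccak256 over address and amount) with sorted-pair hashing and ethers v6. The real generator would also emit per-wallet proofs and the RMS manifest:

```typescript
// Sketch: compute a Merkle root from a winners list using sorted-pair
// hashing over keccak256(address, amount) leaves, following the common
// merkle-distributor pattern.
import { readFileSync } from "node:fs";
import { createHash } from "node:crypto";
import { keccak256, solidityPackedKeccak256, concat } from "ethers";

// Leaf: keccak256(abi.encodePacked(address, amount)).
function leaf(address: string, amount: bigint): string {
  return solidityPackedKeccak256(["address", "uint256"], [address, amount]);
}

// Hash each pair in sorted order so proofs need no left/right flags.
function hashPair(a: string, b: string): string {
  return a.toLowerCase() < b.toLowerCase()
    ? keccak256(concat([a, b]))
    : keccak256(concat([b, a]));
}

function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("empty winners list");
  let level = [...leaves];
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // An odd node at the end of a level is carried up unchanged.
      next.push(i + 1 < level.length ? hashPair(level[i], level[i + 1]) : level[i]);
    }
    level = next;
  }
  return level[0];
}

// winners.csv: one "address,amount" row per line (simplified; no header).
const rows = readFileSync("winners.csv", "utf8").trim().split("\n");
const leaves = rows.map((r) => {
  const [address, amount] = r.split(",");
  return leaf(address.trim(), BigInt(amount.trim()));
});

console.log("merkleRoot:", merkleRoot(leaves));
// Transparency hash for the manifest: sha256 over the raw winners file.
console.log("winnersListHash: sha256:" +
  createHash("sha256").update(readFileSync("winners.csv")).digest("hex"));
```

Sorted-pair hashing is a deliberate choice: verifiers can check a proof without knowing whether each sibling sat on the left or the right, which keeps proof bundles and on-chain verification simple.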

Integration layer

  • Webhooks: submission.created, submission.reviewed, distribution.ready, campaign.closed
  • SDK: create campaign, upload winners, generate packs, fetch manifests
  • Widget: submit proof, check status, view distribution info
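
As a sketch of what “signed webhooks” could mean for a consuming tool, here is HMAC-SHA256 verification over the raw request body. The header name, secret source, and event payload shape are assumptions, not a published spec:

```typescript
// Sketch: verify a SANDRAIL webhook with an HMAC-SHA256 signature over
// the raw request body, then dispatch on the event types listed above.
import { createHmac, timingSafeEqual } from "node:crypto";

const WEBHOOK_SECRET = process.env.SANDRAIL_WEBHOOK_SECRET ?? "";

// Returns true if the hex signature header matches the body's HMAC.
function verifyWebhook(rawBody: string, signatureHeader: string): boolean {
  const expected = createHmac("sha256", WEBHOOK_SECRET)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHeader, "hex");
  // Constant-time compare to avoid timing side channels.
  return a.length === b.length && timingSafeEqual(a, b);
}

function handleEvent(rawBody: string, signature: string): void {
  if (!verifyWebhook(rawBody, signature)) throw new Error("bad signature");
  const event = JSON.parse(rawBody) as { type: string; data: unknown };
  switch (event.type) {
    case "submission.created":
    case "submission.reviewed":
    case "distribution.ready":
    case "campaign.closed":
      // ...update campaign state in the consuming tool...
      break;
  }
}
```

Constant-time comparison matters here: a naive string compare can leak how many leading bytes of a forged signature are correct.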

Milestones and budget

Milestone 1: RMS v1 plus validator plus generator core

  • Scope

    • RMS v1 schema and examples for ALL, FCFS, RAFFLE, TOP
    • Validator tool
    • Generator core: winners.csv to Merkle root, proofs, manifest, bundle hashes
    • Docs: runbook for organizers and integrators
  • Budget: $7,500

  • Acceptance

    • Given a sample winners.csv and config, a third party can validate the manifest and reproduce the bundle hash using documented steps
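
For example, a third-party check of the winners list hash could be as simple as the following sketch (assuming the sha256 transparency hashes described above; file names are illustrative):

```typescript
// Sketch: independent verification that a published winners.csv matches
// the hash recorded in an RMS manifest.
import { readFileSync } from "node:fs";
import { createHash } from "node:crypto";

const manifest = JSON.parse(readFileSync("manifest.json", "utf8"));
const actual = "sha256:" +
  createHash("sha256").update(readFileSync("winners.csv")).digest("hex");

if (actual === manifest.artifacts.winnersListHash) {
  console.log("winners.csv matches the manifest");
} else {
  console.error(`mismatch: expected ${manifest.artifacts.winnersListHash}, got ${actual}`);
  process.exit(1);
}
```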

Milestone 2: Proof Intake and Review module

  • Scope

    • Submission API and storage
    • Review queue UI: approve, reject, flag with reason codes
    • Audit log for review actions
    • Export approved winners.csv
    • Basic anti-abuse: rate limits, duplicate submission checks
  • Budget: $9,000

  • Acceptance

    • Demo with 100 test submissions: submissions accepted, review decisions recorded with reason codes, and winners.csv exported with an audit trail

Milestone 3: Integration adapters and embed widget

  • Scope

    • Signed webhooks with retries
    • TypeScript SDK documentation and examples
    • Embeddable widget: submit proof, check status, view distribution artifacts
    • One reference integration demo using webhooks or SDK
  • Budget: $5,500

  • Acceptance

    • Reference integration receives signed webhooks or uses SDK and displays campaign status plus distribution artifacts

Milestone 4: Pilots, QA hardening, and security review

  • Scope

    • Pilot A: allowlist-based campaign produces a published transparency bundle
    • Pilot B: proof-review campaign produces a published transparency bundle
    • QA hardening and threat model notes focused on distribution flow
    • Focused security review of claim and distribution flow
    • Final report and handoff docs for self-hosting and operations
  • Budget: $3,000

  • Acceptance

    • Two pilots completed with published manifests and transparency hashes, plus a short debrief and operator playbook

Total: $25,000

Team

We are Dapps over Apps, a collective focused on developer tooling and creator-facing utilities across Web3 ecosystems.

Selected projects

  • We created a VoxEdit → Unity/Roblox asset converter for Sandbox creators and gamers, allowing Sandbox-style assets to be used in other engines.
  • We built a local testing patch for Arbitrum that adds native support for Arbitrum precompiles (ArbSys at 0x64, ArbGasInfo at 0x6c) and transaction type 0x7e (deposit transactions) to Hardhat and Foundry (Anvil).

Project Website: https://www.ox-rollup.com

  • We created a Retrieval Utility for Filecoin that tests CID retrieval performance across multiple public gateways:

Filecoin Retrieval Tester

  • We have also worked on Zeckit, a Zcash-focused tooling project, as part of broader research and experimentation around privacy-preserving and compliance-aware tools.

Hi community, tagging @delegates for visibility on this proposal. @theKuntaMC @meowl @ixura @shont @hishmad

I really appreciate the time you took to map this out for us.

Before I form my final opinion, I’d like to clarify a few points:

  1. Market validation: You mention “Creators keep requesting creator-accessible distribution and claim tooling, leaderboards, and APIs” — do you have specific data on how many creators are asking for this, or examples of creators who’ve committed to using SANDRAIL Core once it exists?
  2. Competition/alternatives: Are there existing tools (in Sandbox or other ecosystems) that solve parts of this problem? Why would creators choose SANDRAIL Core over building custom solutions or waiting for official tooling?
  3. Adoption path: What’s your strategy for getting initial users? Will you be working with specific community organizers or creators during the pilot phase?
  4. Sustainability: This is open-source infrastructure. After the $25k grant, how will ongoing maintenance, security updates, and feature development be funded?
  5. Sandbox relationship: Have you discussed this with The Sandbox team? Is there any risk this duplicates or conflicts with their roadmap?

Hi @hishmad, thank you so much for the questions. Below are the answers to your questions:

Market validation

We don’t have an official “how many creators want this” number from The Sandbox, so we won’t guess. What we can show publicly is that the same needs keep coming up, and teams are already stuck using manual workarounds.

• Creators are asking for the exact toolkit pieces: distribution and claim, leaderboards, EP seasons, and APIs. In the same thread, a studio operator says contests are “very difficult and tedious” and they can’t keep running “professional” contests with Google Forms while “hoping no one is manipulating data.”
Evidence: https://forum.sandboxdao.com/t/sip-creator-toolkit-release/689

• The “Sandbox API Asks” thread shows why builders can’t plan around “we’ll just use the official API” as a near-term dependency.
Evidence: https://forum.sandboxdao.com/t/sandbox-api-asks/2069?page=3

• The Chrysalis SIP discussion lays out the real world reward pipeline many organizers end up doing: collecting wallets, spreadsheets, validation, distribution, then verifying delivery.
Evidence: https://forum.sandboxdao.com/t/sip-24-chrysalis-quest-the-sandbox-communities-platform/1891?page=2

On commitments: we won’t claim signed “we’ll use it” promises we don’t have. We’ll prove demand with two pilots during delivery and publish simple numbers like campaigns run, submissions processed, and distributions completed.

Competition and alternatives

• Wait for official tooling: ideal in the long term, but not something community builders can schedule against today.
• Custom builds: work for one event, then you rebuild the same pipeline again for the next one.
• Payout tools: can help send tokens or NFTs, but they don’t standardize eligibility, review decisions, or transparency artifacts.
• Mission platforms: useful, but they still benefit from a shared, portable standard for reward artifacts that other tools can also use.

SANDRAIL Core is meant to be the shared layer: a standard, a pack generator, and plug-ins so community tools stop rebuilding the same reward plumbing.

Adoption path

• Pilot A (fast path): a real organizer with a ready winners list runs an allowlist distribution and publishes the artifacts.
• Pilot B (proof path): a real event that already collects submissions uses proof intake and review to produce an auditable winners list without private APIs.
• If no organizer is confirmed by the end of week 2: we run Pilot A as a small DAO test campaign with a limited allowlist, so the work still ships and the artifacts still get published.
• We focus on adoption through integration, not “please use our site”: SDK, signed webhooks, and an embed widget for existing community pages.

Sustainability

• Named maintainers, public triage rules, and a release process.
• During post-delivery support: critical security issues triaged within 72 hours.
• After launch: monthly maintenance releases for 3 months for bugfixes and small hardening updates.
• Longer-term funding: small follow-on bounties or SIPs for specific work, plus optional paid support for teams that want help hosting or running larger campaigns.
• Security stays realistic for the budget by keeping custom on-chain logic minimal and reviewing the distribution flow carefully.

Sandbox relationship

• SANDRAIL Core works without requiring The Sandbox team to ship new APIs.
• Any community rewards UI will be clearly labeled as community-run, and it will not mimic official Claims branding, to reduce confusion.
• We won’t claim approval we don’t have. We will share the final SIP with The Sandbox team for feedback on naming, branding boundaries, and user trust concerns.

We hope this makes sense. Kindly let us know if you have any more questions.
Thank you.
