Taxonomy guide

    Prediction Market Tools: Match the Tool to the Job

    Flat tool directories list products. This guide maps user jobs to tool categories, trust checks, source requirements, and the failure modes that can turn software into false confidence.

    Use it to separate beginner learning, data terminals, alerts, arbitrage comparison, APIs, portfolio tracking, wallet research, and market-context tools before caring about any brand name.

    Last updated 2026-05-07

    Durable job-to-tool taxonomy for prediction-market tools. This is not a ranking, directory, endorsement list, broker, router, exchange, clearing, custody, or execution product.


    Map the tool to the job

    Flat directories start with product names. Start here instead: the user job, the right category, the trust checks, and the failure mode to avoid before a tool earns attention.


    Beginner learning / choosing a tool type

    You are trying to understand which kind of prediction-market tool fits the problem before evaluating any product list.

    Trust checks

    • Look for plain-language methodology, not promo copy.
    • Confirm the guide separates education from execution or account routing.
    • Prefer pages that link back to primary market rules or official docs.

    Do not use for

    Do not use a beginner guide as proof that a tool executes, routes, custodies, or clears trades.

    Check the evidence path
    Data/API provider

    Data terminal / live odds + history

    You need current prices, historical snapshots, market metadata, liquidity context, or a dashboard view across many contracts.

    Trust checks

    • Check whether refresh cadence and source venue are disclosed.
    • Separate live price display from historical completeness.
    • Confirm the tool explains stale, closed, or near-resolved markets.

    Do not use for

    Do not treat a terminal as proof that two contracts resolve the same way.

    Check the evidence path
    Alerting/monitoring tool

    Alerts / monitoring

    You want notifications for price moves, liquidity changes, new markets, or watchlist thresholds.

    Trust checks

    • Confirm what event triggers the alert and what data source powers it.
    • Check whether alerts include context or only price movement.
    • Look for rate limits, delay caveats, and stale-data handling.

    Do not use for

    Do not treat an alert as a trading recommendation or as confirmation that the move will persist.

    Check the evidence path
    Arbitrage scanner

    Arbitrage / cross-venue comparison

    You are comparing prices across venues and need to know whether a spread is real or just a contract mismatch.

    Trust checks

    • Require market-rule matching, not just similar titles.
    • Check settlement source, timing, fees, depth, and withdrawal friction.
    • Confirm the scanner explains excluded or non-equivalent contracts.

    Do not use for

    Do not call a spread risk-free until wording, settlement, liquidity, and execution risk are checked.

    Check the evidence path
    Developer client library / SDK

    API / builder workflow

    You are building your own model, dashboard, bot, importer, or research workflow on top of market data.

    Trust checks

    • Start with official API docs, official SDKs, or maintained official repos.
    • Check signing, authentication, permissions, and endpoint deprecation notes.
    • Separate data access helpers from exchange behavior or policy guarantees.

    Do not use for

    Do not assume a client library changes venue rules, account permissions, uptime, or order-state behavior.

    Check the evidence path
    Portfolio/account monitor

    Portfolio / position tracking

    You need to reconcile open positions, realized outcomes, deposits, withdrawals, exposure, or cross-platform account state.

    Trust checks

    • Check whether balances are official account data, imported CSVs, wallet views, or estimates.
    • Confirm how the tool handles settled, pending, canceled, and disputed markets.
    • Look for privacy, custody, and credential-boundary disclosures.

    Do not use for

    Do not use a tracker as custody proof unless it is tied to official account, ledger, or wallet records.

    Check the evidence path
    Wallet/order-flow monitor

    Wallet/order-flow research

    You are studying visible wallets, large prints, flow timing, exits, or position changes.

    Trust checks

    • Check label methodology before trusting any whale or smart-money tag.
    • Pair flow with liquidity, market depth, timing, and position context.
    • Distinguish entry, exit, hedge, rebalance, and test trades.

    Do not use for

    Do not copy visible flow without understanding whether you are becoming someone else’s exit liquidity.

    Check the evidence path
    News/media intelligence layer

    News/context / market-move explanation

    You need to understand why a market moved, what source changed, and whether the move belongs to price, liquidity, rules, or narrative.

    Trust checks

    • Prefer source ladders that separate official docs, filings, market pages, and discovery signals.
    • Check whether the page links to primary sources behind factual claims.
    • Confirm the explanation avoids turning odds movement into certainty.

    Do not use for

    Do not use market commentary as proof of final resolution, official policy, or execution quality.

    Check the evidence path
    Calibration/research tool

    Model evaluation / sports-pricing workflow

    You are testing a prediction-market pricing model and need to separate forecast quality from execution quality before trusting P&L screenshots.

    Trust checks

    • Score probabilistic forecasts against resolved outcomes over a meaningful sample; do not treat early P&L as model quality.
    • Compare the model's stated probability to the market price available before the trade, not to a later cherry-picked screenshot.
    • Track execution separately: displayed midpoint, executable bid/ask, spread, depth, fees, fill timing, and partial fills.
    • Separate forecast edge from position sizing, market selection, and late-entry chasing.

    Do not use for

    Do not use a model-evaluation checklist as a trading recommendation, a claim that any model is profitable, or proof that a public wallet has durable edge.

    Check the evidence path
    Model evaluation

    How should I judge a prediction-market sports model?

    Model evaluation starts before P&L. Use calibration, fill quality, and sample size to separate forecast quality from execution quality so a single hot streak does not masquerade as repeatable edge.

    Testing a prediction-market model? Don't start with P&L.

    P&L answers whether this account made money over this sample.

    It does not isolate probability calibration, whether the model beat the market price, whether fills were executable, or whether one large trade distorted the result.

    Start with three ledgers:

    • Forecast ledger: model probability vs resolved outcome.
    • Market ledger: available market price at decision time.
    • Execution ledger: fill price, size, spread, fees, depth.

    Forecast quality lives in calibration and error scores. Execution quality lives in fill quality, liquidity, fees, and timing.

    Three ledgers, not one P&L

    Separate forecast quality from execution quality before drawing conclusions.

    Forecast ledger

    • Model probability
    • Timestamp
    • Market/question
    • Resolved outcome

    Market ledger

    • Bid/ask/midpoint at decision time
    • Available price before the trade
    • Not after the outcome

    Execution ledger

    • Order type and fill price
    • Average fill, size, fees, spread
    • Depth and partial-fill state

    Model evaluation glossary

    Key metrics and what they actually mean

    These are common evaluation terms. None of them are a promise of profitable execution.

    Calibration

    If a model says 70% across many similar resolved events, roughly 70% of those events should happen.

    Why it matters

    A model can sound confident and still be systematically overconfident or underconfident.
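A minimal bucketing sketch of this check, assuming a list of (probability, 0/1 outcome) pairs for resolved events:

```python
from collections import defaultdict

def calibration_table(forecasts, n_buckets: int = 10):
    """Group resolved forecasts into equal-width probability buckets and
    compare each bucket's mean forecast with its realized hit rate; a
    well-calibrated model keeps the two numbers close in every bucket."""
    buckets = defaultdict(list)
    for prob, outcome in forecasts:
        idx = min(int(prob * n_buckets), n_buckets - 1)  # clamp prob == 1.0
        buckets[idx].append((prob, outcome))
    return {
        idx: (sum(p for p, _ in rows) / len(rows),   # mean forecast
              sum(o for _, o in rows) / len(rows),   # realized hit rate
              len(rows))                             # bucket sample size
        for idx, rows in sorted(buckets.items())
    }
```

Small buckets mean noisy hit rates, which is why sample size matters as much as the buckets themselves.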

    Brier score

    A common score for probabilistic binary forecasts: lower is better, but it blends more than one property of forecast quality.

    Why it matters

    It is better than raw P&L for forecast evaluation, but it is not the same thing as execution profitability.
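A sketch of the score itself, assuming the same (probability, 0/1 outcome) pairs:

```python
def brier_score(forecasts) -> float:
    """Mean squared error between forecast probabilities and binary outcomes.
    0.0 is perfect; a constant 0.5 forecast scores 0.25 on any outcomes."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)
```

Both forecasters below are directionally right, but the sharper one scores lower — the property raw P&L alone cannot show.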

    Closing-line value / market-improvement check

    Did your probability beat the later market consensus, or were you just early/lucky on one result?

    Why it matters

    It helps separate repeatable information edge from one-off outcome luck, but only if timestamps and available prices are honest.
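One way to sketch the check on the price axis; the YES/NO sign convention here is an assumption, and honest timestamps are doing all the real work:

```python
def closing_line_value(entry_price: float, closing_price: float,
                       side: str = "YES") -> float:
    """Per-contract CLV: how far the market's closing price moved in the
    direction of the trade after entry. A positive average over many trades
    suggests repeatable information edge; one positive value can be luck."""
    move = closing_price - entry_price
    return move if side == "YES" else -move
```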

    Markout

    A post-trade check of how the market price moved after your fill over a fixed window.

    Why it matters

    Positive markout can suggest your fill was favorable; negative markout can show adverse selection or bad timing.
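A markout sketch over several fixed horizons; the horizon labels and midpoints are illustrative:

```python
def markouts(fill_price: float, mids_by_horizon: dict,
             side: str = "YES") -> dict:
    """Signed post-fill move at each horizon: later midpoint minus fill price,
    flipped for NO exposure. Persistently negative values across horizons are
    the adverse-selection signature."""
    sign = 1.0 if side == "YES" else -1.0
    return {h: sign * (mid - fill_price) for h, mid in mids_by_horizon.items()}
```

Filled YES at 0.52 with later mids of 0.53 and 0.55 gives roughly +0.01 and +0.03 markouts at the two horizons.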

    Fill quality

    The gap between the price your model assumed, the displayed quote, and the actual filled average price.

    Why it matters

    A good forecast can still lose money if the order walks the book, misses fills, or pays too much spread.
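The three-way gap can be decomposed like this; the labels are illustrative, not standard market-microstructure vocabulary:

```python
def fill_quality(model_assumed: float, displayed_mid: float,
                 avg_fill: float, fees_per_contract: float = 0.0) -> dict:
    """Split the all-in cost versus the model's assumed price into the part
    from quote drift (display vs assumption), the part from slippage (fill
    vs display), and fees."""
    quote_drift = displayed_mid - model_assumed
    slip = avg_fill - displayed_mid
    return {
        "quote_drift": quote_drift,
        "slippage": slip,
        "all_in_cost": quote_drift + slip + fees_per_contract,
    }
```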

    Sample size

    How many independent resolved bets/forecasts support the claim.

    Why it matters

    Ten wins can be a hot streak. A model needs enough resolved, comparable forecasts before users should trust the pattern.
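A rough way to see this is a normal-approximation interval for the hit rate — a simplification that ignores correlated bets and varying odds, but enough to show how wide small samples are:

```python
import math

def hit_rate_interval(wins: int, n: int, z: float = 1.96) -> tuple:
    """Approximate 95% interval for a win rate; the width shrinks like
    1/sqrt(n), so small samples are compatible with almost anything."""
    p = wins / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))
```

Seven wins from ten looks like 70%, but the interval spans roughly 42% to 98% and still includes a coin flip; 700 from 1000 does not.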

    Research workflow

    The journalist/researcher toolkit layer

    A platform-neutral research workflow should help a reporter move from topic to relevant markets to citation-ready context. It is not a broker, router, exchange, custody product, or proof that odds equal truth.

    Step 1

    Start with the article or topic

    Define the news question, event window, geography, and market categories before searching for odds.

    Step 2

    Find relevant markets across venues

    Match candidate markets by wording, timeline, status, and resolution source rather than similar headlines alone.

    Step 3

    Compare liquidity, spread, and timeline

    Show whether the quoted price has usable depth, whether the spread is wide, and when the contract closes or resolves.

    Step 4

    Attach resolution-source context

    Include the rule text, official source, or settlement source a reader needs to understand what the market actually measures.

    Step 5

    Generate citation-ready context

    Package the timestamped quote with caveats so editors can cite market odds without overstating certainty.

    Calibration/accountability checks

    • Use a resolved-outcome dataset, not only live markets or cherry-picked winners.
    • Bucket historical probabilities before resolution and compare each bucket with actual outcomes.
    • Store quote timestamps, market category, venue, and time horizon so old snapshots are auditable.
    • Publish inclusion, exclusion, stale-market, voided-market, and duplicate-market rules.
    • Keep forecast calibration separate from executable price, fill quality, fees, and liquidity.

    Citation-ready embed requirements

    • Market title and venue or platform label.
    • Timestamp for the quoted odds or displayed price.
    • Displayed price versus executable depth caveat when depth or spread matters.
    • Liquidity, spread, or volume context when available.
    • Resolution source, rule text, or market-source link.
    • Market status such as active, closed, resolved, archived, or disputed when relevant.
    • Plain-language caveat that odds are market prices, not settled facts.
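The requirements above can be enforced mechanically. The field names below are illustrative, not a published schema:

```python
REQUIRED_FIELDS = {
    "market_title", "venue", "quote_timestamp", "displayed_price",
    "resolution_source", "status", "caveat",
}

def build_embed(**fields) -> dict:
    """Refuse to emit a citation embed that is missing a required field,
    and always attach the odds-are-prices caveat by default."""
    fields.setdefault("caveat", "Odds are market prices, not settled facts.")
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"embed missing required fields: {sorted(missing)}")
    return fields
```

An embed without a timestamp or resolution source fails loudly instead of shipping an overconfident quote.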

    What not to claim

    • Do not turn the citation into a trading recommendation.
    • Do not present market odds as proof of truth or official outcome probability.
    • Do not claim one venue is more accurate without a disclosed calibration methodology.
    • Do not rank products by affiliate or promo incentives.
    • Do not imply the widget routes, executes, clears, brokers, custodies, or places trades.

    Taxonomy

    What each category actually controls

    A tool category can be useful without controlling settlement, clearing, custody, or execution. This table keeps those boundaries visible.

    Each entry lists what the category controls, what it does not control, and the source requirement for naming a product in it.

    Data/API provider
    • Controls: Market data access, schema shape, historical snapshots, and endpoint reliability.
    • Does not control: Settlement rules, matching, custody, account balances, or whether two contracts are truly equivalent.
    • Source requirement: Official API documentation, official data dictionary, maintained official repo, or platform-owned developer docs.

    Execution router / smart-order-routing layer
    • Controls: Order-entry workflow, venue selection UX, and sometimes routing preferences if the user authorizes it.
    • Does not control: Resolution criteria, clearing, platform solvency, contract legality, or whether a visible price has enough depth.
    • Source requirement: Official product docs, terms, routing disclosures, and clear statements about custody/execution boundaries.

    Arbitrage scanner
    • Controls: Detection of visible price spreads and headline-level event matches.
    • Does not control: Contract wording, rule differences, settlement-source disagreements, fees, withdrawal friction, or fill risk.
    • Source requirement: Official venue rules plus visible methodology for matching markets and excluding non-equivalent contracts.

    Wallet/order-flow monitor
    • Controls: Observed wallets, large prints, order-flow timing, and position-change displays where data is public or platform-exposed.
    • Does not control: Trader intent, whether a trade is smart money, whether a wallet is hedging, or whether liquidity can absorb a copy trade.
    • Source requirement: Official chain explorer, official API docs, or transparent methodology for wallet clustering and label limitations.

    Developer client library / SDK
    • Controls: Developer ergonomics, request helpers, signing flows, and integration speed.
    • Does not control: Exchange behavior, endpoint uptime, account permissions, or platform policy changes.
    • Source requirement: Maintained official repository, package registry controlled by the platform, or official developer documentation.

    Portfolio/account monitor
    • Controls: Position views, exposure summaries, account-state reconciliation, and sometimes imported trade or ledger history.
    • Does not control: Custody, official balances, withdrawals, tax treatment, settlement timing, or whether a display issue means funds are missing.
    • Source requirement: Official account export, official API/account docs, transparent import methodology, and clear privacy/credential boundaries.

    News/media intelligence layer
    • Controls: Context, framing, source comparison, market-move explanations, and editorial workflow around live odds.
    • Does not control: Order routing, execution, settlement, user account decisions, or market making.
    • Source requirement: Clear editorial methodology, source hierarchy, and links to primary market/official sources when making factual claims.

    Alerting/monitoring tool
    • Controls: Notifications for price moves, liquidity changes, watchlist events, or custom thresholds.
    • Does not control: Why a move happened, whether a move persists, or whether a user should trade on the alert.
    • Source requirement: Official app docs, API docs, or transparent trigger definitions and data-source disclosures.

    Calibration/research tool
    • Controls: Historical accuracy views, category-level calibration, backtests, and research dashboards.
    • Does not control: Future truth, liquidity, fill probability, tax treatment, or live execution quality.
    • Source requirement: Published methodology, dataset provenance, outcome-label rules, and refresh cadence.

    Journalist/researcher toolkit
    • Controls: Topic-to-market discovery, citation context, cross-platform comparison workflow, and editorial packaging for researchers and reporters.
    • Does not control: Truth, trade execution, settlement, venue liquidity, custody, routing, or whether odds should be treated as a recommendation.
    • Source requirement: Visible methodology, timestamped market snapshots, venue/source links, resolution-rule links, and clear disclosure of included/excluded markets.

    Do not miss

    Five ways tools overpromise

    Faster execution is not better resolution.

    A router can improve workflow without changing what the contract means or who decides the outcome.

    Smart-money labels need liquidity context.

    A large wallet can be entering, hedging, exiting, or testing depth. Flow is not automatically intent.

    Arbitrage scanners must compare wording, not just price.

    Two headlines can look identical while the rule text, timing, source, or settlement fallback differs.

    Wallet tracking shows flow, not intent.

    Copying visible flow without depth, timing, and position context can turn a signal into exit liquidity.

    Citation widgets need timestamps and rules links.

    A market embed without quote time, venue, liquidity or spread context, and a resolution-source link can make live odds look more certain than they are.

    Source policy

    Why this is not a tool directory

    Current-state lists decay fast. This page stays useful by separating what a tool category controls from what it cannot control. Named products require official-source verification before they belong here.


    Specific third-party tool names render only after same-session verification against an official site, official docs page, or maintained official repo.

    Third-party curated lists can help discover categories, but they are not proof that a product controls execution, routing, custody, settlement, or clearing.

    The architecture map is the control-layer source of truth for separating rails, wrappers, routers, SDKs, analytics, and media/intelligence surfaces.

    If a capability is not official-source clear, describe the category generically and omit the brand name.

    FAQ