Mission
We aim to make forecasting accessible and accountable by combining professional market mechanics with transparent AI resolution. Our focus is clarity of criteria, auditability of decisions, and participant safety.
How it works
Each market specifies a resolution criterion and an oracle policy. At market close, we run a structured evaluation using LLMs with pinned prompts and either a pinned model version or a provider alias frozen at that time. We record inputs and evidence to enable reproducible review under the same conditions.
- Reproducibility: prompts, seeds, citations, and resolved model metadata are recorded in a public proof bundle (see the sketch after this list).
- Health checks: providers are continuously probed; status appears in the product.
- Policy-first: retries and fallbacks follow a documented, versioned policy; if resolution cannot be completed to spec, the market is marked INVALID with rationale.
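For concreteness, here is a minimal sketch of what one resolution's proof bundle could contain. The field names, identifiers, and structure are illustrative assumptions, not our published schema.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the published proof-bundle schema.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ProofBundle:
    market_id: str
    policy_version: str      # semantic version of the oracle policy in force
    prompt_sha256: str       # content hash of the pinned prompt actually sent
    model: str               # pinned model version, or provider alias frozen at close
    temperature: float       # 0.0 per policy
    seed: int | None         # recorded when the provider supports seeding
    inputs: dict             # question, resolution criterion, close time, etc.
    evidence: list[dict] = field(default_factory=list)   # citations / retrieved sources
    output: dict | None = None                            # raw model output and parsed verdict
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def hash_prompt(prompt: str) -> str:
    """Pin a prompt by content hash so reviewers can verify it byte for byte."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()


# Hypothetical example values throughout.
bundle = ProofBundle(
    market_id="mkt_example",
    policy_version="1.2.0",
    prompt_sha256=hash_prompt("Resolve the criterion using only the cited evidence."),
    model="example-model-2025-01-01",
    temperature=0.0,
    seed=12345,
    inputs={"criterion": "Did X happen before 2025-07-01?"},
)
print(json.dumps(asdict(bundle), indent=2))
```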
Oracle policy
Our oracle pipeline aims for reproducibility: pinned prompts, temperature zero, and strict provider timeouts, with recorded inputs/outputs shown in the UI for post‑hoc review. We do not guarantee bit‑for‑bit determinism across providers or when fresh web retrieval is involved.
- Versioning: oracle policies are semantically versioned.
- Fallbacks: well‑documented, finite retries and provider fallbacks; no unbounded loops (see the sketch after this list).
- Human review: for ambiguous criteria, disputes route to a structured human arbitration process as a final tiebreaker.
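A minimal sketch of how a bounded retry-and-fallback policy can be applied, assuming hypothetical provider names, retry limits, and timeout values; the real behavior is defined by the versioned policy described above.

```python
# Illustrative sketch: provider names, retry limits, and timeouts are assumptions,
# not the published policy. It demonstrates bounded retries and fallbacks with no
# unbounded loops, ending in an INVALID outcome with a rationale when all attempts fail.
import time

PROVIDERS = ["provider_a", "provider_b"]   # ordered fallback chain (hypothetical)
MAX_ATTEMPTS_PER_PROVIDER = 2              # finite, policy-defined retry budget
TIMEOUT_SECONDS = 30                       # strict per-call timeout


class OracleError(Exception):
    pass


def call_provider(provider: str, prompt: str, timeout: float) -> dict:
    """Stand-in for a pinned-prompt, temperature-zero provider call."""
    raise OracleError(f"{provider} unavailable")   # placeholder failure for the sketch


def resolve(prompt: str) -> dict:
    """Try each provider a bounded number of times; otherwise mark the market INVALID."""
    for provider in PROVIDERS:
        for attempt in range(MAX_ATTEMPTS_PER_PROVIDER):
            try:
                return {
                    "status": "RESOLVED",
                    "provider": provider,
                    "result": call_provider(provider, prompt, TIMEOUT_SECONDS),
                }
            except OracleError:
                time.sleep(2 ** attempt)   # short, capped backoff between attempts
    # Resolution could not be completed to spec: record a rationale, never loop forever.
    return {
        "status": "INVALID",
        "rationale": "All providers failed within the policy's retry budget.",
    }


print(resolve("Resolve the criterion using the pinned prompt."))
```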
Disputes & governance
Participants can raise disputes during a defined window after preliminary resolution. Evidence is attached to a case, which is adjudicated under published rules. Resolutions and rationales are published.
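As a minimal sketch of the window check and case record described above: the 48‑hour window, field names, and statuses below are illustrative assumptions, not the published rules.

```python
# Illustrative sketch: the window length, field names, and statuses are assumptions,
# not the published dispute rules.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

DISPUTE_WINDOW = timedelta(hours=48)   # hypothetical window length


@dataclass
class DisputeCase:
    market_id: str
    opened_at: datetime
    evidence: list[str] = field(default_factory=list)   # links / documents attached by the filer
    status: str = "OPEN"                                 # OPEN -> ADJUDICATED, with a published rationale


def can_open_dispute(preliminary_resolution_at: datetime, now: datetime | None = None) -> bool:
    """A dispute may only be opened inside the defined window after preliminary resolution."""
    now = now or datetime.now(timezone.utc)
    return now <= preliminary_resolution_at + DISPUTE_WINDOW
```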
Security & trust
We apply timeouts, least‑privilege, and defense‑in‑depth across the stack. See Privacy and Terms for details.
Contact
Questions or ideas? Email hello@latentmarkets.com or DM @llmmarkets on X.