Reevol Signal

How Reevol Signal works

Full transparency on the data, models, and scoring rules that produce a Signal Score.

What we measure

Reevol Signal answers one question for cross-border B2B suppliers: when buyers ask AI agents about your company, what do those AI agents actually say? The Signal Score captures whether AI knows who you are, what you make, whether buyers can trust you, and whether AI recommends you when relevant questions are asked. Five dimensions feed into one weighted score from 0 to 100.

The five dimensions

Each dimension scores 0 to 100. The Signal Score is the weighted average. Weights are locked and visible on every audit. We do not adjust the weights per supplier.

  • AI Recommendation Rate (30%): how often AI surfaces this supplier when buyers ask category and sourcing questions.
  • Identity & Presence (20%): whether AI correctly names, locates, and describes the company consistently across queries.
  • Product Clarity (20%): whether AI accurately describes what the company actually makes or exports.
  • Buyer Trust Signals (15%): whether AI references certifications, export history, and third-party validations.
  • Digital Footprint (15%): deterministic web checks: domain crawlable, machine-readable metadata, directory listings.

The AI engines we query

Every audit queries multiple large language models. At launch, every Tier 1 supplier is audited against ChatGPT (GPT-4o), Claude (Sonnet 4.6), and Perplexity (Sonar Pro). Tier 2 suppliers are audited against GPT-4o-mini and Claude Haiku 4.5. We add Gemini Flash and procurement-specific AI agents as their public APIs allow.

The three buyer queries

For each LLM, we run three prompts written from a buyer's point of view. We never ask the LLM to praise or rank the supplier; we ask the kind of question a real buyer would ask. The three queries are:

  1. Identity: "What can you tell me about <company> from <country>? Do you have any information about them as a supplier or manufacturer?"
  2. Sourcing: "I am looking for a reliable <category> supplier from <country>. Is <company> a good option?"
  3. Trust: "Can you verify if <company> (<domain>) is a legitimate and trustworthy <category> supplier in <country>? What certifications or quality indicators are you aware of?"

Tier 1 suppliers run the same three queries in four buyer languages (English, German, Spanish, French). Tier 2 suppliers run in English by default; claimed Tier 2 suppliers run in English plus one buyer language they specify.
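The three buyer queries above are straightforward template substitutions. A minimal sketch, assuming `string.Template` placeholders; the template names and the `build_prompts` helper are illustrative, not Reevol's actual prompt pipeline.

```python
from string import Template

# The three buyer-voice queries, parameterized on the audited supplier.
QUERIES = {
    "identity": Template(
        "What can you tell me about $company from $country? Do you have "
        "any information about them as a supplier or manufacturer?"
    ),
    "sourcing": Template(
        "I am looking for a reliable $category supplier from $country. "
        "Is $company a good option?"
    ),
    "trust": Template(
        "Can you verify if $company ($domain) is a legitimate and "
        "trustworthy $category supplier in $country? What certifications "
        "or quality indicators are you aware of?"
    ),
}

def build_prompts(company: str, country: str, category: str, domain: str) -> dict[str, str]:
    """Render all three buyer queries for one supplier."""
    fields = dict(company=company, country=country, category=category, domain=domain)
    return {name: t.substitute(fields) for name, t in QUERIES.items()}
```

For a Tier 1 audit the same substitution would run once per buyer language, so one supplier yields twelve prompts per LLM (three queries times four languages).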

How responses are parsed

Each LLM response is parsed by a small classifier model into a structured shape: whether the company was mentioned at all, the sentiment of the response, the accuracy of the description (does it match what the supplier actually does?), the products and certifications mentioned, any negative signals, and any URLs cited. The parsed shape feeds the dimension scores using fixed rules; we never let one LLM grade another. The raw response is stored alongside the parse so suppliers can inspect exactly what the AI said.
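One possible structured shape for the classifier output described above, assuming Python dataclasses; the field names are illustrative, not Reevol's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ParsedResponse:
    """Structured parse of one raw LLM response (field names assumed)."""
    mentioned: bool                  # was the company mentioned at all?
    sentiment: str                   # e.g. "positive" / "neutral" / "negative"
    description_accurate: bool       # does the description match what the supplier does?
    products: list[str] = field(default_factory=list)
    certifications: list[str] = field(default_factory=list)
    negative_signals: list[str] = field(default_factory=list)
    cited_urls: list[str] = field(default_factory=list)
    raw_response: str = ""           # stored alongside so suppliers can inspect it
```

Keeping the raw response on the record is what makes the fixed scoring rules auditable: the parse can always be re-checked against exactly what the AI said.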

Score smoothing

The public Signal Score is a three-run moving average. AI models update their training and inference behavior often, so any single run can swing 5 to 10 points. Averaging three runs damps that jitter without hiding real trends. Claimed suppliers see the raw single-run score on their Compass dashboard alongside the smoothed public score, so they can debug short-term changes.
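The three-run smoothing reduces to a short moving-average helper. A minimal sketch, assuming a list of historical run scores; the function name is illustrative.

```python
def smoothed_score(run_history: list[float], window: int = 3) -> float:
    """Mean of the last `window` run scores (fewer if fewer runs exist)."""
    if not run_history:
        raise ValueError("at least one audit run is required")
    recent = run_history[-window:]
    return round(sum(recent) / len(recent), 1)
```

If runs score 62, 71, and 66, the public score is 66.3: a one-off 9-point swing moves the published score by only a third of that.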

Refresh cadence

Tier 1 premium suppliers refresh monthly. Tier 2 standard suppliers refresh quarterly. Claimed Tier 2 suppliers refresh monthly. Tier 3 stubs refresh only when triggered by traffic or a visitor request. Anyone can request an ad-hoc refresh via the audit page; an email gate prevents abuse and lets us send the result.

Score bands

  • 80 to 100, Strong Signal: consistently recognized by AI procurement agents.
  • 60 to 79, Moderate Signal: known but with visible gaps.
  • 40 to 59, Weak Signal: partially recognized, inconsistent data.
  • 0 to 39, Dark Signal: largely invisible to AI agents.
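The bands above are a simple threshold lookup; each score falls into exactly one band. A sketch with assumed names:

```python
# Band floors from the list above, checked highest first.
BANDS = [
    (80, "Strong Signal"),
    (60, "Moderate Signal"),
    (40, "Weak Signal"),
    (0, "Dark Signal"),
]

def score_band(score: float) -> str:
    """Map a 0-100 Signal Score to its published band."""
    for floor, name in BANDS:
        if score >= floor:
            return name
    raise ValueError("score must be between 0 and 100")
```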

Data provenance

The supplier metadata that anchors every audit comes from Reevol Atlas. When a supplier claims their Signal page, the claimed data flows back to Atlas with provenance set to supplier-claimed, and downstream Reevol products (Source, Compass) pick up the verified version within minutes. AI responses are point-in-time observations: we record the LLM, the date, and the exact prompt for every response, and we link each citation back to its source URL where the LLM provides one.

Dispute process

If a Signal Score reflects clearly incorrect AI behavior (the wrong company, a hallucinated product, an outdated negative reference), claimed suppliers can flag the specific LLM response from their Compass dashboard. Flagged responses are re-audited; if the issue reproduces, we record a negative-signal exception and adjust the score band. Unclaimed suppliers can dispute by claiming the page first.

What this score does NOT measure

Reevol Signal measures AI perception. It is not a credit check, not a financial assessment, not a fraud screen, and not a procurement substitute. A high Signal Score means AI agents can describe and recommend the supplier; it does not certify quality or solvency. Buyers should use Signal as one input in due diligence, not the only input.

Limitations

LLM training data has a cutoff. A supplier whose website is brand new will show a Dark Signal until AI engines pick up the public references; this is a real signal, not a flaw. A supplier the LLMs have never heard of will return "not found" responses across every engine we query; this is also a real signal. We do not synthesize a higher score from Atlas data when AI returns nothing. Doing so would defeat the purpose of the measurement.