
How Kinsho collects data

A transparent look at what happens behind the scenes when Kinsho runs your prompts.

Kinsho is a daily pipeline. Every day, each prompt you've created is run against every AI model you've selected. We capture the raw response, extract brand mentions, compute metrics, and store the full answer so you can go back and inspect it at any time.

The daily loop
Every prompt × every model × every day — aggregated into the metrics you see

[Diagram: every 24 hours, prompts run against the AI engines (OpenAI, Gemini, Perplexity, Claude) and flow through six stages: prompt execution, response capture, brand extraction, sentiment & position, source attribution, and aggregation.]

The daily pipeline

  1. Prompt execution
    Your prompts are sent to each selected AI model through the same interface a real user would see.
  2. Response capture
    The full text, citations, and timestamps are stored so you can audit any score later.
  3. Brand & competitor extraction
    A multi-pass extractor (including CJK-aware matching) identifies every brand mention and its context.
  4. Sentiment & position scoring
    Each mention is classified as positive / neutral / negative and ranked within the response.
  5. Source attribution
    Citations are normalized and mapped to the domains and pages AI models actually referenced.
  6. Aggregation
    Scores are aggregated into the visibility, sentiment, and position metrics you see on the dashboard.
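The six steps above can be sketched as one daily loop. The Python below is illustrative only: the names (`ask`, `Mention`, `RunResult`) and the toy single-pass extractor are assumptions for the sake of the sketch, not Kinsho's actual implementation — Kinsho's real extractor is multi-pass and CJK-aware, which this simplification does not reproduce.

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    brand: str
    sentiment: str   # "positive" | "neutral" | "negative"
    position: int    # rank of the mention within the response

@dataclass
class RunResult:
    prompt: str
    model: str
    response: str
    mentions: list[Mention] = field(default_factory=list)

def extract_mentions(response: str, brands: list[str]) -> list[Mention]:
    """Toy extractor: find each tracked brand and rank by first appearance."""
    hits = [(response.lower().find(b.lower()), b) for b in brands]
    ordered = sorted((i, b) for i, b in hits if i != -1)
    return [Mention(brand=b, sentiment="neutral", position=rank + 1)
            for rank, (_, b) in enumerate(ordered)]

def daily_run(prompts, models, ask, brands):
    """One pipeline pass: every prompt x every model, capture + extract."""
    results = []
    for prompt in prompts:
        for model in models:
            response = ask(model, prompt)                   # steps 1-2: execute and capture
            mentions = extract_mentions(response, brands)   # step 3: brand extraction
            results.append(RunResult(prompt, model, response, mentions))
    return results   # steps 4-6 (scoring, attribution, aggregation) run downstream
```

In the real pipeline, `ask` would drive the model's actual product UI rather than an API call, and each `RunResult` would be persisted so any score can be audited later.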

Why daily, and why multiple runs?

AI responses are probabilistic — the same question can produce different answers on different days. A single run is noisy. Running every day across multiple models lets Kinsho produce stable, trend-aware metrics you can act on.
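To make the noise argument concrete, here is a minimal sketch of a visibility-style metric averaged over daily runs. The function name and the exact metric definition are assumptions for illustration, not Kinsho's actual formula.

```python
def visibility(daily_mentioned: list[bool]) -> float:
    """Share of daily runs in which the brand was mentioned at all."""
    return sum(daily_mentioned) / len(daily_mentioned)

# A single run answers yes/no; a week of runs starts to approximate a rate,
# and longer windows reveal trends rather than day-to-day randomness.
week = [True, False, True, True, False, True, True]
weekly_score = visibility(week)
```

A one-day sample from this week would report either 0% or 100% visibility; the seven-day average lands near 71%, which is the kind of stable, trend-aware figure the dashboard is built on.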

Multi-region simulation for accurate results

AI answers are not the same everywhere. A question asked from Tokyo produces different brand mentions than the same question asked from New York or Berlin — because AI models weight locally relevant sources, regional product availability, and language-specific training data differently.

Kinsho runs each prompt from multiple geographic regions simultaneously, using an advanced simulation layer that replicates how a real user in that region would interact with the AI. This means your visibility scores reflect actual regional variation rather than a single US-centric viewpoint.

Tip
Each prompt you write can be pinned to a country and an optional sub-region. Kinsho will simulate it from infrastructure in that locale — so a Japanese prompt runs from a Japanese IP, gets Japanese-localized results, and is scored accordingly.
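As a sketch of what pinning a prompt to a locale might look like, the snippet below shows a hypothetical prompt configuration. Every field name here is illustrative; Kinsho's actual schema is not documented in this section.

```python
# Hypothetical prompt configuration; field names are illustrative only.
prompt_config = {
    "text": "おすすめのプロジェクト管理ツールは？",  # "What project-management tools do you recommend?"
    "country": "JP",           # pin execution to infrastructure in Japan
    "subregion": "Tokyo",      # optional finer-grained locale
    "models": ["openai", "gemini", "perplexity", "claude"],
}
```

With a configuration like this, the prompt would run from a Japanese IP and be scored against Japanese-localized results, per the Tip above.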

Why we use UI interactions, not APIs

Most analytics tools call official APIs, but API responses often differ from what real users see: different models get selected, different sources get cited, and sometimes web search is silently disabled.

Kinsho interacts with AI models through their real product surfaces. That means your data matches what the average logged-out user actually experiences.
