Visibility Kit · free tool

Test how AI models see your brand.

Five prompts. Two modes each. ChatGPT, Claude, Perplexity. Copy, paste, see where the model knows you and where it doesn't. No email, no sign-up.

Setup

Brand is required. Category unlocks 04–06. Use case unlocks 05. Domain unlocks the bonus extraction card.

0 / 6 ready
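The unlock rules above can be sketched as a simple readiness check. This is a hypothetical reconstruction of the tool's logic, not its actual source; the function name and defaults are illustrative.

```python
def ready_prompts(brand: str = "", category: str = "",
                  use_case: str = "", domain: str = "") -> int:
    """Count how many of the six cards are unlocked, per the rules above."""
    if not brand:
        return 0            # brand is required for everything
    ready = 3               # 01 Awareness, 02 Perception, 03 Competition
    if category:
        ready += 1          # 04 Authority needs brand + category
        if use_case:
            ready += 1      # 05 Recommendation adds use case
        if domain:
            ready += 1      # 06 Extraction needs domain + category
    return ready
```

With only a brand entered, the counter reads 3 / 6; filling every field brings it to 6 / 6.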

Shareable — the URL already carries your setup.

01 · Awareness

Does the model know you exist?

Whether the brand has crossed the visibility threshold into the model's training data. If the parametric run returns 'I don't have information about...', there is no parametric presence.

What do you know about {BRAND}? Don't search the web — tell me only what you already know. What do they do, who are they for, and what are the main alternatives?

Enter brand above to activate this prompt.

02 · Perception

What does the model think you stand for?

Which associations are anchored to the brand. Parametric mirrors the training data; dynamic is a proxy for current discourse.

Without searching the web, what is {BRAND} best known for? What's their reputation? Based on what you know, what do customers typically say about them?

Enter brand above to activate this prompt.

03 · Competition

Who does the model name alongside you?

Competitive framing. Whether the brand shows up in the right set when a user asks about alternatives, or gets skipped from 'vs' queries entirely.

Without searching, who are the main competitors to {BRAND}? How does {BRAND} compare to the alternatives in terms of features, pricing, and target audience?

Enter brand above to activate this prompt.

04 · Authority

Does the model cite you as an expert?

A proxy for topical authority. If the dynamic run returns third-party sources but never the brand's own content, the brand has traffic but no thought leadership.

Without searching, is {BRAND} considered a leader or expert in their space? What specific expertise or content are they known for? Do you cite them when people ask about {CATEGORY}?

Enter brand + category above to activate this prompt.

05 · Recommendation

Would the model actually send someone to you?

The bottom-of-funnel question. The gap between 'mentioned in awareness' and 'recommended in the list' is diagnostic of commercial intent coverage.

Without searching: if someone asked you to recommend a {CATEGORY} tool for {USE_CASE}, would you recommend {BRAND}? For whom is it a good fit, and who should choose an alternative?

Enter brand + category + use case above to activate this prompt.

Bonus card — the extraction test

See what ChatGPT actually reads on your site.

The five prompts above test what the model thinks of your brand. This one tests what the model retrieves and summarises when it runs its own search against your domain — the fan-out layer behind every citation.

06 · Extraction

Bonus · ChatGPT only

What does ChatGPT actually read on your site?

Which two pages surface first when ChatGPT runs a site: search against your domain, and how it summarises them once it works the page itself.

How this works

The JSON blocks below are ChatGPT's own internal tool-call syntax. Pasted into the chat, GPT-5 parses them as a search-then-open instruction: it runs a real site: query, then opens the first two results. The turn0search0 and turn0search1 references are how ChatGPT indexes the results of the first search turn. This works reliably in ChatGPT; the format is specific to OpenAI's tool system, so Claude and Perplexity do not parse it the same way.

Search for this query, then open the first two results and summarize what you find on each page.

Step 1 — Search:

{
  "search_query": [
    { "q": "site:{DOMAIN} {CATEGORY}" }
  ],
  "response_length": "short"
}

Step 2 — Open the first two results:

{
  "open": [
    { "ref_id": "turn0search0" },
    { "ref_id": "turn0search1" }
  ]
}
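Filling the placeholders is plain string substitution. A minimal sketch, assuming the two payloads are built exactly as shown above (the function name is illustrative):

```python
import json

def extraction_payloads(domain: str, category: str) -> tuple[str, str]:
    """Build the two JSON blocks with {DOMAIN} and {CATEGORY} filled in.
    The ref_ids mirror how ChatGPT indexes results of the first search turn."""
    search = {
        "search_query": [{"q": f"site:{domain} {category}"}],
        "response_length": "short",
    }
    open_step = {
        "open": [{"ref_id": "turn0search0"}, {"ref_id": "turn0search1"}],
    }
    return json.dumps(search, indent=2), json.dumps(open_step, indent=2)
```

Paste the first string, let the search turn complete, then paste the second.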

Enter domain + category above to activate this prompt. Domain = the root site you want ChatGPT to search (e.g. notion.so). Category = the topic within that site you want surfaced.

Open in ChatGPT

Surfaced by Lily Ray. Prompt structure from Olivier de Segonzac (RESONEO) and Chris Long (Nectiv).

How to read the results

Parametric, then dynamic, then the gap.

The point of running both modes is the delta between them. Each prompt takes about a minute to run. Work through one model at a time.

01

Run parametric first

Switch web search off. The model answers from training data alone. This tells you what the model 'remembers' about the brand — the baseline visibility you've earned through historical footprint.

02

Then run dynamic

Turn web search on and run the same prompt. The model now answers grounded in live sources. This tells you how well the current web discourse reinforces the brand when a user actively looks.

03

Look at the gap

Parametric miss + dynamic hit = the model finds you when looking, but doesn't remember you otherwise. Parametric hit + dynamic miss = you're in the training data but current discourse isn't reinforcing you. Both hit = strong. Both miss = the GEO Checklist is where you start.
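The four outcomes above reduce to a two-by-two lookup. A sketch, with illustrative labels:

```python
def read_gap(parametric_hit: bool, dynamic_hit: bool) -> str:
    """Map the parametric and dynamic results onto the four readings above."""
    if parametric_hit and dynamic_hit:
        return "strong: remembered and reinforced"
    if dynamic_hit:
        return "findable when looking, but not remembered"
    if parametric_hit:
        return "in the training data, but current discourse isn't reinforcing it"
    return "invisible: start with the GEO Checklist"
```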

Need an engineered growth plan?

Stack audit, opportunity review, twelve-month roadmap. No lock-in, full knowledge transfer.

Reply within one business day.