Auxy ARC
The measurement and optimization platform for AI visibility. Track what Gemini, ChatGPT, and Claude actually say about your brand. Diagnose where the model is failing to recommend you. Hand your content team the literal edits that move you up the ARC funnel.
From CTR to SRO
For two decades, the metric that mattered was CTR: how often a human clicked through to your site. The shape of search has changed. Increasingly, the entity that "clicks" is a model deciding which sources to ground its answer on. The metric that matters now is selection rate: how often a generative engine actually picks your content when answering a query in your category. Optimizing for it is SRO: selection rate optimization.
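In practice, selection rate falls out of probe logs directly. A minimal sketch, assuming a simple log shape; the function name and record fields are illustrative, not Auxy ARC's actual API:

```python
def selection_rate(probe_runs: list[dict], your_domain: str) -> float:
    """Fraction of probe runs in which the engine grounded its answer
    on at least one URL from your domain.

    Each run is assumed to look like:
    {"query": "...", "model": "...", "cited_domains": ["a.com", "b.com"]}
    """
    if not probe_runs:
        return 0.0
    hits = sum(1 for run in probe_runs if your_domain in run["cited_domains"])
    return hits / len(probe_runs)

runs = [
    {"query": "best crm", "model": "gemini", "cited_domains": ["acme.com", "rival.com"]},
    {"query": "best crm", "model": "claude", "cited_domains": ["rival.com"]},
    {"query": "crm pricing", "model": "gpt", "cited_domains": ["acme.com"]},
    {"query": "crm reviews", "model": "gemini", "cited_domains": ["rival.com"]},
]
print(selection_rate(runs, "acme.com"))  # 2 of 4 runs cite acme.com -> 0.5
```

The same aggregation can be sliced per model or per query to see where selection is won and lost.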
Auxy ARC is the toolkit for measuring that, optimizing for it, and reporting on it to a board. It runs the same instrumentation our agency uses for client work, built on three years of LLM-grounding research at DEJAN.
How it works
Define what to track, probe the models, see where you're losing, ship the edit. The same loop on every property.
Add domains, brands, entities, queries, locations, target models, and objectives. The onboarding wizard generates entity and query candidates from your domain so you don't start from a blank sheet.
Auxy runs probes in parallel across Gemini, ChatGPT, and Claude, in both grounded and parametric variants. Visibility Tracking takes nightly snapshots of the entities you tag for time-series scoring.
Dashboard, Citation Mining, Page Grounding, and Treewalker tell you which queries the model is missing you on, which competitor URL it's grounding instead, and whether the gap is content or index.
Snippet and Holistic Optimization emit concrete line-level edits with measured before/after lift attributable to each change. Cross-run learning compounds across iterations.
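The probe-then-diagnose half of the loop can be sketched end to end. This is a toy illustration with canned responses; the model names, helper functions, and response shape are assumptions, not the Auxy ARC API:

```python
from concurrent.futures import ThreadPoolExecutor

MODELS = ["gemini", "chatgpt", "claude"]  # measurement targets

def run_probe(model: str, query: str) -> dict:
    # Stand-in for a real API call; returns which domains the model cited.
    canned = {"gemini": ["rival.com"], "chatgpt": ["acme.com"], "claude": ["acme.com"]}
    return {"model": model, "query": query, "cited": canned[model]}

def probe_all(query: str) -> list[dict]:
    # Fan one query out to every model in parallel, as the probe step does.
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        return list(pool.map(lambda m: run_probe(m, query), MODELS))

def losing_models(results: list[dict], your_domain: str) -> list[str]:
    # Diagnose: which models never grounded on your domain for this query?
    return [r["model"] for r in results if your_domain not in r["cited"]]

results = probe_all("best project tracker")
print(losing_models(results, "acme.com"))  # -> ["gemini"]
```

The diagnosed gap then feeds the edit step: the losing query/model pairs are the ones worth optimizing first.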
What's inside
Each module is independently useful. Together they compose into a measurement-to-action pipeline.
Nightly grounded probes against tagged entities and locations. Per-entity and holistic visibility scores with weekly drift alerts.
Which domains and URLs the model actually cited for queries in your category. Aggregated, ranked, tied back to the prompts that surfaced them.
Three directional probes mapping which brands the model associates with your entities, which brands it associates with your queries, and which entities it associates with your brand. Confidence-scored.
Binary classification probes that measure how consistently a model rates your brand as relevant to a category. Trended over time, per model.
Logprobs-driven probing of what a model recalls about your brand without web search. Token-level confidence with alternative-token suggestions. Tells you if the gap is parametric or grounded.
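The idea behind logprobs probing: ask the model to complete a fact about your brand with no web access, then read the top alternative tokens and their log probabilities. A minimal sketch; the response shape mirrors what completion APIs expose, but the values and brand are invented:

```python
import math

# Hypothetical top_logprobs for the next token after
# "Acme's flagship product is called" -- invented values.
top_logprobs = {"Widget": -0.22, "Gadget": -1.80, "Rocket": -3.10}

def token_confidence(logprobs: dict[str, float]) -> tuple[str, float, list[str]]:
    """Return the model's top recall, its probability, and the alternatives."""
    ranked = sorted(logprobs.items(), key=lambda kv: kv[1], reverse=True)
    top_token, top_lp = ranked[0]
    alternatives = [tok for tok, _ in ranked[1:]]
    return top_token, math.exp(top_lp), alternatives

token, prob, alts = token_confidence(top_logprobs)
print(token, alts)  # a low top probability here flags a parametric gap
```

If the model recalls the right token with high probability, the gap is grounded (the live answer is losing the citation); if recall is weak, the gap is parametric.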
Compares grounded responses against the source page. Surfaces what survived, what was lost, what was distorted. Line-level scoring against the actual claim.
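The survived/lost/distorted framing can be illustrated with a deliberately crude overlap score. Real grounding comparison is model-assisted and claim-aware; this sketch only shows the shape of the check, with invented example text:

```python
def claim_survival(source: str, answer_claim: str) -> float:
    """Crude fidelity score: fraction of the claim's content words
    that appear anywhere in the source page."""
    src_words = set(source.lower().split())
    claim_words = [w for w in answer_claim.lower().split() if len(w) > 3]
    if not claim_words:
        return 0.0
    kept = sum(1 for w in claim_words if w in src_words)
    return kept / len(claim_words)

source_page = "auxy arc measures selection rate across generative engines"
survived = claim_survival(source_page, "measures selection rate")    # grounded
distorted = claim_survival(source_page, "guarantees first position")  # not in source
print(survived, distorted)
```

A low score on a claim the model attributed to your page is the "distorted" case worth fixing first.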
Auto-generated query taxonomy that tags every query with intent, funnel stage, and content type so optimization effort lines up with the audience you actually want.
Iteratively refines the specific lines a model is grounding on. Each candidate edit is re-ranked. The version with the highest measured lift wins.
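The re-ranking step reduces to a measured comparison: score each candidate rewrite, keep the one with the highest lift over the baseline. A sketch with invented scores; in practice each candidate is re-probed against the models:

```python
def pick_best_edit(baseline_score: float, candidates: dict[str, float]) -> tuple[str, float]:
    """Return the candidate line with the highest measured lift over the
    baseline, or keep the original if nothing improves on it."""
    best_line, best_score = max(candidates.items(), key=lambda kv: kv[1])
    lift = best_score - baseline_score
    return (best_line, lift) if lift > 0 else ("<keep original>", 0.0)

candidates = {
    "Acme: the CRM for small teams": 0.41,
    "Acme is a lightweight CRM built for teams under 20 seats": 0.58,
    "Try Acme today": 0.33,
}
line, lift = pick_best_edit(0.40, candidates)
print(line)  # the highest-lift rewrite wins
```

Guarding on positive lift matters: a rewrite that scores below the baseline should never ship just because it was the best of a bad batch.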
Full-page rewrites with multi-line edit hypotheses. Cross-run insights compound: lessons from one page improve recommendations on the next.
Synthesizes probes, GSC traffic, Treewalker, and prior optimizer runs into a strategic report: strengths, weaknesses, gaps, and prioritized optimization directions.
Real-time summary cards (mention share, citation share, top entities, top domains) plus print-ready HTML reports for client and board review.
Under the hood
Auxy ARC is multi-model by design. Probes run in parallel across Gemini (Flash, Pro), OpenAI GPT (4 / 5 variants), and Anthropic Claude (Sonnet, Opus). Each surface is treated as a first-class measurement target with its own grounding behavior and citation conventions.
The platform's working method comes from interpretability research: rather than guessing how a model decides, we probe it and measure the answer. Treewalker walks the logprobs token tree. Page Grounding reconciles what the model said against what the source contained. Veracity scores fidelity per claim. The recommendations are evidence-led rather than folk theory.
Data lives in PostgreSQL, scoped per Property. Multi-property accounts, role-based access (Owner, Editor, Viewer), team membership, and an audit-friendly schema are all built in.
Tiers
The free Demo tier gives you enough to evaluate Auxy ARC on a real domain. Paid tiers unlock the citation-mining and optimizer volume you need to run a continuous program.
Demo
50 citation credits / month. 5 optimizer runs. Single property. For evaluating the platform on a real domain before committing.
Starter
1,000 citation credits / month. 100 optimizer runs. For solo SEOs and small brands running their first GEO program.
Pro
5,000 citation credits / month. 500 optimizer runs. Multiple properties, multiple seats, GSC integration, advanced optimizer modules.
Client
10,000+ citation credits / month. 1,000+ optimizer runs. Custom limits, full team access, priority support. Book a call for pricing.
Sign in with Google to start a property on the Demo tier, or book a 30-minute call.