Actions Workflow

What

Actions move through four statuses:

  • To Do – just created
  • In Progress – currently being worked on
  • Done – completed
  • Rejected – declined with a reason

You can manage them in three views:

  • Kanban view – drag cards between columns
  • List view – a filterable table
  • Analytics view – a dashboard with metrics

Each action can be commented on, edited, and linked to specific tasks.

How

In Kanban view, you drag an action from To Do to In Progress when you start working on it. When you finish, move it to Done and specify the completion date. If an action no longer makes sense, you can reject it by selecting a reason (not relevant, already done, not feasible, etc.). The Analytics view shows how many actions are open, how many are completed, the completion rate, and the distribution by priority and type.
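
For readers who want a concrete picture of this workflow, here is a minimal sketch of the status model in Python. The status names and rejection reasons come from this section; the transition rules, field names, and class itself are illustrative assumptions, not the platform's actual schema.

  from dataclasses import dataclass
  from datetime import date

  # Statuses described above; the allowed transitions are an illustrative assumption.
  ALLOWED = {
      "To Do": {"In Progress", "Rejected"},
      "In Progress": {"Done", "Rejected"},
  }

  @dataclass
  class Action:
      title: str
      status: str = "To Do"
      completed_on: date | None = None      # set when the action is moved to Done
      rejection_reason: str | None = None   # e.g. "not relevant", "already done"

      def move(self, new_status: str, **info) -> None:
          if new_status not in ALLOWED.get(self.status, set()):
              raise ValueError(f"cannot move from {self.status} to {new_status}")
          self.status = new_status
          if new_status == "Done":
              self.completed_on = info.get("completed_on", date.today())
          if new_status == "Rejected":
              self.rejection_reason = info.get("reason", "not specified")

  # Example: start an action, then complete it with a completion date.
  a = Action("Add schema.org markup to product pages")
  a.move("In Progress")
  a.move("Done", completed_on=date(2025, 1, 31))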

Why

A structured workflow ensures that actions are not forgotten. Tracking completion allows you to measure progress and understand whether the actions taken are working. If you complete 10 actions to improve the Authority KPI and then see the score increase, you know the work paid off. If it does not increase, you understand that you need to change your approach.

Action Generation

What

There are three generation modes.

  • Auto mode: the platform analyzes everything (KPIs, competitors, Geo Audit) and generates a complete action plan.
  • KPI mode: you focus on specific KPIs that are underperforming and generate targeted actions to improve them.
  • Geo Audit mode: you take the technical issues identified in the audit and generate actions to resolve them.

You can also filter by country.

How

Click “Generate”, choose the mode, and optionally select specific KPIs or countries. The platform starts a background job that analyzes the data. After 1–2 minutes, you receive a notification and the generated actions appear in the dashboard.

Each action includes (see the sketch after this list):

  • a title
  • a detailed description
  • the source (which data triggered it)
  • an estimated impact (high / medium / low)
  • a timeframe (short / medium / long term)
  • a type (onsite / offsite)
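
As a reference, these fields could be represented roughly as follows; the field names and example values are illustrative assumptions, not the platform's actual data model.

  from dataclasses import dataclass

  @dataclass
  class GeneratedAction:
      title: str
      description: str
      source: str      # which data triggered it: e.g. "KPI", "GEO_AUDIT", "COMPETITIVE"
      impact: str      # "high" | "medium" | "low"
      timeframe: str   # "short" | "medium" | "long"
      type: str        # "onsite" | "offsite"

  example = GeneratedAction(
      title="Publish 3 case studies about recent innovations",
      description="The Innovation KPI is low and recent R&D work is not covered on the site.",
      source="KPI",
      impact="high",
      timeframe="short",
      type="onsite",
  )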

Why

Manually generating an action plan would require hours of analysis. Automated generation does it in minutes, and most importantly it is based on objective data rather than intuition.

You can also regenerate periodically to adapt to changes: if competitors change strategy or new issues emerge, the actions update accordingly.

Prioritization System

What

Action priority is automatically calculated based on the number and weight of sources that confirm the issue.

Each source has a weight:

  • COMPETITIVE = 3
  • GEO_AUDIT = 2
  • KPI = 1

The system also applies a multi-signal bonus: if multiple sources confirm the same issue, the priority increases.

How

The priority calculation works as follows (a code sketch follows the lists below):

  1. Sum the weights of the involved sources.
  2. Add a bonus for each additional source (+1.5 points).
  3. Compare the total score against dynamic thresholds.

Priority levels:

  • HIGH: all 3/3 sources, or 2/3 with COMPETITIVE involved
  • MEDIUM: 2/3 sources without COMPETITIVE, or only COMPETITIVE
  • LOW: only GEO_AUDIT or only KPI
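
A minimal sketch of this scoring logic, in Python, is shown below. The weights, the +1.5 multi-signal bonus, and the HIGH/MEDIUM/LOW rules come from this section; the function names are illustrative, and the dynamic thresholds mentioned above are simplified here into the source-composition rules.

  # Source weights and the multi-signal bonus as described above.
  SOURCE_WEIGHTS = {"COMPETITIVE": 3, "GEO_AUDIT": 2, "KPI": 1}
  MULTI_SIGNAL_BONUS = 1.5  # added for each source beyond the first

  def priority_score(sources: set[str]) -> float:
      base = sum(SOURCE_WEIGHTS[s] for s in sources)
      bonus = MULTI_SIGNAL_BONUS * max(len(sources) - 1, 0)
      return base + bonus

  def priority_level(sources: set[str]) -> str:
      if len(sources) == 3 or (len(sources) == 2 and "COMPETITIVE" in sources):
          return "HIGH"
      if len(sources) == 2 or sources == {"COMPETITIVE"}:
          return "MEDIUM"
      return "LOW"  # only GEO_AUDIT or only KPI

  # Example: an issue confirmed by COMPETITIVE and GEO_AUDIT scores
  # 3 + 2 + 1.5 = 6.5 and is classified as HIGH.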

Why

This prioritization system ensures that the most critical actions—those confirmed by multiple sources or by authoritative sources like competitive tracking—are highlighted first.

It prevents minor issues from overshadowing major problems, and helps teams focus efforts where the impact will be greatest.

The system is also extensible: if new sources are added in the future (for example social media monitoring), they can simply be registered in the registry with their corresponding weight.

Actions Overview

What

Actions are intelligent to-dos automatically created by the platform. It analyzes all available data — KPI performance, competitive positioning, and Geo Audit results — and when it detects problems or opportunities, it generates specific recommendations.

Each action tells you:

  • what to do (e.g. “Create content about [topic]”)
  • why (e.g. “Authority KPI dropped by 20%”)
  • the expected impact
  • the estimated time required
  • the priority level

How

When generating actions, you choose a strategy:

  • Auto: analyzes everything and suggests the best actions
  • KPI-focused: focuses on specific KPIs that are underperforming
  • Geo Audit: resolves technical issues identified in the audit

The platform uses AI models to analyze the data, identify patterns, and formulate concrete recommendations. The actions are then prioritized based on potential impact and required effort.

Why

Without actions, you would have a lot of data but no clear starting point. Actions translate data into concrete steps.

Instead of saying “the Innovation KPI is low,” the platform might suggest:

  • Publish 3 case studies about recent innovations
  • Mention technology partnerships
  • Update the R&D section of the website

Actions are therefore concrete and measurable.

Actions Knowledge Graph

Behind the scenes, the platform uses a Knowledge Graph to track the relationships between scores, reasons, and actions.

Every time a Brain is executed, the following are saved as nodes in the graph:

  • scores (SCORE)
  • positive/negative reasons (REASON_POSITIVE / REASON_NEGATIVE)
  • referenced URLs (SOURCE_URL)

Generated actions (ACTION) are then connected to the negative reasons they aim to resolve through ADDRESSES relationships.

This architecture allows the system to:

  • avoid duplicates (it does not generate actions for issues already covered)
  • track effectiveness (when an action is completed, an IMPROVED relationship is created toward the new score)
  • perform cross-brand pattern recognition (identify solutions that worked for similar brands)

In practice, the system continuously learns which actions produce real improvements and becomes more effective over time.
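
To make the structure concrete, here is a rough sketch of the node and relationship types described above. The labels SCORE, REASON_NEGATIVE, SOURCE_URL, ACTION, ADDRESSES, and IMPROVED come from this section; the plain-tuple storage, identifiers, and example values are purely illustrative and are not the platform's actual graph database.

  # Node labels and relationship types taken from the description above.
  nodes = [
      ("score_run_12", "SCORE", {"kpi": "Authority", "value": 62}),
      ("reason_neg_1", "REASON_NEGATIVE", {"text": "Few authoritative citations"}),
      ("url_1", "SOURCE_URL", {"url": "https://example.com/press"}),
      ("action_1", "ACTION", {"title": "Pitch expert commentary to industry media"}),
  ]

  edges = [
      ("action_1", "ADDRESSES", "reason_neg_1"),   # the action targets a negative reason
      ("action_1", "IMPROVED", "score_run_13"),    # created once a later run shows a better score
  ]

  # Duplicate avoidance sketch: skip generation when a reason is already addressed.
  addressed = {dst for (_, rel, dst) in edges if rel == "ADDRESSES"}

  def needs_action(reason_id: str) -> bool:
      return reason_id not in addressed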

Geo Audit Overview

What

The Geo Audit is an automated crawler that analyzes a brand’s website from an AI perspective. It scans all accessible pages, verifies the technical configuration, and evaluates how easy it is for an AI system to understand what the brand is about, find key information, and correctly cite it in generated answers.

The output includes:

  • an overall score (0–100)
  • detailed scores for three main areas
  • a complete list of identified issues

How

When a Geo Audit is launched, the system runs several analyses in parallel:

1) Discoverability
Checks critical technical files such as sitemap.xml, robots.txt, llm.txt, schema.org structured data, Open Graph meta tags, and other SEO elements that help AI systems index the website.

2) Navigability
Autonomous LLM agents start from the homepage and attempt to navigate to key pages (About, Contact, FAQ, Products) by following links and menus. The score is based on how many agents successfully complete these navigation paths.

3) Content Clarity
Extracts all text from scanned pages, converts it into vector embeddings, and compares it with the brand’s defining variables (industry, categories, positioning, heritage) to measure how well the website communicates the brand identity.

The final score is a weighted average of these three KPIs.
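
As an illustration of how the overall score could be combined, consider the small sketch below. The three area names come from this section; the weights and area scores are made-up placeholders, since the actual weighting is defined by the platform.

  # Illustrative weights only; the real weighting is defined by the platform.
  weights = {"discoverability": 0.4, "navigability": 0.3, "content_clarity": 0.3}
  area_scores = {"discoverability": 72, "navigability": 55, "content_clarity": 80}  # 0-100

  overall = sum(weights[k] * area_scores[k] for k in weights)
  # 0.4*72 + 0.3*55 + 0.3*80 = 69.3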

Why

Many websites are perfect for human users but difficult for AI systems to interpret.

Examples include:

  • menus built entirely in JavaScript
  • important content presented only in images without alt text
  • lack of structured data
  • complex navigation structures

These issues can prevent AI systems from understanding and citing the brand correctly.

The Geo Audit identifies these exact problems and provides a clear roadmap to make the site “AI-friendly”, increasing the chances that the brand will be correctly referenced in answers generated by AI models.

Competitive Metrics

What

Three key indicators measure visibility in AI-generated answers:

  • Visibility Share – the brand’s share of total visibility across all rankings, weighted by position, relative to all brands.
  • Average Ranking – the average position at which the brand is cited when it appears.
  • Citations – the percentage of responses in which the brand is mentioned.

Additionally, Trend shows the direction over time (improving or declining).

How

For each question within every Intent, the ranking is stored: the system identifies if and where the brand is cited, and the metrics are then calculated automatically per Intent and overall.

When viewing the results, you can filter by AI model and country. The charts show how these metrics evolve over time, highlighting significant changes compared to the previous run.

Why

These metrics provide a fast and quantitative overview of the brand’s competitive position.

  • Increases in Visibility Share or improvements in Average Ranking indicate that interventions (content, SEO, site structure) are increasing the likelihood that AI systems will cite the brand.
  • Drops or unexpected variations signal opportunities or regressions that need investigation.

Metric Calculation Details

Citations
The percentage of responses in which the brand appears.

Visibility Share
(Sum of the brand’s scores across all rankings / Sum of the scores of all brands) × 100

Average Ranking
Average position of the brand in the rankings where it appears.

Additional notes

  • The brand score within a ranking is calculated as:
    [ Total number of brands in the ranking − Brand position + 1 ]
  • If the brand does not appear in a ranking, its score for that ranking is 0 (a worked sketch of these calculations follows below).
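
Putting the three definitions together, here is a worked sketch in Python. The score formula (brands in the ranking − brand position + 1, and 0 when absent) comes from the notes above; the brand names, data, and function names are illustrative.

  # Each ranking is the ordered list of brands cited in one AI response.
  rankings = [
      ["BrandA", "OurBrand", "BrandB"],            # OurBrand in position 2 of 3
      ["BrandB", "BrandC", "BrandA", "OurBrand"],  # OurBrand in position 4 of 4
      ["BrandA", "BrandC"],                        # OurBrand absent -> score 0
  ]

  def score(ranking: list[str], brand: str) -> int:
      # total brands in the ranking - brand position + 1; 0 if the brand is absent
      if brand not in ranking:
          return 0
      return len(ranking) - ranking.index(brand)   # index is 0-based, so this equals the formula

  brand = "OurBrand"
  brand_scores = [score(r, brand) for r in rankings]             # [2, 1, 0]
  total_scores = sum(score(r, b) for r in rankings for b in r)   # sum of scores of all cited brands

  citations = 100 * sum(1 for r in rankings if brand in r) / len(rankings)  # 2 of 3 responses = 66.7
  visibility_share = 100 * sum(brand_scores) / total_scores                 # 100 * 3 / 19 = 15.8
  positions = [r.index(brand) + 1 for r in rankings if brand in r]
  average_ranking = sum(positions) / len(positions)                         # (2 + 4) / 2 = 3.0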

Search Intents

What

An Intent represents a search scenario or informational need for the target brand. Instead of tracking individual keywords like in traditional SEO, this system defines broader contexts.

Examples include:

  • “Searching for luxury hotels in Milan”
  • “Comparing CRM software for SMEs”

For each Intent, the AI automatically generates 10 different questions that a real user might ask.

How

When creating an Intent, you specify a name and description. The platform then generates realistic questions related to that Intent.

These questions are executed through the configured AI models, and the platform collects the responses. By analyzing these responses, it builds visibility rankings.

Why

Different Intents reveal different competitive dynamics.

A brand may dominate certain Intents while being weaker in others. By tracking multiple Intents, you can understand where the brand is strong and where positioning needs improvement.

Additionally, this module enables market analysis for expansion into new markets or categories, identifying which competitors are strongest in those contexts.

Brand Spoofing

What

A fundamental requirement for this module to work correctly is that questions must not be influenced by the brand being tracked.

For this reason, the name of any brand must never appear in the Intents; otherwise, the results risk becoming biased.

How

Intent generation is designed so that the reference brand is never passed to the system, nor is brand-related information such as:

  • industry
  • product categories
  • positioning

This ensures that the generated questions remain realistic while keeping the context neutral.
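
As an illustration, a neutrality check of this kind could look like the hypothetical helper below; it is not part of the platform, and simply verifies that no tracked brand name leaks into a generated question.

  # Hypothetical guard: reject generated questions that mention a tracked brand.
  TRACKED_BRANDS = {"OurBrand", "CompetitorX"}  # placeholder names

  def is_neutral(question: str, brands: set[str] = TRACKED_BRANDS) -> bool:
      lowered = question.lower()
      return not any(b.lower() in lowered for b in brands)

  assert is_neutral("What are the best running shoe brands for marathons?")
  assert not is_neutral("Is OurBrand better than CompetitorX for trail running?")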

Why

Avoiding brand spoofing is essential for obtaining reliable results.

If the brand name appears in the questions, AI models are much more likely to mention it, even when doing so would not be natural. This would distort visibility metrics and competitive positioning analysis.

Keeping questions neutral ensures that the results reflect the true perception of the brand by AI systems.

Competitive Tracking Overview

What

Competitive Tracking works by creating “Intents”, meaning search scenarios relevant to your business, and analyzing the brand’s positioning by simulating user searches.

For each Intent, the platform automatically generates a set of questions that a potential customer might ask, submits them to different AI models, and analyzes the responses to see:

  • which brands are mentioned
  • in what order they appear
  • how often they are cited

The result is a ranking that shows your position relative to competitors.

How

An Intent is created by defining a theme (e.g., “Buying running shoes”). The platform generates related questions, which are sent to the configured AI models during a run.

The responses are structured as rankings.

By aggregating all the rankings obtained, the system calculates key metrics such as:

  • Visibility Share – how much visibility your brand has compared to competitors
  • Average Ranking – the average position in which your brand appears
  • Citations – how often your brand is mentioned

Rankings can be viewed:

  • per question
  • per Intent
  • overall

It is also possible to filter results by AI model and country to identify significant differences in visibility.
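
In practice, the per-question, per-Intent, and overall views are different groupings of the same stored rankings, combined with these filters. A small sketch follows; the record fields (intent, model, country) are assumed names used only for illustration, and the metric calculations themselves are the ones shown in the Competitive Metrics section.

  # Each stored result records which Intent and question produced it, on which model and country.
  results = [
      {"intent": "Buying running shoes", "model": "ChatGPT", "country": "IT",
       "ranking": ["BrandA", "OurBrand"]},
      {"intent": "Buying running shoes", "model": "ChatGPT", "country": "US",
       "ranking": ["OurBrand", "BrandB", "BrandA"]},
  ]

  def filtered(results, model=None, country=None, intent=None):
      return [r for r in results
              if (model is None or r["model"] == model)
              and (country is None or r["country"] == country)
              and (intent is None or r["intent"] == intent)]

  # Overall view: no filters. Per-Intent view: pass intent="Buying running shoes".
  italian_rankings = [r["ranking"] for r in filtered(results, country="IT")]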

Why

If a user asks ChatGPT “what are the best running shoe brands” and your brand does not appear while competitors do, this represents a loss of visibility.

Competitive Tracking shows exactly where the brand is losing ground and where it is strong, enabling targeted interventions.

This is not traditional SEO. Instead, it focuses on understanding how AI perceives the brand compared to competitors when answering business-relevant questions.

Ranking Size

The number of brands cited in rankings varies depending on the responses generated by AI models.

There is no fixed number of competitors because the system does not force the model to respond in a standardized format.

However, the typical range is between 3 and 12 brands per question.

Brain Insights

What

Insights are automatic observations generated by the platform during the analysis of Brains. In addition to the textual explanation, each insight is accompanied by the sources used by the models to extract the information.

Insights serve as the foundation for generating Actions, because they automatically identify the brand’s strengths and weaknesses and highlight the areas that require intervention.

How

During the KPI analysis phase, the system detects the reasons why a KPI is evaluated in a certain way.

The response from the analysis call includes:

  • a textual explanation of the reason
  • a list of sources that were consulted

These sources are then displayed in the Insights section, separating them into:

  • sources of positive insights (strengths)
  • sources of negative insights (weaknesses)

The score for the selected run is also highlighted.

By opening the details of a specific KPI, you can view the list of insights associated with that KPI, including their sources, allowing you to clearly understand what worked and what did not during that run.
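
Conceptually, the information shown in this section can be pictured as the structure below. The split into positive and negative insights, each with its sources and the run score, comes from this section; the field names, URLs, and values are illustrative assumptions.

  # Illustrative shape of the insights attached to one KPI for one run.
  kpi_insights = {
      "kpi": "Authority",
      "run_score": 62,
      "positive": [
          {"reason": "Frequently cited in industry roundups",
           "sources": ["https://example.com/roundup"]},
      ],
      "negative": [
          {"reason": "Few mentions in recent expert articles",
           "sources": ["https://example.com/blog"]},
      ],
  }

  strengths = [i["reason"] for i in kpi_insights["positive"]]
  weaknesses = [i["reason"] for i in kpi_insights["negative"]]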

Why

Knowing that a KPI score increased or decreased is not enough. It is crucial to understand why it changed.

Insights provide this explanation by highlighting the factors that contributed to the result.

This makes it possible to precisely identify strengths to reinforce and weaknesses to address, enabling the next actions to be more targeted and effective.

Brain Performance

What

This section displays the results of Brain analyses through charts that highlight trends and variations over time. You can view and compare results by filtering for market, provider, AI model, and KPI, making it easier to identify positive and negative trends.

How

Depending on the selected filters, the data is displayed in a chart where:

  • the x-axis represents time (the run execution date)
  • the y-axis represents the KPI score

Each point on the chart represents the result of a run. By clicking on a point, you can see:

  • the score of that run
  • the variation compared to the previous run

The score is calculated as the average of all prompt scores associated with that KPI. Naturally, the value changes depending on the selected filters and is updated automatically.
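
For clarity, the averaging and the variation between runs can be sketched as follows; the dates and prompt scores are illustrative values.

  # Prompt scores collected for one KPI in two consecutive runs (illustrative values).
  runs = {
      "2025-01-01": [70, 64, 68],
      "2025-02-01": [74, 69, 73],
  }

  kpi_scores = {day: sum(scores) / len(scores) for day, scores in runs.items()}
  # {"2025-01-01": 67.3, "2025-02-01": 72.0}

  days = sorted(kpi_scores)
  delta = kpi_scores[days[-1]] - kpi_scores[days[-2]]  # variation vs the previous run (about +4.7)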

Brain Overview

What

A Brain is essentially a container that groups together related KPIs and Prompts. It can be thought of as a structured questionnaire: you define what you want to measure (the KPIs) and which questions to ask (the Prompts), and the platform automatically executes everything across the active AI models.

Each Brain can be one of three types:

  • ON-SITE – focused on pages from the brand’s website
  • OFF-SITE – excludes the brand’s own website from the search
  • GENERIC – no constraints applied

How

When creating a Brain, you must define a set of KPIs to track and the Prompts used to query the models.

  • The same Prompt can be linked to multiple KPIs.
  • A KPI can be defined by multiple Prompts.
  • A Brain can be associated with one or more brands.

You can configure the analysis to run automatically at regular intervals or start it manually.

During an analysis:

  1. Each Prompt is sent to the configured AI models.
  2. The responses are collected.
  3. The system analyzes the responses to calculate KPI scores.

These scores are displayed in charts showing trends over time, allowing you to monitor brand perception and identify trends or areas for improvement, with the option to filter by model or country.
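
The relationships described above (a Brain groups KPIs and Prompts, with many-to-many links between them) could be pictured roughly as in the sketch below. The names and structure are assumptions for illustration, and the call to the AI model is a placeholder.

  # A Brain groups KPIs and Prompts; a Prompt can feed several KPIs and vice versa.
  brain = {
      "name": "Brand Perception",
      "type": "OFF-SITE",
      "kpis": ["Authority", "Innovation"],
      "prompts": [
          {"text": "What is this brand known for?", "kpis": ["Authority", "Innovation"]},
          {"text": "How innovative are its recent products?", "kpis": ["Innovation"]},
      ],
  }

  def ask_model(model: str, prompt: str) -> str:
      ...  # placeholder for the call to a configured AI model

  def run(brain: dict, models: list[str]) -> dict:
      # Steps 1-2: send each Prompt to each configured model and collect the responses.
      # Step 3 (analyzing the responses to produce one score per KPI) is omitted here.
      return {(m, p["text"]): ask_model(m, p["text"])
              for m in models for p in brain["prompts"]}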

Why

Without a Brain, you would need to manually query each AI model, copy the responses, read them, and evaluate them one by one.

Brains automate this entire process and make it possible to obtain comparable data over time. This allows you to see:

  • whether brand perception is improving or declining
  • which aspects are working well
  • which areas need improvement

All in a systematic and repeatable way.

Creating a Brain

When creating a Brain and its components, it is important to add detailed descriptions to provide context and guide the interpretation of results.

In particular:

  • The KPI Text Description should clearly specify what is being measured.
  • The KPI Evaluation Description should explain how the KPI should be evaluated.

Additionally, brand-generated parameters can be used within Prompts. This ensures that the AI already has key information about the brand during the analysis, improving the quality and relevance of the results.
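
For example, a Prompt that uses brand-generated parameters might be written as a template along these lines; the placeholder names and syntax are illustrative, not the platform's actual variable format.

  # Illustrative prompt template with brand-generated parameters.
  prompt_template = (
      "Considering the {industry} sector and the positioning '{positioning}', "
      "how well known is {brand_name} for {kpi_topic}?"
  )

  brand_params = {
      "brand_name": "OurBrand",
      "industry": "sportswear",
      "positioning": "premium performance running gear",
      "kpi_topic": "product innovation",
  }

  prompt = prompt_template.format(**brand_params)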

Platform Overview

This guide explains how to use the platform and the logic behind its features.

Cards with a green background and a PC icon indicate sections containing technical implementation details, mainly intended for Searchbridge administrators and the development/maintenance team.

Cards with a megaphone icon contain feature descriptions and the platform’s business logic, which are useful for all users.

Modules

Brains

Brains are the analytical core of the platform. They are evaluation frameworks used to measure specific aspects of brand perception.

Each Brain consists of:

  • KPIs – the indicators you want to measure
  • Prompts – the questions posed to AI models

When a Brain is activated, the platform systematically queries multiple AI models and evaluates the responses according to the defined criteria.

Competitive Tracking

Competitive Tracking shows the brand’s positioning relative to its competitors when AI systems answer industry-related questions.

This is not traditional SEO. The module measures:

  • how often a brand is cited
  • its position within AI responses
  • how much visibility space it occupies compared to other brands in the market

Geo Audit

Geo Audit is an automated crawler that analyzes a brand’s website from an AI perspective.

It scans pages, checks the technical configuration, and evaluates how easy it is for an AI system to:

  • understand what the brand is about
  • find key information
  • correctly cite the brand

The output includes an overall score and a detailed list of what works and what does not.

Actions

Actions are concrete recommendations automatically generated by the platform after analyzing all collected data.

The system suggests specific actions to take whenever:

  • KPIs decline
  • competitors are recommended more often
  • the Geo Audit detects technical issues

These are not generic suggestions. Each action specifies:

  • what to do
  • why it should be done
  • the expected impact
  • the estimated timeframe