What
Actions have statuses: To Do (just created), In Progress (you are working on them), Done (completed), Rejected (declined with a reason). You can manage them in Kanban view (drag cards between columns), List view (filterable table), or Analytics view (dashboard with metrics). Each action can be commented on, edited, and linked to specific tasks.
How
In Kanban view, you drag an action from To Do to In Progress when you start working on it. When you finish, move it to Done and specify the completion date. If an action no longer makes sense, you can reject it by selecting a reason (not relevant, already done, not feasible, etc.). The Analytics view shows how many actions are open, how many are completed, the completion rate, and the distribution by priority and type.
Why
A structured workflow ensures that actions are not forgotten. Tracking completion allows you to measure progress and understand whether the actions taken are working. If you complete 10 actions to improve the Authority KPI and then see the score increase, you know the work paid off. If it does not increase, you understand that you need to change your approach.
What
There are three generation modes.
You can also filter by country.
How
Click “Generate”, choose the mode, and optionally select specific KPIs or countries. The platform starts a background job that analyzes the data. After 1–2 minutes, you receive a notification and the generated actions appear in the dashboard.
Each action includes:
Why
Manually generating an action plan would require hours of analysis. Automated generation does it in minutes, and most importantly it is based on objective data rather than intuition.
You can also regenerate periodically to adapt to changes: if competitors change strategy or new issues emerge, the actions update accordingly.
What
Action priority is automatically calculated based on the number and weight of sources that confirm the issue.
Each source has a weight:
The system also applies a multi-signal bonus: if multiple sources confirm the same issue, the priority increases.
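The weighted, multi-signal scheme described above can be sketched as follows. Note that the specific source names, weights, bonus factor, and level thresholds here are illustrative assumptions, not the platform's actual values.

```python
# Illustrative sketch of source-weighted priority scoring.
# All weights, the bonus factor, and the thresholds are assumptions.
SOURCE_WEIGHTS = {
    "competitive_tracking": 3.0,  # assumed: authoritative source, highest weight
    "geo_audit": 2.0,
    "kpi_trend": 1.5,
}
MULTI_SIGNAL_BONUS = 1.25  # assumed multiplier when more than one source agrees

def priority_score(confirming_sources: list[str]) -> float:
    """Sum the weights of all sources confirming an issue,
    then apply a bonus if multiple distinct sources agree."""
    score = sum(SOURCE_WEIGHTS.get(s, 1.0) for s in confirming_sources)
    if len(set(confirming_sources)) > 1:
        score *= MULTI_SIGNAL_BONUS
    return score

def priority_level(score: float) -> str:
    """Map a raw score to a priority level (thresholds are illustrative)."""
    if score >= 5.0:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"
```

With these placeholder weights, an issue confirmed only by the Geo Audit scores 2.0 ("low"), while one confirmed by both competitive tracking and the Geo Audit scores 6.25 ("high"), illustrating how the multi-signal bonus surfaces corroborated issues first.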
How
The priority calculation works as follows:
Priority levels:
Why
This prioritization system ensures that the most critical actions—those confirmed by multiple sources or by authoritative sources like competitive tracking—are highlighted first.
It prevents minor issues from overshadowing major problems, and helps teams focus efforts where the impact will be greatest.
The system is also extensible: if new sources are added in the future (for example social media monitoring), they can simply be registered in the registry with their corresponding weight.
What
Actions are intelligent to-dos automatically created by the platform. It analyzes all available data — KPI performance, competitive positioning, and Geo Audit results — and when it detects problems or opportunities, it generates specific recommendations.
Each action tells you:
How
When generating actions, you choose a strategy:
The platform uses AI models to analyze the data, identify patterns, and formulate concrete recommendations. The actions are then prioritized based on potential impact and required effort.
Why
Without actions, you would have a lot of data but no clear starting point. Actions translate data into concrete steps.
Instead of saying “the Innovation KPI is low,” the platform might suggest:
Each action is therefore concrete and measurable.
Behind the scenes, the platform uses a Knowledge Graph to track the relationships between scores, reasons, and actions.
Every time a Brain is executed:
are saved as nodes in the graph.
Generated actions (ACTION) are then connected to the negative reasons they aim to resolve through ADDRESSES relationships.
This architecture allows the system to:
In practice, the system continuously learns which actions produce real improvements and becomes more effective over time.
What
The Geo Audit is an automated crawler that analyzes a brand’s website from an AI perspective. It scans all accessible pages, verifies the technical configuration, and evaluates how easy it is for an AI system to understand what the brand is about, find key information, and correctly cite it in generated answers.
The output includes:
How
When a Geo Audit is launched, the system runs several analyses in parallel:
1) Discoverability
Checks critical technical files such as sitemap.xml, robots.txt, llm.txt, schema.org structured data, Open Graph meta tags, and other SEO elements that help AI systems index the website.
2) Navigability
Autonomous LLM agents start from the homepage and attempt to navigate to key pages (About, Contact, FAQ, Products) by following links and menus. The score is based on how many agents successfully complete these navigation paths.
3) Content Clarity
Extracts all text from scanned pages, converts it into vector embeddings, and compares it with the brand’s defining variables (industry, categories, positioning, heritage) to measure how well the website communicates the brand identity.
The final score is a weighted average of these three KPIs.
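The final-score composition can be sketched as a simple weighted average of the three KPIs. The weights below are placeholders, since the actual weighting is not documented here.

```python
# Illustrative weighted average of the three Geo Audit KPIs (each scored 0-100).
# The weights are assumptions; the platform's real weights may differ.
WEIGHTS = {"discoverability": 0.4, "navigability": 0.3, "content_clarity": 0.3}

def geo_audit_score(kpis: dict[str, float]) -> float:
    """Combine the per-KPI scores into one overall Geo Audit score."""
    total = sum(WEIGHTS[name] * kpis[name] for name in WEIGHTS)
    return round(total, 1)
```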
Why
Many websites are perfect for human users but difficult for AI systems to interpret.
Examples include:
These issues can prevent AI systems from understanding and citing the brand correctly.
The Geo Audit identifies these exact problems and provides a clear roadmap to make the site “AI-friendly”, increasing the chances that the brand will be correctly referenced in answers generated by AI models.
What
Three key indicators measure visibility in AI-generated answers:
Additionally, Trend shows the direction over time (improving or declining).
How
For each question within every Intent, the ranking is stored: the system identifies whether and where the brand is cited, and the metrics are then calculated automatically per Intent and overall.
During visualization, it is possible to filter by AI model and country. The charts show how these metrics evolve over time, highlighting significant changes compared to the previous run.
Why
These metrics provide a fast and quantitative overview of the brand’s competitive position.
Citations
Percentage of responses in which the brand appears.
Visibility Share
(Sum of the brand’s scores across all rankings / Sum of the scores of all brands) × 100
Average Ranking
Average position of the brand in the rankings where it appears.
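The three definitions above can be sketched directly in code. The representation of a ranking (an ordered list of brand/score pairs per response, best first) is an assumption made for illustration.

```python
# Each response's ranking is modeled as an ordered list of (brand, score)
# pairs, best first. This representation is an assumption.

def citations_pct(rankings, brand):
    """Percentage of responses in which the brand appears."""
    hits = sum(1 for r in rankings if any(b == brand for b, _ in r))
    return 100.0 * hits / len(rankings)

def visibility_share(rankings, brand):
    """(Sum of the brand's scores / sum of all brands' scores) x 100."""
    brand_total = sum(s for r in rankings for b, s in r if b == brand)
    all_total = sum(s for r in rankings for _, s in r)
    return 100.0 * brand_total / all_total

def average_ranking(rankings, brand):
    """Average 1-based position in the rankings where the brand appears."""
    positions = [i + 1 for r in rankings
                 for i, (b, _) in enumerate(r) if b == brand]
    return sum(positions) / len(positions) if positions else None
```

For example, a brand that appears in every response but usually in second place would show 100% Citations and an Average Ranking near 2, while its Visibility Share reflects how much scoring weight it captures relative to all cited brands.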
Additional notes
What
An Intent represents a search scenario or informational need for the target brand. Instead of tracking individual keywords like in traditional SEO, this system defines broader contexts.
Examples include:
For each Intent, the AI automatically generates 10 different questions that a real user might ask.
How
When creating an Intent, you specify a name and description. The platform then generates realistic questions related to that Intent.
These questions are executed through the configured AI models, and the platform collects the responses. By analyzing these responses, it builds visibility rankings.
Why
Different Intents reveal different competitive dynamics.
A brand may dominate certain Intents while being weaker in others. By tracking multiple Intents, you can understand where the brand is strong and where positioning needs improvement.
Additionally, this module enables market analysis for expansion into new markets or categories, identifying which competitors are strongest in those contexts.
What
A fundamental requirement for this module to work correctly is that questions must not be influenced by the brand being tracked.
For this reason, a brand name must never appear in the Intents; otherwise, the results risk being biased.
How
Intent generation is designed so that the reference brand is never passed into the system, nor are generic pieces of information such as:
This ensures that the generated questions remain realistic while keeping the context neutral.
Why
Keeping the brand out of the questions is essential for obtaining reliable results.
If the brand name appears in the questions, AI models are much more likely to mention it, even when doing so would not be natural. This would distort visibility metrics and competitive positioning analysis.
Keeping questions neutral ensures that the results reflect the true perception of the brand by AI systems.
What
Competitive Tracking works by creating “Intents”, meaning search scenarios relevant to your business, and analyzing the brand’s positioning by simulating user searches.
For each Intent, the platform automatically generates a set of questions that a potential customer might ask, submits them to different AI models, and analyzes the responses to see:
The result is a ranking that shows your position relative to competitors.
How
An Intent is created by defining a theme (e.g., “Buying running shoes”). The platform generates related questions, which are sent to the configured AI models during a run.
The responses are structured as rankings.
By aggregating all the rankings obtained, the system calculates key metrics such as:
Rankings can be viewed:
It is also possible to filter results by AI model and country to identify significant differences in visibility.
Why
If a user asks ChatGPT “What are the best running shoe brands?” and your brand does not appear while competitors do, this represents a loss of visibility.
Competitive Tracking shows exactly where the brand is losing ground and where it is strong, enabling targeted interventions.
This is not traditional SEO. Instead, it focuses on understanding how AI perceives the brand compared to competitors when answering business-relevant questions.
The number of brands cited in rankings varies depending on the responses generated by AI models.
There is no fixed number of competitors because the system does not force the model to respond in a standardized format.
However, the typical range is between 3 and 12 brands per question.
What
Insights are automatic observations generated by the platform during the analysis of Brains. In addition to the textual explanation, each insight is accompanied by the sources used by the models to extract the information.
Insights serve as the foundation for generating Actions, because they automatically identify the brand’s strengths and weaknesses and highlight the areas that require intervention.
How
During the KPI analysis phase, the system detects the reasons why a KPI is evaluated in a certain way.
The response from the analysis call includes:
These sources are then displayed in the Insights section, separating them into:
The score for the selected run is also highlighted.
By opening the details of a specific KPI, you can view the list of insights associated with that KPI, including their sources, allowing you to clearly understand what worked and what did not during that run.
Why
Knowing that a KPI score increased or decreased is not enough. It is crucial to understand why it changed.
Insights provide this explanation by highlighting the factors that contributed to the result.
This makes it possible to precisely identify strengths to reinforce and weaknesses to address, enabling the next actions to be more targeted and effective.
What
This section displays the results of Brain analyses through charts that highlight trends and variations over time. You can view and compare results by filtering for market, provider, AI model, and KPI, making it easier to identify positive and negative trends.
How
Depending on the selected filters, the data is displayed in a chart where:
Each point on the chart represents the result of a run. By clicking on a point, you can see:
The score is calculated as the average of all prompt scores associated with that KPI. Naturally, the value depends on the selected filters and is updated automatically when they change.
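That aggregation can be sketched as follows; the record fields (`kpi`, `model`, `country`, `score`) are illustrative names, not the platform's actual schema.

```python
# Average the prompt scores for one KPI, optionally filtered by AI model
# and country. The field names in the result records are assumptions.
def kpi_score(results, kpi, model=None, country=None):
    scores = [r["score"] for r in results
              if r["kpi"] == kpi
              and (model is None or r["model"] == model)
              and (country is None or r["country"] == country)]
    if not scores:
        raise ValueError("no matching prompt scores under the current filters")
    return sum(scores) / len(scores)
```

Changing a filter simply changes which prompt scores enter the average, which is why each point on the chart updates automatically when filters change.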
What
A Brain is essentially a container that groups together related KPIs and Prompts. It can be thought of as a structured questionnaire: you define what you want to measure (the KPIs) and which questions to ask (the Prompts), and the platform automatically executes everything across the active AI models.
Each Brain can be one of three types:
How
When creating a Brain, you must define a set of KPIs to track and the Prompts used to query the models.
You can configure the analysis to run automatically at regular intervals or start it manually.
During an analysis:
These scores are displayed in charts showing trends over time, allowing you to monitor brand perception and identify trends or areas for improvement, with the option to filter by model or country.
Why
Without a Brain, you would need to manually query each AI model, copy the responses, read them, and evaluate them one by one.
Brains automate this entire process and make it possible to obtain comparable data over time. This allows you to see:
All in a systematic and repeatable way.
When creating a Brain and its components, it is important to add detailed descriptions to provide context and guide the interpretation of results.
In particular:
Additionally, brand-generated parameters can be used within Prompts. This ensures that the AI already has key information about the brand during the analysis, improving the quality and relevance of the results.
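Parameter substitution of this kind could look like the sketch below; the placeholder names (`brand_name`, `industry`) and the templating syntax are invented for illustration and are not necessarily what the platform uses.

```python
# Illustrative prompt templating with brand-generated parameters.
# Placeholder names such as {brand_name} and {industry} are assumptions.
def render_prompt(template: str, params: dict[str, str]) -> str:
    """Fill brand parameters into a Prompt before it is sent to the
    AI models, so key brand context is already present in the query."""
    return template.format(**params)
```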
This guide explains how to use the platform and the logic behind its features.
Cards with a green background and a PC icon indicate sections containing technical implementation details, mainly intended for Searchbridge administrators and the development/maintenance team.
Other cards with the megaphone icon contain feature descriptions and platform business logic, which are useful for all users.
Brains are the analytical core of the platform. They are evaluation frameworks used to measure specific aspects of brand perception.
Each Brain consists of:
When a Brain is activated, the platform systematically queries multiple AI models and evaluates the responses according to the defined criteria.
Competitive Tracking shows the brand’s positioning relative to its competitors when AI systems answer industry-related questions.
This is not traditional SEO. The module measures:
Geo Audit is an automated crawler that analyzes a brand’s website from an AI perspective.
It scans pages, checks the technical configuration, and evaluates how easy it is for an AI system to:
The output includes an overall score and a detailed list of what works and what does not.
Actions are concrete recommendations automatically generated by the platform after analyzing all collected data.
If:
the system suggests specific actions to take.
These are not generic suggestions. Each action specifies: