In a digital landscape being reshaped daily by artificial intelligence, the old rules of search are being rewritten. To navigate this new terrain, we sat down with Aisha Amaira, a MarTech expert whose work lives at the fascinating intersection of customer data, technology, and marketing strategy. With a deep background in CRM technology and data platforms, Aisha brings a unique, systems-level perspective to the challenges and opportunities facing marketers today.
Throughout our conversation, we explore the monumental shift from a Google-centric world to a “search everywhere” ecosystem. Aisha breaks down why building brand authority is now about achieving a digital consensus for AI, not just collecting backlinks. We also dive into the technical nuts and bolts of preparing content for intelligent agents, the critical importance of tying every marketing action back to real revenue, and the strategic imperative to create content that remains uniquely human in an increasingly automated world.
Ashley Liddell’s phrase “search everywhere optimization” suggests a major shift beyond Google. When optimizing for platforms like TikTok or Reddit, what are the key strategic differences compared to Google, and how do you measure success on channels where traditional ranking metrics don’t apply? Please share an example of this in action.
That phrase, “search everywhere optimization,” perfectly captures the mindset we need. The fundamental difference is moving from an SEO approach to an engagement strategy. On Google, we’ve been conditioned to think about keywords and rankings. On a platform like Reddit or even in a community on LinkedIn, the goal isn’t to rank number one; it’s to become an influential, trusted part of the conversation. Success isn’t measured by a position on a SERP, but by being the source that gets cited, the brand that gets mentioned when a real user asks a question. It’s about building a presence on platforms where people write prompts five times longer than their Google searches, looking for specific, contextual answers.
Imagine a specialized software company. Instead of just targeting keywords on Google, they dedicate resources to a specific subreddit where their ideal users congregate. Their team doesn’t just drop links; they answer questions, provide genuine value, and become recognized experts. The metric for success here is share of voice and positive sentiment within that community. Over time, when an LLM is looking for a consensus on the best software in that niche, it sees dozens of authentic, positive mentions on Reddit and cites that company. That’s a win you can’t measure with a rank tracker, but it has a direct impact on visibility in the new search paradigm.
Kevin Indig emphasizes being “mentioned in the right places,” while Cindy Krum notes that AI seeks a “consensus” from multiple sources. What does a modern digital PR and brand-mention strategy look like in practice, and what specific metrics can prove it’s successfully building authority for AI systems?
This is the absolute core of the new challenge. AI systems aren’t just looking at a link; they’re looking for a consistent story told about you across the web. A modern digital PR strategy is about creating a brand message that is widely distributed, consistent, and distinctly your own. It’s no longer enough to have a great website, because that’s just one data point. To an AI, a single source isn’t a consensus. In practice, this means shifting focus from just link acquisition to earning contextual mentions on authoritative publishers, industry review sites, and relevant forums. It’s about ensuring that when your brand is mentioned, the surrounding words reflect your positioning accurately.
To prove this is working, we have to move beyond vanity metrics. The key metrics are now about quality and consistency. First, you track your share of voice in AI-generated responses for key topics. Second, you conduct sentiment analysis on your brand mentions across these platforms—are they positive and authoritative? Third, you measure the consistency of your brand messaging. Are review sites, forums, and articles all reinforcing the same core value proposition? When you see your unique data stories or expert insights being cited by LLMs and in AI Overviews, that’s the proof that you’re not just getting mentioned, but that you’re successfully building the kind of authority that these new systems trust.
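To make the first of those metrics concrete, here is a minimal sketch of measuring share of voice in AI-generated responses. The brand names, answers, and matching logic are all hypothetical illustrations; a production setup would sample real prompts at scale and use entity recognition rather than substring matching:

```python
from collections import Counter

def share_of_voice(ai_answers, brands):
    """Count which brands are mentioned across a sample of AI-generated
    answers and return each brand's share of total mentions."""
    mentions = Counter()
    for answer in ai_answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                mentions[brand] += 1
    total = sum(mentions.values())
    return {b: (mentions[b] / total if total else 0.0) for b in brands}

# Hypothetical sampled answers to key prompts in your niche
answers = [
    "For CRM analytics, AcmeCRM and DataLoop are frequently recommended.",
    "AcmeCRM is often cited for enterprise reporting.",
    "Many users prefer DataLoop for small teams.",
]
print(share_of_voice(answers, ["AcmeCRM", "DataLoop"]))
```

Tracked over time per topic, a rising share for your brand is the quantitative signal that the consensus-building work is landing.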
Duane Forrester says to “optimize for systems that read like machines,” while Andrea Volpini talks about “agentic readiness.” Beyond standard Schema markup, what are the first three practical, technical steps a business should take to make its content and product data truly machine-operable for these new systems?
This is where the future of technical SEO lies, in moving from making content readable to making it truly operable. Standard Schema is table stakes now; it’s the bare minimum. To achieve true “agentic readiness,” you have to think about how an AI agent can not only understand your offerings but act on them.
The first practical step is to develop a precise ontology using structured data. This goes far beyond just marking up a product name and price. It means defining the relationships between your entities—how this product relates to that service, how this expert author is connected to these specific topics. You’re building a knowledge graph that leaves no room for ambiguity.
Second, expose your products and services as machine-operable assets through feeds and APIs. An AI agent can’t book a hotel room or order a product from a block of text on a webpage. It needs a clean, structured feed or an API endpoint it can interact with to compare offers and execute tasks. This is about making your business’s functions accessible to other systems.
Third, ensure you have stable identifiers for all your entities. Whether it’s a product, a person, or a concept, it needs a consistent, unique identifier across your site and data feeds. This allows an agent to resolve entities without confusion, ensuring that when it learns something about your product from one source, it knows it’s the same product mentioned elsewhere. These steps transform your website from a collection of pages into an enterprise system that AI can plug into.
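The three steps above can be sketched together in one small example. This is a minimal illustration, assuming a hypothetical product and domain: the `@id` URLs act as the stable identifiers, the `author` reference encodes an ontology relationship between entities, and the serialized JSON-LD is the kind of machine-readable object a feed or API would expose:

```python
import json

# Hypothetical stable identifiers: each entity gets one canonical URL
# reused everywhere it appears (pages, feeds, APIs), so an agent can
# resolve "this product" unambiguously across sources.
PRODUCT_ID = "https://example.com/id/product/acme-crm"
AUTHOR_ID = "https://example.com/id/person/jane-doe"

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "@id": PRODUCT_ID,  # stable identifier for this entity
    "name": "Acme CRM",
    "applicationCategory": "BusinessApplication",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
    },
    # Relationship expressed by reference rather than by name,
    # linking this product to its expert author in the knowledge graph.
    "author": {"@id": AUTHOR_ID},
}

# The same object serializes into a clean feed an agent can consume
# directly, instead of scraping prose from a webpage.
print(json.dumps(product_jsonld, indent=2))
```

The key design choice is referencing entities by `@id` instead of repeating names: whatever an agent learns about the product in one feed, it can attach to the same node it saw on the website.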
Helen Pollitt cautions against chasing traffic and advises focusing on “revenue impact.” Could you outline a step-by-step process for connecting a specific SEO initiative, such as optimizing for AI Overviews, directly to a measurable business outcome, moving beyond simple click and impression data?
Absolutely, and this is a conversation every SEO needs to be having with their stakeholders. Chasing traffic for traffic’s sake is a fast way to waste resources. The process to connect an initiative like optimizing for AI Overviews to revenue is methodical.
First, you start with a clear commercial goal, not an SEO goal. For instance, “We need to increase qualified leads for our enterprise software by 15% this quarter.” This is the bottom line.
Second, you identify the queries and content formats that matter for that goal. You map the customer journey and pinpoint the high-intent questions potential leads are asking where an AI Overview is likely to appear.
Third, you create content specifically designed for synthesis. This means answer-ready formatting, clear sourcing, and structured data that makes it easy for an AI to retrieve and feature your information as a trusted source for those high-intent queries.
Finally, and this is the most critical part, you measure beyond the click. You need to work with your analytics team to implement a multi-touch attribution model. You track users who engage with that specific AI Overview and follow their journey. Did they visit the site? Did they sign up for a demo? You don’t just report a CTR decline; you quantify the revenue impact by connecting performance gaps directly to business outcomes. This approach lets you say, “Our initiative to get cited in this AI Overview directly contributed to X number of new leads, representing Y dollars in the pipeline.” That’s how you prove genuine value.
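The measurement step can be sketched in a few lines. This is a toy illustration with hypothetical journey data and a simple linear multi-touch model (every touchpoint gets equal credit); a real implementation would pull journeys from an analytics warehouse and likely use a more sophisticated attribution model:

```python
# Hypothetical user journeys: (user_id, ordered touchpoints, pipeline $)
journeys = [
    ("u1", ["ai_overview", "site_visit", "demo_signup"], 5000),
    ("u2", ["organic_search", "site_visit"], 0),
    ("u3", ["ai_overview", "demo_signup"], 8000),
]

def linear_attribution(journeys, channel):
    """Credit each touchpoint equally (linear multi-touch model) and
    sum the revenue share attributed to the given channel."""
    credited = 0.0
    for _, touches, value in journeys:
        if channel in touches and value:
            credited += value * touches.count(channel) / len(touches)
    return credited

print(linear_attribution(journeys, "ai_overview"))
```

Even this crude model turns "we appeared in an AI Overview" into a dollar figure you can put in front of stakeholders, which is the whole point of measuring beyond the click.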
Both John Shehata and Alli Berry suggest creating content that is “hard for AI to replicate,” such as proprietary data or unique customer stories. What is a practical workflow for a content team to consistently produce this type of differentiated content, and how do you balance it with traditional SEO needs?
This is the key to standing out as homogeneous, AI-generated content floods the internet. The workflow for creating this differentiated content has to be deeply integrated and human-centric. It starts with insight, not keywords. The first step in the workflow is for the content team to work cross-functionally, mining for gold in places AI can’t easily access. This means regularly analyzing customer experience logs, conducting social listening to understand frustrations and motivations, and most importantly, interviewing actual customers to capture their unique stories and experiences.
The second step is to create proprietary data. This isn’t as daunting as it sounds. It could be an annual industry survey, an analysis of your own anonymized user data to reveal a trend, or a piece of investigative work. This creates a unique data point that others, including AI, will have to cite you for.
The third step is the creation process, where you weave these human stories and unique data into a compelling narrative. This is where you focus on strong opinions and authentic expert insights from your team. The balance with traditional SEO comes in the final step: framing. An SEO specialist’s role is to package this unique content—using keyword research for headlines, structuring it with clear headings, and optimizing it for discoverability—so it can be found. The core value is the inimitable human insight, but traditional SEO is the delivery mechanism that ensures it reaches an audience.
What is your forecast for the development of “agentic AI” in search over the next five years, and what is the single most important skill SEO professionals should start developing today to prepare for it?
My forecast is that within the next five years, agentic AI will move from a niche concept to a primary channel for completing complex tasks, especially in commerce and services. We’re talking about agents that don’t just find information but complete actions: booking an entire holiday, personal shopping for a wardrobe based on your style, or managing a weekly grocery order based on a diet. When this happens, being “visible” won’t be about ranking on a page; it will be about being an option an agent can choose and execute. If your business isn’t an option inside those agent-driven workflows, you simply won’t be in the game.
Given this, the single most important skill for SEO professionals to start developing today is what I call “systems thinking for discoverability.” This goes beyond on-page optimization. It’s the ability to understand how your company’s entire data ecosystem—its product feeds, its APIs, its internal knowledge graphs—can be structured and exposed so that an AI agent can reliably and safely interact with it. It means collaborating deeply with product and engineering teams, speaking their language, and treating the website not as a marketing brochure but as a functional part of an enterprise system. The SEO professional of the future won’t just optimize content; they will architect their brand’s information to be a tool that AI agents can use to serve human needs.
