As businesses grapple with a seismic shift in how customers find information online, the comfortable rulebook of Search Engine Optimization is being rewritten. With the rise of AI Overviews and Large Language Models acting as intermediaries, the game is no longer just about ranking—it’s about being selected. We’re joined by Aisha Amaira, a MarTech expert who lives at the intersection of marketing strategy and technological innovation. Today, we’ll explore the critical tension between public platform guidance and the on-the-ground business need to adapt. We will delve into how to measure success in a world with fewer clicks, decode the new “stable signals” that AI responds to, and outline practical steps for companies to begin optimizing for this new reality, a practice some are calling Generative Engine Optimization.
Public guidance from search platforms often advises creating good content without over-optimizing for the machine. How should teams navigate the tension between this advice and the business reality of needing to be selected by an AI to achieve their own commercial outcomes? Please provide some examples.
That’s the core tension, isn’t it? On one hand, you have platforms saying, “Just focus on good content,” which is sound advice to a point. You can absolutely hurt your efforts by over-optimizing for the wrong thing or trying to game a system you don’t understand. But on the other hand, the business reality is that we are no longer in a “10 blue links” world. The unit of competition has fundamentally shifted from the page to a portion of the page that gets assembled into an answer the user might never click past. A business doesn’t have the luxury of optimizing for the platform’s ecosystem stability; it has to optimize for its own outcomes. This means understanding that the platform’s advice, while not intentionally misleading, is designed to serve its broad goals of quality control and spam prevention. Your goal, however, is to be the selected source. It’s about recognizing these misaligned objectives so you don’t make a decision that feels safe today but costs you market share tomorrow.
As AI answers reduce clicks, what’s the biggest operational blind spot for executives still focused on traditional traffic metrics? How can teams begin to measure success when “ranking” is no longer the primary goal and the customer journey ends on the results page?
The biggest blind spot is asking the wrong question. Executives look at a dashboard and say, “Where does our traffic come from today?” and see that traditional search is still dominant. It’s a comforting, but dangerously backward-looking, view. The more critical, forward-looking questions are: What happens to our business when discovery shifts from clicks to answers? What is our strategy when the customer journey is completed within an AI Overview on the results page? The operational reality is that zero-click searches have been increasing year-over-year. If the click declines, “ranking” is no longer the ultimate goal. Being selected into the answer becomes the new finish line. Success measurement has to evolve accordingly. It’s less about sessions and pageviews and more about tracking brand mentions, citations, and whether your content is being surfaced and paraphrased in these AI interfaces. It requires a shift from a destination-focused mindset to an information-source mindset.
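As a simple illustration of what mention-and-citation tracking could look like at its most basic, here is a sketch that assumes you have already collected AI answer texts for a set of target queries. The queries, answer texts, and brand names below are invented; real monitoring would use manual spot checks or a dedicated tool:

```python
import re

# Invented sample: AI answer texts collected for tracked queries.
# In practice these would come from manual checks or a monitoring tool.
answers = {
    "best chef knife for home cooks": (
        "For home cooks, the Acme 8-inch VG-10 knife is often recommended "
        "for its edge retention (source: acme-knives.example.com)."
    ),
    "german vs japanese steel knives": (
        "German steel is tougher; Japanese VG-10 holds an edge longer."
    ),
}

def citation_rate(answers, brand_terms):
    """Share of tracked queries whose AI answer mentions any brand term."""
    pattern = re.compile("|".join(map(re.escape, brand_terms)), re.IGNORECASE)
    hits = sum(1 for text in answers.values() if pattern.search(text))
    return hits / len(answers)

rate = citation_rate(answers, ["Acme", "acme-knives.example.com"])
print(f"Cited in {rate:.0%} of tracked answers")
```

Tracked over time, a metric like this stands in for the sessions-and-pageviews dashboards the interview argues are becoming backward-looking.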
Research suggests that LLM selection responds to “stable signals” that differ from classic SEO heuristics, favoring machine-usable decision support over narrative persuasion. Could you detail what these signals are and why traditional marketing copy often fails in this new, machine-mediated environment?
This is really the heart of the matter. A fascinating research paper on e-commerce optimization showed that when a system was tasked with improving product selection by an LLM, it consistently converged on certain patterns. These patterns are what we can call “stable signals.” They aren’t magical; they are rooted in clarity and structure. The signals favor content that explicitly states a product’s purpose, its constraints, and its ideal use case. It’s about providing testable claims instead of vague benefits and offering clear comparison hooks. Traditional marketing copy often fails here because it’s built for persuasion and emotion. It uses narrative, brand feel, and sometimes ambiguity to create desire. An LLM, acting as a user’s agent, doesn’t respond to that. It needs clean, structured, machine-usable data to make a decision. Your content has to do a second job now: It must not only persuade a human but also function as a clear, unambiguous spec sheet for a machine.
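To make the “stable signals” idea concrete, here is a rough, hypothetical audit script that checks a piece of copy for testable claims, stated constraints, and comparison hooks. The keyword heuristics are invented for illustration and are far cruder than anything an LLM actually does; the point is only that these signals are checkable properties of the text:

```python
import re

# Hypothetical heuristics for the "stable signals" described above;
# crude keyword and number matching, for illustration only.
SIGNALS = {
    "testable claim": lambda t: bool(re.search(r"\d+[-\s]?(inch|degree|month|%)", t)),
    "stated constraint": lambda t: any(k in t.lower() for k in ("not for", "requires", "only")),
    "comparison hook": lambda t: any(k in t.lower() for k in ("compared to", "than")),
}

def audit(text):
    """Report which stable signals a description appears to contain."""
    return {name: check(text) for name, check in SIGNALS.items()}

vague = "Experience culinary excellence with our artisan-crafted knife."
specific = ("This 8-inch knife requires hand-washing and holds an edge "
            "longer than German steel.")
print(audit(vague))
print(audit(specific))
```

The vague sentence trips none of the checks; the specific one trips all three, which is exactly the gap between narrative persuasion and machine-usable decision support.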
You’ve outlined an 8-step process for rewriting product descriptions, emphasizing elements like constraints, qualifiers, and testable claims. Could you walk through a before-and-after example for a specific product, explaining how these changes make the content more “selection-ready” for an LLM?
Certainly. Let’s take a common product: a high-end chef’s knife. The “before” description might say: “Experience culinary excellence with our artisan-crafted chef’s knife. Made from premium steel, this durable and perfectly balanced knife will transform your cooking, making every cut feel effortless. It’s the best tool for any home chef.” It’s aspirational but lacks concrete detail for an LLM.
Now, let’s apply the 8-step process for a “selection-ready” version. The “after” description would sound more like this: “This is an 8-inch VG-10 steel chef’s knife designed for precision slicing and dicing in a home kitchen environment, for people who need excellent edge retention. It features a 15-degree blade angle for sharpness and a full-tang G10 handle for balance. It requires hand-washing and is not ideal for heavy-duty tasks like breaking down bones. This knife is for home cooks prioritizing sharpness over ruggedness; it is not for professional chefs in high-volume settings. Its blade holds an edge for up to 12 months of typical home use before needing sharpening. Compared to German steel knives, this offers superior edge retention but requires more care to prevent chipping. Choose this if your priorities are precision, balance, and long-lasting sharpness.”
Notice the difference? We’ve moved from vague emotional appeal to structured decision support. We’ve stated its purpose, included testable claims like the blade angle and steel type, surfaced constraints like hand-washing, and provided direct comparison hooks. This version gives an LLM everything it needs to confidently select and recommend this knife for the right user query.
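One way to push this further is to mirror the same facts in structured data, so a machine does not have to parse them out of prose at all. Here is a minimal sketch using schema.org Product markup; the field choices are illustrative assumptions, not a format the interview prescribes or a guaranteed selection signal:

```python
import json

# Facts taken from the rewritten knife description; the markup shape is
# one illustrative option, not a guaranteed selection signal.
knife = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "8-inch VG-10 chef's knife",
    "description": (
        "Precision slicing and dicing knife for home kitchens; "
        "hand-wash only, not for heavy-duty tasks like breaking down bones."
    ),
    "material": "VG-10 steel",
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "blade angle", "value": "15 degrees"},
        {"@type": "PropertyValue", "name": "handle", "value": "full-tang G10"},
        {"@type": "PropertyValue", "name": "edge retention",
         "value": "~12 months of typical home use between sharpenings"},
    ],
}

print(json.dumps(knife, indent=2))
```

Embedded as JSON-LD, this gives the prose a machine-readable twin: the same purpose, constraints, and testable claims, with zero ambiguity for the parser.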
For a company just starting to explore Generative Engine Optimization, running controlled experiments can seem daunting. What would be the first three practical steps you’d recommend for a small team to test these principles on their site and measure the impact effectively?
It absolutely can feel daunting, but the key is to start small and be systematic. First, I’d recommend picking a small but meaningful slice of your site—say, 10 to 20 product or service pages that have similar user intent and traffic levels. Don’t try to boil the ocean. Second, split those pages into two groups. One is your control group; you leave it completely untouched. The other is your test group. For this group, you’ll rewrite the content using a consistent, structured template, like the 8-step process we just discussed. It’s crucial to document exactly what you changed. Third, measure the impact over a defined period, like 60 or 90 days. But don’t just look at vanity metrics. Track outcomes that actually matter to your business, like lead quality, conversion rates, or even just whether those pages start getting cited or paraphrased in AI answer interfaces. The goal isn’t to win a science fair; it’s to reduce your uncertainty with a controlled test.
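To make the measurement step concrete, here is a minimal sketch of how the control-versus-test comparison might be analyzed at the end of the window. The conversion numbers are invented, and a two-proportion z-test is just one reasonable choice of analysis:

```python
from math import sqrt

# Invented example numbers: conversions / visitors per group over the window.
control = {"conversions": 48, "visitors": 1200}   # untouched pages
test = {"conversions": 72, "visitors": 1180}      # rewritten pages

def rate(g):
    return g["conversions"] / g["visitors"]

def two_proportion_z(a, b):
    """Two-proportion z-statistic; |z| > 1.96 ~ significant at the 5% level."""
    p_pool = (a["conversions"] + b["conversions"]) / (a["visitors"] + b["visitors"])
    se = sqrt(p_pool * (1 - p_pool) * (1 / a["visitors"] + 1 / b["visitors"]))
    return (rate(b) - rate(a)) / se

z = two_proportion_z(control, test)
print(f"control {rate(control):.1%}, test {rate(test):.1%}, z = {z:.2f}")
```

With these invented numbers the lift clears the significance bar, but the real payoff is the discipline: a documented change, an untouched control, and a pre-agreed metric, rather than a science-fair win.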
The monetization of AI Overviews with ads signals a permanent shift in the search interface. How does this change the strategic calculation for businesses? What new skill sets will marketing and content teams need to develop to compete on this evolving ad surface?
The rollout of ads in AI Overviews is a massive signal. It tells us this answer layer isn’t a temporary experiment; it’s a durable, monetized interface that’s here to stay. This fundamentally changes the strategic calculation because the primary surface for user attention—and therefore advertising—is shifting. The business model is following the eyeballs. For marketing and content teams, this requires a new, blended skill set. It’s no longer enough to have a separate SEO team and a separate PPC team. You’ll need people who understand how to create content that is “selection-ready” for an organic AI answer, while also understanding how that same content can be leveraged for new ad formats that will inevitably live inside that answer. It’s a hybrid of content strategy, technical SEO, and paid media expertise, focused on a single, integrated results page. The ability to write clear, structured, decision-support content will become as valuable for ad performance as it is for organic selection.
What is your forecast for Generative Engine Optimization?
My forecast is that Generative Engine Optimization, or GEO, will not replace SEO, but it will become an absolutely essential and additive competence layer. The fundamentals of SEO—getting your content crawled, indexed, and discovered—will always be the necessary foundation. If the machine can’t find you, it can’t choose you. However, simply being discoverable will no longer be sufficient. The businesses that will win in the next five years are the ones that learn this new layer of optimizing for selection within AI-mediated discovery. They will be the ones rigorously testing, measuring, and refining their content to serve as clear, machine-usable decision support. Those who dismiss this as “just SEO” and stick to the old playbook will be outsourcing their business risk to platform guidance, and they will slowly but surely see their visibility erode as more customer journeys begin and end within an AI answer. Learning this layer isn’t a strategy; it’s becoming a basic cost of competing.
