How Generative Engine Optimisation (GEO) is rewriting SEO tactics

It’s the end of search as we know it, and marketers feel fine. 

Sort of.

For over two decades, SEO has been the default playbook for achieving online visibility. 

It spawned an entire industry of keyword stuffers, backlink brokers, content optimisers, and auditing tools, along with the professionals and agencies that operate them. 

But in 2025, search has been shifting away from traditional search engines toward LLM platforms. With Apple's announcement that AI-native search engines like Perplexity and Claude will be built into Safari, Google's distribution chokehold is in question.

The foundation of the $80 billion+ SEO market just cracked. The new landscape is defined by AI-driven platforms: LLM-powered assistants and new approaches to search.

New search intent?

A new paradigm is emerging, one driven not by page rank but by language models. We're entering Act II of search: Generative Engine Optimisation (GEO), an emerging discipline that adapts content strategy for AI-driven search engines.

The shift from SEO to GEO is significant: GEO focuses on creating AI-friendly content and prioritises context and quality over traditional keyword matching. Outdated SEO tactics, such as keyword stuffing, are no longer effective in the GEO era.

These new engines, such as Perplexity and Claude, are generative search engines that are transforming how information is retrieved. They leverage artificial intelligence to understand queries and generate responses, rather than simply matching keywords.

Unlike traditional search, AI-powered search engines can analyse and understand content beyond keyword matching, delivering more contextually relevant, human-like results. Artificial intelligence is the key driver of this transformation.

From links to large language models

Traditional search was built on links. GEO is built on language.

In the SEO era, visibility meant ranking high on a results page. Rankings were determined by keyword matching, content depth and breadth, backlinks, user engagement, and more.

Today, with LLMs like GPT-4o, Gemini, and Claude acting as the interface for how people find information, visibility means showing up directly in the answer itself, rather than ranking high on the results page. 

Optimising content for AI-driven search engines now requires updating and tailoring existing content to ensure it is relevant, well-structured, and easily understood by both AI and users.

As the format of the answers changes, so does the way we search for them. 

AI-native search is becoming fragmented across platforms like Instagram, Amazon, and Siri, each powered by different models and user intents. Queries are longer (averaging 23 words vs. 4), sessions are deeper (averaging 6 minutes), and responses vary by context and source. Unlike traditional search, LLMs remember, reason, and respond with personalised, multi-source synthesis. This fundamentally changes how content is discovered and how it needs to be optimised. 

Selecting the right keywords for your target audience and aligning with search intent is now essential for ensuring your content is surfaced in relevant AI-driven results.

Traditional SEO rewards precision and repetition; generative engines prioritise content that is well-organised, easy to parse, and dense with meaning (not just keywords). Phrases like “in summary” or bullet-point formatting help LLMs extract and reproduce content effectively. A clear content structure and the use of bullet points improve readability for both AI and human readers, making it easier for LLMs to pull relevant information.

It’s also worth noting that the LLM market is fundamentally different from the traditional search market in terms of business model and incentives. 

Classic search engines like Google monetised user traffic through ads; users paid with their data and attention. In contrast, most LLMs are paywalled, subscription-driven services. This structural shift affects how content is referenced: there’s less incentive for model providers to surface third-party content, unless it enhances the user experience or reinforces product value. Whilst an ad market may eventually emerge on top of LLM interfaces, the rules, incentives, and participants would likely differ significantly from those of traditional search.

When optimising for GEO, AI tools can streamline content creation and help tailor material specifically for AI-driven platforms.

High content quality is crucial—generative engines favour authoritative, relevant, and well-crafted content, while keyword stuffing is increasingly ineffective. Incorporating structured data and schema markup into your web content helps AI systems and models understand, categorise, and display information more effectively, improving your content's visibility in rich snippets and AI-powered search results.
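As a concrete illustration of schema markup, the snippet below builds a minimal schema.org `Article` object in Python and serialises it as JSON-LD, the format most search and AI systems parse. All field values here are illustrative placeholders, not real data.

```python
import json

# Minimal schema.org Article markup, serialised as JSON-LD.
# Field values are placeholders for illustration only.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Generative Engine Optimisation (GEO) is rewriting SEO tactics",
    "author": {"@type": "Organization", "name": "Example Publisher"},
    "datePublished": "2025-01-01",
    "about": "Generative Engine Optimisation",
}

json_ld = json.dumps(article, indent=2)

# On a web page, this would be embedded inside:
# <script type="application/ld+json"> ... </script>
print(json_ld)
```

Structured data like this gives AI systems an unambiguous, machine-readable statement of what a page is about, rather than forcing them to infer it from prose alone.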

LLMs are built on machine learning and process language through text generation and prompt interpretation. These capabilities let them understand natural-language queries and generate relevant answers by synthesising information from high-quality web content. A strong focus on content creation, content structure, and optimisation positions your site well for both traditional and AI-driven search.

Ultimately, being cited by AI models is becoming increasingly vital for maintaining a strong brand presence. Creating content that is optimised for AI not only increases your content's visibility but also helps establish your authority and reach within generative search environments.

In the meantime, one emerging signal of the value in LLM interfaces is the volume of outbound clicks. ChatGPT, for instance, is already driving referral traffic to tens of thousands of distinct domains.

Understanding Foundation Models

At the heart of generative engine optimisation lies a new breed of technology: foundation models. 

These are the huge models—think GPT-4o, Gemini, Claude—that power today's most advanced AI-driven search engines and virtual assistants. Foundation models, including large language models (LLMs), are trained on vast amounts of text, code, and digital content, enabling them to understand, generate, and even reason in human language.

Unlike traditional search engines that rely on crawling and indexing web pages, foundation models learn from vast and diverse training data, absorbing patterns in language, context, and meaning. This allows them to generate text, answer questions, and synthesise information in ways that feel remarkably human. 

Whether you’re chatting with a virtual assistant, asking an AI to write content, or searching for answers in a generative search engine, you’re interacting with the output of these powerful language models.

For marketers and content creators, understanding how foundation models work is essential for effective generative engine optimisation. These models don’t just match keywords—they interpret user intent, context, and conversational language. 

Optimising for GEO means crafting content that's clear, well-structured, and rich in meaning, so that large language models (LLMs) can easily reference and reproduce it in their responses.

As content generation becomes increasingly AI-driven, the ability to align your digital content with the way foundation models process and generate language is the new frontier of visibility. In the world of generative engine optimisation, it’s not just about being found by search engines—it’s about being understood and cited by the AI engines shaping the future of discovery.

From search engine rankings to model relevance

It’s no longer just about click-through rates; it’s about reference rates: how often your brand or content is cited or used as a source in model-generated answers. 

In a world of AI-generated outputs, GEO means optimising for what the model chooses to reference, not just whether or where you appear in traditional search. That shift is revamping how we define and measure brand visibility and performance, with brands now needing to track GEO performance metrics and sustain ongoing GEO efforts to improve how they surface across different models and contexts.

Already, new platforms like Profound, Goodie, and Daydream enable brands to analyse how they appear in AI-generated responses, track sentiment across model outputs, and understand which publishers are shaping model behaviour. 

The GEO process is a modern digital marketing strategy focused on optimising content specifically for AI-driven platforms and large language models, aiming to influence AI responses and increase visibility within AI-generated search results. These platforms work by fine-tuning models to mirror brand-relevant prompt language, strategically injecting top SEO keywords, and running synthetic queries at scale. 

The outputs are then organised into actionable dashboards that help marketing teams monitor visibility, messaging consistency, and competitive share of voice.

Canada Goose used one such tool to gain insight into how LLMs referenced the brand — not just in terms of product features like warmth or waterproofing, but brand recognition itself. The takeaways were less about how users discovered Canada Goose than about whether the model spontaneously mentioned the brand at all, an indicator of unaided awareness in the AI era. This highlights the key differences between GEO and traditional SEO: GEO prioritises content quality, contextual relevance, and AI-driven content generation, while SEO has historically focused on keywords and link-building.

This kind of monitoring is becoming as crucial as traditional SEO dashboards. Tools like Ahrefs' Brand Radar now track brand mentions in AI Overviews, helping companies understand how they're framed and remembered by generative engines. Semrush also has a dedicated AI toolkit designed to help brands track perception across generative platforms, optimise content for AI visibility, and respond quickly to emerging mentions in LLM outputs, a sign that legacy SEO players are adapting to the GEO era.

In this new environment, AI-driven search engines, powered by generative AI and large language models, source, interpret, and display content through their own algorithms, making structured, concise, and easily digestible content more important than ever.

We’re seeing the emergence of a new kind of brand strategy: one that accounts not just for perception in the public, but also for perception within the model. How you’re encoded into the AI layer is the new competitive advantage.

Of course, GEO is still in its experimental phase, much like the early days of SEO. 

With every major model update, we risk relearning (or unlearning) how to best interact with these systems. Just as Google’s search algorithm updates once caused companies to scramble to counter fluctuating rankings, LLM providers are still tuning the rules behind what their models cite. These models are often fine-tuned using reinforcement learning and human feedback to enhance outputs, mitigate biases, and eliminate undesirable outcomes. 

Retrieval-augmented generation is also used to enhance the quality and relevance of generated content. Multiple schools of thought are emerging: some GEO tactics are relatively well understood (e.g., being mentioned in source documents that LLMs cite), whilst other assumptions are more speculative, such as whether models prioritise journalistic content over social media or how preferences shift with different training sets.
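The retrieval-augmented generation pattern mentioned above can be sketched in a few lines. This toy version uses made-up documents and naive word-overlap scoring purely for illustration; production systems use embeddings and vector search, but the retrieve-then-prompt shape is the same:

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documents for a query, then ground the model's prompt in them.
# Documents and scoring are illustrative placeholders.

documents = {
    "geo": "Generative Engine Optimisation adapts content for AI-driven search.",
    "seo": "Search Engine Optimisation targets rankings on results pages.",
    "rag": "Retrieval-augmented generation grounds model answers in sources.",
}

def words(text: str) -> set[str]:
    """Lowercase and strip basic punctuation to get a comparable word set."""
    return {w.strip("?.,").lower() for w in text.split()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = words(query)
    ranked = sorted(
        documents.values(),
        key=lambda doc: len(q & words(doc)),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to retrieved sources."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does generative search adapt content?")
```

The key point for GEO is the `retrieve` step: if your content is never among the top-ranked sources pulled into the prompt, the model cannot cite it, no matter how good the content is.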

Lessons from the SEO era

Despite its scale, SEO never produced a monopolistic winner. Tools that helped companies with SEO and keyword research, such as Semrush, Ahrefs, Moz, and Similarweb, were successful in their own right. 

Still, none captured the whole stack, though some, like Similarweb, grew via acquisition. Each carved out a niche: backlink analysis, traffic monitoring, keyword intelligence, or technical audits.

SEO was always fragmented. 

The work was distributed across agencies, internal teams, and freelance operators. The data was messy, and rankings were inferred rather than verified. Google held the algorithmic keys, but no vendor ever controlled the interface. 

Even at its peak, the most prominent SEO players were tooling providers. 

They lacked the user engagement, data control, and network effects necessary to become hubs where SEO activity is concentrated. 

Clickstream data — records of the links users click as they navigate websites — is arguably the clearest window into real user behaviour. 

Historically, however, this data has been prohibitively difficult to access, locked behind ISPs, SDKs, browser extensions, and data brokers. This made building accurate, scalable insights nearly impossible without deep infrastructure or privileged access.

GEO changes that.

How to make the mentions: The emergence of generative engine optimisation (GEO) tools

This isn’t just a tooling shift; it’s a platform opportunity. 

The most compelling GEO companies won’t stop at measurement. 

They’ll fine-tune their models, learning from billions of implicit prompts across verticals. 

They’ll own the loop — insight, creative input, feedback, iteration — with differentiated technology that doesn’t just observe LLM behaviour, but shapes it. 

These platforms may leverage code-generation capabilities to automate campaign creation and optimisation, streamlining the process and improving outcomes. They'll also find ways to capture clickstream data and combine first- and third-party data sources.

Platforms that win in GEO will go beyond brand analysis and provide the infrastructure to act, generating campaigns in real time, optimising for model memory, and iterating daily as LLM behaviour shifts. These systems will be operational, built on transformer-based models capable of handling context and attention at scale.

That unlocks a much broader opportunity than visibility. 

If GEO is how a brand ensures it’s referenced in AI responses, it’s also how it manages its ongoing relationship with the AI layer itself. GEO becomes the system of record for interacting with LLMs, allowing brands to track presence, performance, and outcomes across generative platforms. Own that layer, and you own the budget behind it.

That’s the monopolistic potential: not just serving insights, but becoming the channel. If SEO were a decentralised, data-adjacent market, GEO can be the inverse — centralised, API-driven, and embedded directly into brand workflows. 

GEO by itself is perhaps the most obvious wedge, especially as search behaviour shifts, but ultimately it's a wedge into performance marketing more broadly. The same brand guidelines and understanding of user data that power GEO can power growth marketing. This is how a big business gets built: a software product can test multiple channels, iterate, and optimise across them.

AI enables an autonomous marketer.

Timing matters. The search landscape is just beginning to shift, but ad dollars move quickly, especially when there's arbitrage: in the 2000s it was Google's AdWords; in the 2010s, Facebook's targeting engine.

Now, in 2025, it’s LLMs and the platforms that help brands navigate how their content is ingested and referenced by those models. Put another way, GEO is the competition to get into the model’s mind.

In a world where AI is the front door to commerce and discovery, the question for marketers is: Will the model remember you?
