AI services like ChatGPT, Perplexity, and Copilot have become the default search tool for many users. Instead of googling "How to unsend a reply-all email?", we ask AI, which then asks Google, Wikipedia, Reddit, and a wide range of other sources before responding.

SEO and ranking on Google aren't nearly as important for new startups these days. What is important is AEO (answer engine optimization), also known as GEO (generative engine optimization)... or AI SEO. There are lots of ways to try to boost your brand's LLM visibility, but this post isn't about that. Every company should figure out its own strategy based on its product and community.

Once your company has an AEO strategy (or theory/idea), you'll want to test it and monitor for indications that your strategy is working.

THAT is what this tutorial is about. ☝️

In this guide, I'll show you how to build an AEO monitoring tool in Python that runs a series of questions against a series of models, then checks each response for keywords (brand mentions) and looks for your domains in the citations.

This guide will cover:

  • Querying multiple models with one API key using OpenRouter
  • Ensuring each model enables web search and returns citations
  • Checking for keywords in the response and matches for your domains in the citations
  • Running queries in parallel with asyncio
  • Logging matches in PostHog
  • Running the script on a schedule with GitHub Actions

With this script running your target list of questions, you can begin monitoring campaign effectiveness as you try to boost your company's LLM visibility. Once the data is in PostHog, you can generate charts and reports manually, or with their Max AI. This gives you everything you need to measure the performance of your AEO strategy, whatever it happens to be!

Querying Multiple Models with OpenRouter

Like any good dev, I try to build things that are dynamic and reusable. I don't want to write 3 different (but very similar) functions to call each model, or have to create 3 API keys. So for this guide, I chose OpenRouter to give me a single endpoint and key to use any model. You could also use you.com or together.ai, etc.

Start out by creating an OpenRouter account, adding a payment method or account credits, and creating an API key. OpenRouter isn't free, but its fees are usage-based and reasonable, and it provides a nice breakdown of the cost per query and model.

Next, open up a new Google Colab notebook or your favorite Python editor.

Add your OpenRouter API key to the Secrets tab in Colab, or to your local environment.
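If you want the same notebook code to run outside Colab too, a small fallback works. This is just a sketch, assuming your local key lives in an OPENROUTER_API_KEY environment variable:

import os

try:
    # Inside Colab, read the key from the Secrets tab
    from google.colab import userdata
    OPENROUTER_API_KEY = userdata.get("OPENROUTER_API_KEY")
except ImportError:
    # Outside Colab, fall back to a regular environment variable
    OPENROUTER_API_KEY = os.environ["OPENROUTER_API_KEY"]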

Then install PostHog:

!pip install posthog

And then import dependencies:

import asyncio
from openai import OpenAI, AsyncOpenAI
from google.colab import userdata
from posthog import Posthog

Next, create the clients and define the questions, keywords, and domains you want to monitor:

# Setup Config and Clients
config = {
	"base_url": "https://openrouter.ai/api/v1",
	"api_key": userdata.get("OPENROUTER_API_KEY")
}

client = OpenAI(**config)
async_client = AsyncOpenAI(**config)
posthog = Posthog(
    project_api_key=userdata.get("POSTHOG_API_KEY"),
    host='https://app.posthog.com'
)

keywords = ["appsmith", "appsmithai"]
domains = ["appsmith.com", "appsmithai.com"]
models = ["openai/gpt-4o", "perplexity/sonar-pro", "anthropic/claude-sonnet-4"]
prompts = [
	"What's the best open source low code tool for building apps in 2025?",
	"What's the best drag and drop app builder with SSO in 2025?"
]

With everything set up, we can loop through the models and ask each prompt.

for model in models:
  print(f"Model: {model}")
  for prompt in prompts:
    print(f"Prompt: {prompt}")
    completion = client.chat.completions.create(
      model=f"{model}:online",
      stream=False,
      messages=[
        {
          "role": "user",
          "content": prompt
        }
      ]
    )
    content = completion.choices[0].message.content.lower()
    for keyword in keywords:
      if keyword in content:
        print(f"--MatchType: Keyword, Model: {model}, Prompt: {prompt}, Mentions: {content.count(keyword)}")
        break

    annotations = completion.choices[0].message.annotations or []  # may be None if no citations came back
    print(f"{len(annotations)} annotations returned")
    for annotation in annotations:
      url = annotation.url_citation.url.lower()
      print(f"URL: {url}")
      for domain in domains:
        if domain in url:
          print(f"--MatchType: Domain, Model: {model}, Prompt: {prompt}")
          break

The :online suffix appended to the model name tells OpenRouter to enable web search for the model, so that citations can be returned.
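At the time of writing, OpenRouter documents :online as shorthand for its web search plugin, so you can also enable search explicitly through the OpenAI SDK's extra_body passthrough (verify the exact plugin options against OpenRouter's current docs):

# Equivalent to appending ":online" to the model name
completion = client.chat.completions.create(
    model=model,  # no suffix here
    extra_body={"plugins": [{"id": "web"}]},
    messages=[{"role": "user", "content": prompt}]
)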

This basic example loops through the models, asking each prompt one at a time, so it's slow, and runtime grows with every prompt and model you add.
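One caveat on the keyword check before we speed things up: Python's in operator does substring matching, so the keyword appsmith also matches inside appsmithai, and content.count() will overcount. If that matters for your keyword list, a word-boundary regex is a quick refinement (a sketch, not part of the script above):

import re

# Count whole-word mentions only, so "appsmith" doesn't also match "appsmithai"
mentions = len(re.findall(rf"\b{re.escape(keyword)}\b", content))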

Next let's add some concurrency to speed things up, and error handling in case one of the APIs fails.

Running Queries in Parallel

Here's a slightly more advanced version with concurrency and error handling.

async def test_model_prompt(model, prompt):
    print(f"Starting: {model} - {prompt[:50]}...")
    try:
        completion = await async_client.chat.completions.create(
            model=f"{model}:online",
            stream=False,
            messages=[{"role": "user", "content": prompt}]
        )

        content = completion.choices[0].message.content.lower()

        # Check keywords
        for keyword in keywords:
            if keyword in content:
                print(f"--MatchType: Keyword, Model: {model}, Mentions: {content.count(keyword)}")
                break

        # Check annotations
        annotations = completion.choices[0].message.annotations or []  # may be None if no citations came back
        print(f"{len(annotations)} annotations returned for {model}")

        for annotation in annotations:
            url = annotation.url_citation.url.lower()
            for domain in domains:
                if domain in url:
                    print(f"--MatchType: Domain, Model: {model}, URL: {url}")
                    break

    except Exception as e:
        print(f"Error with {model}: {e}")

# Run all combinations concurrently
tasks = [test_model_prompt(model, prompt) for model in models for prompt in prompts]
await asyncio.gather(*tasks)

This brings the 6 queries (2 prompts x 3 models) down from ~2.5 minutes to only 30 seconds!
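If you grow the prompt and model lists, firing every request at once can trip provider rate limits. One option is to cap concurrency with a semaphore; a minimal sketch, with an arbitrary limit of 5:

semaphore = asyncio.Semaphore(5)  # at most 5 requests in flight

async def test_model_prompt_limited(model, prompt):
    async with semaphore:
        await test_model_prompt(model, prompt)

tasks = [test_model_prompt_limited(m, p) for m in models for p in prompts]
await asyncio.gather(*tasks)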

Logging in PostHog

Next, create a PostHog account and choose Product Analytics in the onboarding (along with any other services you want to try).

On the next screen, copy your API key and add it to the Colab secrets or your local environment.

Then, using the PostHog client we created in the setup step, add a function to log match events when a keyword or domain is detected.


def log_match_event(match_type, model, prompt, **kwargs):
    """
    Log a match event to PostHog for analytics.

    Args:
        match_type: "keyword" or "domain"
        model: The model name that produced the match
        prompt: The prompt used
        **kwargs: Additional properties (mentions, url, keyword, domain, etc.)
    """
    posthog.capture(
        distinct_id='aeo_monitor',
        event='aeo_match_found',
        properties={
            'match_type': match_type,
            'model': model,
            'prompt': prompt,
            **kwargs
        }
    )

async def test_model_prompt(model, prompt):
    print(f"Starting: {model} - {prompt[:50]}...")
    try:
        completion = await async_client.chat.completions.create(
            model=f"{model}:online",
            stream=False,
            messages=[{"role": "user", "content": prompt}]
        )

        content = completion.choices[0].message.content.lower()

        # Check keywords
        for keyword in keywords:
            if keyword in content:
                mentions = content.count(keyword)
                print(f"--MatchType: Keyword, Model: {model}, Mentions: {mentions}")
                log_match_event('keyword', model, prompt, keyword=keyword, mentions=mentions)
                break

        # Check annotations
        annotations = completion.choices[0].message.annotations or []  # may be None if no citations came back
        print(f"{len(annotations)} annotations returned for {model}")

        for annotation in annotations:
            url = annotation.url_citation.url.lower()
            for domain in domains:
                if domain in url:
                    print(f"--MatchType: Domain, Model: {model}, URL: {url}")
                    log_match_event('domain', model, prompt, domain=domain, url=url)
                    break

    except Exception as e:
        print(f"Error with {model}: {e}")

tasks = [test_model_prompt(model, prompt) for model in models for prompt in prompts]
await asyncio.gather(*tasks)

Run it again, and this time you should see new events appear in PostHog (if there are any matches).
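If matches were printed but events are slow to appear, note that the PostHog client batches events on a background thread. You can force delivery from the notebook:

posthog.flush()  # push any queued events to PostHog immediately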

Running on a Schedule with GitHub Actions

Lastly, we can set this script to run on a schedule to continuously monitor for new mentions or citations. For this guide, I'll be using GitHub Actions, but you could also use Cloudflare Workers, Vercel Cron, etc. In any case, you'll probably want to set up a repo, then connect it to some type of scheduler.

First, set up a new GitHub repo for the project, and save the script as aeo_monitor.py.

This version is slightly different: it imports the API keys from GitHub Actions repository secrets instead of Google Colab, and it adds an asyncio.run() wrapper around the async entry point.

import os
import asyncio
import traceback

from posthog import Posthog
from openai import AsyncOpenAI

# Setup Config and Clients

posthog = Posthog(
    api_key=os.environ["POSTHOG_API_KEY"],
    host=os.environ.get("POSTHOG_HOST") or "https://app.posthog.com"
)

async_client = AsyncOpenAI(
    base_url=os.environ.get("OPENROUTER_BASE_URL") or "https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    default_headers={
        "HTTP-Referer": os.environ.get("OR_HTTP_REFERRER") or "https://github.com/greenflux/aeo_monitor",
        "X-Title": os.environ.get("OR_X_TITLE") or "AEO Monitor"
    }
)

keywords = ["appsmith", "appsmithai"]
domains = ["appsmith.com", "appsmithai.com"]
models = ["openai/gpt-4o", "perplexity/sonar-pro", "anthropic/claude-sonnet-4"]
prompts = [
    "What's the best open source low code tool for building apps in 2025?",
    "What's the best drag and drop app builder with SSO in 2025?"
]


def log_match_event(match_type, model, prompt, **kwargs):
    """
    Log a match event to PostHog for analytics.

    Args:
        match_type: "keyword" or "domain"
        model: The model name that produced the match
        prompt: The prompt used
        **kwargs: Additional properties (mentions, url, keyword, domain, etc.)
    """
    posthog.capture(
        distinct_id='aeo_monitor',
        event='aeo_match_found',
        properties={
            'match_type': match_type,
            'model': model,
            'prompt': prompt,
            **kwargs
        }
    )

async def test_model_prompt(model, prompt):
    print(f"Starting: {model} - {prompt[:50]}...")
    try:
        completion = await async_client.chat.completions.create(
            model=f"{model}:online",
            stream=False,
            messages=[{"role": "user", "content": prompt}]
        )

        content = completion.choices[0].message.content.lower()

        # Check keywords
        for keyword in keywords:
            if keyword in content:
                mentions = content.count(keyword)
                print(f"--MatchType: Keyword, Model: {model}, Mentions: {mentions}")
                log_match_event('keyword', model, prompt, keyword=keyword, mentions=mentions)
                break

        # Check annotations
        annotations = completion.choices[0].message.annotations or []  # may be None if no citations came back
        print(f"{len(annotations)} annotations returned for {model}")

        for annotation in annotations:
            url = annotation.url_citation.url.lower()
            for domain in domains:
                if domain in url:
                    print(f"--MatchType: Domain, Model: {model}, URL: {url}")
                    log_match_event('domain', model, prompt, domain=domain, url=url)
                    break

    except Exception as e:
        print(f"Error with {model}: {type(e).__name__}: {e}")
        traceback.print_exc()

async def main():
    tasks = [test_model_prompt(model, prompt) for model in models for prompt in prompts]
    await asyncio.gather(*tasks)
    posthog.flush()  # make sure queued events are sent before the job exits

if __name__ == "__main__":
    asyncio.run(main())

Then add a requirements.txt file:

openai>=1.43,<2
posthog>=3.6,<4

Next, add a .github/workflows/aeo-monitor.yml file:

name: AEO Monitor

on:
  workflow_dispatch:
  schedule:
    - cron: "0 0 * * *"  # Daily at 0:00 UTC

env:
  OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
  OPENROUTER_BASE_URL: ${{ secrets.OPENROUTER_BASE_URL }}
  POSTHOG_API_KEY: ${{ secrets.POSTHOG_API_KEY }}
  POSTHOG_HOST: ${{ secrets.POSTHOG_HOST }}
  AEO_DISTINCT_ID: ${{ secrets.AEO_DISTINCT_ID }}
  OR_HTTP_REFERRER: "https://github.com/greenflux/aeo_monitor"
  OR_X_TITLE: "AEO Monitor"

jobs:
  monitor:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    concurrency:
      group: aeo-monitor
      cancel-in-progress: true
    steps:
      - uses: actions/checkout@v4

      - name: Show repo tree (debug)
        run: |
          pwd
          ls -la

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: "pip"

      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      - name: Run monitor
        run: python aeo_monitor.py

Lastly, add your API keys as new Repository Secrets:

  • Settings > Security > Secrets and variables > Actions

Testing the GitHub Action

Head over to the Actions tab of your repo. You should see the AEO Monitor workflow. Click the Run workflow dropdown, then the Run workflow button.

If all goes well, you should see the monitor run for about 30 seconds, followed by a new batch of events in PostHog.

Monitoring Results Over Time

For this to be a truly effective measure of your AEO campaign efforts over time, it's best to finalize your list of keywords, domains, models, and prompts, then run the exact same combination at regular intervals for at least a month. If you decide to change any of the variables, also change the distinct ID in PostHog, so you can report those results separately.
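One way to make that easy is to read the distinct ID from the environment instead of hardcoding it; a hypothetical tweak to aeo_monitor.py (the workflow above already passes an AEO_DISTINCT_ID secret for this):

# Version your experiment runs, e.g. "aeo_monitor_v2" after changing prompts
DISTINCT_ID = os.environ.get("AEO_DISTINCT_ID", "aeo_monitor")

# ...then pass distinct_id=DISTINCT_ID in log_match_event's posthog.capture() call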

Conclusion

Regardless of what you want to call it (AEO, GEO, AI SEO, etc.), the fact is that AI is becoming a default search tool for many users, and appearing in AI answers calls for different strategies than traditional Google SEO. AEO is still very new and changing constantly, so companies should do their own research and find the strategy that works best for their product and community. This AEO monitoring tool provides a simple way to continuously track your brand's LLM visibility, so you can measure the effectiveness of your marketing efforts to boost it.

Note on Reddit

Reddit is one of the most-cited sources in LLM answers, so it makes sense that companies would try to build an AEO strategy around it, which is completely fine. But if that's the only reason you are on Reddit, it will fail. Your AEO strategy can be based on Reddit, but your Reddit account should be for more than that. Either use your real/main Reddit account, or if you're new, start following other subreddits and engaging in completely unrelated topics too. I could fill another post with tips for Reddit, but this is the most important one: be a real person with real interests, not just an account the marketing team uses to post spam.