Why deploy AEO analytics on Render?
AEO (Answer Engine Optimization) Analytics is a dashboard tool for tracking how large language models mention and rank your brand compared to competitors. It solves the problem of visibility into AI-generated recommendations by querying multiple LLMs, extracting brand mentions and sentiment, and surfacing trends over time.
This template deploys a complete AEO analytics stack with a managed Postgres database for storing brand mentions, sentiment data, and competitive rankings—all pre-wired with the correct environment variables. Instead of manually configuring multi-LLM API connections, database schemas, and a React dashboard, you get a working competitive intelligence tool in one click. Render's managed Postgres handles backups and maintenance automatically, letting you focus on customizing the prompts and brands you want to track rather than infrastructure.
Architecture
What you can build
After deploying, you'll have a dashboard that tracks how OpenAI, Anthropic, and Google models mention your brand compared to competitors when answering industry-related questions. A scheduled job queries the LLMs daily with prompts you configure, analyzes responses for sentiment and ranking position, and extracts any URLs attributed to each brand. You can use this to monitor whether AI assistants are recommending your product and how that changes over time.
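The fan-out to multiple providers described above can be sketched in TypeScript. Everything here is illustrative, not the template's actual code: the `callLLM` stand-in would be replaced by real OpenAI, Anthropic, and Google SDK calls, and the `QueryResult` shape is an assumption. The key idea is `Promise.allSettled`, so one provider failing or timing out doesn't lose the other providers' responses.

```typescript
// Hypothetical sketch of the daily fan-out. The provider list, QueryResult
// shape, and callLLM are illustrative stand-ins for the template's code.
type Provider = "openai" | "anthropic" | "google";

interface QueryResult {
  provider: Provider;
  prompt: string;
  response: string | null; // null when the provider call failed
  error?: string;
}

// Stand-in for a real SDK call (OpenAI, Anthropic, or Google client).
async function callLLM(provider: Provider, prompt: string): Promise<string> {
  return `${provider} answer to: ${prompt}`;
}

// Query every provider in parallel. Promise.allSettled (rather than
// Promise.all) ensures one rejection doesn't discard the other results.
async function queryAllProviders(prompt: string): Promise<QueryResult[]> {
  const providers: Provider[] = ["openai", "anthropic", "google"];
  const settled = await Promise.allSettled(
    providers.map((p) => callLLM(p, prompt))
  );
  return settled.map((result, i) => ({
    provider: providers[i],
    prompt,
    response: result.status === "fulfilled" ? result.value : null,
    error: result.status === "rejected" ? String(result.reason) : undefined,
  }));
}
```

Each result row (provider, prompt, response) would then be persisted so the sentiment and ranking analysis can run over it.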
Key features
- Parallel Multi-LLM Queries: Simultaneously queries OpenAI, Anthropic, and Google models to track brand mentions across different AI providers.
- Automated Brand Extraction: LLM-powered analysis parses responses to extract brand mentions, sentiment scores, and competitive rankings.
- URL-to-Brand Attribution: Detects URLs in LLM responses and matches them to tracked brands using configurable domain mappings.
- Drizzle ORM Schema: PostgreSQL database with Drizzle ORM for storing industries, brands, prompts, and historical mention data.
- Render Workflows Integration: Scheduled workflow tasks using Render Workflows SDK to automate daily LLM polling and digest generation.
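The URL-to-brand attribution feature can be illustrated with a small pure function. This is a sketch under assumptions: the `domainToBrand` map mirrors the demo brands mentioned later, and the `attributeUrls` helper name and regex are hypothetical, not the template's implementation.

```typescript
// Hypothetical illustration of URL-to-brand attribution. The domain map
// and attributeUrls helper are sketches, not the template's actual code.
const domainToBrand: Record<string, string> = {
  "github.com": "GitHub",
  "gitlab.com": "GitLab",
  "linear.app": "Linear",
  "atlassian.com": "Jira",
};

// Pull URLs out of an LLM response and map each hostname (ignoring a
// leading "www.") to a tracked brand; unknown domains are dropped.
function attributeUrls(response: string): { url: string; brand: string }[] {
  const urls = response.match(/https?:\/\/[^\s)]+/g) ?? [];
  const hits: { url: string; brand: string }[] = [];
  for (const url of urls) {
    const host = new URL(url).hostname.replace(/^www\./, "");
    const brand = domainToBrand[host];
    if (brand) hits.push({ url, brand });
  }
  return hits;
}
```

Matching on hostnames rather than raw substrings avoids false positives such as a brand name appearing in another site's path.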
Use cases
- Marketing lead tracks brand visibility across ChatGPT and Claude responses
- SEO manager monitors competitor mention frequency in AI recommendation queries
- Startup founder discovers which prompts trigger their product recommendations
- Product marketer analyzes sentiment trends when LLMs compare their tool
What's included
| Service | Type | Purpose |
|---|---|---|
Next steps
- Open the web service URL and navigate to the dashboard — You should see the SprintHub demo configuration with brands like GitHub, GitLab, Linear, and Jira listed, showing empty charts waiting for data
- Configure a Render Workflow to run the daily-job task on a schedule (e.g., daily at 9am) — You should see the workflow appear in your Render Dashboard under Workflows, with the cron schedule displayed
- Test the workflow by triggering a manual run from the Render Dashboard — You should see LLM responses populate in the dashboard within a few minutes, with brand mentions, sentiment scores, and rankings extracted from OpenAI, Anthropic, and Google models
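For step 2, the standard cron expression for "daily at 9am" is:

```
0 9 * * *
```

Check how Render interprets workflow schedules before relying on the hour: cron schedules on hosted platforms are commonly evaluated in UTC rather than your local timezone.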