Multi-tenant from day 1
Tenants are isolated in Postgres at the row level. No cross-tenant queries possible by construction.
A multi-tenant SEO/AEO automation platform. Pulls data, decides what is safe to fix, fixes it, and asks permission for the rest.
Pulls keyword, ranking, backlink, and content data on a schedule. Runs SEO and content audits across every page on every tenant site. Scores findings against rules and brand-relevance gates. Safe technical fixes (broken links, duplicate canonicals, schema injection, indexation pings) auto-apply. Anything touching pricing, public content, or money pages gets queued for a human. Outreach to high-authority domains is auto-discovered, drafted in two languages, and queued to send. Every action is logged with the user who approved it.
Before: a marketing team running multiple websites was pasting SEO data from 3 vendors into spreadsheets, catching duplicate posts and broken links weeks late, drafting outreach emails one at a time, approving the same kind of small fix hundreds of times by hand, and answering "are we progressing?" with no data to back it up. After: ~50 pipeline runs/day across 2 production tenants, 24 distinct change types (12 auto-apply, 12 approval-required), 70+ outreach drafts per refresh cycle, and an automated weekly spam-backlink disavow.
Celery workers pull from DataForSEO, Google Search Console, GA4, Serper SERP, WordPress REST, and Lighthouse on schedules per tenant.
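As a simplified, hypothetical stand-in for that per-tenant schedule (tenant names, sources, and intervals below are invented, and the real system drives this with Celery beat rather than a hand-rolled check):

```python
from datetime import datetime, timedelta

# Hypothetical per-tenant pull intervals; the production config lives in
# Celery beat, with one schedule entry per (tenant, source) pair.
TENANT_SCHEDULES = {
    "tenant_a": {"dataforseo": timedelta(hours=6), "gsc": timedelta(hours=24)},
    "tenant_b": {"dataforseo": timedelta(hours=12), "gsc": timedelta(hours=24)},
}

def due_pulls(last_run: dict, now: datetime) -> list[tuple[str, str]]:
    """Return (tenant, source) pairs whose pull interval has elapsed."""
    due = []
    for tenant, sources in TENANT_SCHEDULES.items():
        for source, interval in sources.items():
            # A pair that has never run defaults to datetime.min, i.e. due now.
            if now - last_run.get((tenant, source), datetime.min) >= interval:
                due.append((tenant, source))
    return due
```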
For every URL: rank tracking, content audit (thin content, keyword cannibalization, missing schema), technical audit (canonicals, redirects, indexation). Each finding gets a type and a confidence score.
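A finding can be sketched like this (field names, checks, and thresholds are illustrative assumptions, not the production values):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    url: str
    finding_type: str   # e.g. "thin_content", "missing_schema"
    confidence: float   # 0.0 - 1.0

def audit_page(url: str, word_count: int, has_schema: bool) -> list[Finding]:
    """Toy content audit: two of the many checks the real pipeline runs."""
    findings = []
    if word_count < 300:            # assumed thin-content threshold
        findings.append(Finding(url, "thin_content", 0.9))
    if not has_schema:
        findings.append(Finding(url, "missing_schema", 0.95))
    return findings
```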
Before anything is queued or executed: classify the type → global gate → money-page guard → 3-signal gate (rank drop + traffic dip + competitor gain). Only suggestions that clear every gate proceed.
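The gate chain might look like the sketch below, assuming suggestions arrive already classified; the confidence threshold, money-page list, and signal field names are invented for illustration:

```python
MONEY_PAGES = {"/pricing", "/checkout"}   # hypothetical guard list

def passes_gates(suggestion: dict) -> bool:
    """Apply the gates in order; any failure stops the suggestion."""
    # Global gate: drop low-confidence suggestions outright.
    if suggestion["confidence"] < 0.8:
        return False
    # Money-page guard: never auto-touch revenue pages.
    if suggestion["path"] in MONEY_PAGES:
        return False
    # 3-signal gate: require rank drop AND traffic dip AND competitor gain.
    s = suggestion["signals"]
    return s["rank_drop"] and s["traffic_dip"] and s["competitor_gain"]
```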
12 change types auto-apply via WordPress REST (broken-link fix, canonical, schema injection). 12 types route to the change queue with a human-readable diff for approval.
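The routing step reduces to a set lookup. The type names below are examples; the production set has 12 of each:

```python
# Example auto-apply set (the real system has 12 entries here).
AUTO_APPLY = {"broken_link_fix", "canonical_fix", "schema_injection"}

def route(change_type: str) -> str:
    """Decide where a gated suggestion goes next."""
    return "auto_apply" if change_type in AUTO_APPLY else "approval_queue"
```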
High-authority targets discovered from competitor backlinks. Drafts auto-generated in EN + AR. Spam backlinks scanned weekly; a Google-format disavow file auto-generated for the SEO lead to upload.
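The disavow output follows Google's published file format: one `domain:` entry or URL per line, `#` for comments. The spam-scoring step that produces the input list is assumed upstream:

```python
def build_disavow(spam_domains: list[str]) -> str:
    """Emit a disavow file in the format Google Search Console accepts."""
    lines = ["# auto-generated; reviewed and uploaded by the SEO lead"]
    # Dedupe and sort so diffs between weekly runs stay readable.
    lines += [f"domain:{d}" for d in sorted(set(spam_domains))]
    return "\n".join(lines) + "\n"
```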
Production tenants: 2
Pipeline runs: ~50/day
Change types: 24 (12 auto, 12 approval)
Outreach drafts: 70+ / refresh
LLM spend cap: $60/month
Last 24h failed runs: 0
These are not mockups. Every screenshot below is from the system running in production.
For anyone evaluating the system from an engineering angle: why these choices, and what was traded off.
Tenants are isolated in Postgres at the row level. No cross-tenant queries possible by construction.
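As an illustration of "no cross-tenant queries by construction", every read could go through a helper that always injects the tenant filter. This is a sketch, not the actual data layer; the real isolation is enforced at the row level in Postgres itself:

```python
def scoped_query(table: str, columns: list[str], tenant_id: int) -> tuple[str, tuple]:
    """Build a SELECT that is always filtered by tenant_id.

    Because callers never write raw WHERE clauses, an unscoped
    cross-tenant query cannot be expressed through this interface.
    """
    cols = ", ".join(columns)
    sql = f"SELECT {cols} FROM {table} WHERE tenant_id = %s"
    return sql, (tenant_id,)
```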
Auto-apply only for changes with no rollback risk. Everything touching ranking-sensitive content needs human OK.
Per-tenant daily and monthly LLM spend caps. When hit, the pipeline pauses and posts to the operations channel. No surprise bills.
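The cap check itself is simple. The sketch below assumes the counters are passed in; the real system persists per-tenant spend and posts to the ops channel on breach:

```python
def check_budget(spent_today: float, daily_cap: float,
                 spent_month: float, monthly_cap: float) -> str:
    """Return the pipeline's next action given current LLM spend."""
    if spent_today >= daily_cap or spent_month >= monthly_cap:
        return "pause_and_alert"   # pipeline pauses, ops channel is notified
    return "proceed"
```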
Every auto-apply, every approval, every reject, every override is logged with the actor. Drives weekly reports and recovery from any bad change.
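A minimal shape for one such audit record, with hypothetical field names:

```python
from datetime import datetime, timezone

def audit_entry(actor: str, action: str, change_id: int) -> dict:
    """One append-only log record: who did what to which change, and when."""
    return {
        "actor": actor,        # the human (or system) responsible
        "action": action,      # "auto_apply" | "approve" | "reject" | "override"
        "change_id": change_id,
        "at": datetime.now(timezone.utc).isoformat(),
    }
```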
Spam scanned weekly, disavow file generated, but only a human can submit it to Google. The cost of a wrong disavow is too high to automate.
Share the workflow and the systems you use today. Within 24 hours we reply with scope, KPIs, timeline, and a SAR estimate.
Start now