How I Built an Admin Panel That Replaced 5 SaaS Tools
Published on the BirJob Blog — by the solo developer behind BirJob.com
BirJob is Azerbaijan's largest job aggregator. I built it alone. At any given time, the system is scraping 50+ job boards, serving thousands of daily searches, processing payments through an Azerbaijani payment gateway, managing email campaigns, and running a Telegram bot that pushes job alerts. At some point in 2025, I realized I was paying for — or juggling free tiers of — five separate services just to keep track of what my own product was doing:
- Google Analytics (custom views for event tracking, funnel analysis, referrer breakdown)
- A user management tool (viewing user profiles, CVs, alert keywords, HR subscriptions)
- A content moderation system (reviewing sponsored jobs, contact form spam, blog drafts)
- A payment dashboard (Stripe-like view for tracking ePoint transactions, reconciling webhooks)
- Uptime/scraper monitoring (knowing which of 91 scrapers broke overnight, and why)
So I built one admin panel to replace all of them. This is the story of what it looks like, how it works under the hood, and what I learned along the way.
1. Why Build Custom?
The short answer: context. No SaaS tool knows that my scraper for azercell needs
?json=true instead of ?json, or that a zero-result search for "data analyst baku"
means I should add a new scraping source. Every third-party dashboard I used forced me to mentally
translate between "their world" and mine.
The longer answer involves money and complexity. I was spending time context-switching across five browser tabs, each with its own authentication, its own data model, its own query language. For a solo developer, that tax is real. The admin panel is a single Next.js page that loads everything I care about in one view. One login. One data model. One set of Prisma queries that hit my own PostgreSQL database directly.
The cost of building it was roughly a week of focused work. The cost of not building it was the cognitive overhead of managing five tools, forever.
2. The Architecture: One Page, Thirteen Tabs
The entire admin panel is a single client-side React page at /admin. It uses dynamic imports
for heavier components and lazy-loads data per tab. The navigation has thirteen tabs, organized into
three groups:
const navItems: { id: Tab; label: string; icon: React.ReactNode }[] = [
// Essentials
{ id: 'dashboard', label: 'Dashboard', icon: <LayoutDashboard /> },
{ id: 'analytics', label: 'Analytics', icon: <BarChart2 /> },
{ id: 'search-analytics', label: 'Axtarış Analitikasi', icon: <Search /> }, // "Search Analytics"
// Business
{ id: 'sponsored', label: 'Sponsorlu Elanlar', icon: <Briefcase /> }, // "Sponsored Listings"
{ id: 'applications', label: 'Applications', icon: <CheckCircle2 /> },
{ id: 'payments', label: 'Payments', icon: <CreditCard /> },
{ id: 'jobs', label: 'Vakansiyalar', icon: <Database /> }, // "Vacancies"
// System
{ id: 'blog', label: 'Blog', icon: <FileText /> },
{ id: 'errors', label: 'Scraper Errors', icon: <AlertTriangle /> },
{ id: 'scrapers', label: 'Scraper Health', icon: <Activity /> },
{ id: 'email', label: 'Email Marketing', icon: <Mail /> },
{ id: 'users', label: 'Users', icon: <Users /> },
{ id: 'contacts', label: 'Contact Form', icon: <MessageSquareWarning /> },
];
The sidebar is a dark slate-950 vertical nav, sticky on desktop and drawer-style on mobile.
Each tab only fetches its data when you click it. The dashboard tab is the only one that loads automatically
on page open.
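The fetch-on-click behavior boils down to a tiny cache around a generic fetcher. Here is a sketch (the names are illustrative, not lifted from the BirJob codebase): each tab's request fires on first selection, and later clicks reuse the cached promise.

```typescript
// Hypothetical sketch of per-tab lazy loading: the fetcher runs only the
// first time a tab is selected; subsequent clicks reuse the cached promise.
type TabFetcher = (tab: string) => Promise<unknown>;

function createTabLoader(fetcher: TabFetcher) {
  const cache = new Map<string, Promise<unknown>>();
  return (tab: string): Promise<unknown> => {
    if (!cache.has(tab)) {
      cache.set(tab, fetcher(tab));
    }
    return cache.get(tab)!;
  };
}
```

Caching the promise (rather than the resolved data) also deduplicates rapid double-clicks on the same tab.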
Heavy components — Scraper Health, Search Analytics, Contact Submissions — are loaded
with next/dynamic to keep the initial bundle small:
import dynamic from 'next/dynamic';

const ScraperHealthPanel = dynamic(
() => import('@/components/admin/ScraperHealthPanel'),
{ ssr: false }
);
const SearchAnalyticsPanel = dynamic(
() => import('@/components/admin/SearchAnalyticsPanel'),
{ ssr: false }
);
const ContactSubmissionsPanel = dynamic(
() => import('@/components/admin/ContactSubmissionsPanel'),
{ ssr: false }
);
3. Admin Authentication: Deliberately Simple
The admin auth is intentionally separate from the main user authentication system. There is no admin
"account." There is one password, stored as an environment variable called ADMIN_SECRET.
You enter the password on /admin/login, and it gets compared server-side:
// POST /api/admin/auth
export async function POST(request: NextRequest) {
const { password } = await request.json();
if (!process.env.ADMIN_SECRET || password !== process.env.ADMIN_SECRET) {
return NextResponse.json({ error: 'Wrong password' }, { status: 401 });
}
const response = NextResponse.json({ success: true });
response.cookies.set('admin_session', process.env.ADMIN_SECRET, {
httpOnly: true,
secure: process.env.NODE_ENV === 'production',
sameSite: 'lax',
maxAge: 60 * 60 * 24 * 7, // 7 days
path: '/',
});
return response;
}
Every admin API route checks for that cookie:
function isAdmin(req: NextRequest) {
return req.cookies.get('admin_session')?.value === process.env.ADMIN_SECRET;
}
Is this "enterprise-grade" auth? No. But I am the only admin. There is no team. The threat model is simple: someone would need to know the secret, or steal the httpOnly cookie. For a solo project, this is the right level of complexity. I can always add proper JWT-based role auth later. The point is: I shipped it in 20 minutes and it works.
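One cheap hardening step, if I ever want it, would be swapping the `!==` comparison for Node's constant-time equality check — a suggestion, not what the current code does:

```typescript
import { timingSafeEqual } from 'crypto';

// Constant-time string comparison; unlike `!==`, it does not leak how many
// leading characters of the secret matched through response timing.
function safeCompare(a: string, b: string): boolean {
  const bufA = Buffer.from(a, 'utf8');
  const bufB = Buffer.from(b, 'utf8');
  // timingSafeEqual throws on length mismatch, so guard first.
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}
```

The length guard does leak whether the guess has the right length, which is an accepted tradeoff of this pattern.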
4. Replacing Google Analytics: The Analytics Dashboard
Google Analytics is powerful, but I never needed 90% of it. What I actually wanted was: how many people are using my site, what are they doing, and where are they coming from?
So I built a web_event_log table:
model web_event_log {
id Int @id @default(autoincrement())
event String @db.VarChar(50) // register | login | cv_upload | job_view
user_id Int?
ip String? @db.VarChar(64)
country String? @db.VarChar(100)
ua String? @db.VarChar(500)
device String? @db.VarChar(20) // mobile | tablet | desktop | bot
path String? @db.VarChar(500)
referrer String? @db.VarChar(1000)
session_id String? @db.VarChar(64)
meta Json?
created_at DateTime @default(now())
@@index([event])
@@index([user_id])
@@index([created_at])
@@index([country])
@@index([referrer])
@@index([session_id])
}
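The post does not show how the device column gets populated; a minimal User-Agent classifier along these lines would do (the regexes are illustrative guesses, not the production parser):

```typescript
// Rough UA sniffing for the web_event_log `device` column.
// Patterns below are illustrative, not exhaustive.
function classifyDevice(ua: string): 'mobile' | 'tablet' | 'desktop' | 'bot' {
  const s = ua.toLowerCase();
  if (/bot|crawler|spider|curl|wget/.test(s)) return 'bot';
  // Android tablets advertise "Android" without the "Mobile" token.
  if (/ipad|tablet|android(?!.*mobile)/.test(s)) return 'tablet';
  if (/mobi|iphone|android/.test(s)) return 'mobile';
  return 'desktop';
}
```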
The analytics API route fires 13 parallel Prisma queries using Promise.all and returns
everything the dashboard needs in a single HTTP response:
const [
totalEvents,
uniqueSessions,
eventBreakdown,
dailyEvents,
topPages,
topCountries,
deviceBreakdown,
topReferrers,
topSearches,
zeroSearches,
searchTotal,
emailSubsTotal,
telegramSubsTotal,
] = await Promise.all([
prisma.web_event_log.count({ where: { created_at: { gte: since } } }),
prisma.web_event_log.findMany({
where: { created_at: { gte: since }, session_id: { not: null } },
select: { session_id: true },
distinct: ['session_id'],
}).then(r => r.length),
prisma.web_event_log.groupBy({ by: ['event'], ... }),
prisma.$queryRaw`SELECT DATE(created_at)::text AS date, COUNT(*) ...`,
// ... 9 more queries
]);
The dashboard renders four primary KPI cards (unique sessions, registrations, job applications, searches)
plus six secondary ones (logins, CV uploads, job views, email subscribers, Telegram subscribers, total events).
Below that, a daily activity bar chart built with pure CSS — no charting library, just
div elements with calculated heights:
{analyticsData.dailyEvents.map(d => (
<div key={d.date} className="flex-1 group relative">
<div
className="w-full bg-orange-500 rounded-t-sm"
style={{ height: `${(d.count / maxDaily) * 100}%` }}
/>
</div>
))}
No Chart.js. No Recharts. No D3. Just percentage-height divs inside a flex container. It is fast, lightweight, and tells me exactly what I need to know at a glance.
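The height math in that JSX reduces to a pure helper — my extraction of the `d.count / maxDaily` expression above:

```typescript
// Normalize daily counts to 0–100% bar heights, as in the JSX above.
function barHeights(counts: number[]): number[] {
  const max = Math.max(...counts, 1); // avoid division by zero on quiet days
  return counts.map((c) => (c / max) * 100);
}
```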
5. The Conversion Funnel
One thing I missed from GA was funnel visualization. So I built a dedicated funnel API that tracks the user journey from registration to payment:
const funnel = [
{ step: 'Registration', event: 'register', count: counts['register'] ?? 0 },
{ step: 'Login', event: 'login', count: counts['login'] ?? 0 },
{ step: 'CV Upload', event: 'cv_upload', count: counts['cv_upload'] ?? 0 },
{ step: 'Job Application', event: 'job_apply', count: counts['job_apply'] ?? 0 },
{ step: 'HR Subscribe Init', event: 'hr_subscribe_init', count: counts['hr_subscribe_init'] ?? 0 },
{ step: 'HR Subscribe Paid', event: 'hr_subscribe_paid', count: counts['hr_subscribe_paid'] ?? 0 },
{ step: 'Sponsor Job Init', event: 'sponsor_init', count: counts['sponsor_init'] ?? 0 },
{ step: 'Sponsor Job Paid', event: 'sponsor_paid', count: counts['sponsor_paid'] ?? 0 },
];
This uses the same web_event_log table, just grouped differently. The API also returns
country, device, and referrer breakdowns alongside the funnel, plus daily activity over the selected
time range. Five Promise.all queries, one response.
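Step-over-step conversion rates are one computation away from that array. A sketch — the conversion field is my addition, not part of the actual API response:

```typescript
interface FunnelStep { step: string; count: number }

// Annotate each step with the percentage of the previous step's users
// who reached it (null for the first step or an empty predecessor).
function withConversion(funnel: FunnelStep[]) {
  return funnel.map((s, i) => ({
    ...s,
    conversion: i === 0 || funnel[i - 1].count === 0
      ? null
      : Math.round((s.count / funnel[i - 1].count) * 1000) / 10,
  }));
}
```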
6. Search Analytics: Zero-Result Searches as a Product Roadmap
This is the feature I am most proud of. Every search a user makes on BirJob gets logged into a
search_log table — but only on the first page of results, so pagination does not create duplicate entries:
model search_log {
id BigInt @id @default(autoincrement())
user_id Int?
session_id String? @db.VarChar(64)
query String @db.VarChar(500)
results_count Int @default(0)
source_filter String? @db.VarChar(100)
country String? @db.VarChar(100)
city String? @db.VarChar(100)
device String? @db.VarChar(20)
ip String? @db.VarChar(45)
browser String? @db.VarChar(100)
os String? @db.VarChar(100)
referrer String? @db.VarChar(500)
created_at DateTime @default(now())
@@index([query])
@@index([results_count]) // find zero-result searches fast
@@index([created_at])
}
The @@index([results_count]) line is the key. That index exists specifically so I can
quickly pull up every search that returned zero results:
// Zero-result searches with context
prisma.search_log.groupBy({
by: ['query'],
where: { created_at: { gte: since }, results_count: 0 },
_count: { id: true },
orderBy: { _count: { id: 'desc' } },
take: 20,
}),
The admin panel renders this as a dedicated "Zero-Result Searches" card with a red border and a subtitle that says (in Azerbaijani): "These queries returned zero results — signal for a new source."
This has directly driven product decisions. When I saw "data analyst" showing up 15 times with zero results, I knew I needed to add more IT-focused job boards. When "freelance" kept appearing, I knew there was demand I was not meeting. The zero-result list is, in practice, my product backlog.
The search analytics panel also shows:
- Top keywords with average result count
- Daily search volume as a bar chart
- Browser, OS, country, and city breakdowns
- A detailed log view where you can expand any individual search to see the user agent, referrer, session ID, and linked user account
The detail view has paginated search logs (50 per page) with a search filter on top, so I can drill into specific queries. Each row is expandable to reveal full metadata.
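The 50-per-page pagination is the usual offset math; pulled out as a helper (the name and input clamping are my choices):

```typescript
// Translate a 1-based page number into Prisma's skip/take pair.
function pageWindow(page: number, limit = 50): { skip: number; take: number } {
  const p = Math.max(1, Math.floor(page)); // clamp bad input to page 1
  return { skip: (p - 1) * limit, take: limit };
}
```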
7. The Dashboard: Promise.all for Parallel Data Loading
The main dashboard tab shows six KPI cards at a glance. The data behind those cards comes from a single API route that fires ten Prisma queries in parallel:
// GET /api/admin/stats
const [
totalJobs,
newJobsToday,
activeSponsoredJobs,
totalSponsoredJobs,
pendingPayments,
paidThisMonth,
totalRevenue,
recentErrors,
topSources,
recentPayments,
] = await Promise.all([
prisma.jobs_jobpost.count(),
prisma.jobs_jobpost.count({ where: { created_at: { gte: yesterday } } }),
prisma.sponsored_job.count({ where: { is_active: true, ends_at: { gte: now } } }),
prisma.sponsored_job.count(),
prisma.sponsored_job.count({ where: { payment_status: 'pending' } }),
prisma.sponsored_job.aggregate({
where: { payment_status: 'paid', created_at: { gte: thisMonth } },
_sum: { amount: true },
}),
prisma.sponsored_job.aggregate({
where: { payment_status: 'paid' },
_sum: { amount: true },
}),
prisma.scraper_errors.findMany({ orderBy: { timestamp: 'desc' }, take: 20 }),
prisma.jobs_jobpost.groupBy({
by: ['source'],
_count: { id: true },
orderBy: { _count: { id: 'desc' } },
take: 10,
}),
prisma.sponsored_job.findMany({
orderBy: { created_at: 'desc' },
take: 10,
select: { id: true, title: true, company: true, payment_status: true,
amount: true, created_at: true, is_active: true, ends_at: true,
contact_email: true, order_id: true },
}),
]);
All ten queries run concurrently against PostgreSQL. On a typical load, this returns in under 200ms. The dashboard then shows:
- Total job count across all scrapers
- Jobs added in the last 24 hours
- Currently active sponsored listings
- Pending payment count (shown as a badge on the nav item)
- Revenue this month (in AZN)
- All-time revenue
- Top 10 job sources by volume
- Last 10 payments with status badges
- Recent scraper errors (last 20)
- A supporters section showing "Buy Me a Coffee" backers
8. User Management: No Separate Tool Needed
The users tab gives me a paginated, searchable list of every user on the platform. The search hits email, first name, last name, and company name simultaneously:
const where = {
AND: [
role ? { role } : {},
search ? {
OR: [
{ email: { contains: search, mode: 'insensitive' } },
{ first_name: { contains: search, mode: 'insensitive' } },
{ last_name: { contains: search, mode: 'insensitive' } },
{ company_name: { contains: search, mode: 'insensitive' } },
],
} : {},
],
};
For each user, the API returns everything I would want to see: their role (candidate or HR),
email verification status, CV profile (stored on Cloudflare R2), active HR subscriptions,
credit balance, keyword alerts, and linked Telegram account. All in one query with Prisma's
nested select:
prisma.user.findMany({
where,
select: {
id: true, email: true, role: true,
first_name: true, last_name: true, company_name: true,
email_verified: true, created_at: true,
cv_profile: { select: { r2_key: true, file_name: true, is_visible: true } },
hr_subscriptions: {
where: { payment_status: 'paid', ends_at: { gt: new Date() } },
select: { plan: true, ends_at: true },
orderBy: { ends_at: 'desc' },
take: 1,
},
hr_credit_transactions: {
where: { payment_status: 'paid' },
select: { amount: true },
},
keywords: { where: { active: true }, select: { keyword: true } },
telegram_subscriber: {
select: { chat_id: true, username: true, active: true, keywords: true },
},
},
orderBy: { created_at: 'desc' },
skip: (page - 1) * limit,
take: limit,
});
I can also create users directly from the admin panel (useful for creating test accounts or manually onboarding HR partners), download a user's CV, and send them notifications via email or Telegram — all from the same interface.
9. Payment Monitoring: Three Revenue Streams, One View
BirJob has three types of paid transactions:
- Sponsored job postings — HR pays to pin a job at the top of results
- HR subscriptions — monthly/annual plans for accessing the CV database
- Credit purchases — one-time credit packs for downloading candidate CVs
Each lives in a different Prisma model (sponsored_job, hr_subscription,
hr_credit_transaction). The payments API unifies them into a single sorted timeline:
const [jobPayments, subscriptions, credits] = await Promise.all([
prisma.sponsored_job.findMany({ orderBy: { created_at: 'desc' }, select: { ... } }),
prisma.hr_subscription.findMany({ orderBy: { created_at: 'desc' }, select: { ... } }),
prisma.hr_credit_transaction.findMany({
where: { type: 'purchase' },
orderBy: { created_at: 'desc' },
select: { ... },
}),
]);
// Normalize into unified format
const unified: UnifiedPayment[] = [
...jobPayments.map(j => ({
id: `job-${j.id}`,
type: 'job',
label: j.title,
amount: j.amount ? Number(j.amount) : null,
payment_status: j.payment_status,
order_id: j.order_id,
epoint_transaction: j.epoint_transaction,
// ...
})),
...subscriptions.map(s => ({ id: `sub-${s.id}`, type: 'subscription', ... })),
...credits.map(c => ({ id: `credit-${c.id}`, type: 'credits', ... })),
];
unified.sort((a, b) =>
new Date(b.created_at).getTime() - new Date(a.created_at).getTime()
);
The composite ID pattern (job-123, sub-456, credit-789) lets
me handle updates and deletes through a single PATCH/DELETE endpoint that parses the prefix to know
which table to hit. This means one unified payments view on the frontend, one endpoint on the backend,
three underlying Prisma models.
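The prefix parsing on the PATCH/DELETE side might look like this — a sketch, since the real handler is not shown in the post:

```typescript
type PaymentKind = 'job' | 'sub' | 'credit';

// Split a composite ID like "job-123" into the table discriminator
// and the numeric row ID; returns null for anything malformed.
function parseCompositeId(id: string): { kind: PaymentKind; rowId: number } | null {
  const match = /^(job|sub|credit)-(\d+)$/.exec(id);
  if (!match) return null;
  return { kind: match[1] as PaymentKind, rowId: Number(match[2]) };
}
```

Rejecting unknown prefixes up front keeps a typo from silently hitting the wrong table.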
Each payment row shows the ePoint transaction ID (Azerbaijan's payment processor), the order ID, the amount in AZN (Azerbaijani manat), and a status badge. I can change statuses directly from the table — mark something as paid, failed, or cancelled — which is useful when a webhook is delayed and I need to manually reconcile.
10. Sponsored Job Management
The sponsored jobs tab handles the full lifecycle of paid job postings. I can:
- Create new sponsored listings manually (useful for deals made outside the self-serve flow)
- Toggle any listing active/inactive with a single click
- See which ones are paid, which are pending, which have expired
- Delete listings that were created by mistake
The job application flow is tracked separately. The applications tab shows aggregate stats (total, pending, reviewed, rejected, accepted) and a per-job breakdown with expandable application details including cover letters and CV download links.
11. Scraper Health Monitoring
This is the feature that saves me the most time. BirJob runs 91 scrapers. Some use
aiohttp for simple HTTP requests, some use Playwright for JavaScript-heavy sites.
They run on a schedule via GitHub Actions, and things break constantly: sites redesign,
APIs change, Cloudflare blocks our IPs, domains go offline.
The Scraper Health panel uses a raw SQL query to get per-source job counts and last-seen timestamps:
prisma.$queryRaw`
SELECT source, COUNT(*) as count, MAX(last_seen_at) as last_seen
FROM scraper.jobs_jobpost
WHERE is_active = TRUE AND source IS NOT NULL
GROUP BY source
ORDER BY count DESC
`
Each source gets a color-coded card:
- Green (OK) — last seen within 12 hours
- Amber (Stale) — last seen 12–36 hours ago
- Red (Dead) — not seen in 36+ hours
function getSourceStatus(lastSeen: string) {
const hoursAgo = (Date.now() - new Date(lastSeen).getTime()) / (1000 * 60 * 60);
if (hoursAgo < 12) return 'ok';
if (hoursAgo < 36) return 'stale';
return 'dead';
}
There is also a scraper_config table that lets me enable/disable individual
scrapers from the admin panel without redeploying:
model scraper_config {
id Int @id @default(autoincrement())
name String @unique @db.VarChar(100)
is_enabled Boolean @default(true)
disabled_reason String? @db.VarChar(500)
updated_at DateTime @default(now()) @updatedAt
}
Each scraper card has a power toggle button. Click it, and the scraper config flips via a PATCH request. The Python scraper manager checks this table before running each scraper. This means I can disable a broken scraper at 2 AM from my phone without touching a terminal.
The panel also shows disabled scrapers in a separate section with their reason for being disabled (e.g., "Cloudflare blocks requests", "API dead (404)", "Site unresponsive ~192s"). And there is a third section: "Enabled but no jobs" — scrapers that are technically turned on but produced zero results, which usually means something silently broke.
Recent errors from the last 7 days are listed at the bottom with timestamps and truncated error messages. The error count also shows up as a red badge next to each source card.
12. Contact Form Moderation with Spam Intelligence
The contact form submissions panel is surprisingly sophisticated for what seems like a simple feature. Every contact form submission gets enriched with:
- IP geolocation (country, region, city, timezone, coordinates)
- ISP and organization lookup
- Proxy/VPN detection
- Hosting/datacenter IP detection
- DNSBL (DNS blacklist) checking
- AbuseIPDB score (0–100 risk rating)
- Email MX record validation
- Disposable email detection
- Device, browser, and OS parsing from User-Agent
model contact_submission {
// ... basic fields ...
is_proxy Boolean @default(false)
is_hosting Boolean @default(false)
is_blacklisted Boolean @default(false)
blacklists String[]
email_has_mx Boolean @default(true)
email_disposable Boolean @default(false)
abuse_score Int?
abuse_reports Int?
status String @default("new") @db.VarChar(20) // new | read | replied | spam
notes String?
}
In the admin panel, each submission shows inline threat indicators: a shield icon for proxy/VPN, a server icon for hosting IPs, colored badges for blacklisted IPs, missing MX records, disposable emails, and high abuse scores. I can immediately tell whether a submission is likely spam before even reading the message.
The expanded view for each submission shows the full network details, geolocation with a Google Maps
link, security assessment, email validation results, device info, and editable notes. I can mark
submissions as read, replied, or spam, and I can reply directly via a mailto: link
that pre-fills the subject with the submission ID.
13. Blog Post Management
The blog management tab is a complete CMS built into the admin panel. Posts are stored in a
blog_post table:
model blog_post {
id Int @id @default(autoincrement())
slug String @unique @db.VarChar(500)
title String @db.VarChar(500)
excerpt String @db.VarChar(1000)
content String
cover_image String? @db.VarChar(1000)
author String @default("BirJob Redaksiyası")
tags String[]
published Boolean @default(true)
published_at DateTime @default(now())
view_count Int @default(0)
}
The editor supports HTML content with image uploads to Cloudflare R2. When you upload an image,
it inserts an <img> tag at the cursor position in the textarea. Posts can be
saved as drafts (unpublished) or published immediately. Tags are comma-separated in the input
and stored as a PostgreSQL string array.
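The comma-separated tag handling is a one-liner worth pinning down; trimming and de-duplication are my assumptions about the behavior:

```typescript
// Turn the comma-separated tag input into the string[] stored in Postgres.
function parseTags(input: string): string[] {
  return [...new Set(input.split(',').map((t) => t.trim()).filter(Boolean))];
}
```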
I considered using a rich text editor like TipTap or Slate, but raw HTML in a textarea turned out to be faster for my workflow. I write the content, preview it, and publish. No WYSIWYG quirks.
14. Email Marketing: Built-In Campaign Manager
Instead of paying for Mailchimp or ConvertKit, the admin panel has a full email marketing system built on Resend. It supports:
- Audiences — separate contact lists (e.g., job seekers, HR professionals)
- Contacts — add, remove, toggle subscription status
- Broadcasts — compose HTML emails, send test emails, send to full audience
- Delivery logs — track sent, delivered, opened, clicked, bounced, complained, unsubscribed
- Templates — reusable email templates for different audience types
The compose interface uses a pre-built set of templates (weekly job roundup for job seekers, paid listing promo for HR, general marketing). Each template is a fully styled HTML email with Resend unsubscribe URL placeholders. I can preview templates before sending with variable substitution:
function resolvePreviewHtml(html: string) {
return html
.replace(/\{\{\{RESEND_UNSUBSCRIBE_URL\}\}\}/g, '#')
.replace(/\{\{\{FIRST_NAME\|[^}]*\}\}\}/g, 'orada') // "orada" ≈ "there" (greeting fallback)
.replace(/\{\{\{[^}]+\}\}\}/g, '[variable]');
}
The delivery log shows aggregate stats (sent, delivered, delayed, opened, clicked, bounced, etc.) and individual log entries. This replaced my need for a separate email analytics dashboard.
15. Telegram Bot Subscriber Management
BirJob has a Telegram bot that sends job alerts based on keyword subscriptions. The admin panel has a dedicated subscribers table that shows:
- Subscriber name, username, and chat ID
- Phone number (if shared)
- Keyword subscriptions as colored tags
- Number of jobs sent to each subscriber
- Active/inactive status
- Aggregate stats: total, active, and top keywords across all subscribers
There is a search bar, status filters, and CSV export. I can deactivate or delete subscribers directly from the table. The top keywords section shows the most popular alert keywords as clickable tags — clicking one filters the subscriber list to those who use that keyword.
16. Real-Time vs. Batch: What I Chose and Why
A question I debated early on: should the admin panel show real-time data, or is batch/on-demand enough?
I went with on-demand. Every tab fetches fresh data when you click it. The dashboard refreshes with a manual button click. There is no WebSocket connection, no polling interval, no Server-Sent Events.
The reasoning: I check the admin panel maybe 3–5 times a day. Real-time updates would mean maintaining persistent connections for a single user who is usually not looking at the screen. The complexity cost of WebSockets or SSE was not justified by the benefit of seeing a number increment without clicking "Refresh."
The one exception is scraper health. I sometimes watch that panel after deploying a scraper fix, hitting refresh every few minutes to see if the error count stops climbing. If I ever find myself doing that for more than a day, I will add auto-refresh. But so far, manual has been fine.
17. The Database Schema: Two PostgreSQL Schemas, One Prisma Client
BirJob uses PostgreSQL with two schemas: scraper (for job data, scraper errors,
scraper config) and website (for everything else: users, payments, search logs,
analytics events, blog posts, email campaigns, etc.). Prisma's multiSchema preview
feature handles the mapping:
generator client {
provider = "prisma-client-js"
previewFeatures = ["multiSchema"]
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
schemas = ["scraper", "website"]
}
model jobs_jobpost {
// ...
@@schema("scraper")
}
model web_event_log {
// ...
@@schema("website")
}
This split matters because the scraper runs as a separate Python process with its own
database access. The Next.js app reads from both schemas but only writes to website.
The admin panel queries both: scraper health data comes from the scraper schema,
while analytics, users, and payments come from website.
18. Notifications: Admin-to-User Messaging
The admin panel can send direct notifications to individual users via email, Telegram, or both. Every notification is logged:
model admin_notification_log {
id Int @id @default(autoincrement())
user_id Int
user_email String @db.VarChar(255)
channel String @db.VarChar(20) // email | telegram | both
subject String? @db.VarChar(255)
message String
results Json // [{channel, ok, error?}]
sent_at DateTime @default(now())
}
The results JSON field stores the outcome per channel, so I can see if the email
was sent successfully but the Telegram message failed (or vice versa). This has been useful
for reaching out to users about their account, confirming manual payment reconciliations,
or notifying HR users about their subscription status.
19. What I Would Add Next
The admin panel is not done. It never will be, because the product keeps evolving. Here is what I would prioritize next:
- Automated alerts. When a scraper has been "dead" for 24 hours, send me a Telegram notification automatically instead of waiting for me to check the panel.
- Revenue charts. Right now I see total revenue and monthly revenue as numbers. A simple month-over-month line chart would tell a better story.
- User cohort analysis. How many users who registered 30 days ago are still active? What percentage of candidates who uploaded a CV eventually applied to a job? The data is all there in web_event_log; I just need to write the queries.
- A/B test tracking. As BirJob grows, I want to test different landing page variants and track conversion rates per variant directly in the admin panel.
- Scraper run history. Right now I see errors and last-seen timestamps, but not a full history of each scraper run (duration, job count delta, success/failure). Adding a scraper_run_log table would make debugging much easier.
20. Lessons Learned
Building your own admin panel is not always the right call. If you are on a team of five engineers with a shared Retool instance, use Retool. If you have a data team that lives in Metabase, use Metabase. But if you are a solo developer, building a custom admin panel is one of the highest-leverage things you can do.
Here is what I learned:
- Promise.all is your best friend. Every admin API route in BirJob fires multiple independent queries in parallel. The stats endpoint runs 10 queries concurrently. The analytics endpoint runs 13. The search analytics endpoint runs 10. If those were sequential, page loads would be 3–5x slower.
- Start with data, not UI. I designed the Prisma schema and API routes first, then built the frontend on top. Getting the data model right means the frontend is just rendering — no gymnastics.
- CSS bar charts are enough. I do not need a charting library for "show me daily event counts as bars." A flex container with percentage-height divs and a group-hover tooltip is simpler, lighter, and faster than importing 50KB of Chart.js.
- Composite IDs solve the multi-table problem. When you need to display items from three different tables in one list and handle updates/deletes through one endpoint, prefixing IDs with their type (job-123, sub-456) is a clean pattern that avoids adding a polymorphic table.
- Do not over-engineer auth for admin panels. A single shared secret in an environment variable, set as an httpOnly cookie, is fine for a solo developer. You can always layer on proper role-based auth later. Ship first.
- Index what you query. The @@index([results_count]) on search_log exists because I query for zero-result searches. The @@index([created_at]) on web_event_log exists because every analytics query filters by date range. Indices should reflect your actual access patterns, not theoretical best practices.
- Dynamic imports keep initial load fast. Three of the thirteen tabs use next/dynamic with ssr: false. The other ten are simple enough to live in the main bundle. This is a conscious tradeoff between code-splitting overhead and perceived performance.
Conclusion
The BirJob admin panel is roughly 3,000 lines of TypeScript spread across one page component, three dynamically imported panels, and 30 API route files. It replaced Google Analytics custom dashboards, a user management tool, a content moderation workflow, a payment reconciliation dashboard, and a scraper monitoring system.
Total cost: zero dollars per month, forever.
It is not the prettiest admin panel in the world. It will never be a SaaS product itself. But it gives me exactly the information I need, in exactly the shape I need it, without leaving the context of my own application. For a solo developer, that is worth more than any off-the-shelf tool.
If you are running a side project or a small product and find yourself juggling multiple dashboards just to understand what your own system is doing, consider building your own. It takes less time than you think, and it pays for itself on the first day.
BirJob.com is Azerbaijan's job aggregator, scraping 50+ job boards and serving them in one searchable interface. Built and maintained by a solo developer. If you are hiring in Azerbaijan, post a sponsored job.
