Global Topics Management
The Global Topics page is the central command center for managing the master taxonomy catalog. Every topic classified in AudienceGPT exists first as a global topic in the topics table, and organizations adopt these into their own libraries as org topics (stored in org_topics with a foreign key back to the global record). This two-tier architecture means that improvements to global topics -- reclassification, name cleanup, taxonomy reassignment -- automatically benefit every organization that has adopted them.
This guide covers browsing, searching, editing, reclassifying, archiving, and auditing global topics from the super admin panel.
Global vs Org Topics
Understanding the relationship between global and org topics is fundamental to admin operations.
| Aspect | Global Topics (topics) | Org Topics (org_topics) |
|---|---|---|
| Scope | System-wide, shared across all orgs | Scoped to a single organization |
| Ownership | Managed by super admins | Adopted by orgs from the global catalog |
| Classification data | Full 7-layer classification, embeddings, platform outputs | References global_topic_id for classification |
| Mutations | Reclassify, merge, archive, tree mapping | Link/unlink, activation, performance tracking |
| Deduplication | pgvector embedding similarity (95% block) | Checked at adoption time via global_topic_id |
| Engine version | Stamped with ENGINE_VERSION at classification | Inherits from global topic |
When reading code or API responses, remember the field name swap at the storage boundary:
- DB column taxonomy_type (41 specific types like "Auto", "Business Technology") maps to TypeScript field parentCategory
- DB column parent_category (13 broad groups like "Technology & Telecom") maps to TypeScript field taxonomyType
This swap is handled automatically by rowToTopicRecord() in the storage layer, but raw SQL routes (including admin routes) perform the swap manually in their response mappers.
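A minimal sketch of the swap, assuming a simplified row shape (the real rowToTopicRecord() maps many more fields; the interfaces here are illustrative):

```typescript
// Simplified DB row -- only the fields involved in the swap.
interface TopicRow {
  topic_name: string;
  taxonomy_type: string;   // 41 specific types, e.g. "Auto"
  parent_category: string; // 13 broad groups, e.g. "Technology & Telecom"
}

// Simplified API-side record with the swapped TypeScript field names.
interface TopicRecord {
  topicName: string;
  parentCategory: string; // <- populated from DB taxonomy_type
  taxonomyType: string;   // <- populated from DB parent_category
}

function rowToTopicRecord(row: TopicRow): TopicRecord {
  return {
    topicName: row.topic_name,
    parentCategory: row.taxonomy_type, // the swap happens here
    taxonomyType: row.parent_category,
  };
}
```

Raw SQL admin routes must reproduce this same mapping by hand in their response mappers, which is why the swap is worth internalizing.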
Topic Browser
Navigate to Admin > Topics to access the global topics browser.

Searching and Filtering
The browser supports several filter dimensions:
| Filter | Query Parameter | Description |
|---|---|---|
| Search | search | Case-insensitive substring match on topic_name |
| Parent Category | parentCategory | Filter by one of the 41 parent categories (DB: taxonomy_type) |
| Segment Type | segmentType | Filter by B2B, B2C, B2B2C, B2E, or B2G |
| Archived | showArchived | Toggle to view archived topics instead of active ones |
Sorting
Click any column header to sort. Supported sort columns:
- topic_name -- Alphabetical by topic name
- parent_category -- By the 41-type parent category
- taxonomy_type -- By the 13-group taxonomy type
- segment_type -- By segment classification
- created_at -- By creation date (default, descending)
- performance_score -- By performance metric
- audience_type -- By audience type classification
Pagination
Results are paginated with a configurable page size (1--100, default 25). The API returns total, page, and pageSize in every response for client-side pagination controls.
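The filter, sort, and pagination parameters above can be assembled into a request URL. This hypothetical helper clamps page size to the documented 1--100 range; the base path and the sort parameter name are assumptions, not the actual contract:

```typescript
interface TopicBrowserQuery {
  search?: string;
  parentCategory?: string;
  segmentType?: "B2B" | "B2C" | "B2B2C" | "B2E" | "B2G";
  showArchived?: boolean;
  sort?: string;     // e.g. "created_at" (parameter name assumed)
  page?: number;     // 1-based
  pageSize?: number; // clamped to 1..100, default 25
}

function buildTopicsUrl(q: TopicBrowserQuery): string {
  const params = new URLSearchParams();
  if (q.search) params.set("search", q.search);
  if (q.parentCategory) params.set("parentCategory", q.parentCategory);
  if (q.segmentType) params.set("segmentType", q.segmentType);
  if (q.showArchived) params.set("showArchived", "true");
  if (q.sort) params.set("sort", q.sort);
  params.set("page", String(q.page ?? 1));
  // Enforce the documented 1..100 page-size bounds client-side.
  params.set("pageSize", String(Math.min(100, Math.max(1, q.pageSize ?? 25))));
  return `/api/admin/topics?${params.toString()}`;
}
```

The total, page, and pageSize fields returned by the API let the client render pagination controls against whatever URL this produces.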
Viewing Topic Details
Click any topic row to open the detail panel. The detail view displays:
- Classification layers: All 7 classification layers (intent, intensity, awareness, segment, sensitivity, buyer journey, composite)
- Platform outputs: Generated names and descriptions for Trade Desk, LiveRamp, Internal, and any custom templates
- Taxonomy path: Full tree path if the topic is mapped to a taxonomy node
- Engine version: Current version stamp and whether it is outdated
- Keywords: Auto-generated classification keywords
- IAB code: Associated IAB content taxonomy code
- Audience type: Derived audience type (e.g., "In-Market Buyers", "Active Enthusiasts")
Editing a Single Topic
Super admins can edit individual topic fields directly from the detail panel.
Inline Field Editing
- Open a topic's detail panel
- Click the edit icon next to the field you want to change
- Modify the value and confirm
- The system regenerates dependent fields (platform names, descriptions, taxonomy hash)
The PATCH /api/admin/topics/[topicId] endpoint handles single-topic updates. It:
- Validates the incoming field values
- Regenerates platform outputs using the template engine when classification-related fields change
- Updates the updated_at timestamp
- Records the change in the global topic history audit trail
Editing a global topic's parent_category or segment_type triggers a cascade regeneration of all platform output names and descriptions. This is by design -- platform names are derived from classification data via the template engine.
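A hedged sketch of what a single-topic update request might look like. The endpoint path comes from the docs above; the payload field names are assumptions:

```typescript
// Builds a PATCH request descriptor for one topic. The editable field
// names (e.g. segmentType) are illustrative, not the actual contract.
function buildTopicPatch(topicId: string, fields: Record<string, unknown>) {
  return {
    url: `/api/admin/topics/${topicId}`,
    method: "PATCH" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(fields),
  };
}

// Changing a classification-related field like segment_type triggers the
// server-side cascade regeneration of platform names and descriptions.
const req = buildTopicPatch("topic_123", { segmentType: "B2B" });
```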
Bulk Add to Organizations
Super admins can seed topics into the global catalog in bulk, or push global topics to specific organizations.
Admin Bulk Seeding
The bulk seeding flow creates new global topics from a list of topic names:
- Navigate to Admin > Topics > Bulk Add
- Provide a list of topic names (paste or CSV upload)
- The system creates an import batch with org_id = 'system'
- Topics are classified in 500-row chunks via POST /api/admin/topics/bulk/[batchId]/chunk
- Monitor progress through the batch status endpoint
API Flow:
POST /api/admin/topics/bulk -- Create batch, get batchId
POST /api/admin/topics/bulk/:batchId/chunk -- Process each chunk
GET /api/admin/topics/bulk/:batchId/status -- Poll progress
POST /api/admin/topics/bulk/:batchId/review-topics -- Post-import review
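The four-endpoint flow above can be driven by a small loop. This sketch injects an HTTP client so the control flow is clear in isolation; the status payload's state field and its values are assumptions, not the actual contract:

```typescript
type Http = {
  post(url: string, body?: unknown): Promise<any>;
  get(url: string): Promise<any>;
};

async function runBulkSeed(http: Http, topicNames: string[]): Promise<string> {
  // 1. Create the batch and capture its id.
  const { batchId } = await http.post("/api/admin/topics/bulk", { topicNames });

  // 2. Process chunks until the batch reports completion
  //    (topics are classified in 500-row chunks server-side).
  let done = false;
  while (!done) {
    await http.post(`/api/admin/topics/bulk/${batchId}/chunk`);
    const status = await http.get(`/api/admin/topics/bulk/${batchId}/status`);
    done = status.state === "completed"; // "state" field is assumed
  }

  // 3. Kick off the post-import review pass.
  await http.post(`/api/admin/topics/bulk/${batchId}/review-topics`);
  return batchId;
}
```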
Pushing to Org Libraries
To make global topics available in an organization's library, users can:
- Use the chatbot catalog browse feature (natural language: "add all automotive topics")
- Use the catalog API directly: POST /api/topics/catalog with selected topic IDs
Both methods call neonLinkTopicsBatch() which creates org_topics records referencing the global topics.id.
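Conceptually, adoption creates lightweight link records rather than copies of the classification. This illustrative sketch shows the shape of such a link; the field names are assumptions, and the real neonLinkTopicsBatch() persists rows to org_topics rather than returning plain objects:

```typescript
// An org_topics row references the global topic; classification data
// stays on the global record and is never duplicated per org.
interface OrgTopicLink {
  orgId: string;
  globalTopicId: string; // FK back to topics.id
  isActive: boolean;
}

function linkTopicsBatch(orgId: string, globalTopicIds: string[]): OrgTopicLink[] {
  return globalTopicIds.map((globalTopicId) => ({
    orgId,
    globalTopicId,
    isActive: true,
  }));
}
```

Because org topics are references, any later reclassification of the global topic is immediately visible to every org that adopted it.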
Bulk Reclassify
Reclassification re-runs the classification pipeline on existing topics, updating their taxonomy, keywords, platform names, and engine version stamp.

Reclassify Modes
| Mode | Engine | Speed | Cost | Web Search |
|---|---|---|---|---|
| Rule-based (Local) | Deterministic regex engine | Fast (~50/sec) | Free | No |
| AI-powered (LLM) | Claude Sonnet 4.6 via Batch API | Slower (batch queued) | $1.50--$7.50/M tokens | Configurable |
Starting a Full Reclassify Job
- Navigate to Admin > Topics > Reclassify tab
- Select the mode (Rule-based or AI-powered)
- Configure options:
- Force: Reclassify all topics, not just outdated ones
- Limit: Cap the number of topics to process
- Parent Category Filter: Only reclassify topics in a specific parent category
- Web Search: Enable/disable web search for LLM mode (disabling saves ~50% cost)
- Review the cost estimate (shown for LLM mode)
- Click Start Reclassify
The system creates a background job of type reclassify and processes it in chunks. See Background Jobs for details on monitoring and management.
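The options above might translate into a request body like this sketch. The endpoint path and option names are assumptions inferred from the UI controls; the guard encodes the documented rule that web search only applies to LLM mode:

```typescript
interface ReclassifyOptions {
  mode: "rule_based" | "llm";
  force?: boolean;         // reclassify all topics, not just outdated ones
  limit?: number;          // cap the number of topics processed
  parentCategory?: string; // restrict to one parent category
  webSearch?: boolean;     // LLM mode only; disabling saves ~50% cost
}

function buildReclassifyRequest(opts: ReclassifyOptions) {
  if (opts.mode === "rule_based" && opts.webSearch) {
    throw new Error("webSearch applies only to LLM mode");
  }
  return {
    url: "/api/admin/topics/reclassify", // assumed route
    method: "POST" as const,
    body: JSON.stringify(opts),
  };
}
```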
Single-Column Reclassify
For targeted corrections, you can reclassify just one field across all topics:
| Target Column | Description |
|---|---|
| segment_type | B2B/B2C/B2B2C/B2E/B2G classification |
| taxonomy_type | 13-group taxonomy assignment |
| parent_category | 41-type parent category assignment |
| subcategory | Subcategory label |
| audience_type | Audience type classification |
Single-column reclassification uses the same Anthropic Batch API as full reclassify but with a focused prompt that asks only for the targeted field. This is significantly cheaper than a full reclassify.
To start a single-column reclassify:
- Select "Column Mode" in the reclassify panel
- Choose the target column from the dropdown
- Configure force/limit/filter options
- Start the job
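A hypothetical column-mode payload builder, constraining the target to the five documented columns (every field name here is an assumption about the request shape):

```typescript
const COLUMN_TARGETS = [
  "segment_type",
  "taxonomy_type",
  "parent_category",
  "subcategory",
  "audience_type",
] as const;
type ColumnTarget = (typeof COLUMN_TARGETS)[number];

function buildColumnReclassify(
  target: ColumnTarget,
  opts: { force?: boolean; limit?: number } = {},
) {
  // Runtime guard for callers outside the type system.
  if (!COLUMN_TARGETS.includes(target)) {
    throw new Error(`unsupported column: ${target}`);
  }
  // Same job machinery as a full reclassify, but the prompt asks
  // only for the one targeted field, which is why it costs less.
  return { mode: "llm", columnMode: true, targetColumn: target, ...opts };
}
```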
LLM Reclassify Processing Phases
AI-powered reclassification uses a three-phase approach via the Anthropic Message Batches API:
- Phase 1 -- Submit: Collects all eligible topics, splits into sub-batches of 100, and submits each to the Anthropic Batch API (50% cost discount over real-time)
- Phase 2a -- Poll & Stage: Each CRON tick polls batch status. Completed batches are streamed into a batch_results_staging table. Time-bounded to 45 seconds per tick.
- Phase 2b -- Post-process: Staged results are processed in chunks of 500. Each result is validated, post-processed, and written to the topics table with history recording.
For large reclassify jobs (10,000+ topics), the LLM batch approach is both cheaper and more reliable than real-time API calls. The sub-batch architecture means partial results are preserved even if a CRON tick times out.
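The time-bounded Phase 2a pattern can be sketched as a budgeted loop: process sub-batches until the tick budget is spent, and return whatever is still pending for the next CRON tick. Only the 45-second budget comes from the docs above; the function shape and parameter names are illustrative:

```typescript
async function pollTick(
  pendingBatchIds: string[],
  pollOne: (id: string) => Promise<"completed" | "in_progress">,
  stageResults: (id: string) => Promise<void>,
  budgetMs = 45_000,
  now: () => number = Date.now,
): Promise<string[]> {
  const deadline = now() + budgetMs;
  const remaining: string[] = [];
  for (const id of pendingBatchIds) {
    // Out of budget: defer everything left to the next tick.
    if (now() >= deadline) { remaining.push(id); continue; }
    if ((await pollOne(id)) === "completed") {
      await stageResults(id); // stream into batch_results_staging
    } else {
      remaining.push(id);     // still running at Anthropic; retry next tick
    }
  }
  return remaining;
}
```

Because each completed sub-batch is staged independently, partial results survive even if a tick exhausts its budget mid-list.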
Archiving Topics
Archiving soft-removes topics from the active catalog without deleting them.
Archive
- Select topics in the browser using checkboxes
- Click Archive Selected
- Confirm the action
API: POST /api/admin/topics/archive with { topicIds: string[] } (max 500 per request)
Archived topics:
- Have archived_at set to the current timestamp
- Are excluded from all standard queries (WHERE archived_at IS NULL)
- Can still be viewed by toggling "Show Archived" in the browser
- Retain their classification data and history
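Because the archive endpoint caps topicIds at 500 per request, archiving a larger selection requires client-side chunking. A minimal sketch with an injected POST function (the payload shape matches the docs; everything else is illustrative):

```typescript
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function archiveTopics(
  topicIds: string[],
  post: (url: string, body: unknown) => Promise<unknown>,
): Promise<number> {
  let requests = 0;
  // Respect the documented 500-id-per-request cap.
  for (const batch of chunk(topicIds, 500)) {
    await post("/api/admin/topics/archive", { topicIds: batch });
    requests++;
  }
  return requests;
}
```

The same chunking applies to restore, which takes the identical body via DELETE.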
Restore
- Toggle "Show Archived" in the browser
- Select archived topics
- Click Restore Selected
API: DELETE /api/admin/topics/archive with { topicIds: string[] }
Both archive and restore operations are recorded in the global topic history audit trail.
Topic Statistics
The stats endpoint provides aggregate metrics for the global catalog:
API: GET /api/admin/topics/stats
Returns:
- total -- Total active (non-merged, non-archived) topics
- byParentCategory -- Count per parent category (41 types)
- bySegmentType -- Count per segment type (B2B, B2C, etc.)
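An illustrative client-side equivalent of the aggregation this endpoint performs. The response field names come from the list above; the input row shape is an assumption:

```typescript
interface TopicStats {
  total: number;
  byParentCategory: Record<string, number>; // up to 41 keys
  bySegmentType: Record<string, number>;    // B2B, B2C, B2B2C, B2E, B2G
}

function computeStats(
  topics: { parentCategory: string; segmentType: string }[],
): TopicStats {
  const byParentCategory: Record<string, number> = {};
  const bySegmentType: Record<string, number> = {};
  for (const t of topics) {
    byParentCategory[t.parentCategory] = (byParentCategory[t.parentCategory] ?? 0) + 1;
    bySegmentType[t.segmentType] = (bySegmentType[t.segmentType] ?? 0) + 1;
  }
  return { total: topics.length, byParentCategory, bySegmentType };
}
```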
Topic Metrics
The metrics endpoint provides deeper analysis:
API: GET /api/admin/topics/metrics
Returns:
- Current snapshot: Topic count, parent category coverage, segment distribution, audience types, taxonomy hash coverage, platform output coverage, keyword stats, description coverage
- Per-version revisions: The same metrics broken down by engine version, showing how the catalog has evolved across classification engine updates
History and Audit Trail
Every change to a global topic is recorded in the topic_history table via the fire-and-forget recordGlobalTopicChange() / recordGlobalTopicChangesBatch() functions.
Viewing History
API: GET /api/admin/topics/[topicId]/history
Returns up to 200 entries ordered by changed_at DESC. Each entry contains:
| Field | Description |
|---|---|
| changeType | reclassified, merged, rolled_back, archived, restored |
| engineVersionBefore | Engine version before the change |
| engineVersionAfter | Engine version after the change |
| previousValues | Snapshot of field values before the change |
| newValues | Snapshot of field values after the change |
| changedBy | User ID who made the change |
| source | Origin: admin_reclassify_local, admin_reclassify_llm, admin_merge, admin_rollback, admin_ui |
| batchId | Job ID for batch operations (enables rollback) |
The batchId field links all history entries from a single job together. This is what makes job-level rollback possible -- the rollback handler queries all topic_history entries with the matching batchId and restores their previousValues.
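The rollback mechanics can be sketched as a pure planning step: filter history entries by batchId, then pair each topic with its previousValues snapshot. The actual handler also writes these restores back to the topics table; the entry shape below is a simplification of the history fields documented above:

```typescript
interface HistoryEntry {
  topicId: string;
  batchId: string;
  previousValues: Record<string, unknown>;
}

// Returns one restore instruction per topic touched by the job.
function planRollback(history: HistoryEntry[], batchId: string) {
  return history
    .filter((h) => h.batchId === batchId)
    .map((h) => ({ topicId: h.topicId, restore: h.previousValues }));
}
```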
Outstanding Issues Dashboard
The outstanding issues endpoint surfaces problems requiring admin attention:
API: GET /api/admin/topics/outstanding
Returns:
- Review Queue: Topics with review_status = 'needs_review' (up to 20 with full details, plus total count)
- Classification Errors: Recent reclassify jobs with errors, including topic-level error details
This endpoint powers the proactive issue surfacing in the classify chatbot when a super admin opens the page.
Next Steps
- Dedup & Merge -- Find and merge duplicate topics
- Review Queue -- Handle topics flagged for human review
- Taxonomy Tree -- Map topics to the hierarchical taxonomy tree
- Background Jobs -- Monitor reclassify and other admin jobs