Whitepaper report, staging claims, tag communities (P3), incremental ingest (new assets only), and job history. Set DARKHERD_ADMIN_PASSWORD in container env. After upgrading, rebuild the darkherd image so this page updates.
Other operator surfaces: /dashboard, /graph, /portal; GET /api/v1/meta → endpoints.page_admin, endpoints.page_analytical_graph, endpoints.page_dashboard, endpoints.page_portal (P1bi; extends P1be).
Orchestration readiness (read-only): GET /api/v1/orchestration/status — meta key endpoints.orchestration_status on GET /api/v1/meta (P4an).
Ontology patch lifecycle (P4l / P5a–P5d): use DarkHerd POST/GET /api/v1/orchestration/… paths documented on GET /api/v1/meta (e.g. endpoints.orchestration_validate_ontology_patch, endpoints.orchestration_simulate_ontology_patch, endpoints.orchestration_ontology_patch_head, endpoints.orchestration_apply_ontology_patch, endpoints.orchestration_facets_catalog_status). These are not same-origin proxied via HyperCharts the way /api/darkherd/orchestration/status is (P4ar only). See the repo INTEGRATION.md / README.md for the P5 boundary.
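Since these routes are discovered from GET /api/v1/meta rather than hard-coded, a client can resolve them at runtime. A minimal sketch, assuming the meta JSON carries an endpoints object keyed by the names above (the concrete path in the stub below is invented for illustration):

```python
def resolve_endpoint(meta, key):
    """Look up an orchestration path from a parsed GET /api/v1/meta body.

    `key` is a name such as "orchestration_validate_ontology_patch".
    Raises KeyError if this deployment does not advertise the endpoint.
    """
    endpoints = meta.get("endpoints", {})
    if key not in endpoints:
        raise KeyError(f"endpoint not advertised by this deployment: {key}")
    return endpoints[key]

# Hand-written stub; the real path value comes from the live meta response.
meta_stub = {"endpoints": {
    "orchestration_validate_ontology_patch":
        "/api/v1/orchestration/validate-ontology-patch",
}}
path = resolve_endpoint(meta_stub, "orchestration_validate_ontology_patch")
```

Resolving through meta keeps callers working if a deployment renames or withholds an orchestration route.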
Graphiti bridge health (read-only): GET /api/v1/graphiti/status — meta endpoints.graphiti_status on GET /api/v1/meta; HyperCharts /api/darkherd/graphiti/status (P6b).
Graphiti temporal search (POST): meta endpoints.graphiti_search; POST /api/v1/graphiti/search; HyperCharts POST /api/darkherd/graphiti/search (P6d).
Ontology facets v0 (read-only): GET /api/v1/graph/ontology-facets-v0 — meta endpoints.graph_ontology_facets_v0; HyperCharts /api/darkherd/graph/ontology-facets-v0 (P6o); ontology_bundle_manifest from ontology/manifest_v0.json (P6p; P6q/P6r ontology_bundle_roles+P6s ontology_narrative_template_meta echoes on this header + lockdown 403 + meta).
Graphiti episode ingest is server-side only (upstream POST …/messages): meta endpoints.graphiti_episodes on GET /api/v1/meta; staging Emit Graphiti + incremental graphiti_episode_replay (P6e). Episode bodies append ontology/facets_v0.json fingerprint lines (P6g); this header echoes that operator cue (P6h). Bodies start with darkherd_episode_v0 JSON (structured_episode_envelope_v0 in graphiti_episodes.py; P6i) including ontology_facets_v0_fp (P6k) plus optional ontology_template_id / ontology_catalog_sizes (P6m); ontology/manifest_v0.json bundle fields in episode JSON + facets GET (P6p); this header echoes that cue too (P6j+P6q+P6r ontology_bundle_roles+P6s ontology_narrative_template_meta). Incremental replay response keys: meta endpoints.admin_incremental_ingest (P6l).
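The leading darkherd_episode_v0 JSON described above can be sketched as follows; field names (ontology_facets_v0_fp, ontology_template_id, ontology_catalog_sizes) are from this page, but the exact schema lives in graphiti_episodes.py (structured_episode_envelope_v0), so treat this as an approximation:

```python
import json

def build_episode_envelope_v0(facets_fp, template_id=None, catalog_sizes=None):
    """Approximate the darkherd_episode_v0 leading JSON line.

    The fingerprint is required; template id and catalog sizes are the
    optional fields named in the operator notes.
    """
    envelope = {
        "darkherd_episode_v0": True,
        "ontology_facets_v0_fp": facets_fp,
    }
    if template_id is not None:
        envelope["ontology_template_id"] = template_id
    if catalog_sizes is not None:
        envelope["ontology_catalog_sizes"] = catalog_sizes
    return json.dumps(envelope, sort_keys=True)

line = build_episode_envelope_v0("deadbeef", template_id="t1")
```

The real episode body then appends the ontology/facets_v0.json fingerprint lines (P6g) after this envelope.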
Read-only policy: GET /api/v1/orchestration/rollback-checkpoints-status (P4v / P4bh / P4cg — JSON includes p4cg and supervisor_enqueue_append_rollback_checkpoint for supervisor P4bb / mutating P4cf enqueue readiness).
The same P4cg fields appear on GET /api/v1/orchestration/status (P4a / P4ch parity).
List / read-one / append / delete need constitution orchestration_rollback.checkpoint_sqlite_metadata_wired and DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_IO_ENABLED=true — otherwise list / single-row GET return 403 (SQLite metadata only; not a graph restore).
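The two gates above combine as a simple conjunction; a minimal sketch, assuming the constitution JSON nests the flag under orchestration_rollback as the key name suggests:

```python
import os

def checkpoint_io_allowed(constitution):
    """Both gates from the notes above must hold, else the list and
    single-row GETs answer 403: the constitution flag
    orchestration_rollback.checkpoint_sqlite_metadata_wired and the
    env toggle DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_IO_ENABLED.
    """
    flag = (constitution.get("orchestration_rollback", {})
                        .get("checkpoint_sqlite_metadata_wired", False))
    env_on = os.environ.get(
        "DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_IO_ENABLED", ""
    ).lower() == "true"
    return bool(flag) and env_on

os.environ["DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_IO_ENABLED"] = "true"  # demo only
allowed = checkpoint_io_allowed(
    {"orchestration_rollback": {"checkpoint_sqlite_metadata_wired": True}})
```

Remember this gates SQLite metadata rows only; passing both gates never implies a graph restore.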
Meta keys on GET /api/v1/meta:
endpoints.orchestration_rollback_checkpoints_status,
endpoints.orchestration_rollback_checkpoints,
endpoints.orchestration_rollback_checkpoint_get (P4bl),
endpoints.orchestration_rollback_checkpoint_append,
endpoints.orchestration_rollback_checkpoint_delete,
endpoints.orchestration_rollback_checkpoint_restore_facets_v0 (P4cj disk restore).
HyperCharts same-origin companions: /api/darkherd/orchestration/rollback-checkpoints-status, …/rollback-checkpoints, GET …/rollback-checkpoints/<id> (P4bl), POST …/rollback-checkpoint-append (P4bi), POST …/rollback-checkpoint-restore-facets-v0 (P4cj), DELETE …/rollback-checkpoints/<id> (P4bm).
POST …/rollback-checkpoint-append — bounded metadata row; optional P4by ontology/facets_v0.json fingerprint hint; optional P4cj bounded JSON snapshot when constitution checkpoint_facets_v0_snapshot_wired is true (see status p4cj_snapshot_wired). P4cd — enqueue append_rollback_checkpoint via POST …/queue-worker-job (P7k) for drain-host execution with the same gates.
POST …/rollback-checkpoint-restore-facets-v0 (P4cj / P4cl) — writes repo ontology/facets_v0.json from the checkpoint row snapshot when not dry-run; requires P4bh I/O gates, constitution snapshot wiring, env DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_FACETS_RESTORE_ENABLED=true, and a row with ontology_facets_v0_snapshot_present (check GET-by-id). Read p4cj_restore_http_wired on rollback-checkpoints-status. No Neo4j on this route.
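A pre-flight checklist for the restore route can be sketched from the status and row fields named above. This is a client-side convenience only (the server enforces the same gates); the two status keys stand in for the broader P4bh I/O gates:

```python
import os

def facets_restore_ready(status, row):
    """Return True when the prerequisites listed in the notes look
    satisfied: restore route wired, snapshot wiring on, env toggle
    set, and the checkpoint row carries a facets snapshot."""
    env_on = os.environ.get(
        "DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_FACETS_RESTORE_ENABLED", ""
    ).lower() == "true"
    return (bool(status.get("p4cj_restore_http_wired"))
            and bool(status.get("p4cj_snapshot_wired"))
            and env_on
            and bool(row.get("ontology_facets_v0_snapshot_present")))

os.environ["DARKHERD_ORCHESTRATION_ROLLBACK_CHECKPOINT_FACETS_RESTORE_ENABLED"] = "true"  # demo only
ready = facets_restore_ready(
    {"p4cj_restore_http_wired": True, "p4cj_snapshot_wired": True},
    {"ontology_facets_v0_snapshot_present": True})
```

Fetch the row via GET-by-id first; without ontology_facets_v0_snapshot_present there is nothing to restore.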
Each ingest can create multiple SQLite rows per coin (one per URL tried). Use One row per asset to show a single “current best” row per coin (prefers ok over failed). Sorting by fetch time matches ingest order. Files live under the directory shown; errors come from HTTP or text extraction.
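The “current best” reduction can be sketched as below, assuming hypothetical row fields asset_id / status / fetched_at (the real column names may differ): prefer an ok row over a failed one, and within the same status keep the most recent fetch.

```python
def current_best_rows(rows):
    """One row per asset, as the One row per asset toggle does:
    ok beats failed; ties break on the latest fetch time.
    Output is sorted by fetch time, matching ingest order."""
    def rank(r):
        return (r.get("status") == "ok", r.get("fetched_at", ""))
    best = {}
    for row in rows:
        key = row["asset_id"]
        if key not in best or rank(row) > rank(best[key]):
            best[key] = row
    return sorted(best.values(), key=lambda r: r.get("fetched_at", ""))

rows = [
    {"asset_id": 1, "status": "failed", "fetched_at": "2024-01-02"},
    {"asset_id": 1, "status": "ok", "fetched_at": "2024-01-01"},
    {"asset_id": 2, "status": "failed", "fetched_at": "2024-01-03"},
]
best = current_best_rows(rows)
```

Note an older ok row still wins over a newer failed one, which matches the “prefers ok over failed” behaviour described above.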
| ID | Asset | Status | Disk | Chars | URL | Error / note | Local path |
|---|---|---|---|---|---|---|---|
Bounded public GET /coins/{id} probes for active source=coingecko rows; HTTP 404 sets corpus_tier=archived with archive_reason=coingecko_coin_not_found (no API key; CoinGecko 429 may stop the batch — see result.stopped_reason).
Wrong ids can also 404 — use POST …/set-corpus-tier when restoring. Full POST contract: GET /api/v1/meta → endpoints.admin_reconcile_coingecko_stale_archive.
Optional durable run: POST …/admin/maintenance/queue-worker-job with {"kind":"reconcile_coingecko_stale_archive","payload":{…}} (P7k).
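The queue-worker-job envelope above has only the two top-level keys shown; the payload contents are job-specific and documented under endpoints.admin_reconcile_coingecko_stale_archive, so the sketch below deliberately leaves it empty:

```python
import json

def queue_job_body(kind, payload):
    """Build the POST body for …/admin/maintenance/queue-worker-job (P7k).
    Only the kind/payload envelope comes from the notes; payload keys
    follow the meta-documented contract for each job kind."""
    return json.dumps({"kind": kind, "payload": payload})

body = queue_job_body("reconcile_coingecko_stale_archive", {})
```

The same envelope works for the other P7k jobs on this page (e.g. neo4j_sync_archived_assets, rebuild_tag_overlap_full) with their respective payloads.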
Bounded SQLite corpus_tier=archived rows (ascending id) are pushed to Bolt via sync_assets_by_ids — use when archive happened outside normal graph sync paths. Does not delete stale nodes. Contract: GET /api/v1/meta → endpoints.admin_sync_archived_assets_neo4j.
Optional queue: POST …/admin/maintenance/queue-worker-job with {"kind":"neo4j_sync_archived_assets","payload":{…}} (P7k).
Chunk-grounded rows in SQLite staging_claims. Each row can show a short evidence excerpt from document_chunks and a capped LLM raw reply excerpt (stored at extract time) next to the parsed claim — compare extraction vs final text (toggles below). After Load / refresh, the Symbol column includes graph and dash links that open /graph?center_asset_id=<SQLite id> and /dashboard?center_asset_id=<SQLite id> in new tabs (P2ar / P2at; same P1ad centre query on both explorer surfaces). Promote needs NEO4J_URI (analytical graph). Rejecting already-promoted rows optionally runs Neo4j DETACH DELETE when you check detach.
Writes new pending rows from document_chunks. Requires MISTRAL_API_KEY; can take minutes (sleep caps Mistral rate). P2q: hard caps and scan window follow constitution/invariants.json → claim_extraction; use Asset offset to page across ranked assets without re-processing the head of the list. Chunk rank offset skips the first N ranked chunks per asset before taking Chunks / asset — multi-pass windows (P2vcj UI for P2vci). Chunk sweep passes runs several consecutive non-overlapping windows (offset steps by Chunks / asset each pass) until empty, cursor cap, or pass limit — P2vck. Corpus sweep chains ranked SQL pages automatically (P2vcz / P2vda) until the stream ends or governance caps. Check Queue to worker to enqueue extract_claims_staging on worker_jobs (P2vcf / P7k) instead of blocking this browser request on Mistral.
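The sweep windowing above (offset stepping by Chunks / asset each pass, windows never overlapping) can be enumerated client-side; stopping on an empty page, the cursor cap, or governance limits stays server-side:

```python
def sweep_windows(start_offset, chunks_per_asset, passes):
    """Candidate (chunk_rank_offset, chunks_per_asset) windows for a
    chunk sweep: each pass advances the rank offset by exactly
    chunks_per_asset, so consecutive windows are non-overlapping."""
    return [(start_offset + i * chunks_per_asset, chunks_per_asset)
            for i in range(passes)]

windows = sweep_windows(start_offset=10, chunks_per_asset=5, passes=3)
```

Three passes starting at offset 10 with 5 chunks per asset cover ranked chunks 10–24 with no chunk processed twice.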
Extraction fields (including Queue to worker, Include archived, Corpus sweep) persist in sessionStorage for this tab (debounced; also after each extract run) — P2al v6 / P2vcg / P2vda.
This tab remembers staging list filters, include toggles, the Graphiti / detach checkboxes, and P2ap Filter loaded rows (same sessionStorage key — P2aq) (debounced on edit; also after each list load) — P2ak. On reload, #sc-msg may combine P2ak + P2al + revision panel (P2am) restores. Clear saved staging UI drops those keys and clears the row-filter input (P2an).
Bulk (page scope): actions use only rows currently loaded in the table (status / Kind / limit). They do not scan the full database.
Server bulk by filter — same status / claim_kind / limit / offset as the list query (newest first; offset skips newest N before the cap). JSON includes total_filtered (P2u), has_more (P2w), and optional prev_offset / next_offset (P2y, same step as list P2x). After a Preview dry-run, P2aa buttons apply those offsets and re-run the same preview. Previews default to dry_run; execute affects up to limit matching rows after offset (not only the visible table).
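The prev_offset / next_offset arithmetic implied above (step equals the list limit, prev omitted on the first page, next omitted when nothing remains past this page) can be sketched as follows; the exact return shape is an assumption:

```python
def page_offsets(offset, limit, total_filtered):
    """Compute has_more plus optional prev/next offsets in the spirit
    of P2w/P2y: newest-first list, offset skips the newest N rows,
    and both step values equal the list limit."""
    prev_offset = max(offset - limit, 0) if offset > 0 else None
    has_more = offset + limit < total_filtered
    next_offset = offset + limit if has_more else None
    return {"has_more": has_more,
            "prev_offset": prev_offset,
            "next_offset": next_offset}

page = page_offsets(offset=20, limit=20, total_filtered=45)
```

On the middle page of 45 rows at limit 20 this yields prev_offset 0 and next_offset 40, matching what the P2aa buttons would re-run the preview with.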
Claim revisions (P2m / P2n / P2o / P2p / P2ab / P2ac / P2ad / P2ae / P2af / P2ag / P2ah): append-only text snapshots; optional unified diff vs prior (GET). After Load revisions, a timeline lists each revision with expandable diff blocks; raw JSON stays under Raw JSON. P2ab: Live diff vs canonical compares your New claim text to canonical_claim_text from the same GET (line-unified, capped). P2ac: after a successful Append operator revision, the page re-runs the same GET (timeline + JSON + live-diff baseline) and clears New claim text. P2af / P2ag: Use claim from selection copies the first checked row’s claim text + Claim id; P2ag also fills Optional kind / Optional conf. when those boxes are still empty. P2ah: ID cell click checks that row (supports P2af without an extra checkbox click). Operator edit: pending / failed — SQLite + revision only (P2n). promoted — check If row is promoted, update Neo4j so the matching :Claim text/kind updates with SQLite (P2o; needs Bolt).
P2am: Claim id, Diff vs prior, Live diff vs canonical, Optional kind/conf, and Neo4j sync persist in sessionStorage for this tab (debounced on edit; also after Load revisions / Append / ID-cell or selection fills). New claim text is intentionally not persisted.
Load revisions to set the canonical baseline.
P2ad: click an ID cell to fill Claim id and check that row (P2ah). P2ae: double-click the same cell to run Load revisions immediately (checkbox-only clicks still skip ID handlers).
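A line-unified, capped diff like the Live diff vs canonical box can be sketched with the standard library; difflib here is an illustrative stand-in for whatever the page actually uses, and the cap value is arbitrary:

```python
import difflib

def live_diff(new_text, canonical, max_lines=40):
    """Unified diff of new claim text against canonical_claim_text,
    truncated to max_lines with a trailing cap marker."""
    lines = list(difflib.unified_diff(
        canonical.splitlines(), new_text.splitlines(),
        fromfile="canonical_claim_text", tofile="new_claim_text",
        lineterm=""))
    if len(lines) > max_lines:
        lines = lines[:max_lines] + [f"… diff capped at {max_lines} lines"]
    return "\n".join(lines)

out = live_diff("token burns quarterly", "token burns monthly")
```

The canonical side should come from the same Load revisions GET that sets the timeline baseline.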
| ID | Asset | Status | Kind | Conf. | Hints | Model | Chunk | Source | Evidence (chunk) | LLM excerpt | Created | Promoted | Claim | Note |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Runs Louvain on the SQLite tag-overlap graph for ranked active assets, then writes tag_overlap_community_id / tag_overlap_community_at / tag_overlap_community_run_id (UUID per batch) on those rows. For Neo4j dashboards, run sync_graph afterward (or targeted sync) so :Asset picks up the same properties. P3ag adds read-only Leiden on the same adjacency (GET /api/v1/analytics/communities/tag-overlap-leiden and Bolt …/neo4j/communities/tag-overlap-leiden); persist + CLI rebuild stay Louvain. P3ah: use the preview buttons below to compare labels without writing SQLite.
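The tag-overlap adjacency those algorithms run over can be sketched as shared-tag counts between asset pairs; the real server-side weighting, ranking, and the Louvain/Leiden passes themselves are not shown here:

```python
from itertools import combinations

def tag_overlap_edges(asset_tags):
    """Minimal sketch of a tag-overlap adjacency: for every pair of
    assets, the edge weight is the number of shared tags; pairs with
    no overlap get no edge. Input maps asset id -> set of tags."""
    edges = {}
    for a, b in combinations(sorted(asset_tags), 2):
        weight = len(asset_tags[a] & asset_tags[b])
        if weight:
            edges[(a, b)] = weight
    return edges

edges = tag_overlap_edges({1: {"defi", "l2"}, 2: {"defi"}, 3: {"meme"}})
```

Community detection (Louvain for persist, Leiden for the read-only preview) then labels the connected structure of this weighted graph.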
Summaries (Mistral): one short blurb per community for a persist batch; stored in tag_overlap_community_summaries. Requires MISTRAL_API_KEY. Leave run id empty to use the latest batch on assets.
P6l — With Post-ingest checked, graphiti_episode_replay controls whether DarkHerd calls
replay_ingest_episodes_for_assets and returns post_ingest.graphiti.episodes_replayed (per-asset profile, snapshot, whitepapers with ok/detail),
or post_ingest.graphiti.episode_replay when skipped. Full POST body and error shape: GET /api/v1/meta → endpoints.admin_incremental_ingest.
P7zae — When new rows are ingested, the Done line also summarizes tag_overlap_followup (P7zac; constitution incremental_ingest_tag_overlap_followup). Staging promote / bulk promote append the same shape for P7zad (staging_promote_tag_overlap_followup).
Queue operations from one panel: read-only checks for snapshot pressure (…/worker-jobs/health), short-term trend (…/worker-jobs/health-trend), and windowed SLO rates (…/worker-jobs/slo), plus remediation controls for retry / cancel / prune. P7zaa: optional exact kind / status filters on the worker-jobs list GET (same WORKER_JOB_KINDS registry as enqueue).
Read-only GET /api/v1/meta returns top-level worker_job_kinds + worker_job_kinds_count and endpoints.worker_job_kinds_registry (P7jh) — same canonical allowlist as POST …/admin/maintenance/queue-worker-job. This button loads that JSON via sameOriginApiPath so HyperCharts P7xq companions hit /api/darkherd/meta.
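Because meta and the enqueue route share one canonical allowlist, a client can validate a kind before POSTing; a minimal sketch (the example kinds are taken from this page):

```python
def validate_job_kind(kind, worker_job_kinds):
    """Reject an enqueue kind that is not in the worker_job_kinds
    list returned by GET /api/v1/meta, mirroring the server's
    canonical WORKER_JOB_KINDS registry (P7jh)."""
    if kind not in worker_job_kinds:
        raise ValueError(f"unknown worker job kind: {kind}")
    return kind

ok = validate_job_kind(
    "rebuild_tag_overlap_full",
    ["rebuild_tag_overlap_full", "extract_claims_staging"])
```

Validating client-side gives a clearer error than the server's rejection, but the server remains the source of truth.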
Constitution-backed operator hints for scheduled full-rebuild cadence and active analytical corpus ceiling — same JSON as
GET /api/v1/orchestration/status (P4cq), top-level GET /api/v1/meta (P4cr), and
GET /api/v1/orchestration/tool-adapters-status (P4cs).
The same GET /api/v1/insights/summary response also includes supervisor invoke prerequisites (P4db — supervisor_wired, supervisor_graph_invoke_enabled, p4db; parity P4cz / meta P4da; P4df aligns with P4dc /portal, P4de /graph + /dashboard, P4dd lockdown 403) plus bound-tool governance (P4dh — supervisor_bound_tools, supervisor_mutating_bound_tool_job_classes, p4bc, p4cb; /portal Summary P4dj strip; meta P4di on GET /api/v1/meta).
Cron / P7k jobs remain external; this panel is read-only discovery + parity check.
Queues the same multi-step worker job used for scheduled-style passes: SQLite Louvain persist → optional Bolt sync for the new run → optional Mistral summaries.
A queue_worker host must drain POST …/admin/maintenance/queue-worker-job. Payload uses schema defaults for refresh caps unless you edit the JSON elsewhere.
Queues rebuild_tag_overlap_full — same rebuild_tag_overlap_job path as python -m app.jobs.rebuild_tag_overlap (runbook 9f; meta endpoints.cli_rebuild_tag_overlap).
Default payload uses schema Louvain caps; optional Neo4j sync / Mistral summaries match CLI flags when checked. A queue_worker drainer must run.
Read-only slice of ontology_patch_neo4j_followup_auto_enqueue from GET /api/v1/insights/summary (P5cm) vs
GET /api/v1/orchestration/status (P5cn) — same JSON object when both routes use the same audit snapshot helper; includes
P5cq enqueue_neo4j_sync_graph_constraints / enqueue_neo4j_sync_graph_assets / enqueue_repair_claim_ontology_edges
booleans plus P5cl jobs[]. Uses sameOriginApiPath so HyperCharts P7xq hits /api/darkherd/insights/summary and
/api/darkherd/orchestration/status.
admin_job_runs trace (P7kg)
Read-only slice of analytical_global_refresh_admin_job_runs from
GET /api/v1/insights/summary — newest bounded rows for Vision §5.2 global refresh kinds (correlate external cron + P7k drains with persisted logs).
Uses sameOriginApiPath so HyperCharts P7xq hits /api/darkherd/insights/summary.
Read-only p3ds_global_refresh_autoschedule_metrics from
GET /api/v1/insights/summary — process-local outcome counters + tick exceptions (same state as GET /metrics P3dr families).
/portal Summary surfaces the same object in HTML; this panel shows raw JSON. Uses sameOriginApiPath so HyperCharts P7xq hits /api/darkherd/insights/summary.
Read-only database_dialect and marker p7zaoi on
GET /api/v1/insights/summary
— parity GET /api/v1/ready (P7zaoe)
and GET /metrics darkherd_database_up (P7zaoh).
Matches /portal Summary strip, /graph + /dashboard footers, and lockdown 403 HTML (P7zaok).
Insight fetch panels on this page use sameOriginApiPath so HyperCharts (P7xq) hits /api/darkherd/insights/summary.
Read-only GET /api/v1/costs/summary → provider_export_hints (P7jb) shows which GET /api/v1/costs/provider-export?provider= pulls have API keys configured. Uses sameOriginApiPath so HyperCharts P7xq companions hit /api/darkherd/costs/summary. No invoice HTTP from this panel.
Provider-export previews (P7jj; extends P7jg; P7jl DeepSeek balance snapshot; P7zal OpenRouter credits snapshot; P7zas Stability AI balance snapshot; P7zat Replicate account snapshot; P7zau Hugging Face Hub whoami snapshot): capped JSON from GET /api/v1/costs/provider-export?window_hours=24&provider=mistral|openai|anthropic|darkherd|deepseek|openrouter|stability|replicate|huggingface (e.g. provider=darkherd for P7zc; provider=deepseek for point-in-time P7jl GET /user/balance; provider=openrouter for P7zal GET https://openrouter.ai/api/v1/credits; provider=stability for P7zas GET https://api.stability.ai/v1/user/balance; provider=replicate for P7zat GET https://api.replicate.com/v1/account; provider=huggingface for P7zau GET {hub}/api/whoami; same-origin /api/darkherd/costs/provider-export when companion mode). Shared status + JSON below — wireAdminCostsProviderExport wires each button.
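Assembling the provider-export GET path (including the same-origin /api/darkherd companion prefix) can be sketched as below; the provider list comes from this page, while the function and its return shape are illustrative:

```python
from urllib.parse import urlencode

def provider_export_path(provider, window_hours=24, same_origin=False):
    """Build the costs provider-export query path; same_origin swaps
    in the HyperCharts /api/darkherd companion prefix."""
    allowed = {"mistral", "openai", "anthropic", "darkherd", "deepseek",
               "openrouter", "stability", "replicate", "huggingface"}
    if provider not in allowed:
        raise ValueError(f"unknown provider: {provider}")
    base = ("/api/darkherd/costs/provider-export" if same_origin
            else "/api/v1/costs/provider-export")
    query = urlencode({"window_hours": window_hours, "provider": provider})
    return f"{base}?{query}"

path = provider_export_path("deepseek")
```

Check provider_export_hints on GET /api/v1/costs/summary first, since providers without configured API keys will not return useful exports.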
P7zah — After Refresh, the text below the JSON lists detail.tag_overlap_followup for each row with kind: incremental_ingest (same formatter as ingest Done line; P7zac / P7zag parity with HTTP and queue worker AdminJobRun logs).