Read AGENT_RULES.md first, and always follow the auto-log rule.

⚠️ **AI ENTRYPOINT: READ FIRST**
> This is a **Sovereign Agent Hub (Engram)**. Before making any changes, review `docs/PROJECT_INTELLIGENCE.md`.
>
> **Core Mandates (Non-Negotiable):**
> 1. **Stealth-by-Default** — No auto-discovery or auto-expansion without explicit user toggles
> 2. **Self-Healing Entrypoints** — All services must auto-regenerate missing configs and fix permissions
> 3. **Hardening Wizard** — Model transitions require explicit user consent and trade-off documentation
> 4. **Hardware Handshake** — Initial provisioning via out-of-band (USB/cable) before network mesh

---

# Engram: Military-Grade Intelligence Orchestration

[![CI](https://github.com/GotThinkSolutions/engram/actions/workflows/ci.yml/badge.svg)](https://github.com/GotThinkSolutions/engram/actions/workflows/ci.yml)
[![Python 3.10+](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

**Project Engram is a Military-Grade Orchestration layer for Private Intelligence Clouds.** Anchored by a 67 TOPS NVIDIA edge device, it acts as a Secure Command Node: users can securely broker Frontier models, or toggle a 'Zero-Leak' lockdown to run Nemotron-class models on their own private server racks (Ryzen/H100) over an E2EE mesh.

**Local-first AI memory that survives crashes, rejects hallucinated tags, and searches 50k sessions in 45ms.**

Engram gives autonomous AI agents persistent, searchable memory via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/). It was built to recover 49,000 corrupted session files from a real infrastructure failure — and to make sure the recovered data is actually trustworthy through a ground-truth tag verification layer.

```mermaid
graph TD
    subgraph Agent_Layer [Agent Interaction Layer]
        A[fa:fa-robot LLM Client] <-->|MCP stdio| B{{fa:fa-network-wired engram.server}}
    end

    subgraph Core_Engine [Core Processing]
        B -->|query| C{fa:fa-shield Pydantic}
        C -->|valid| D[fa:fa-magnifying-glass ripgrep]
        C -.->|invalid| E[fa:fa-triangle-exclamation Error]
        D -->|search| F[(fa:fa-vault Sovereign Vault)]
    end

    subgraph Pipelines [Data & Security]
        G[fa:fa-book-open Librarian] -->|os.scandir| F
        J[fa:fa-vial-check Guardrail] -->|evidence| F
        J -->|pruned| K[fa:fa-scissors Dictionary]
    end

    %% VIBRANT STYLING
    classDef blue fill:#000,stroke:#00d2ff,stroke-width:4px,color:#00d2ff,font-weight:bold
    classDef purple fill:#000,stroke:#6c5ce7,stroke-width:4px,color:#a29bfe,font-weight:bold
    classDef green fill:#000,stroke:#00b894,stroke-width:4px,color:#55efc4,font-weight:bold
    classDef error fill:#000,stroke:#ff7675,stroke-width:2px,color:#ff7675

    class A,B blue
    class F,G purple
    class J,K green
    class E error

    style Agent_Layer fill:none,stroke:#00d2ff,stroke-dasharray: 5 5
    style Core_Engine fill:none,stroke:#6c5ce7,stroke-dasharray: 5 5
    style Pipelines fill:none,stroke:#00b894,stroke-dasharray: 5 5
```


---

## Sovereign Design Philosophy

Engram adheres to **four immutable design mandates**:

1. **Stealth-by-Default** — No auto-discovery, auto-expansion, or unexpected network activity. All features are manual, user-initiated toggles.
2. **Hardening Wizard** — Transitions from Frontier Models to Local/Hardened models require explicit user consent and clear trade-off documentation.
3. **Hardware Handshake** — Initial provisioning uses out-of-band methods (USB, direct cable) before network mesh. Trust is established locally, not remotely.
4. **Self-Healing Autonomy** — Every service includes autonomous entrypoint scripts that fix permissions, regenerate missing configs, and handle initialization without manual intervention.

These mandates ensure Engram operates as a **true Digital SCIF appliance**: sovereign-controlled, secure-by-default, and self-recovering.

**Read [docs/PROJECT_INTELLIGENCE.md](docs/PROJECT_INTELLIGENCE.md)** for the full governance model (required reading for contributors and AI agents).

---

## Installation

### Quick start — just get it running

Read [INSTALL.md](INSTALL.md) for a 3-line guide, or [GETTING_STARTED.md](GETTING_STARTED.md) for step-by-step setup.

### Developer install

```bash
git clone https://github.com/GotThinkSolutions/engram.git
cd engram
pip install -e ".[dev]"

# Start the MCP server (stdio transport)
engram-serve

# Index and tag all entities
engram-index

# Run the test suite
pytest tests/ -v
```

**System requirement:** [ripgrep](https://github.com/BurntSushi/ripgrep) (`rg`) must be on your `PATH`.

### Optional capabilities

Install additional memory backends as needed:

```bash
# Vector/semantic search (sqlite-vec + FastEmbed)
pip install -e ".[vector]"

# Knowledge graph (Graphiti + Kuzu embedded)
pip install -e ".[graph]"

# Everything
pip install -e ".[all]"
```

### Connect to Your Agent

**Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram-serve"
    }
  }
}
```

**Claude Code** — add to `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram-serve",
      "type": "stdio"
    }
  }
}
```

**AI agent** — add to `~/.config/agent/config.yaml`:

```yaml
extensions:
  engram:
    command: engram-serve
    type: stdio
```

After configuration, your agent can call tools like `search_memory(query="retry logic")`, `save_memory(title="...", content="...")`, or `graph_query(query="how does X relate to Y")` depending on which tiers are installed.

---

## Memory Capabilities

Engram provides three tiers of memory, each building on the last. The core install gives you fast keyword search. Optional packages add semantic understanding and relationship tracking.

### Tier 1: Self-Editing Memory Tools (Phase 1)

Six MCP tools that let agents explicitly manage their own persistent memories. No new dependencies beyond the core install.

| Tool | Purpose |
|------|---------|
| `save_memory` | Create a new memory entity with content, topics, and YAML frontmatter |
| `update_memory` | Modify an existing memory (content, topics, or metadata) |
| `delete_memory` | Remove a memory entity from the vault |
| `search_memory` | Full-text search via ripgrep + FTS5 across all entities |
| `consolidate_memory` | Merge related memories, deduplicate, and cross-link |
| `get_memory_stats` | Return vault statistics (file counts, topic distribution, storage) |

These tools use the same Verified-Retrieval Layer as the core engine: topics are guardrail-checked against source content before being written to frontmatter.
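On disk, a saved memory is a Markdown file with YAML frontmatter. A minimal sketch of what `save_memory` produces (the function name, slug scheme, and exact frontmatter fields here are illustrative; see `engram.mcp_tools` for the real layout):

```python
from datetime import datetime, timezone
from pathlib import Path

def save_memory_sketch(vault: Path, title: str, content: str, topics: list[str]) -> Path:
    """Write a memory entity as Markdown with YAML frontmatter (layout illustrative)."""
    # Derive a filesystem-safe slug from the title.
    slug = "-".join("".join(c if c.isalnum() else " " for c in title.lower()).split())
    frontmatter = "\n".join([
        "---",
        f"title: {title}",
        f"topics: [{', '.join(topics)}]",
        f"created: {datetime.now(timezone.utc).isoformat()}",
        "---",
        "",
    ])
    path = vault / f"{slug}.md"
    path.write_text(frontmatter + content, encoding="utf-8")
    return path
```

Because entities are plain files, they stay greppable by the ripgrep backend and human-readable without any tooling.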

### Tier 2: Vector/Semantic Search (Phase 2)

Adds meaning-based search using embedding vectors. Queries like "that discussion about retry logic" find results even when the exact words differ.

**Install:** `pip install engram[vector]`

**Dependencies:** sqlite-vec, FastEmbed

**CLI:** `engram-vectorize` bulk-indexes existing entities into the vector store.

Semantic search is available alongside keyword search -- agents can use whichever is more appropriate for the query.
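Conceptually, the vector tier ranks entities by embedding similarity rather than keyword overlap. The real backend stores FastEmbed vectors in sqlite-vec; the cosine ranking below is just the idea in miniature:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank(query_vec: list[float], docs: dict[str, list[float]]) -> list[str]:
    """Return document IDs ordered by similarity to the query vector."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
```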

### Tier 3: Knowledge Graph (Phase 3)

Adds entity relationships and temporal facts. Useful for tracking how concepts, decisions, and components relate to each other over time.

**Install:** `pip install engram[graph]`

**Dependencies:** Graphiti (graph reasoning), Kuzu (embedded graph database)

| Tool | Purpose |
|------|---------|
| `graph_query` | Query the knowledge graph with natural language or structured patterns |
| `graph_ingest` | Extract entities and relationships from text and add them to the graph |
| `graph_relationships` | Retrieve direct relationships for a given entity |

**Requires:** `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` environment variable (used by Graphiti for entity extraction).

### All 9 MCP Tools at a Glance

| # | Tool | Tier | Install |
|---|------|------|---------|
| 1 | `save_memory` | Core | `pip install engram` |
| 2 | `update_memory` | Core | `pip install engram` |
| 3 | `delete_memory` | Core | `pip install engram` |
| 4 | `search_memory` | Core | `pip install engram` |
| 5 | `consolidate_memory` | Core | `pip install engram` |
| 6 | `get_memory_stats` | Core | `pip install engram` |
| 7 | `graph_query` | Graph | `pip install engram[graph]` |
| 8 | `graph_ingest` | Graph | `pip install engram[graph]` |
| 9 | `graph_relationships` | Graph | `pip install engram[graph]` |

Vector search is integrated into `search_memory` when the vector extra is installed -- no separate tool needed.

---

## The Problem: Why Engram Exists

Engram was born from a real infrastructure failure.

A previous agentic system ([block/agent](https://github.com/block/agent)) accumulated **49,000+ session files** containing months of AI-assisted development work — architecture decisions, debugging sessions, code reviews, research notes. When the host system ran out of memory during a batch export, it triggered catastrophic swap-hammering: the OOM killer terminated processes mid-write, leaving thousands of files in partially-written or corrupted states across multiple directories.

The existing tooling couldn't recover. Session databases were locked. Export scripts loaded all 49k file paths into memory at once, re-triggering the same swap pressure that caused the failure. There was no checkpointing — a crash at file 25,000 meant starting from zero.

**Engram was built to solve this:** a recovery-first memory system that treats every file as potentially corrupted, processes them one at a time through a generator pipeline, and never loses progress.

---

## System Architecture

```text
┌──────────────────────────────────────────────────────────────┐
│                 AI Agent (AI agent / Claude)                  │
│  ┌──────────────────────────────────────┐                    │
│  │  MCP Tool: get_session_context       │  ← stdio transport │
│  └──────────────┬───────────────────────┘                    │
└─────────────────│────────────────────────────────────────────┘
                  ▼
       ┌────────────────────┐        ┌─────────────────────────┐
       │  engram.server     │        │  engram.models          │
       │  Async MCP server  │◄───────│  Pydantic validation    │
       │  (ripgrep backend) │        │  (input sanitisation)   │
       └────────┬───────────┘        └─────────────────────────┘
                ▼
       ┌───────────────────────────────────────────┐
       │              entities/                    │
       │  Markdown vault with YAML frontmatter     │
       │  (49k+ sessions, keyword-tagged)          │
       └────┬──────────────┬───────────────┬───────┘
            │              │               │
   ┌────────▼───┐   ┌──────▼───────┐  ┌────▼───────────┐
   │ corrupted/ │   │ agent_memory │  │   telemetry/   │
   │ (DLQ)      │   │ (personas)   │  │ (CLI traces)   │
   └────────────┘   └──────────────┘  └────────────────┘
```

### Core Components

| Module | Purpose |
|--------|---------|
| `engram.server` | Async MCP server — non-blocking `asyncio.create_subprocess_exec` with Pydantic-validated inputs, path-traversal rejection, and per-call latency logging |
| `engram.mcp_tools` | Self-editing memory tools — 6 MCP tools for save/update/delete/search/consolidate/stats with path-traversal protection and guardrail-verified topics |
| `engram.vector` | Vector/semantic search — sqlite-vec embeddings with FastEmbed, `engram-vectorize` CLI for bulk indexing (optional: `pip install engram[vector]`) |
| `engram.graph` | Knowledge graph — Graphiti entity extraction with Kuzu embedded graph DB, 3 MCP tools for query/ingest/relationships (optional: `pip install engram[graph]`) |
| `engram.models` | Pydantic v2 schemas for MCP tool inputs — `SessionSearchInput` with field-level constraints and regex-whitelist session ID validation |
| `engram.librarian` | Recovery-hardened ETL pipeline — generator-based `os.scandir`, idempotent JSON checkpointing, Dead Letter Queue for corrupted files |
| `engram.segregation` | Splits agent reasoning blocks and CLI traces into typed subdirectories |
| `engram.guardrail` | **Verified-Retrieval Layer** — anti-hallucination tag verification (see below) |
| `engram.config` | Centralized paths, keyword maps, and validation patterns |
| `engram.utils` | Shared utilities: atomic file writes, structured logging |

---

## Engram Verified-Retrieval Layer

**The core technical differentiator.**

Standard RAG pipelines trust the LLM to generate accurate metadata tags for retrieved documents. This is a known hallucination vector: the model invents plausible-sounding tags that have no basis in the source text, silently poisoning the retrieval index.

Engram eliminates this with a **ground-truth verification step**:

1. **Alias Expansion:** Each canonical tag (e.g., `"edge"`) maps to a curated set of evidence aliases (`["edge", "nano", "nvidia", "orin", "board", "jetpack"]`) defined in a local `ALIAS_MAP`.

2. **Evidence Check:** For every proposed tag, `ripgrep` scans the source file for at least one matching alias. This is a binary, deterministic check — not a probabilistic embedding similarity.

3. **Prune or Promote:** Tags with zero evidence hits are pruned before they enter the index. Only verified tags survive into the YAML frontmatter.

```python
from pathlib import Path

from engram.guardrail import verify_tags

verified, pruned = verify_tags(Path("session.md"), ["edge", "Kubernetes"])
# verified = ["edge"]  (found "nvidia" in text)
# pruned   = ["Kubernetes"]  (no alias match — hallucination caught)
```

**Result:** The retrieval layer serves only tags that are attested in the source document. LLM hallucinations are caught at ingest time, not query time.
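The evidence check is simple enough to sketch in pure Python. The real guardrail shells out to ripgrep and the curated `ALIAS_MAP` ships in `engram.config`; the entries and function name below are illustrative:

```python
from pathlib import Path

# Illustrative aliases -- the curated map lives in engram.config
ALIAS_MAP = {
    "edge": ["edge", "nano", "nvidia", "orin", "board", "jetpack"],
    "kubernetes": ["kubernetes", "k8s", "kubectl", "helm"],
}

def verify_tags_sketch(path: Path, proposed: list[str]) -> tuple[list[str], list[str]]:
    """Split proposed tags into (verified, pruned) by literal evidence in the source."""
    text = path.read_text(encoding="utf-8", errors="replace").lower()
    verified, pruned = [], []
    for tag in proposed:
        aliases = ALIAS_MAP.get(tag.lower(), [tag.lower()])
        # Binary, deterministic check: at least one alias must appear verbatim.
        (verified if any(a in text for a in aliases) else pruned).append(tag)
    return verified, pruned
```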

---

## Recovery Engineering: The 49k-File Ingestion

The Librarian was designed to recover from the exact failure mode that destroyed the previous system. Every design decision addresses a specific failure:

### Generator-Based Scan (No Bulk Materialisation)

The previous system called `sorted(glob("*.md"))` on 49k files, materialising the entire path list in memory. Under memory pressure, this alone could trigger swap.

Engram uses `os.scandir()` wrapped in a recursive generator. File paths are yielded one at a time — the full list never exists in memory. Processing order is irrelevant because every file is tracked individually in the checkpoint.
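A minimal version of that generator, recursing with `os.scandir` and yielding entries one at a time (the function name is illustrative; see `engram.librarian` for the real pipeline):

```python
import os
from collections.abc import Iterator

def iter_markdown(root: str) -> Iterator[os.DirEntry]:
    """Yield .md entries lazily; the full 49k-path list never exists in memory."""
    with os.scandir(root) as entries:
        for entry in entries:
            if entry.is_dir(follow_symlinks=False):
                yield from iter_markdown(entry.path)
            elif entry.name.endswith(".md"):
                yield entry
```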

### Idempotent Checkpointing

The Librarian writes a JSON checkpoint every N files (configurable via `--batch-size`, default 100). If the process is killed at file 25,000, it resumes from 25,001 on the next run. The checkpoint uses atomic writes (`os.replace`) to prevent corruption from partial flushes.
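A checkpoint written this way survives a kill at any instant — readers see either the old file or the new one, never a torn write. A sketch under illustrative names:

```python
import json
import os
import tempfile

def write_checkpoint(path: str, processed: set[str]) -> None:
    """Persist progress atomically: write a temp file, fsync, then os.replace."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "w") as f:
        json.dump(sorted(processed), f)
        f.flush()
        os.fsync(f.fileno())  # durable before the rename makes it visible
    os.replace(tmp, path)     # atomic on POSIX: no partial-flush corruption

def load_checkpoint(path: str) -> set[str]:
    """Resume from the last durable checkpoint; absent or torn file means start fresh."""
    try:
        with open(path) as f:
            return set(json.load(f))
    except (FileNotFoundError, json.JSONDecodeError):
        return set()
```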

### Dead Letter Queue (DLQ) Pattern

Files that fail to parse (corrupted YAML, encoding errors, permission issues) are moved to `entities/corrupted/` — a standard Dead Letter Queue. This prevents known-bad files from being re-attempted on every run and provides an audit trail for manual inspection.
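The quarantine step, sketched (the `parse` callable stands in for the real YAML/frontmatter parsing; names are illustrative):

```python
import shutil
from pathlib import Path
from typing import Callable

def process_or_quarantine(path: Path, dlq: Path, parse: Callable[[str], object]) -> bool:
    """Parse a file if possible; otherwise move it to the Dead Letter Queue."""
    try:
        parse(path.read_text(encoding="utf-8", errors="replace"))
        return True
    except Exception:
        # Known-bad files are moved aside once, never re-attempted, kept for audit.
        dlq.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(dlq / path.name))
        return False
```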

### Why Single-Threaded? (The 90-Second Design Note)

We evaluated `ProcessPoolExecutor` for parallel ingestion and rejected it. The back-of-the-envelope calculation:

- 49,000 files at ~1-2ms per file (YAML parse + keyword scan + write-back) = **~90 seconds total**
- `ProcessPoolExecutor` adds: IPC serialisation overhead per file, checkpoint race conditions requiring locking, and increased memory pressure from worker processes — the exact failure mode we're recovering from

A 90-second single-threaded runtime does not justify the complexity, the race conditions, or the risk of re-introducing memory pressure. The bottleneck is disk I/O, not CPU. We chose simplicity and correctness over theoretical throughput.

---

## Performance Characteristics

Measured on a local workstation (AMD 5950X, NVMe storage, ripgrep 13.0.0). The benchmark script (`scripts/benchmark.py`) reports p50 and p99 latencies across 10 rounds per query.

### Search Latency (50,000 files)

| Metric | Value | LLM Tool Budget |
|--------|-------|-----------------|
| **p50 (median)** | **45 ms** | Well within 200ms threshold |
| **p99 (tail)** | **50 ms** | No tail-latency spikes |

### Ingestion Throughput

| Metric | Value |
|--------|-------|
| **Throughput** | ~550 files/sec (single-threaded) |
| **Full 49k recovery** | ~90 seconds |
| **Checkpoint interval** | Every 100 files (configurable) |

### Fault Tolerance

| Metric | Value |
|--------|-------|
| **Pipeline crashes on corrupted input** | 0 (tested with malformed YAML, broken encoding, unreadable files) |
| **Recovery mechanism** | Dead Letter Queue (`entities/corrupted/`) |
| **Checkpoint durability** | Atomic writes via `os.replace` — no partial flush corruption |

### Architectural Decision: Filesystem as Index

We deliberately chose `ripgrep` over SQLite FTS5. At 200,000 files, ripgrep queries complete in ~165ms — still under the 200ms LLM tool-call budget. The filesystem-as-index approach eliminates the stale-index consistency problem: there is no index to diverge from the source data. If a file is modified outside the pipeline, the next search reflects the change immediately. This trade-off favours correctness and operational simplicity over theoretical query speed.

Run the benchmark yourself:
```bash
python scripts/benchmark.py --synthetic 50000 --target /tmp/bench --rounds 10
```

---

## Security Hardening

| Vector | Mitigation |
|--------|-----------|
| **Shell injection** | All subprocess calls use argument lists, never `shell=True`. `dispatcher/bridge.py` uses `shlex.split()`. |
| **Path traversal** | `session_id` validated against `^[A-Za-z0-9_.-]+$` regex whitelist (Pydantic `field_validator`). Inputs like `../../etc/passwd` are rejected before reaching ripgrep. |
| **Input overflow** | Query length capped at 500 characters. Response output capped at 4,000 characters. Both enforced by Pydantic field constraints. |
| **Malformed data** | YAML parse errors caught with `try/except yaml.YAMLError` — corrupted frontmatter is replaced, not propagated. UTF-8 decode uses `errors="replace"`. |
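The path-traversal check distilled to plain Python (the real version is a Pydantic `field_validator` on `SessionSearchInput` in `engram.models`; the function name here is illustrative):

```python
import re

SESSION_ID_PATTERN = re.compile(r"^[A-Za-z0-9_.-]+$")

def validate_session_id(session_id: str) -> str:
    """Whitelist-validate before the value can reach ripgrep or the filesystem."""
    if not SESSION_ID_PATTERN.fullmatch(session_id):
        raise ValueError(f"invalid session_id: {session_id!r}")
    return session_id
```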

---

## Configuration

Engram is configured via environment variables. Core functionality works with defaults; optional backends require additional configuration.

| Variable | Required | Default | Purpose |
|----------|----------|---------|---------|
| `ENGRAM_ENTITIES_DIR` | No | `./entities` | Path to the Markdown vault |
| `ANTHROPIC_API_KEY` | Graph only | -- | Used by Graphiti for entity extraction (knowledge graph tier) |
| `OPENAI_API_KEY` | Graph only | -- | Alternative to Anthropic key for Graphiti |

Only one of `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` is needed for the graph tier. The core and vector tiers run entirely locally with no API keys.
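Resolving the vault path at startup amounts to (the helper name is illustrative; the fallback mirrors the documented default):

```python
import os
from pathlib import Path

def entities_dir() -> Path:
    """Return ENGRAM_ENTITIES_DIR if set, else the default ./entities vault."""
    return Path(os.environ.get("ENGRAM_ENTITIES_DIR", "./entities"))
```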

---

## Project Structure

```
engram/
├── src/engram/          # Core library (installable package)
│   ├── server.py        # Async MCP server
│   ├── mcp_tools.py     # Self-editing memory tools (Phase 1)
│   ├── vector.py        # Vector/semantic search (Phase 2, optional)
│   ├── graph.py         # Knowledge graph (Phase 3, optional)
│   ├── models.py        # Pydantic input validation
│   ├── librarian.py     # Recovery-hardened ETL indexer
│   ├── segregation.py   # Agent/CLI block splitter
│   ├── guardrail.py     # Verified-Retrieval Layer
│   ├── config.py        # Centralized configuration
│   └── utils.py         # Shared utilities
├── tests/               # pytest tests (unit + integration)
│   ├── test_server.py   # MCP server + Pydantic validation
│   ├── test_mcp_tools.py # Self-editing memory tools
│   ├── test_graph.py    # Knowledge graph tools
│   ├── test_guardrail.py # Anti-hallucination guardrail
│   └── test_librarian.py # Recovery pipeline + checkpointing
├── scripts/             # Operational utilities
│   ├── benchmark.py     # Percentile latency benchmark
│   ├── find_dupes.py    # Content-hash deduplication
│   └── ...
├── dispatcher/          # Manifest-driven task execution
├── pyproject.toml       # Packaging, deps, CLI entry points
└── README.md
```

---

## One-Handed Deployment

### Hub Services (Local)

```bash
# Start the FastAPI Hub + entity search engine
docker compose up -d

# Verify health
curl http://localhost:8000/health

# Interactive API docs
open http://localhost:8000/docs
```

**Requirements:**
- Docker (v20.10+) with `docker compose` command (NOT `docker-compose`)
- `.env` file with `MATRIX_REGISTRATION_SECRET` (see `.env.example`)
- 8GB RAM, 2+ vCPU recommended

**Troubleshooting:** See [TROUBLESHOOTING.md](TROUBLESHOOTING.md) for common issues (port conflicts, orphan containers, etc.)

### Edge Deployment

Deploy Engram to edge nodes (edge device, firmware):

```bash
# 1. Generate mesh auth manifest and Tailscale key
python3 scripts/get_mesh_auth.py

# 2. Deploy source code, manifest, and run headless provisioning
bash scripts/deploy_to_node.sh

# 3. Verify node joined mesh
tailscale status | grep edge
```

**Prerequisites (one-time):**
```bash
# SSH key trust
ssh-copy-id -i ~/.ssh/id_rsa.pub user@<host-ip>

# Passwordless sudo on edge
ssh -t user@<host-ip> \
  "echo 'user ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/engram-prep"
```

**What happens:**
1. Manifest script generates unique worker ID and Tailscale auth token
2. Deploy script rsyncs source code to edge
3. Headless `prep_node.sh` installs dependencies via `apt-get`, configures Tailscale with auth key
4. Polling loop waits for node to appear in `tailscale status`
5. Hub can then assign work to the node via Tailscale mesh

**Node Integration:**
Once the edge appears in `tailscale status`:
- Access via Tailscale IP (e.g., `ssh user@<tailscale-ip>`)
- Heartbeat script reports status to Hub
- Hub queues work for the node via REST API or MCP

---

## Development

```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Run test suite (52 tests)
pytest tests/ -v

# Run retrieval benchmark
python scripts/benchmark.py --synthetic 10000 --target /tmp/bench --rounds 10
```

---

## Strategic Roadmap

- **Phase 1: Self-Editing Memory (Complete)** — 6 MCP tools for explicit memory management. Agents save, update, delete, search, consolidate, and inspect their own memories. Zero new dependencies.
- **Phase 2: Vector/Semantic Search (Complete)** — sqlite-vec + FastEmbed. Meaning-based search alongside keyword search. `engram-vectorize` CLI for bulk indexing.
- **Phase 3: Knowledge Graph (Complete)** — Graphiti + Kuzu embedded graph. Entity relationships, temporal facts, natural-language graph queries.
- **Phase 4: Dream Cycle** — Automated cron-driven deduplication, summarisation, and cross-linking during idle time.
- **Phase 5: Edge Deployment** — Optimised profiles for NVIDIA edge and other resource-constrained edge devices.

---

## Technical Notes: 2026-03-29 Deployment Update

### ASGI App Instance (server.py)

The original `server.py` mixed MCP server and FastAPI code without instantiating a FastAPI app object. Docker startup failed with:
```
ERROR: Error loading ASGI app. Attribute "app" not found in module "engram.server"
```

**Fix:** Refactored to create a clean FastAPI instance at module level:
```python
app = FastAPI(
    title="Engram Hub",
    description="Sovereign knowledge vault and fleet coordinator",
    version="1.0.0"
)

@app.get("/health")
async def health():
    ...

@app.post("/search")
async def search(request: SessionSearchInput):
    ...
```

Docker now runs: `uvicorn engram.server:app --host 0.0.0.0 --port 8000` ✓

### Docker Compose Standardization

**Removed obsolete `version: '3.8'` tag.** Modern Docker (v20.10+) ignores version directives and warns about them. The file now starts directly with `services:` — more compatible with current tooling.

**Changed all documentation to use `docker compose` (space), not `docker-compose` (hyphen).** The standalone `docker-compose` binary is deprecated; `docker compose` is the standard going forward.

### Deployment Scripts (deploy_to_node.sh)

**Fixed rsync exclusions:** Added `synapse_data` and `matrix` to exclude list to prevent permission errors when syncing to edge.

**Corrected username and hostname:** Updated target from `user@edge.local` to `user@<host-ip>` (actual credentials on test edge).

**Improved polling robustness:** Detects edge in `tailscale status` by checking for predicted worker ID in authenticated mesh. (Note: manifest ID may differ from authenticated node ID — see [TROUBLESHOOTING.md](TROUBLESHOOTING.md) item 4.)

### Known Issues & Workarounds

| Issue | Workaround | Status |
|-------|-----------|--------|
| **Synapse auto-config** | Currently disabled; requires manual `homeserver.yaml` generation before re-enabling | ⚠️ Pending |
| **Matrix integration** | Full Synapse/Element setup deferred; focus on Hub + edge mesh first | ⚠️ Phase 2 |
| **Node ID prediction** | Manifest generates predicted ID; actual node ID assigned at first Tailscale auth | ✅ Documented |

---

## License

Released under the [MIT License](LICENSE).

---

> **Note:** This repository contains the engine and architecture. Personal databases, legacy telemetry, and private session entities are air-gapped and not included.
