Initial scaffold: open-memory plugin for OpenCode

- Plugin entry point with hooks: experimental.session.compacting,
  experimental.chat.system.transform, event
- Context tracker: SSE-based token tracking per session with
  green/yellow/red/critical thresholds
- Tools: memory_context, memory_compact, memory_summary,
  memory_sessions, memory_messages, memory_search, memory_plans
- History module: sqlite3 queries + markdown rendering
- Compaction: improved prompt emphasizing self-continuity
- Research docs: ARCHITECTURE.md + opencode-memory reference
2026-04-20 14:55:20 +00:00
commit 9a42dcfb94
16 changed files with 1296 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
dist/
node_modules/
*.lock
bun.lock
bunfig.toml

biome.json Normal file

@@ -0,0 +1,20 @@
{
"$schema": "https://biomejs.dev/schemas/2.3.11/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true
},
"formatter": {
"enabled": true,
"indentStyle": "space",
"indentWidth": 2,
"lineWidth": 100
},
"linter": {
"enabled": true,
"rules": {
"recommended": true
}
}
}


@@ -0,0 +1,252 @@
---
name: opencode-memory
description: >
Browse and recall OpenCode local memory stored on the user's machine: local
sessions, plans, conversations, prompt history, and project context. Use
immediately when the user asks to check history, previous sessions, past
chats, what did we do before, last time, check plans, session history,
recall, memory, remember, prior work, previous context, or have we done this
before. Auto-trigger proactively when resuming work, continuing a project,
referencing prior decisions, debugging repeated issues, revisiting earlier
plans, or any follow-up where earlier OpenCode context may help. This means
OpenCode local history/files specifically, not ChatGPT/Claude cloud history,
generic web search, or unrelated product memory systems. Do NOT use for fresh
tasks with no relevant history, or when current files/git already answer the
question.
license: Apache-2.0
compatibility: opencode
---
# OpenCode Memory Browser
Lightweight, read-only access to your local OpenCode history. No injection, no bloat — just the ability to look things up when it would help.
This skill is specifically about OpenCode data stored on the local machine. It is not for ChatGPT history, Claude cloud history, generic browser history, or external memory products.
All data lives in a local SQLite database and plain files. You query them directly using `sqlite3` via bash. No bundled scripts or external dependencies needed.
## When to Use
### Auto-trigger (agent decides)
- You are resuming work on a project and suspect prior sessions exist.
- The user references something done previously ("we did this before", "last time", "that plan we made").
- A recurring issue suggests checking if it was encountered before.
- The user asks about the state of plans, past decisions, or previous approaches.
- You need context that might exist in history but is not in the current session.
### User-triggered (explicit request)
- "Check my history"
- "What did we do in the last session?"
- "Show me my plans"
- "Search for when we discussed X"
- "What projects have I worked on?"
- "Look at previous conversations about Y"
### Do NOT use when
- The task is clearly brand new with no relevant history.
- Fresh repo context (files, git log) is sufficient.
- The user explicitly says they don't care about prior work.
## Storage Locations
```
Database: ${XDG_DATA_HOME:-$HOME/.local/share}/opencode/opencode.db
Plans: ${XDG_DATA_HOME:-$HOME/.local/share}/opencode/plans/*.md
Session diffs: ${XDG_DATA_HOME:-$HOME/.local/share}/opencode/storage/session_diff/<session-id>.json
Prompt history: ${XDG_STATE_HOME:-$HOME/.local/state}/opencode/prompt-history.jsonl
```
The database path respects `$XDG_DATA_HOME` if set (default: `~/.local/share`).
## Database Schema (what matters)
- **project** — `id` (text PK), `worktree` (path), `name` (often NULL, derive from worktree basename)
- **session** — `id` (text, e.g. `ses_xxx`), `project_id` (FK), `parent_id` (NULL = main session, set = subagent), `title`, `summary`, `time_created`, `time_updated`
- **message** — `id`, `session_id` (FK), `data` (JSON with `$.role` = `"user"` or `"assistant"`), `time_created`
- **part** — `id`, `message_id` (FK), `session_id` (FK), `data` (JSON with `$.type` = `"text"` and `$.text` = content)
Timestamps are Unix milliseconds. Use `datetime(col/1000, 'unixepoch', 'localtime')` to display them.
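For reference, the same conversion in TypeScript (the plugin's implementation language); `msToUtc` is an illustrative helper, not part of the schema:

```typescript
// Illustrative helper: OpenCode timestamps are Unix milliseconds, so
// divide by 1000 for SQLite's datetime(..., 'unixepoch'), or pass them
// directly to the JS Date constructor as shown here (UTC output).
const msToUtc = (ms: number): string =>
  new Date(ms).toISOString().replace("T", " ").slice(0, 19);
```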
## Ready-to-Use Queries
All queries use `sqlite3` in read-only mode. Always run via bash.
**Shorthand used below:**
```
DATA_ROOT="${XDG_DATA_HOME:-$HOME/.local/share}/opencode"
STATE_ROOT="${XDG_STATE_HOME:-$HOME/.local/state}/opencode"
DB="$DATA_ROOT/opencode.db"
DB_URI="file:${DB}?mode=ro"
```
### Quick summary
```bash
sqlite3 "$DB_URI" "
SELECT 'projects', COUNT(*) FROM project
UNION ALL SELECT 'sessions (main)', COUNT(*) FROM session WHERE parent_id IS NULL
UNION ALL SELECT 'sessions (total)', COUNT(*) FROM session
UNION ALL SELECT 'messages', COUNT(*) FROM message
UNION ALL SELECT 'todos', COUNT(*) FROM todo;
"
```
### List projects
```bash
sqlite3 "$DB_URI" "
SELECT
COALESCE(p.name, CASE WHEN p.worktree = '/' THEN '(global)' ELSE REPLACE(p.worktree, RTRIM(p.worktree, REPLACE(p.worktree, '/', '')), '') END) AS name,
p.worktree,
(SELECT COUNT(*) FROM session s WHERE s.project_id = p.id AND s.parent_id IS NULL) AS sessions
FROM project p
ORDER BY p.time_updated DESC
LIMIT 10;
"
```
### List recent sessions
```bash
sqlite3 "$DB_URI" "
SELECT
s.id,
COALESCE(s.title, 'untitled') AS title,
COALESCE(p.name, CASE WHEN p.worktree = '/' THEN '(global)' ELSE REPLACE(p.worktree, RTRIM(p.worktree, REPLACE(p.worktree, '/', '')), '') END) AS project,
datetime(s.time_updated/1000, 'unixepoch', 'localtime') AS updated,
(SELECT COUNT(*) FROM message m WHERE m.session_id = s.id) AS msgs
FROM session s
LEFT JOIN project p ON p.id = s.project_id
WHERE s.parent_id IS NULL
ORDER BY s.time_updated DESC
LIMIT 10;
"
```
### Sessions for a specific project
Replace the worktree path with the actual project path:
```bash
sqlite3 "$DB_URI" "
SELECT s.id, COALESCE(s.title, 'untitled'),
datetime(s.time_updated/1000, 'unixepoch', 'localtime')
FROM session s
JOIN project p ON p.id = s.project_id
WHERE p.worktree = '/path/to/project'
AND s.parent_id IS NULL
ORDER BY s.time_updated DESC
LIMIT 10;
"
```
To find the worktree for the current directory: `git rev-parse --show-toplevel`
### Read messages from a session
Replace the session ID:
```bash
sqlite3 "$DB_URI" "
SELECT
json_extract(m.data, '$.role') AS role,
datetime(m.time_created/1000, 'unixepoch', 'localtime') AS time,
GROUP_CONCAT(json_extract(p.data, '$.text'), char(10)) AS text
FROM message m
LEFT JOIN part p ON p.message_id = m.id
AND json_extract(p.data, '$.type') = 'text'
WHERE m.session_id = 'SESSION_ID_HERE'
GROUP BY m.id
ORDER BY m.time_created ASC
LIMIT 50;
"
```
### Search across all conversations
Replace the search term:
```bash
sqlite3 "$DB_URI" "
SELECT
s.id AS session_id,
COALESCE(s.title, 'untitled') AS title,
json_extract(m.data, '$.role') AS role,
datetime(m.time_created/1000, 'unixepoch', 'localtime') AS time,
substr(json_extract(p.data, '$.text'), 1, 200) AS snippet
FROM part p
JOIN message m ON m.id = p.message_id
JOIN session s ON s.id = m.session_id
WHERE s.parent_id IS NULL
AND json_extract(p.data, '$.type') = 'text'
AND json_extract(p.data, '$.text') LIKE '%SEARCH_TERM%'
ORDER BY m.time_created DESC
LIMIT 10;
"
```
### List saved plans
```bash
ls -lt "$DATA_ROOT"/plans/*.md 2>/dev/null | head -20
```
To read a specific plan:
```bash
cat "$DATA_ROOT"/plans/FILENAME.md
```
### Show recent prompt history
```bash
tail -20 "$STATE_ROOT"/prompt-history.jsonl
```
Each line is a JSON object. The user's input is typically in the `input` or `text` field.
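A minimal TypeScript sketch of extracting inputs from that file; the `input`/`text` field names follow the note above and should be treated as assumptions about the observed format:

```typescript
// Sketch: pull user inputs out of prompt-history.jsonl content.
// Field names (`input`, `text`) are assumptions, not a documented schema.
const extractPrompts = (jsonl: string): string[] =>
  jsonl
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => {
      const entry = JSON.parse(line) as Record<string, unknown>;
      return String(entry.input ?? entry.text ?? "");
    });
```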
## Workflow
### Quick recall (most common)
1. Run the **summary** query to see what's available.
2. If you need sessions for the current project, get the worktree with `git rev-parse --show-toplevel`, then run the **project sessions** query.
3. If you need a specific topic, run the **search** query.
4. If you need full conversation detail, run the **messages** query with the session ID.
### Plan review
1. List plans with `ls -lt "$DATA_ROOT"/plans/*.md`.
2. Read a plan with `cat "$DATA_ROOT"/plans/<filename>.md`.
### Deep investigation
1. Run **projects** to see all tracked repos.
2. Run **sessions** for a specific project.
3. Run **messages** for full conversation content.
4. Cross-reference with **search** across all projects.
## Critical Rules
1. **Read-only.** Never write to or modify the database or any OpenCode files.
2. **Use bash + sqlite3.** Do not try to read `opencode.db` with the Read tool — it is a binary file. Always query via `sqlite3` in bash.
3. **Don't dump everything.** Use `LIMIT` and `LIKE` to keep output focused. The database can contain tens of thousands of messages.
4. **Summarize for the user.** After retrieving data, distill the relevant parts. Don't paste raw query output.
5. **Respect privacy.** Session history may contain sensitive data. Only surface what is relevant to the current task.
6. **Set path variables first.** At the start of any memory lookup, set `DATA_ROOT`, `STATE_ROOT`, `DB`, and `DB_URI` exactly as shown above so the commands work on XDG and non-XDG setups and keep SQLite access read-only.
## Fallback: Web UI
If the user needs visual dashboards or a browsable interface:
1. Check if OpenCode web is running: `curl -s http://127.0.0.1:4096/api/health 2>/dev/null || echo "not running"`
2. If running, direct the user to `http://127.0.0.1:4096`.
3. If not running, suggest `opencode web`.
4. Note: `opencode.local` only works with mDNS enabled (`opencode web --mdns`). Don't assume it exists.
## Deep Reference
See [references/storage-format.md](references/storage-format.md) for the full storage layout, all table schemas, and additional query examples.

opencode.json Normal file

@@ -0,0 +1,4 @@
{
"plugin": ["@alkdev/open-memory"],
"$schema": "https://opencode.ai/config.json"
}

package.json Normal file

@@ -0,0 +1,43 @@
{
"name": "@alkdev/open-memory",
"version": "0.1.0",
"description": "OpenCode plugin for session memory browsing, context awareness, and compaction management.",
"type": "module",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"exports": {
".": {
"types": "./dist/index.d.ts",
"import": "./dist/index.js"
}
},
"scripts": {
"build": "bun build src/index.ts --outdir dist --target bun --format esm && tsc --emitDeclarationOnly",
"lint": "bunx @biomejs/biome check .",
"format": "bunx @biomejs/biome format --write .",
"typecheck": "tsc --noEmit",
"test": "bun test"
},
"author": "Alkimia Development",
"license": "Apache-2.0",
"repository": {
"type": "git",
"url": "git@git.alk.dev:alkdev/open-memory.git"
},
"keywords": [
"opencode",
"plugin",
"memory",
"context",
"compaction",
"session"
],
"dependencies": {
"@opencode-ai/plugin": "^1.1.3"
},
"devDependencies": {
"@types/bun": "^1.2.0",
"@types/node": "^20.14.0",
"typescript": "^5.7.3"
}
}

research/ARCHITECTURE.md Normal file

@@ -0,0 +1,205 @@
# Open Memory: Architecture & Research
## Overview
`@alkdev/open-memory` is a standalone OpenCode plugin providing three capabilities:
1. **Context Awareness** — real-time tracking of context window usage with proactive warnings
2. **Session History Browser** — structured access to past sessions, messages, plans, and search
3. **Compaction Management** — better compaction prompts and on-demand compaction triggering
The core problem: OpenCode's automatic compaction fires at ~92% context usage with no warning. The default prompt frames it as "summarize for another agent" when it's the same agent continuing. This is disorienting and derailing. Open-memory gives agents awareness, control, and better summaries.
## Problem Statement
### Automatic Compaction is Disorienting
- Fires at ~92% with no advance warning
- Default prompt says "summarize for another agent" — misleading
- Agent loses context at an unpredictable point
- No way to compact at a natural breakpoint
### No History Access Within Sessions
- Agents can't reference prior sessions, decisions, or work
- The `opencode-memory.md` skill shows queries are possible via `sqlite3` but require manual bash commands
- No structured tool interface for browsing history
### Context Window Opacity
- The agent has no idea how close it is to compaction
- No visibility into token usage trends within a session
## Architecture
### Three Pillars
#### 1. Context Awareness
**SSE-based token tracking** (same pattern as `open-coordinator`'s detection system):
- Subscribe to `ctx.client.global.event()` SSE stream
- Track `tokens.input` from `message.updated` events per session
- The `tokens.input` on the latest assistant message = current context size
- Compare against model's `limit.context` to compute percentage used
- Model limits available from `ctx.client.config.get()` or provider info
**Thresholds:**
- **Green** (<70%): Healthy, no action needed
- **Yellow** (70-85%): Consider compacting at next break point
- **Red** (85-92%): Strongly recommend compacting now
- **Critical** (>92%): Imminent automatic compaction
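The mapping above can be sketched in a few lines (a minimal illustration; the names here are not the plugin's exports):

```typescript
// Sketch of the threshold mapping described above:
// <70 green, 70-85 yellow, 85-92 red, >=92 critical.
type ContextStatus = "green" | "yellow" | "red" | "critical";
const statusFor = (percentage: number): ContextStatus =>
  percentage >= 92 ? "critical"
  : percentage >= 85 ? "red"
  : percentage >= 70 ? "yellow"
  : "green";
```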
**Proactive notification:**
- Use `experimental.chat.system.transform` hook to inject context percentage into system prompt
- Agent always knows its context status without calling a tool
- At yellow/red thresholds, inject an explicit advisory note
**Tool: `memory_context`**
- Returns current token usage, model context limit, percentage, and status
- Includes trend (growing fast vs. stable)
- Lists model info
#### 2. Compaction Management
**`memory_compact` tool:**
- Calls `ctx.client.session.summarize()` to trigger compaction on the current session
- Requires `providerID` and `modelID` — obtained from the session's last user message or config
- This gives the agent explicit control over *when* compaction happens
**`experimental.session.compacting` hook:**
- Replaces the default "summarize for another agent" prompt
- Better prompt emphasizes self-continuity, preserving task context, decisions, and next steps
**Default instructions in system prompt:**
- "When context exceeds 85%, use `memory_compact` at your next natural break point"
- "At 90%+, compact immediately if possible"
#### 3. Session History Browser
All backed by read-only `sqlite3` queries to `${XDG_DATA_HOME:-$HOME/.local/share}/opencode/opencode.db`.
**Tools:**
| Tool | Purpose |
|------|---------|
| `memory_summary` | Quick counts: projects, sessions, messages, todos |
| `memory_sessions` | List recent sessions with metadata, sorted by update time |
| `memory_messages` | Read messages from a specific session as markdown |
| `memory_search` | Full-text search across all conversations |
| `memory_plans` | List and read saved plans |
**Rendering:**
- Markdown tables for session lists
- Formatted conversation transcripts for `memory_messages`
- Snippet + session reference for search results
- All queries use `LIMIT` and `LIKE` to avoid dumping entire DB
## Component Design
```
src/
├── index.ts # Plugin entry: hooks + tool registration
├── tools.ts # Tool definitions (memory_*)
├── context/
│ ├── tracker.ts # SSE token tracking (per-session)
│ ├── thresholds.ts # Context percentage thresholds & status
│ └── notify.ts # System prompt injection for warnings
├── history/
│ ├── queries.ts # SQLite query helpers
│ ├── format.ts # Markdown rendering utilities
│ └── search.ts # Full-text search logic
└── compaction/
└── prompt.ts # Better compaction prompt template
```
## Key Technical Details
### Context Percentage Calculation
From `overflow.ts` in OpenCode source:
```typescript
// The actual check is:
// count >= usable
// where:
// count = tokens.total || (input + output + cache.read + cache.write)
// reserved = config.compaction?.reserved ?? min(20000, maxOutputTokens)
// usable = model.limit.input ? model.limit.input - reserved
// : model.limit.context - maxOutputTokens
```
The `tokens.input` field on the last assistant message represents the context size at the time that message was sent. We track this and compare it against the model's context limit (from config/providers).
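The percentage computation this implies is a one-liner (illustrative names, not the tracker's API):

```typescript
// Illustrative: percent of the model's context window consumed, based on
// the last assistant message's input-token count.
const contextPercentage = (inputTokens: number, contextLimit: number): number =>
  Math.round((inputTokens / contextLimit) * 100);
```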
### Session Summarize API
The SDK exposes `ctx.client.session.summarize()`:
```typescript
ctx.client.session.summarize({
path: { id: sessionID },
body: { providerID, modelID },
})
```
This triggers the compaction flow in OpenCode's server.
### Plugin Hook: `experimental.session.compacting`
```typescript
"experimental.session.compacting": async (input, output) => {
// output.context: string[] — appended to default prompt
// output.prompt?: string — replaces default prompt entirely
output.prompt = `You are compacting your own session...`;
}
```
### Plugin Hook: `experimental.chat.system.transform`
```typescript
"experimental.chat.system.transform": async (input, output) => {
// Can append strings to the system prompt
const contextInfo = getContextInfo(input.sessionID);
if (contextInfo) {
output.system.push(`Context: ${contextInfo.percentage}% used (${contextInfo.status})`);
}
}
```
## Relationship to `open-coordinator`
- **Open-coordinator** handles worktree orchestration, session spawning, bidirectional communication
- **Open-memory** handles session introspection, context awareness, history browsing
- Both use SSE event streams but for different purposes
- Both can be used together — coordinator for multi-session workflows, memory for context management
- The `experimental.session.compacting` hook in coordinator has a good prompt already; open-memory will provide an enhanced version that includes task context awareness
## References
- OpenCode source: `/workspace/opencode` — especially `packages/opencode/src/session/compaction.ts`, `overflow.ts`, `status.ts`
- OpenCode plugin SDK: `/workspace/opencode/packages/plugin/src/index.ts`
- OpenCode plugin types: see `Hooks` interface for all available hooks
- Open-code coordinator plugin: `/workspace/@alkimiadev/open-coordinator` — architecture pattern reference
- Original memory browsing skill: `docs/research/opencode-memory/opencode-memory.md`
- OpenCode DB schema: `message`, `part`, `session`, `project`, `todo` tables
- OpenCode config schema: `compaction.auto`, `compaction.prune`, `compaction.reserved` fields
## Implementation Phases
### Phase 1: Foundation (current)
- Plugin scaffolding, build setup, basic hooks
- `experimental.session.compacting` hook with better default prompt
- Basic `memory_context` tool (context percentage calculation)
### Phase 2: History Browser
- `memory_summary`, `memory_sessions`, `memory_messages`
- `memory_search` with full-text search
- `memory_plans` for plan access
- Markdown formatting for all outputs
### Phase 3: Context Awareness
- SSE-based token tracker
- Proactive context warnings via `experimental.chat.system.transform`
- `memory_compact` tool calling `session.summarize`
- Default system instructions on when to compact
### Phase 4: Polish
- Configurable thresholds
- Session comparison tools
- Export/import helpers
- Integration tests

src/compaction/prompt.ts Normal file

@@ -0,0 +1,42 @@
const DEFAULT_COMPACTION_PROMPT = `You are compacting your own session to free context space. You will continue this session after compaction with this summary as your starting context.
Include what YOU will need to effectively resume your work:
- Current task and progress
- Files being worked on
- Key decisions made and why
- Next steps to take
- Important context that would be hard to rediscover
- Any active debug sessions, in-progress edits, or partial implementations
Be concise but preserve enough detail that you can continue seamlessly.
You are summarizing for yourself, not another agent.
When constructing the summary, try to stick to this template:
---
## Goal
[What goal(s) are you trying to accomplish?]
## Instructions
- [What important instructions are relevant to your current work]
- [If there is a plan or spec, include key details so you can continue using it]
## Discoveries
[What notable things were learned that would be useful to remember when continuing]
## Accomplished
[What work has been completed, what work is still in progress, and what work is left?]
## Relevant files / directories
[Construct a structured list of relevant files that have been read, edited, or created that pertain to the task at hand. If all the files in a directory are relevant, include the path to the directory.]
## Notes
[Anything else you need to remember — patterns observed, gotchas, tool quirks, environment details]
---`;
export const getCompactionPrompt = (): string => DEFAULT_COMPACTION_PROMPT;

src/context/notify.ts Normal file

@@ -0,0 +1,26 @@
export const formatAnomalyNotification = (
sessionID: string,
type: string,
percentage: number,
status: string,
): string => {
const lines: string[] = [];
lines.push(`Context threshold reached [${status}]`);
lines.push("");
lines.push(`Session: ${sessionID}`);
lines.push(`Context: ${percentage}% used`);
if (status === "critical") {
lines.push("");
lines.push("Imminent automatic compaction. Consider triggering memory_compact now.");
} else if (status === "red") {
lines.push("");
lines.push("Context is running low. Use memory_compact at your next natural break point.");
} else if (status === "yellow") {
lines.push("");
lines.push("Context usage is getting high. Consider memory_compact when convenient.");
}
return lines.join("\n");
};

src/context/thresholds.ts Normal file

@@ -0,0 +1,14 @@
export const THRESHOLDS = {
yellow: 70,
red: 85,
critical: 92,
} as const;
export type ContextStatus = "green" | "yellow" | "red" | "critical";
export const getStatusLabel = (percentage: number): ContextStatus => {
if (percentage >= THRESHOLDS.critical) return "critical";
if (percentage >= THRESHOLDS.red) return "red";
if (percentage >= THRESHOLDS.yellow) return "yellow";
return "green";
};

src/context/tracker.ts Normal file

@@ -0,0 +1,172 @@
import type { Event } from "@opencode-ai/sdk";
import type { PluginInput } from "@opencode-ai/plugin";
export type ContextInfo = {
usedTokens: number;
limitTokens: number;
percentage: number;
status: "green" | "yellow" | "red" | "critical";
model: string;
providerID: string;
modelID: string;
trend: "growing" | "stable" | "unknown";
};
type SessionContextData = {
lastInputTokens: number;
lastTotalTokens: number;
providerID: string;
modelID: string;
lastUpdateTime: number;
previousInputTokens: number[];
};
const THRESHOLDS = {
yellow: 0.70,
red: 0.85,
critical: 0.92,
} as const;
const DEFAULT_CONTEXT_LIMIT = 200_000;
export class ContextTracker {
private sessions = new Map<string, SessionContextData>();
private ctx: PluginInput;
private modelContextLimits = new Map<string, number>();
constructor(ctx: PluginInput) {
this.ctx = ctx;
this.loadModelLimits().catch(() => {});
}
private async loadModelLimits() {
try {
const config = await this.ctx.client.config.get();
const models = (config.data as { models?: Record<string, unknown> } | undefined)?.models;
if (!models || typeof models !== "object") return;
for (const [key, value] of Object.entries(models)) {
const context = (value as { limit?: { context?: unknown } } | null)?.limit?.context;
if (typeof context === "number") {
this.modelContextLimits.set(key, context);
}
}
} catch {
// Config not available, will use defaults
}
}
handleEvent(event: Event) {
if (event.type !== "message.updated") return;
const props = event.properties as Record<string, unknown>;
if (!props) return;
const info = props.info as Record<string, unknown> | undefined;
if (!info || info.role !== "assistant") return;
const sessionID = info.sessionID as string | undefined;
if (!sessionID) return;
const tokens = info.tokens as Record<string, unknown> | undefined;
if (!tokens) return;
const inputTokens = typeof tokens.input === "number" ? tokens.input : 0;
const totalTokens =
typeof tokens.total === "number"
? tokens.total
: inputTokens +
(typeof tokens.output === "number" ? tokens.output : 0) +
(typeof (tokens.cache as Record<string, unknown>)?.read === "number"
? (tokens.cache as Record<string, unknown>).read as number
: 0) +
(typeof (tokens.cache as Record<string, unknown>)?.write === "number"
? (tokens.cache as Record<string, unknown>).write as number
: 0);
const infoModel =
typeof info.model === "object" && info.model !== null
? (info.model as Record<string, unknown>)
: {};
const providerID = (info.providerID ?? infoModel.providerID ?? "") as string;
const modelID = (info.modelID ?? infoModel.modelID ?? "") as string;
let existing = this.sessions.get(sessionID);
if (!existing) {
existing = {
lastInputTokens: 0,
lastTotalTokens: 0,
providerID,
modelID,
lastUpdateTime: Date.now(),
previousInputTokens: [],
};
this.sessions.set(sessionID, existing);
}
existing.previousInputTokens.push(existing.lastInputTokens);
if (existing.previousInputTokens.length > 5) {
existing.previousInputTokens.shift();
}
existing.lastInputTokens = inputTokens;
existing.lastTotalTokens = totalTokens;
existing.providerID = providerID || existing.providerID;
existing.modelID = modelID || existing.modelID;
existing.lastUpdateTime = Date.now();
}
getContextInfo(sessionID: string): ContextInfo | null {
const data = this.sessions.get(sessionID);
if (!data || data.lastInputTokens === 0) return null;
const modelKey = `${data.providerID}/${data.modelID}`;
const limitTokens =
this.modelContextLimits.get(modelKey) ?? DEFAULT_CONTEXT_LIMIT;
const percentage = Math.round((data.lastInputTokens / limitTokens) * 100);
const status =
percentage >= THRESHOLDS.critical * 100
? "critical"
: percentage >= THRESHOLDS.red * 100
? "red"
: percentage >= THRESHOLDS.yellow * 100
? "yellow"
: "green";
const prevTokens = data.previousInputTokens;
let trend: ContextInfo["trend"] = "unknown";
if (prevTokens.length >= 2) {
const recentGrowth = prevTokens.slice(-3).reduce((acc, t, i, arr) => {
if (i === 0) return 0;
return acc + (t - arr[i - 1]);
}, 0);
trend = recentGrowth > prevTokens[prevTokens.length - 1] * 0.1 ? "growing" : "stable";
}
return {
usedTokens: data.lastInputTokens,
limitTokens,
percentage,
status,
model: modelKey,
providerID: data.providerID,
modelID: data.modelID,
trend,
};
}
}
export const startContextTracker = (ctx: PluginInput): ContextTracker => {
return new ContextTracker(ctx);
};

src/history/format.ts Normal file

@@ -0,0 +1,41 @@
export const formatSessionList = (rows: Record<string, unknown>[]): string => {
if (rows.length === 0) return "No sessions found.";
const lines: string[] = ["# Recent Sessions\n"];
lines.push("| ID | Title | Updated | Messages |");
lines.push("|----|-------|---------|----------|");
for (const row of rows) {
const id = String(row.id ?? "").slice(0, 12) + "...";
const title = String(row.title ?? "untitled").slice(0, 40);
const updated = String(row.updated ?? "");
const msgs = String(row.msgs ?? "0");
lines.push(`| ${id} | ${title} | ${updated} | ${msgs} |`);
}
lines.push("");
lines.push("Use memory_messages with a session ID to read the full conversation.");
return lines.join("\n");
};
export const formatMessageList = (rows: Record<string, unknown>[]): string => {
if (rows.length === 0) return "No messages found.";
const lines: string[] = ["# Conversation\n"];
for (const row of rows) {
const role = String(row.role ?? "unknown");
const time = String(row.time ?? "");
const text = String(row.text ?? "");
const icon = role === "user" ? "👤" : role === "assistant" ? "🤖" : "📝";
const header = `${icon} **${role}** _${time}_`;
lines.push(header);
lines.push(text.slice(0, 2000));
lines.push("---");
}
return lines.join("\n");
};

src/history/queries.ts Normal file

@@ -0,0 +1,22 @@
export const runQuery = async (dbUri: string, sql: string): Promise<Record<string, unknown>[]> => {
const proc = Bun.spawn(["sqlite3", "-json", dbUri, sql], {
stdout: "pipe",
stderr: "pipe",
});
// Drain stdout/stderr while waiting for exit; awaiting exit first can
// deadlock if the child fills the pipe buffer on large result sets.
const [stdout, stderr, exitCode] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
proc.exited,
]);
if (exitCode !== 0) {
throw new Error(`sqlite3 exited with code ${exitCode}: ${stderr}`);
}
if (!stdout.trim()) return [];
try {
return JSON.parse(stdout) as Record<string, unknown>[];
} catch {
throw new Error(`Failed to parse sqlite3 output: ${stdout.slice(0, 200)}`);
}
};

src/history/search.ts Normal file

@@ -0,0 +1,54 @@
import { runQuery } from "./queries.js";
export const searchConversations = async (
dbUri: string,
searchTerm: string,
limit: number,
): Promise<string> => {
// Escape single quotes for the SQL string literal; % and _ are left as LIKE wildcards.
const escaped = searchTerm.replace(/'/g, "''");
const query = `
SELECT
s.id AS session_id,
COALESCE(s.title, 'untitled') AS title,
json_extract(m.data, '$.role') AS role,
datetime(m.time_created/1000, 'unixepoch', 'localtime') AS time,
substr(json_extract(p.data, '$.text'), 1, 300) AS snippet
FROM part p
JOIN message m ON m.id = p.message_id
JOIN session s ON s.id = m.session_id
WHERE s.parent_id IS NULL
AND json_extract(p.data, '$.type') = 'text'
AND json_extract(p.data, '$.text') LIKE '%${escaped}%'
ORDER BY m.time_created DESC
LIMIT ${limit}
`;
try {
const rows = await runQuery(dbUri, query);
if (!rows || rows.length === 0) {
return `No results found for "${searchTerm}".`;
}
const lines: string[] = [`# Search: "${searchTerm}"\n`];
for (const row of rows) {
const sessionId = String(row.session_id ?? "").slice(0, 16);
const title = String(row.title ?? "untitled");
const time = String(row.time ?? "");
const role = String(row.role ?? "unknown");
const snippet = String(row.snippet ?? "");
lines.push(`### ${title} (${time})`);
lines.push(`- Session: \`${sessionId}...\``);
lines.push(`- Role: ${role}`);
lines.push(`- Snippet: ${snippet}...`);
lines.push("");
}
lines.push("Use memory_messages with a session ID to read the full conversation.");
return lines.join("\n");
} catch (err) {
return `Search failed: ${err instanceof Error ? err.message : String(err)}`;
}
};

src/index.ts Normal file

@@ -0,0 +1,57 @@
import type { Plugin, PluginInput } from "@opencode-ai/plugin";
import { createTools } from "./tools.js";
import { startContextTracker } from "./context/tracker.js";
import { getCompactionPrompt } from "./compaction/prompt.js";
const OpenMemoryPlugin: Plugin = async (ctx) => {
const contextTracker = startContextTracker(ctx);
return {
tool: createTools(ctx, contextTracker),
"experimental.session.compacting": async (_input, output) => {
output.prompt = getCompactionPrompt();
},
"experimental.chat.system.transform": async (input, output) => {
if (!input.sessionID) return;
const info = contextTracker.getContextInfo(input.sessionID);
if (!info) return;
const statusEmoji =
info.status === "critical"
? "🔴"
: info.status === "red"
? "🟠"
: info.status === "yellow"
? "🟡"
: "🟢";
const advisory =
info.status === "critical"
? "Context is nearly full. Use memory_compact immediately if possible."
: info.status === "red"
? "Context is running low. Use memory_compact at your next natural break point."
: info.status === "yellow"
? "Context usage is getting high. Consider memory_compact when convenient."
: null;
const lines = [
`${statusEmoji} Context: ${info.percentage}% used (${info.usedTokens.toLocaleString()} / ${info.limitTokens.toLocaleString()} tokens, ${info.model})`,
];
if (advisory) {
lines.push(advisory);
}
output.system.push(lines.join("\n"));
},
event: async ({ event }) => {
contextTracker.handleEvent(event);
},
};
};
export default OpenMemoryPlugin;

src/tools.ts Normal file

@@ -0,0 +1,321 @@
import type { PluginInput, ToolDefinition } from "@opencode-ai/plugin";
import { tool } from "@opencode-ai/plugin";
import type { ContextTracker } from "./context/tracker.js";
import { formatSessionList, formatMessageList } from "./history/format.js";
import { runQuery } from "./history/queries.js";
import { searchConversations } from "./history/search.js";
const z = tool.schema;
const DATA_ROOT = process.env.XDG_DATA_HOME
  ? `${process.env.XDG_DATA_HOME}/opencode`
  : `${process.env.HOME}/.local/share/opencode`;
const DB = `${DATA_ROOT}/opencode.db`;
const DB_URI = `file:${DB}?mode=ro`;
export const createTools = (
ctx: PluginInput,
tracker: ContextTracker,
): Record<string, ToolDefinition> => ({
memory_context: tool({
description:
"Check current session context window usage. Shows percentage used, token counts, model limit, and status (green/yellow/red/critical). Use when you need to understand how close you are to automatic compaction.",
args: {},
async execute(_args, context) {
if (!context.sessionID) {
return "No active session.";
}
const info = tracker.getContextInfo(context.sessionID);
if (!info) {
return "No context data available yet. Send a message first to establish context tracking.";
}
const statusLabel =
info.status === "critical"
? "CRITICAL — imminent compaction"
: info.status === "red"
? "RED — compact soon"
: info.status === "yellow"
? "YELLOW — consider compacting"
: "GREEN — healthy";
const lines = [
`Context: ${info.percentage}% used`,
`Tokens: ${info.usedTokens.toLocaleString()} / ${info.limitTokens.toLocaleString()}`,
`Model: ${info.model}`,
`Status: ${statusLabel}`,
];
if (info.trend === "growing") {
lines.push("Trend: Context is growing rapidly.");
}
if (info.status === "red" || info.status === "critical") {
lines.push("");
lines.push("Recommendation: Use memory_compact to trigger compaction at a natural break point.");
}
return lines.join("\n");
},
}),
memory_compact: tool({
description:
"Trigger compaction on the current session. This summarizes the conversation so far to free context space. Use when context is getting full (80%+) and you want to control when compaction happens, rather than letting it fire automatically at 92%.",
args: {},
async execute(_args, context) {
if (!context.sessionID) {
return "No active session.";
}
const info = tracker.getContextInfo(context.sessionID);
if (info && info.percentage < 50) {
return `Context is only at ${info.percentage}%. Compaction is not needed yet. Consider waiting until 80%+ for best results.`;
}
const session = await ctx.client.session.get({
path: { id: context.sessionID },
});
if (session.error) {
return `Failed to get session: ${session.error}`;
}
const messages = await ctx.client.session.messages({
path: { id: context.sessionID },
});
if (messages.error) {
return `Failed to get messages: ${messages.error}`;
}
const lastUserMessage = [...(messages.data ?? [])]
.reverse()
.find((m) => m.info.role === "user");
let providerID = info?.providerID ?? "";
let modelID = info?.model ?? "";
if (lastUserMessage) {
const infoAny = lastUserMessage.info as Record<string, unknown>;
const modelObj =
typeof infoAny.model === "object" && infoAny.model !== null
? (infoAny.model as Record<string, unknown>)
: null;
if (modelObj?.providerID && typeof modelObj.providerID === "string") {
providerID = modelObj.providerID;
}
if (modelObj?.modelID && typeof modelObj.modelID === "string") {
modelID = modelObj.modelID;
}
}
if (!providerID || !modelID) {
return "Cannot determine model for compaction. Please ensure the session has at least one message.";
}
      try {
        await ctx.client.session.summarize({
          path: { id: context.sessionID },
          body: { providerID, modelID },
        });
const contextNote = info ? ` (was at ${info.percentage}%)` : "";
return `Compaction triggered successfully${contextNote}. The session will be summarized and you'll continue with freed context space.`;
} catch (err) {
return `Failed to trigger compaction: ${err instanceof Error ? err.message : String(err)}`;
}
},
}),
memory_summary: tool({
description:
"Get a quick summary of your OpenCode local memory: count of projects, sessions, messages, and todos.",
args: {},
async execute() {
try {
const rows = await runQuery(DB_URI, `
SELECT 'projects', COUNT(*) FROM project
UNION ALL SELECT 'sessions (main)', COUNT(*) FROM session WHERE parent_id IS NULL
UNION ALL SELECT 'sessions (total)', COUNT(*) FROM session
UNION ALL SELECT 'messages', COUNT(*) FROM message
UNION ALL SELECT 'todos', COUNT(*) FROM todo
`);
if (!rows || rows.length === 0) return "No data found.";
const lines = ["# OpenCode Memory Summary\n"];
for (const row of rows) {
const values = Object.values(row);
lines.push(`- **${values[0]}**: ${values[1]}`);
}
return lines.join("\n");
} catch (err) {
return `Failed to query database: ${err instanceof Error ? err.message : String(err)}`;
}
},
}),
memory_sessions: tool({
description:
"List recent sessions with titles, update times, and message counts. Optionally filter by project path.",
args: {
limit: z.number().optional().describe("Number of sessions to show (default: 10)."),
projectPath: z
.string()
.optional()
.describe("Filter to a specific project worktree path."),
},
async execute(args) {
const limit = args.limit ?? 10;
try {
let query: string;
if (args.projectPath) {
query = `
SELECT
s.id,
COALESCE(s.title, 'untitled') AS title,
datetime(s.time_updated/1000, 'unixepoch', 'localtime') AS updated,
(SELECT COUNT(*) FROM message m WHERE m.session_id = s.id) AS msgs
FROM session s
JOIN project p ON p.id = s.project_id
WHERE p.worktree = '${args.projectPath.replace(/'/g, "''")}'
AND s.parent_id IS NULL
ORDER BY s.time_updated DESC
LIMIT ${limit}
`;
} else {
query = `
SELECT
s.id,
COALESCE(s.title, 'untitled') AS title,
-- Fall back to the worktree basename when the project has no name:
-- RTRIM strips the basename's characters from the right (leaving the parent
-- prefix ending in '/'), and REPLACE then removes that prefix.
COALESCE(p.name, CASE WHEN p.worktree = '/' THEN '(global)' ELSE REPLACE(p.worktree, RTRIM(p.worktree, REPLACE(p.worktree, '/', '')), '') END) AS project,
datetime(s.time_updated/1000, 'unixepoch', 'localtime') AS updated,
(SELECT COUNT(*) FROM message m WHERE m.session_id = s.id) AS msgs
FROM session s
LEFT JOIN project p ON p.id = s.project_id
WHERE s.parent_id IS NULL
ORDER BY s.time_updated DESC
LIMIT ${limit}
`;
}
const rows = await runQuery(DB_URI, query);
if (!rows || rows.length === 0) {
return "No sessions found.";
}
return formatSessionList(rows);
} catch (err) {
return `Failed to query sessions: ${err instanceof Error ? err.message : String(err)}`;
}
},
}),
memory_messages: tool({
description:
"Read messages from a specific session. Returns formatted conversation with roles and timestamps.",
args: {
sessionId: z.string().describe("Session ID to read messages from."),
limit: z.number().optional().describe("Number of messages to return (default: 50)."),
},
async execute(args) {
const limit = args.limit ?? 50;
try {
const query = `
SELECT
json_extract(m.data, '$.role') AS role,
datetime(m.time_created/1000, 'unixepoch', 'localtime') AS time,
GROUP_CONCAT(json_extract(p.data, '$.text'), char(10)) AS text
FROM message m
LEFT JOIN part p ON p.message_id = m.id
AND json_extract(p.data, '$.type') = 'text'
WHERE m.session_id = '${args.sessionId.replace(/'/g, "''")}'
GROUP BY m.id
ORDER BY m.time_created ASC
LIMIT ${limit}
`;
const rows = await runQuery(DB_URI, query);
if (!rows || rows.length === 0) {
return `No messages found for session ${args.sessionId}.`;
}
return formatMessageList(rows);
} catch (err) {
return `Failed to query messages: ${err instanceof Error ? err.message : String(err)}`;
}
},
}),
memory_search: tool({
description:
"Search across all conversations for a term. Returns matching snippets with session references.",
args: {
query: z.string().describe("Search term to find in conversations."),
limit: z.number().optional().describe("Max results (default: 10)."),
},
async execute(args) {
const limit = args.limit ?? 10;
try {
const results = await searchConversations(DB_URI, args.query, limit);
if (!results || results.length === 0) {
return `No results found for "${args.query}".`;
}
return results;
} catch (err) {
return `Search failed: ${err instanceof Error ? err.message : String(err)}`;
}
},
}),
memory_plans: tool({
description: "List saved plan files from OpenCode's plans directory.",
args: {
read: z
.string()
.optional()
.describe("Filename of a specific plan to read (without path)."),
},
async execute(args) {
const plansDir = `${DATA_ROOT}/plans`;
if (args.read) {
try {
const content = await Bun.file(`${plansDir}/${args.read}`).text();
return content;
} catch {
return `Plan file "${args.read}" not found.`;
}
}
try {
const glob = new Bun.Glob("*.md");
const files: { name: string; mtime: number; size: number }[] = [];
        for await (const file of glob.scan({ cwd: plansDir })) {
          const f = Bun.file(`${plansDir}/${file}`);
          // BunFile exposes size (bytes) and lastModified (ms epoch) directly,
          // so no separate stat call is needed.
          files.push({
            name: file,
            mtime: f.lastModified,
            size: f.size,
          });
        }
if (files.length === 0) {
return "No plans found.";
}
files.sort((a, b) => b.mtime - a.mtime);
        const lines = ["# Plans\n", "| File | Size | Modified |", "|------|------|----------|"];
        for (const f of files) {
          const sizeStr = f.size > 1024 ? `${(f.size / 1024).toFixed(1)}KB` : `${f.size}B`;
          lines.push(`| ${f.name} | ${sizeStr} | ${new Date(f.mtime).toISOString().slice(0, 10)} |`);
        }
lines.push("", `Use memory_plans with a "read" argument to view a specific plan.`);
return lines.join("\n");
} catch {
return "No plans directory found.";
}
},
}),
});
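The queries above interpolate user-supplied values directly into SQL and rely on doubling single quotes for safety (the database is also opened read-only via `mode=ro` in `DB_URI`). A standalone sketch of that escaping rule — the helper name `escapeSqlLiteral` is hypothetical; the tools inline the `replace()` call:

```typescript
// Hypothetical helper mirroring the inline `.replace(/'/g, "''")` used in the
// memory_sessions and memory_messages queries: SQL string literals escape an
// embedded single quote by doubling it.
function escapeSqlLiteral(value: string): string {
  return value.replace(/'/g, "''");
}

console.log(escapeSqlLiteral("O'Brien's plan")); // O''Brien''s plan
```

If `runQuery` ever grows support for bound parameters, switching to them would be more robust than string escaping, but for a local read-only database the doubled-quote rule covers the failure mode that matters here (queries breaking on names containing quotes).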

tsconfig.json (new file, 18 lines)
@@ -0,0 +1,18 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ES2022",
"moduleResolution": "Bundler",
"types": ["node", "bun"],
"strict": true,
"noEmitOnError": true,
"declaration": true,
"emitDeclarationOnly": true,
"outDir": "dist",
"rootDir": "src",
"verbatimModuleSyntax": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src"]
}