Simplify to transport-only: remove call protocol, add EventEnvelope, expand stream operators

- Remove src/call.ts (PendingRequestMap, CallEventSchema, CallError) — call protocol belongs in @alkdev/operations
- Add EventEnvelope type ({ type, id, payload }) as the cross-platform serialization contract
- Simplify createPubSub: replace PubSubPublishArgsByKey tuple model with PubSubEventMap; publish(type, id, payload) and subscribe(type, id) use explicit id for topic scoping
- Update Redis adapter to serialize/deserialize full EventEnvelope
- Expand operators: add take, reduce, toArray, batch, dedupe, window, flat, groupBy, chain, join
- Remove @alkdev/typebox runtime dependency (was only used by call.ts)
- Remove ./call sub-path export from package.json and tsup config
- Update all architecture docs to reflect transport-only scope, add Worker adapter, remove call protocol references
- Remove docs/architecture/call-protocol.md
- Update AGENTS.md with new source layout and transport-only principle
2026-05-01 19:40:25 +00:00
parent 04b3464c36
commit de7fc88f99
17 changed files with 446 additions and 764 deletions

View File

@@ -5,8 +5,8 @@
| Document | Content |
|----------|---------|
| [architecture/README.md](architecture/README.md) | Overview, why this exists, interface contract, consumer context |
| [architecture/api-surface.md](architecture/api-surface.md) | createPubSub, EventEnvelope, PubSub types, operators |
| [architecture/event-targets.md](architecture/event-targets.md) | In-process, Redis, WebSocket, Worker adapters |
| [architecture/iroh-transport.md](architecture/iroh-transport.md) | Iroh P2P QUIC transport, framing, identity, hub/spoke |
| [architecture/build-distribution.md](architecture/build-distribution.md) | Dependencies, project structure, tree-shaking, sub-path exports |

View File

@@ -1,11 +1,27 @@
---
status: draft
last_updated: 2026-05-01
---
# @alkdev/pubsub Architecture
Type-safe publish/subscribe with pluggable event target adapters. The core (`createPubSub` + `TypedEventTarget` + `EventEnvelope` + operators) has no transport dependency. Each adapter (Redis, WebSocket, Worker, Iroh) is an isolated module that only imports its own peer dependency.
This package is a **transport layer only**. It carries events between processes and does not prescribe what those events mean or how downstream systems coordinate. Higher-level protocols (call/response, operation invocation, workflow coordination) belong in downstream packages like `@alkdev/operations`.
## Core Principle
**The TypedEventTarget interface is the contract.** All transports implement the same `addEventListener` / `dispatchEvent` / `removeEventListener` surface. `createPubSub` doesn't know or care which transport is in use — it just dispatches events to whatever `TypedEventTarget` it was given.
**The EventEnvelope is the cross-platform format.** Every event dispatched through pubsub is `{ type, id, payload }`. This is a minimal, JSON-serializable envelope that any transport adapter can route and any downstream consumer can interpret. Domain-specific data lives in `payload`. Correlation lives in `id`. The event type lives in `type`. No `parent` field — causal relationships are managed by downstream coordination layers, not the transport.
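A minimal sketch of the envelope and its JSON round-trip property (the interface follows the shape described above; the event type and payload here are illustrative, not part of the package):

```ts
interface EventEnvelope<TType extends string = string, TPayload = unknown> {
  readonly type: TType;
  readonly id: string;
  readonly payload: TPayload;
}

// Illustrative envelope for a hypothetical session-status event.
const envelope: EventEnvelope<"session.status", { state: string }> = {
  type: "session.status",
  id: "proj_123", // correlation / topic-scoping id
  payload: { state: "running" },
};

// The envelope survives a JSON round-trip unchanged — the property that
// makes it usable as a cross-platform serialization contract.
const roundTripped = JSON.parse(JSON.stringify(envelope)) as typeof envelope;
```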
Swapping transports is a one-line config change:
```ts
const pubsub = createPubSub<MyEventMap>({
eventTarget: createRedisEventTarget({ publishClient, subscribeClient }),
});
```
## Why This Exists
@@ -16,33 +32,27 @@ Extracted from `@alkdev/alkhub_ts/packages/core/pubsub/`, which itself was adapt
3. **Isolates peer deps** — Redis and Iroh are heavy native dependencies; consumers that don't need them shouldn't carry them
4. **Matches established pattern** — `@alkdev/taskgraph` and `@alkdev/typemap` already use the standalone-package pattern
## What This Package Provides
- **Core** — `createPubSub`, `TypedEventTarget`, `TypedEvent`, `EventEnvelope`, stream operators (`filter`, `map`, `pipe`, `take`, `reduce`, `toArray`, `batch`, `dedupe`, `window`, `flat`, `groupBy`, `chain`, `join`), `Repeater` (inlined from @repeaterjs/repeater)
- **Adapters** (each is a peer-dep island, importable via sub-path export):
- In-process (default `EventTarget`, no adapter needed)
- Redis (`@alkdev/pubsub/event-target-redis`, peer dep: `ioredis`)
- WebSocket (future: `@alkdev/pubsub/event-target-websocket`)
- Worker (future: `@alkdev/pubsub/event-target-worker`)
- Iroh (future: `@alkdev/pubsub/event-target-iroh`, peer dep: `@rayhanadev/iroh`)
## What This Package Does NOT Provide
- **Call protocol** — request/response coordination, `PendingRequestMap`, `CallEventSchema`, and `CallError` have been moved to `@alkdev/operations`. The pubsub transport is substrate-agnostic.
- **Workflow coordination** — causal chains, parent/child relationships, and abort cascading are domain-level concerns managed by downstream packages.
- **Abort/cancellation primitives** — these belong in the coordination layer, not the transport. The `EventEnvelope` intentionally omits a `parent` field to avoid conflating transport with coordination semantics.
## Consumer Context
### alkhub (hub-spoke coordinator)
The hub uses pubsub for event routing between operations, runners, and the SSE interface. Transport choice depends on deployment:
| Deployment | Transport |
|------------|-----------|
@@ -50,23 +60,24 @@ The hub uses pubsub for event routing between operations, runners, and the SSE i
| Hub + worker processes | Redis |
| Hub + remote spokes | WebSocket or Iroh |
### Downstream packages
- `@alkdev/operations` uses `createPubSub` with its own event maps for call/response coordination. It defines its own event schemas and `PendingRequestMap` on top of the pubsub transport.
- `@alkdev/taskgraph` will use pubsub events for task lifecycle notifications and workflow coordination.
## Threat Model
- **Fork provenance** — core pubsub and typed event target are adapted from graphql-yoga (MIT). All original copyright notices are preserved in file headers. See [ADR-001](decisions/001-graphql-yoga-fork.md).
- **Peer dep isolation** — Redis and Iroh are optional peer dependencies. A consumer that only needs in-process transport installs zero extra packages. A consumer using Redis but not Iroh installs `ioredis` only.
- **Type-only imports** — `event-target-redis.ts` imports `ioredis` types only at compile time. At runtime, the consumer must provide the actual `Redis`/`Cluster` instances.
- **Minimal envelope** — the `EventEnvelope` format (`{ type, id, payload }`) is intentionally minimal and JSON-serializable. Any platform that supports JSON can produce or consume these events (Rust, Python, etc.).
## Architecture Documents
| Document | Content |
|----------|---------|
| [api-surface.md](api-surface.md) | createPubSub factory, EventEnvelope, PubSub types, operators, TypedEventTarget types |
| [event-targets.md](event-targets.md) | In-process, Redis, WebSocket, Worker adapters — interface, configuration, limitations |
| [iroh-transport.md](iroh-transport.md) | Iroh P2P QUIC transport — protocol, framing, identity, hub/spoke sides, reconnection |
| [build-distribution.md](build-distribution.md) | Dependencies, project structure, tree-shaking, sub-path exports, targets |
@@ -93,4 +104,5 @@ last_updated: YYYY-MM-DD
- Upstream: `@graphql-yoga/subscription` and `@graphql-yoga/typed-event-target` (MIT)
- alkhub pubsub-redis doc: `@alkdev/alkhub_ts/docs/architecture/pubsub-redis.md`
- alkhub spoke-runner doc: `@alkdev/alkhub_ts/docs/architecture/spoke-runner.md`
- Migration research: `docs/research/migration.md`
- Research: Event sourcing types — `docs/research/event_sourcing/` (not in this repo, in global workspace)

View File

@@ -1,6 +1,6 @@
---
status: draft
last_updated: 2026-05-01
---
# API Surface
@@ -10,45 +10,60 @@ Core pubsub creation, types, and operators. No transport dependencies.
## `createPubSub`
```ts
function createPubSub<TPubSubPublishArgsByKey extends PubSubPublishArgsByKey>(
config?: PubSubConfig<TPubSubPublishArgsByKey>,
): PubSub<TPubSubPublishArgsByKey>;
function createPubSub<TEventMap extends PubSubEventMap>(
config?: PubSubConfig<TEventMap>,
): PubSub<TEventMap>;
```
Factory function. Accepts an optional `eventTarget` config. If none is provided, uses `new EventTarget()` (in-process).
### Event Envelope
Every event dispatched through pubsub uses the `EventEnvelope` format:
```ts
interface EventEnvelope<TType extends string = string, TPayload = unknown> {
  readonly type: TType;
  readonly id: string;
  readonly payload: TPayload;
}
```
The envelope is the cross-platform serialization contract. All transport adapters serialize/deserialize this format. Domain-specific data goes in `payload`.
### Topic Scoping
Topics are scoped by `id` using the `type:id` convention:
```ts
pubsub.publish("call.responded", requestId, { output });
// → dispatches event with CustomEvent type "call.responded:{requestId}", detail = { type, id, payload }
const stream = pubsub.subscribe("call.responded", requestId);
// → subscribes to topic "call.responded:{requestId}"
```
Unlike the previous tuple-based model, `id` is always required. This simplifies the type system and makes correlation explicit.
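A minimal sketch of the topic construction (the `topicFor` helper name is hypothetical; the `type:id` convention is the documented one):

```ts
// Hypothetical helper showing the documented `{type}:{id}` topic convention.
function topicFor(type: string, id: string): string {
  return `${type}:${id}`;
}

const topic = topicFor("call.responded", "req_42");
// Used as the CustomEvent type in-process, or as the Redis channel name.
```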
### `PubSubEventMap`
The type parameter that defines the event map. Maps event type strings to their payload types:
```ts
type PubSubEventMap = {
[eventType: string]: unknown;
};
```
### `PubSub.subscribe()`
Returns a `Repeater<EventEnvelope<TKey, TPayload>>` (async iterable). Consumers iterate with `for await`:
```ts
for await (const envelope of pubsub.subscribe("session.status", sessionId)) {
// envelope.type === "session.status"
// envelope.id === sessionId
// envelope.payload === the typed payload
}
```
@@ -58,21 +73,26 @@ The `Repeater` automatically cleans up its `addEventListener` when the consumer
| Export | Source | Description |
|--------|--------|-------------|
| `EventEnvelope<TType, TPayload>` | `types.ts` | Cross-platform envelope: `{ type, id, payload }`. JSON-serializable. |
| `TypedEvent<TType, TDetail>` | `types.ts` | Event with typed `type` and `detail`. Omits `CustomEvent`'s untyped fields. |
| `TypedEventTarget<TEvent>` | `types.ts` | Extends `EventTarget` with typed `addEventListener`, `dispatchEvent`, `removeEventListener`. |
| `TypedEventListener<TEvent>` | `types.ts` | `(evt: TEvent) => void` |
| `TypedEventListenerObject<TEvent>` | `types.ts` | `{ handleEvent(object: TEvent): void }` |
| `TypedEventListenerOrEventListenerObject<TEvent>` | `types.ts` | Union of the above |
| `PubSub<TEventMap>` | `create_pubsub.ts` | `{ publish, subscribe }` — publish takes `(type, id, payload)`, subscribe takes `(type, id)` and returns `Repeater<EventEnvelope>` |
| `PubSubConfig<TEventMap>` | `create_pubsub.ts` | `{ eventTarget?: PubSubEventTarget }` |
| `PubSubEvent<TEventMap, TType>` | `create_pubsub.ts` | Derived `TypedEvent` for a specific event type, with `detail` as `EventEnvelope<TType, TPayload>` |
| `PubSubEventTarget<TEventMap>` | `create_pubsub.ts` | `TypedEventTarget<PubSubEvent<...>>` |
## Operators
All operators work with any `AsyncIterable`. Operators that return `Repeater` provide backpressure-aware push semantics.
### Repeater-returning operators
These wrap source iterables in a `Repeater` with explicit push/stop control:
#### `filter`
```ts
function filter<T>(filterFn: (value: T) => Promise<boolean> | boolean): (source: AsyncIterable<T>) => Repeater<T>;
@@ -80,22 +100,105 @@ function filter<T>(filterFn: (value: T) => Promise<boolean> | boolean): (source:
Type-narrowing overload available: `filter<T, U extends T>(fn: (input: T) => input is U)`.
#### `map`
```ts
function map<T, O>(mapper: (input: T) => Promise<O> | O): (source: AsyncIterable<T>) => Repeater<O>;
```
#### `pipe`
```ts
function pipe<A, B>(a: A, ab: (a: A) => B): B;
function pipe<A, B, C>(a: A, ab: (a: A) => B, bc: (b: B) => C): C;
// up to 5 arguments
```
Compose operators: `pipe(pubsub.subscribe("myEvent", id), filter(isRelevant), map(transform))`
### AsyncGenerator operators
These use native `async function*` generators for simpler stream transformations:
#### `take`
Yields only the first `count` items from the source.
```ts
async function* take<T>(source: AsyncIterable<T>, count: number): AsyncIterable<T>
```
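A minimal sketch of a conforming implementation (illustrative only, not necessarily the package's actual code; `fromArray` and `collect` are demo helpers, not part of the API):

```ts
// Sketch: yield at most `count` items, then stop consuming the source.
async function* take<T>(source: AsyncIterable<T>, count: number): AsyncIterable<T> {
  if (count <= 0) return;
  let taken = 0;
  for await (const value of source) {
    yield value;
    taken += 1;
    if (taken >= count) break; // breaking triggers the source's cleanup (return())
  }
}

// Demo helpers (hypothetical, for illustration).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```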
#### `reduce`
Reduces the stream to a single value.
```ts
async function reduce<T, U>(source: AsyncIterable<T>, reducer: (acc: U, value: T) => Promise<U> | U, initialValue: U): Promise<U>
```
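A possible implementation sketch (illustrative; `fromArray` is a demo helper, not part of the API):

```ts
// Sketch: fold the stream into a single accumulated value.
async function reduce<T, U>(
  source: AsyncIterable<T>,
  reducer: (acc: U, value: T) => Promise<U> | U,
  initialValue: U,
): Promise<U> {
  let acc = initialValue;
  for await (const value of source) acc = await reducer(acc, value);
  return acc;
}

// Demo helper (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
```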
#### `toArray`
Collects all items into an array.
```ts
async function toArray<T>(source: AsyncIterable<T>): Promise<T[]>
```
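A possible implementation sketch (illustrative; `fromArray` is a demo helper):

```ts
// Sketch: drain the stream into an array. Terminal — consumes the source fully.
async function toArray<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const value of source) out.push(value);
  return out;
}

// Demo helper (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
```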
#### `batch`
Groups items into arrays of `size`.
```ts
async function* batch<T>(source: AsyncIterable<T>, size: number): AsyncIterable<T[]>
```
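A possible implementation sketch (illustrative; note the final partial batch is flushed — an assumption, since the signature alone doesn't specify this):

```ts
// Sketch: group items into arrays of `size`, flushing any trailing partial batch.
async function* batch<T>(source: AsyncIterable<T>, size: number): AsyncIterable<T[]> {
  let current: T[] = [];
  for await (const value of source) {
    current.push(value);
    if (current.length >= size) {
      yield current;
      current = [];
    }
  }
  if (current.length > 0) yield current; // flush the final partial batch
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```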
#### `dedupe`
Yields only unique items (uses `Set` for deduplication).
```ts
async function* dedupe<T>(source: AsyncIterable<T>): AsyncIterable<T>
```
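A possible implementation sketch (illustrative; uses a `Set`, so uniqueness is by reference/primitive identity):

```ts
// Sketch: yield each value only the first time it is seen.
async function* dedupe<T>(source: AsyncIterable<T>): AsyncIterable<T> {
  const seen = new Set<T>();
  for await (const value of source) {
    if (!seen.has(value)) {
      seen.add(value);
      yield value;
    }
  }
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```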
#### `window`
Sliding window of `size` items, advancing by `step` (default 1).
```ts
async function* window<T>(source: AsyncIterable<T>, size: number, step?: number): AsyncIterable<T[]>
```
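A possible implementation sketch (illustrative; trailing items that never fill a full window are dropped — an assumption, since the signature doesn't specify this):

```ts
// Sketch: sliding window of `size` items, advancing by `step` on each emission.
async function* window<T>(source: AsyncIterable<T>, size: number, step = 1): AsyncIterable<T[]> {
  const buffer: T[] = [];
  for await (const value of source) {
    buffer.push(value);
    if (buffer.length === size) {
      yield [...buffer];        // emit a copy so consumers can keep it
      buffer.splice(0, step);   // advance the window by `step`
    }
  }
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```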
#### `flat`
Flattens an `AsyncIterable<T[]>` into `AsyncIterable<T>`.
```ts
async function* flat<T>(source: AsyncIterable<T[]>): AsyncIterable<T>
```
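A possible implementation sketch (illustrative):

```ts
// Sketch: flatten one level — each array chunk is re-yielded item by item.
async function* flat<T>(source: AsyncIterable<T[]>): AsyncIterable<T> {
  for await (const chunk of source) yield* chunk;
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```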
#### `groupBy`
Groups items by key into a `Map`. Terminal operation (consumes entire stream).
```ts
async function groupBy<T, K>(source: AsyncIterable<T>, keyFn: (value: T) => K): Promise<Map<K, T[]>>
```
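A possible implementation sketch (illustrative; `fromArray` is a demo helper):

```ts
// Sketch: terminal grouping — consumes the whole stream, buckets by key.
async function groupBy<T, K>(
  source: AsyncIterable<T>,
  keyFn: (value: T) => K,
): Promise<Map<K, T[]>> {
  const groups = new Map<K, T[]>();
  for await (const value of source) {
    const key = keyFn(value);
    const bucket = groups.get(key);
    if (bucket) bucket.push(value);
    else groups.set(key, [value]);
  }
  return groups;
}

// Demo helper (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
```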
#### `chain`
Concatenates multiple async iterables into one.
```ts
async function* chain<T>(...sources: AsyncIterable<T>[]): AsyncIterable<T>
```
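A possible implementation sketch (illustrative; sources are consumed sequentially, in argument order):

```ts
// Sketch: drain each source in turn, preserving argument order.
async function* chain<T>(...sources: AsyncIterable<T>[]): AsyncIterable<T> {
  for (const source of sources) yield* source;
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```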
#### `join`
Streaming join between two sources on matching keys.
```ts
async function* join<T, U, K>(source1: AsyncIterable<T>, source2: AsyncIterable<U>, keyFn1: (value: T) => K, keyFn2: (value: U) => K): AsyncIterable<[T, U]>
```
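A simplified hash-join sketch under the documented signature. This version buffers `source1` fully before streaming `source2`; the real operator may interleave both sources, so treat this as illustrative only:

```ts
// Sketch: hash-join — index source1 by key, then stream source2 and emit pairs.
async function* join<T, U, K>(
  source1: AsyncIterable<T>,
  source2: AsyncIterable<U>,
  keyFn1: (value: T) => K,
  keyFn2: (value: U) => K,
): AsyncIterable<[T, U]> {
  const left = new Map<K, T[]>();
  for await (const value of source1) {
    const key = keyFn1(value);
    const bucket = left.get(key);
    if (bucket) bucket.push(value);
    else left.set(key, [value]);
  }
  for await (const value of source2) {
    const matches = left.get(keyFn2(value));
    if (matches) for (const match of matches) yield [match, value];
  }
}

// Demo helpers (hypothetical).
async function* fromArray<T>(items: T[]): AsyncIterable<T> { for (const item of items) yield item; }
async function collect<T>(source: AsyncIterable<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const v of source) out.push(v);
  return out;
}
```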
## Attribution
`createPubSub`, `filter`, `map`, and `pipe` are adapted from `@graphql-yoga/subscription` (MIT). `TypedEventTarget` types are adapted from `@graphql-yoga/typed-event-target` (MIT). `Repeater` is inlined from `@repeaterjs/repeater` (MIT). See file headers for full license text.

View File

@@ -1,6 +1,6 @@
---
status: draft
last_updated: 2026-05-01
---
# Build & Distribution
@@ -9,32 +9,38 @@ Dependencies, project structure, tree-shaking, sub-path exports, and build targe
## Dependencies
No runtime dependencies. The `Repeater` class is inlined from `@repeaterjs/repeater` (MIT) — no external package required.
| Package | Type | Purpose |
|---------|------|---------|
| (none) | runtime | — |
| `ioredis` | peer (optional) | Redis client. Only imported by `event-target-redis.ts`. Type-only import at compile time. |
| `@rayhanadev/iroh` | peer (optional, future) | Iroh NAPI-RS binding. Only imported by `event-target-iroh.ts`. |
No logger dependency. No TypeBox dependency (call protocol and schemas moved to `@alkdev/operations`).
## Project Structure
```
@alkdev/pubsub/
src/
index.ts # Barrel: re-exports core API + operators
types.ts # TypedEvent, TypedEventTarget, EventEnvelope
create_pubsub.ts # createPubSub factory (adapted from graphql-yoga)
operators.ts # filter, map, pipe, take, reduce, toArray,
# batch, dedupe, window, flat, groupBy, chain, join
repeater.ts # Inlined from @repeaterjs/repeater (MIT)
event-target-redis.ts # createRedisEventTarget (peer dep: ioredis)
# Future adapters (each is its own entry point + peer dep island):
# event-target-websocket.ts # (peer dep: none, web standard)
# event-target-worker.ts # (peer dep: none, web standard)
# event-target-iroh.ts # (peer dep: @rayhanadev/iroh)
test/
create_pubsub.test.ts
operators.test.ts
event-target-redis.test.ts
# event-target-websocket.test.ts
# event-target-worker.test.ts
# event-target-iroh.test.ts
docs/
architecture.md
@@ -56,6 +62,7 @@ We use explicit sub-path exports rather than barrel-only + tree-shaking. Each ad
".": { ... },
"./event-target-redis": { ... },
"./event-target-websocket": { ... },
"./event-target-worker": { ... },
"./event-target-iroh": { ... }
}
}
@@ -86,21 +93,6 @@ Optional peer deps means `npm install @alkdev/pubsub` does NOT install ioredis o
- **Target**: `es2022`
- **Splitting**: enabled (tsup code splitting for shared chunks)
```ts
// tsup.config.ts
import { defineConfig } from 'tsup';
export default defineConfig({
entry: ['src/index.ts', 'src/event-target-redis.ts'],
format: ['esm', 'cjs'],
dts: true,
sourcemap: true,
clean: true,
splitting: true,
target: 'es2022',
});
```
## Testing
- **Runner**: `vitest` — matches taskgraph, natural fit with tsup/Node build pipeline

View File

@@ -1,285 +0,0 @@
---
status: draft
last_updated: 2026-04-30
---
# Call Protocol
Unified event-based protocol for request/response and streaming operations. Built on `@alkdev/pubsub`'s `TypedEventTarget` and `Repeater` primitives.
## Overview
The call protocol provides a single event-based mechanism that works identically whether the operation is local (in-process), remote (hub/spoke over WebSocket or Iroh), or streamed (subscription). It is transport-agnostic — the same event shapes, same `requestId` correlation, same `PendingRequestMap`. Only the `EventTarget` changes.
Two consumption patterns share the same protocol:
- **`call()`**: Publish `call.requested`, subscribe to response events scoped by `requestId`, resolve on first response → `Promise<TOutput>`
- **`subscribe()`**: Publish `call.requested`, subscribe to `call.part` events scoped by `requestId`, yield each part until `call.completed` or `call.error` → `Repeater<TOutput>`
Both use `call.requested` as the trigger. The `operationId` and `operation.type` on the handler side determine which pattern applies. The protocol itself doesn't distinguish — it's the handler that decides whether to respond once (`respond()`) or stream (`part()` + `complete()`).
## Event Types
All events use TypeBox schemas, compatible with `@alkdev/pubsub`'s `PubSubPublishArgsByKey`. Schemas are exported as `CallEventSchema` for runtime validation.
### `CallEventSchema`
```ts
const CallEventSchema = {
"call.requested": Type.Object({
requestId: Type.String(),
operationId: Type.String(),
input: Type.Unknown(),
parentRequestId: Type.Optional(Type.String()),
deadline: Type.Optional(Type.Number()),
identity: Type.Optional(Type.Object({
id: Type.String(),
scopes: Type.Array(Type.String()),
resources: Type.Optional(Type.Record(Type.String(), Type.Array(Type.String()))),
})),
}),
"call.responded": Type.Object({
requestId: Type.String(),
output: Type.Unknown(),
}),
"call.part": Type.Object({
requestId: Type.String(),
output: Type.Unknown(),
index: Type.Optional(Type.Number()),
}),
"call.completed": Type.Object({
requestId: Type.String(),
}),
"call.aborted": Type.Object({
requestId: Type.String(),
}),
"call.error": Type.Object({
requestId: Type.String(),
code: Type.String(),
message: Type.String(),
details: Type.Optional(Type.Unknown()),
}),
}
```
### Topic Scoping
Response events are scoped by `requestId` using pubsub's built-in topic scoping:
| Event | Publish | Subscribe (caller) | Subscribe (handler) |
|-------|---------|--------------------|--------------------|
| `call.requested` | `pubsub.publish("call.requested", event)` | — | Unscoped: `pubsub.subscribe("call.requested")` |
| `call.responded` | `pubsub.publish("call.responded", requestId, event)` | Scoped: `pubsub.subscribe("call.responded", requestId)` | — |
| `call.part` | `pubsub.publish("call.part", requestId, event)` | Scoped: `pubsub.subscribe("call.part", requestId)` | — |
| `call.completed` | `pubsub.publish("call.completed", requestId, event)` | Scoped: `pubsub.subscribe("call.completed", requestId)` | — |
| `call.aborted` | `pubsub.publish("call.aborted", requestId, event)` | Scoped: `pubsub.subscribe("call.aborted", requestId)` | Scoped: `pubsub.subscribe("call.aborted", requestId)` |
| `call.error` | `pubsub.publish("call.error", requestId, event)` | Scoped: `pubsub.subscribe("call.error", requestId)` | — |
This gives every `requestId` its own event channel. On Redis, this maps to `call.responded:{uuid}` channels. On WebSocket or Iroh, the topic string is a routing key. In-process, it's a `CustomEvent` with `type: "call.responded:{uuid}"`.
**Why scoped instead of unscoped + manual matching?** Scoped topics avoid O(n) fanout. A caller only receives events for its own request. This matters especially on Redis (pub/sub channels) and Iroh (topic subscriptions), where unscoped subscriptions would deliver every response to every listener.
## Event Flow
### Call (request/response)
```
Caller Handler
│ │
│─── call.requested ───────────────>│
│ {requestId, operationId, │
│ input, identity, deadline} │
│ │
│<── call.responded:{requestId} ────│
│ {requestId, output} │
```
On error:
```
│<── call.error:{requestId} ───────│
│ {requestId, code, message, │
│ details} │
```
On timeout or caller cancellation:
```
│─── call.aborted:{requestId} ────>│
│ {requestId} │
```
### Subscribe (request/stream)
```
Caller Handler
│ │
│─── call.requested ───────────────>│
│ {requestId, operationId, │
│ input, identity} │
│ │
│<── call.part:{requestId} ────────│
│ {requestId, output, index?} │
│ │
│<── call.part:{requestId} ────────│
│ {requestId, output, index?} │
│ │
│<── call.completed:{requestId} ────│ ← stream ends normally
│ {requestId} │
```
On stream error:
```
│<── call.error:{requestId} ───────│
│ {requestId, code, message} │
```
On caller cancellation (consumer breaks out of `for await`):
```
│─── call.aborted:{requestId} ────>│
│ {requestId} │
```
### Nesting
Nested calls include `parentRequestId` to track the call chain:
```
│─── call.requested ───────────────>│ {requestId: A, parentRequestId: P}
```
This enables call graph reconstruction and abort cascading — every nested call includes its parent's `requestId`.
## `PendingRequestMap`
The primary consumer interface. Wraps `createPubSub` internally and manages the full call/subscribe lifecycle.
### Construction
```ts
const callMap = new PendingRequestMap(eventTarget?)
```
- Creates an internal `PubSub<CallPubSubMap>`
- If `eventTarget` is provided, passes it to `createPubSub` for transport-level event routing
### `call(operationId, input, options?)` → `Promise<unknown>`
1. Generate `requestId` via `crypto.randomUUID()`
2. Subscribe to `call.responded:{requestId}`, `call.error:{requestId}`, `call.aborted:{requestId}` (scoped)
3. If `deadline` is set, start a timeout timer that publishes `call.aborted` on expiry
4. Publish `call.requested`
5. Return a Promise — resolves on `call.responded`, rejects on `call.error` or `call.aborted`
6. Cleanup: close all scoped subscriptions on settlement
### `subscribe(operationId, input, options?)` → `Repeater<unknown>`
1. Generate `requestId` via `crypto.randomUUID()`
2. Publish `call.requested`
3. Create scoped subscriptions: `call.part:{requestId}`, `call.completed:{requestId}`, `call.error:{requestId}`
4. Return a `Repeater` that:
- Yields `output` from each `call.part` event
- Completes on `call.completed`
- Rejects on `call.error`
- On consumer break (Repeater `stop`), publishes `call.aborted:{requestId}` and closes all subscriptions
This means consumers can use operators:
```ts
const stream = callMap.subscribe("events.live", { topic: "sensors" });
const filtered = pipe(stream, filter(isRelevant), map(transform));
for await (const value of filtered) {
// handle each filtered/mapped stream value
}
```
### Handler-side methods
| Method | Description |
|--------|-------------|
| `respond(requestId, output)` | Publish `call.responded:{requestId}` — single response for call |
| `part(requestId, output, index?)` | Publish `call.part:{requestId}` — next chunk in subscription stream |
| `complete(requestId)` | Publish `call.completed:{requestId}` — stream ended normally |
| `emitError(requestId, code, message, details?)` | Publish `call.error:{requestId}` — error response |
| `abort(requestId)` | Publish `call.aborted:{requestId}` — caller cancellation |
## Transport Mapping
Same protocol, same event shapes, same `PendingRequestMap` — different `EventTarget`:
| Transport | Use Case | EventTarget impl |
|-----------|----------|-----------------|
| In-process | Local operations | Browser `EventTarget` (default) |
| Redis | Cross-process events | `RedisEventTarget` from `@alkdev/pubsub/event-target-redis` |
| WebSocket | Hub ↔ spoke bidirectional | `WebSocketEventTarget` (future) |
| Iroh | P2P QUIC | `IrohEventTarget` (future) |
| SSE | Server → client streaming | `SSEEventTarget` (future) |
## Error Model
### `CallError`
```ts
class CallError extends Error {
readonly code: string;
readonly details?: unknown;
}
```
### Infrastructure Error Codes
| Code | When | Details |
|------|------|---------|
| `OPERATION_NOT_FOUND` | No operation matches `operationId` | `{ operationId: string }` |
| `ACCESS_DENIED` | Missing scopes | `{ requiredScopes?: string[] }` |
| `VALIDATION_ERROR` | Input fails schema check | Wrapped from `Value.Errors` |
| `TIMEOUT` | Deadline exceeded | `{ deadline: number }` |
| `ABORTED` | Call/stream cancelled | — |
| `EXECUTION_ERROR` | Handler threw, no `errorSchemas` match | `{ message: string }` |
| `UNKNOWN_ERROR` | Non-Error thrown | `{ raw: string }` |
## TypeBox Schemas and Validation
All event shapes are defined as TypeBox schemas in `CallEventSchema`. Consumers can use `Value.Check()` or `Value.Errors()` from `@alkdev/typebox` for runtime validation:
```ts
import { Value } from "@alkdev/typebox";
import { CallEventSchema } from "@alkdev/pubsub/call";
if (!Value.Check(CallEventSchema["call.requested"], incoming)) {
const errors = [...Value.Errors(CallEventSchema["call.requested"], incoming)];
// reject with VALIDATION_ERROR
}
```
This enables validation on the Iroh and SSE transports where incoming data is untrusted JSON.
## Relationship to `@alkdev/operations`
`@alkdev/operations` provides the `OperationRegistry`, access control, and handler dispatch. It uses `@alkdev/pubsub/call` for:
- `PendingRequestMap` — call/subscribe client interface
- `CallEventSchema` — runtime validation of incoming events
- `CallError` and `CallErrorCode` — error construction and matching
- Type exports — `CallRequestedEvent`, etc. for handler signatures
The `CallHandler` in operations receives `call.requested` events, looks up the operation, validates input, checks access, and dispatches to the handler. For query/mutation handlers, it calls `respond()`. For subscription handlers, it calls `part()` and `complete()`.
## Operators and Stream Composition
Since `subscribe()` returns a `Repeater<unknown>` (which implements `AsyncIterable`), all pubsub operators work on streams:
```ts
import { pipe, filter, map } from "@alkdev/pubsub";
const stream = callMap.subscribe("events.live", { topic: "sensors" });
const filtered = pipe(
stream,
filter((e) => e.priority > 5),
map((e) => ({ ...e, enriched: true })),
);
```
This works the same regardless of whether the stream source is in-process, remote via Redis, or remote via Iroh/SSE.

View File

@@ -1,11 +1,11 @@
---
status: draft
last_updated: 2026-05-01
---
# Event Target Adapters
In-process, Redis, WebSocket, and Worker event targets. All implement `TypedEventTarget<TEvent>`.
## Interface Contract
@@ -13,10 +13,12 @@ Every adapter must implement:
| Method | Behavior |
|--------|----------|
| `addEventListener(type, callback)` | Register listener for event type. Callback receives `CustomEvent` with typed `detail` (an `EventEnvelope`). |
| `dispatchEvent(event)` | Send/dispatch event. Returns `boolean` (always `true` for non-cancelable events). |
| `removeEventListener(type, callback)` | Unregister listener. Clean up underlying subscription when no listeners remain for a topic. |
All adapters use the `EventEnvelope` format (`{ type, id, payload }`) as the serialization contract. Adapters that cross process boundaries (Redis, WebSocket, Iroh) serialize/deserialize the full envelope as JSON.
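As a sketch of that contract — the envelope fields come from the spec above, while the helper names (`topicOf`, `encode`, `decode`) are illustrative:

```ts
// The envelope shape from the contract above; helper functions are illustrative.
interface EventEnvelope<T = unknown> {
  type: string // event type, e.g. "session.status"
  id: string // scoping id, e.g. a project or session id
  payload: T // event body
}

// Topic used for listener registration: `${type}:${id}`.
function topicOf(envelope: Pick<EventEnvelope, "type" | "id">): string {
  return `${envelope.type}:${envelope.id}`
}

// Cross-boundary adapters serialize the full envelope as JSON.
function encode(envelope: EventEnvelope): string {
  return JSON.stringify(envelope)
}

function decode(wire: string): EventEnvelope {
  return JSON.parse(wire) as EventEnvelope
}

const env: EventEnvelope = { type: "session.status", id: "proj_123", payload: { ok: true } }
// topicOf(env) === "session.status:proj_123"
```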
## In-Process (Default)
No adapter needed. `createPubSub` uses `new EventTarget()` by default. This works for single-process deployments where all pubsub participants share the same memory.
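A minimal self-contained sketch of the in-process model, assuming the `publish(type, id, payload)` / `subscribe(type, id)` shape; the real `createPubSub` dispatches `CustomEvent`s on an `EventTarget` and returns Repeater-based async iterators, but a plain listener map is enough to show the topic scoping:

```ts
// Minimal sketch of id-scoped publish/subscribe (not the real createPubSub).
type Listener = (payload: unknown) => void

function createTinyPubSub() {
  const listeners = new Map<string, Set<Listener>>()
  return {
    publish(type: string, id: string, payload: unknown): void {
      const topic = `${type}:${id}` // same topic scheme as the adapters
      for (const fn of listeners.get(topic) ?? []) fn(payload)
    },
    subscribe(type: string, id: string, fn: Listener): () => void {
      const topic = `${type}:${id}`
      const set = listeners.get(topic) ?? new Set<Listener>()
      set.add(fn)
      listeners.set(topic, set)
      return () => set.delete(fn) // unsubscribe
    },
  }
}

const bus = createTinyPubSub()
const seen: unknown[] = []
bus.subscribe("session.status", "proj_123", (p) => seen.push(p))
bus.publish("session.status", "proj_123", { state: "running" })
bus.publish("session.status", "proj_999", { state: "ignored" }) // different id, not delivered
// seen: [{ state: "running" }]
```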
## Redis
- `dispatchEvent``publishClient.publish(event.type, serializer.stringify(event.detail))`
- `addEventListener``subscribeClient.subscribe(topic)`, track callbacks per topic
- `removeEventListener` → remove callback; if no callbacks remain for topic, `subscribeClient.unsubscribe(topic)`
- On message: deserializes with `serializer.parse`, reconstructs `CustomEvent(channel, { detail: envelope })`
The `detail` of the `CustomEvent` dispatched to local listeners is the full `EventEnvelope` object (`{ type, id, payload }`).
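The serialize/deserialize path can be sketched without a live Redis connection — `toWire`/`fromWire` are illustrative names for what happens around `publish` and the `message` handler:

```ts
// Sketch of the envelope logic around Redis publish/subscribe; no connection needed.
interface EventEnvelope { type: string; id: string; payload: unknown }

const serializer = { stringify: JSON.stringify, parse: JSON.parse }

// dispatchEvent side: channel is the topic, message is the full envelope.
function toWire(envelope: EventEnvelope): { channel: string; message: string } {
  return {
    channel: `${envelope.type}:${envelope.id}`,
    message: serializer.stringify(envelope),
  }
}

// on-message side: rebuild the envelope that becomes the CustomEvent detail.
function fromWire(message: string): EventEnvelope {
  return serializer.parse(message) as EventEnvelope
}

const { channel, message } = toWire({ type: "session.status", id: "proj_123", payload: { ok: true } })
// channel === "session.status:proj_123"; fromWire(message) restores the envelope
```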
### Channel Naming
Currently uses the topic string directly as the Redis channel name (e.g., `session.status:proj_123`). Architecture recommends `alk:events:{eventType}` prefix but this is not yet implemented. Should be configurable: `createRedisEventTarget({ ..., prefix: "alk:events:" })`.
### Limitations (Current)
### Test Coverage
No tests yet (test directory is empty). Previous alkhub had 5 Redis tests (publish path only, mocked ioredis).
## WebSocket
```ts
class WebSocketEventTarget implements TypedEventTarget<any> {
  private listeners = new Map<string, Set<(event: CustomEvent) => void>>()

  constructor(private ws: WebSocket) {
    ws.onmessage = (msg) => {
      const envelope = JSON.parse(msg.data as string) // { type, id, payload }
      const topic = `${envelope.type}:${envelope.id}`
      const event = new CustomEvent(topic, { detail: envelope })
      for (const listener of this.listeners.get(topic) ?? []) {
        listener(event)
      }
    }
  }

  dispatchEvent(event: CustomEvent): boolean {
    this.ws.send(JSON.stringify(event.detail)) // sends { type, id, payload }
    return true
  }

  // addEventListener / removeEventListener manage the listeners map (elided)
}
```
- **Per-connection** — hub creates one per spoke connection
- **JSON framing** — WebSocket provides native message boundaries (no length-prefix needed)
- **No native deps** — works in browsers and Node
- **Envelope serialization** — sends/receives the full `EventEnvelope` JSON (`{ type, id, payload }`)
### Gap: Reconnection
WebSocket connections drop. On reconnect, the spoke must re-register with the hub (same `hub.register` flow). The `WebSocketEventTarget` itself is per-connection — a new connection means a new event target instance. Reconnection logic belongs to the spoke lifecycle, not the event target.
### Gap: Hub-Side Architecture
The hub needs per-connection event target + `PendingRequestMap` creation on accept, cleanup on disconnect. This is a hub architectural concern, not a pubsub concern. See `@alkdev/alkhub_ts/docs/architecture/spoke-runner.md`.

## Worker
**Import**: `@alkdev/pubsub/event-target-worker` (not yet implemented)
**Peer dep**: none (Web Worker API is standard)
### Design
A `WorkerEventTarget` implementing `TypedEventTarget` over `postMessage`/`onmessage`. This enables `createPubSub` to work across Web Worker boundaries.
The worker message protocol uses the `EventEnvelope` format:
```json
{ "type": "call.responded", "id": "uuid-123", "payload": { "output": 42 } }
```
### Two-Sided Design
- **Main thread** (`WorkerPoolManager` side): dispatches typed messages to workers via `worker.postMessage()`, receives responses via `worker.onmessage`
- **Worker thread**: dispatches to the main thread via `postMessage()` (`parentPort.postMessage()` in Node `worker_threads`), receives from the main thread via the worker's `onmessage` handler
Both sides wrap `postMessage`/`onmessage` to implement the `TypedEventTarget` interface:
```ts
// Main thread side
const workerEventTarget = createWorkerEventTarget(worker);
// Worker thread side
const mainEventTarget = createMainThreadEventTarget();
```
### Key Properties
- **Bidirectional** — both sides can publish and subscribe
- **Per-worker** — each worker gets its own event target
- **Structured clone** — Web Workers use the structured clone algorithm for serialization, but keeping `EventEnvelope` JSON-serializable ensures cross-platform compatibility
- **No native deps** — works in any environment with Web Worker support
### Relationship to Taskgraph / Operations
The worker event target enables distributed operation execution. Workers can subscribe to `call.requested` events and publish `call.responded` events through the event target, allowing `@alkdev/operations` to dispatch work to worker threads via the same pubsub transport.
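Since `createWorkerEventTarget` is not yet implemented, the following is a hedged sketch of the wrapping it would perform, using an in-memory `PortLike` pair in place of a real `Worker`/`MessagePort`:

```ts
// Sketch of wrapping a postMessage/onmessage pair in the TypedEventTarget shape.
// PortLike is a minimal stand-in for a Worker or MessagePort.
interface EventEnvelope { type: string; id: string; payload: unknown }
interface PortLike {
  postMessage(data: EventEnvelope): void
  onmessage?: (ev: { data: EventEnvelope }) => void
}

function createWorkerEventTargetSketch(port: PortLike) {
  const listeners = new Map<string, Set<(detail: EventEnvelope) => void>>()
  port.onmessage = (ev) => {
    const topic = `${ev.data.type}:${ev.data.id}`
    for (const fn of listeners.get(topic) ?? []) fn(ev.data)
  }
  return {
    addEventListener(topic: string, fn: (detail: EventEnvelope) => void) {
      const set = listeners.get(topic) ?? new Set<(detail: EventEnvelope) => void>()
      set.add(fn)
      listeners.set(topic, set)
    },
    dispatchEvent(envelope: EventEnvelope): boolean {
      port.postMessage(envelope) // a real Worker would structured-clone this
      return true
    },
  }
}

// Wire two in-memory ports together to simulate main thread <-> worker.
function makePortPair(): [PortLike, PortLike] {
  const a: PortLike = { postMessage: (d) => b.onmessage?.({ data: d }) }
  const b: PortLike = { postMessage: (d) => a.onmessage?.({ data: d }) }
  return [a, b]
}

const [mainPort, workerPort] = makePortPair()
const mainSide = createWorkerEventTargetSketch(mainPort)
const workerSide = createWorkerEventTargetSketch(workerPort)
const got: unknown[] = []
workerSide.addEventListener("call.responded:uuid-123", (env) => got.push(env.payload))
mainSide.dispatchEvent({ type: "call.responded", id: "uuid-123", payload: { output: 42 } })
// got: [{ output: 42 }]
```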

---
status: draft
last_updated: 2026-05-01
---
# Iroh Transport
Using `@rayhanadev/iroh` (v0.1.1) as the NAPI-RS binding.

| API | Purpose |
|-----|---------|
| `Connection.remoteNodeId()` | Get peer's public key |
| `Connection.sendDatagram()` / `readDatagram()` | Unreliable datagrams |
Not exposed (not critical): `Endpoint.watchAddr()`, `Connection.close_reason()`, `Connection.stats()`.
## Protocol
QUIC streams are byte streams (no message boundaries). We use a 4-byte big-endian length prefix to frame each message.
### Message Format
All transports use the `EventEnvelope` format:
```json
{ "type": "call.responded", "id": "uuid-123", "payload": { "output": 42 } }
```
On the wire, this serializes as the JSON payload after the length prefix. When received, it maps to `new CustomEvent("call.responded:uuid-123", { detail: envelope })`.
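The framing itself can be sketched directly — a 4-byte big-endian length prefix followed by the JSON-encoded envelope bytes (`frame`/`unframe` are illustrative names):

```ts
// Sketch of length-prefixed framing: 4-byte big-endian length, then JSON bytes.
interface EventEnvelope { type: string; id: string; payload: unknown }

function frame(envelope: EventEnvelope): Uint8Array {
  const body = new TextEncoder().encode(JSON.stringify(envelope))
  const out = new Uint8Array(4 + body.length)
  new DataView(out.buffer).setUint32(0, body.length, false) // false = big-endian
  out.set(body, 4)
  return out
}

function unframe(buf: Uint8Array): EventEnvelope {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength)
  const len = view.getUint32(0, false)
  const body = buf.subarray(4, 4 + len)
  return JSON.parse(new TextDecoder().decode(body)) as EventEnvelope
}

const wire = frame({ type: "call.responded", id: "uuid-123", payload: { output: 42 } })
// unframe(wire) restores the envelope
```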
## Two-Sided Design

# Research: `@alkdev/pubsub` Package Extraction
> **Note (2026-05-01):** This document reflects the original migration from alkhub_ts. Since extraction, the architecture has been simplified — `call.ts` (PendingRequestMap, CallEventSchema, CallError) has been removed from this package and moved to `@alkdev/operations`. The `EventEnvelope` type (`{ type, id, payload }`) is now the cross-platform serialization contract. The `PubSubPublishArgsByKey` tuple-based model has been replaced with a simpler `PubSubEventMap`. Stream operators have been expanded beyond `filter`/`map`/`pipe`. See `docs/architecture/` for current spec.
## Goal
Extract `packages/core/pubsub/` into a standalone `@alkdev/pubsub` package, following the same peer-dependency tree-shaking pattern as `@alkdev/typemap`. Each event target adapter (Redis, WebSocket, Iroh) is an isolated module that only imports its own peer dependency. The core `createPubSub + TypedEventTarget + operators` has no peer deps beyond `@repeaterjs/repeater`.