

---
status: draft
last_updated: 2026-04-26
---
# API Surface

The library's public API: a thin TaskGraph data class for graph construction/mutation/basic queries, plus standalone composable analysis functions.

## Design Principle: Decomposition over Monolith

The TaskGraph class handles graph construction, mutation, and basic queries only. All analysis functions (parallel groups, critical path, cost-benefit, etc.) are standalone functions that take a TaskGraph as their first argument.

Why: Both consumers (alkhub, OpenCode plugin) need the same analysis functions but through different dispatch mechanisms. The library exports pure functions; each consumer wraps them in its own dispatch. This avoids duplicate work and prevents the class from becoming a 25+ method monolith.

The operations/dispatch pattern belongs at the consumer layer, not the library layer. The library is a toolkit, not a service.
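As a small sketch of what this looks like on the consumer side: the dispatch table below is hypothetical consumer code (the operation names and stub handlers are invented for illustration), not part of the library.

```typescript
// Hypothetical consumer-side dispatch over the library's pure analysis
// functions. The operation names and stub handlers are invented for
// illustration; the library itself ships no dispatcher.
type AnalysisFn = (graph: unknown) => unknown;

// Stand-ins for the library's exported pure functions.
const parallelGroups: AnalysisFn = (_graph) => [["a", "b"], ["c"]];
const criticalPathFn: AnalysisFn = (_graph) => ["a", "c"];

// Each consumer (CLI, plugin) owns its own name -> function mapping.
const dispatch: Record<string, AnalysisFn> = {
  "parallel-groups": parallelGroups,
  "critical-path": criticalPathFn,
};

function runOperation(op: string, graph: unknown): unknown {
  const fn = dispatch[op];
  if (!fn) throw new Error(`unknown operation: ${op}`);
  return fn(graph);
}
```

Because the functions are pure, a second consumer can reuse them under a completely different dispatch shape (tool schema, CLI subcommands) without touching the library.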

## TaskGraph Class

```typescript
class TaskGraph {
  // Construction
  static fromTasks(tasks: TaskInput[]): TaskGraph
  static fromRecords(tasks: TaskInput[], edges: DependencyEdge[]): TaskGraph
  static fromJSON(data: TaskGraphSerialized): TaskGraph
  addTask(id: string, attributes: TaskGraphNodeAttributes): void
  addDependency(prerequisite: string, dependent: string): void

  // Mutation
  removeTask(id: string): void
  removeDependency(prerequisite: string, dependent: string): void
  updateTask(id: string, attributes: Partial<TaskGraphNodeAttributes>): void
  updateEdgeAttributes(prerequisite: string, dependent: string, attrs: Partial<TaskGraphEdgeAttributes>): void

  // Queries
  hasCycles(): boolean
  findCycles(): string[][]
  topologicalOrder(): string[]  // throws CircularDependencyError if cyclic
  dependencies(taskId: string): string[]
  dependents(taskId: string): string[]
  taskCount(): number
  getTask(taskId: string): TaskGraphNodeAttributes | undefined

  // Subgraph
  subgraph(filter: (taskId: string, attrs: TaskGraphNodeAttributes) => boolean): TaskGraph

  // Export
  export(): TaskGraphSerialized
  toJSON(): TaskGraphSerialized

  // Reactivity
  get raw(): Graph  // underlying graphology instance for direct event listener attachment
}
```

Notes:

- `topologicalOrder()` throws `CircularDependencyError` (with `cycles` populated) when cyclic — see ADR-003
- `subgraph()` returns a new TaskGraph with matching nodes and only edges where both endpoints are in the filtered set — see ADR-007
- `addDependency` uses `addEdgeWithKey` with deterministic keys (`${source}->${target}`) — see ADR-006
- `addTask` throws `DuplicateNodeError` if the ID already exists; `addDependency` throws `DuplicateEdgeError` if the edge already exists and `TaskNotFoundError` if either endpoint doesn't exist in the graph — see errors-validation.md
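The throw-on-cycle contract can be sketched with a standalone Kahn's-algorithm pass. This is an illustration of the promised behavior, not the library's graphology-backed implementation.

```typescript
// Sketch of the topologicalOrder() contract from ADR-003: return a full
// order for a DAG, throw when the graph is cyclic. Illustration only.
class CircularDependencyError extends Error {}

function topologicalOrder(nodes: string[], edges: Array<[string, string]>): string[] {
  const indegree = new Map<string, number>();
  const out = new Map<string, string[]>();
  for (const n of nodes) {
    indegree.set(n, 0);
    out.set(n, []);
  }
  for (const [prerequisite, dependent] of edges) {
    out.get(prerequisite)!.push(dependent);
    indegree.set(dependent, indegree.get(dependent)! + 1);
  }
  // Kahn's algorithm: repeatedly emit nodes with no unprocessed prerequisites.
  const queue = nodes.filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const n = queue.shift()!;
    order.push(n);
    for (const dependent of out.get(n)!) {
      indegree.set(dependent, indegree.get(dependent)! - 1);
      if (indegree.get(dependent) === 0) queue.push(dependent);
    }
  }
  // Any leftover node is part of (or downstream of) a cycle.
  if (order.length !== nodes.length) {
    throw new CircularDependencyError("graph contains a cycle");
  }
  return order;
}
```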

## Standalone Analysis Functions

All analysis functions take a TaskGraph (or its raw graphology Graph) as their first argument. They are composable and stateless.

### Graph analysis

```typescript
function parallelGroups(graph: TaskGraph): string[][]
function criticalPath(graph: TaskGraph): string[]
function weightedCriticalPath(graph: TaskGraph, weightFn: (taskId: string, attrs: TaskGraphNodeAttributes) => number): string[]
function bottlenecks(graph: TaskGraph): Array<{ taskId: string; score: number }>
```
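As an unweighted illustration of `criticalPath`, assuming path length is measured in task count (the real function's weighting may differ), a longest-chain computation over predecessors looks like:

```typescript
// Unweighted illustration of criticalPath(): the longest chain of
// prerequisite -> dependent tasks in an acyclic graph, measured here in
// task count. The library's actual weighting may differ.
function criticalPath(nodes: string[], edges: Array<[string, string]>): string[] {
  const preds = new Map<string, string[]>();
  for (const n of nodes) preds.set(n, []);
  for (const [prerequisite, dependent] of edges) preds.get(dependent)!.push(prerequisite);

  // Memoized longest path ending at each node (safe because the graph is a DAG).
  const memo = new Map<string, string[]>();
  const longestTo = (n: string): string[] => {
    const cached = memo.get(n);
    if (cached) return cached;
    let best: string[] = [];
    for (const p of preds.get(n)!) {
      const candidate = longestTo(p);
      if (candidate.length > best.length) best = candidate;
    }
    const path = [...best, n];
    memo.set(n, path);
    return path;
  };

  let best: string[] = [];
  for (const n of nodes) {
    const candidate = longestTo(n);
    if (candidate.length > best.length) best = candidate;
  }
  return best;
}
```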

### Cost-benefit analysis

```typescript
function riskPath(graph: TaskGraph): RiskPathResult
function shouldDecomposeTask(attrs: TaskGraphNodeAttributes): DecomposeResult
function workflowCost(graph: TaskGraph, options?: WorkflowCostOptions): WorkflowCostResult
function riskDistribution(graph: TaskGraph): RiskDistributionResult
```

Note on shouldDecomposeTask: Takes TaskGraphNodeAttributes (nullable categorical fields) and internally calls resolveDefaults for risk and scope. Unassessed fields (null) use defaults that are below the decomposition threshold, so only explicitly-assessed high-risk or broad-scope tasks are flagged. See cost-benefit.md.
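The null-handling described above can be sketched as follows. The enum members, defaults, and thresholds here are assumptions for illustration; the real mappings live in cost-benefit.md and schemas.md.

```typescript
// Hypothetical sketch of shouldDecomposeTask's null handling. The specific
// member names, defaults ("low" risk, "small" scope), and thresholds are
// illustrative assumptions, not the library's actual values.
type TaskRisk = "low" | "medium" | "high" | null;
type TaskScope = "small" | "medium" | "broad" | null;

function shouldDecomposeTask(attrs: { risk: TaskRisk; scope: TaskScope }) {
  // Unassessed (null) fields resolve to below-threshold defaults, so they
  // never trigger decomposition on their own.
  const risk = attrs.risk ?? "low";
  const scope = attrs.scope ?? "small";
  const reasons: string[] = [];
  if (risk === "high") reasons.push("high risk");
  if (scope === "broad") reasons.push("broad scope");
  return { shouldDecompose: reasons.length > 0, reasons };
}
```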

Note on workflowCost vs calculateTaskEv: calculateTaskEv is a pure math function (takes numeric inputs, returns EvResult). workflowCost orchestrates the per-task calls, handles DAG propagation, and enriches results with taskId and name from the graph's node attributes. The per-task EvResult is a subset of WorkflowCostResult.tasks[i].

### Categorical enum numeric methods

```typescript
function scopeCostEstimate(scope: TaskScope): number       // 1.0–5.0
function scopeTokenEstimate(scope: TaskScope): number      // 500–10000
function riskSuccessProbability(risk: TaskRisk): number    // 0.50–0.98
function riskWeight(risk: TaskRisk): number                // 0.02–0.50
function impactWeight(impact: TaskImpact): number          // 1.0–3.0

function resolveDefaults(attrs: Partial<TaskGraphNodeAttributes>): ResolvedTaskAttributes
```

### Cost-benefit core

```typescript
function calculateTaskEv(p: number, scopeCost: number, impactWeight: number, config?: EvConfig): EvResult
```
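Since this document defines only the shape of `EvResult`, the sketch below fills in one plausible retry model (success within `retries + 1` independent attempts). The actual formulas live in cost-benefit.md and may well differ.

```typescript
// Sketch of a pure EV calculation matching the calculateTaskEv shape.
// The retry and EV formulas are illustrative assumptions, not the library's.
interface EvConfig { retries?: number; fallbackCost?: number; }
interface EvResult { ev: number; pSuccess: number; expectedRetries: number; }

function calculateTaskEv(p: number, scopeCost: number, impactWeight: number, config: EvConfig = {}): EvResult {
  const retries = config.retries ?? 0;
  // Probability that at least one of (1 + retries) independent attempts succeeds.
  const pSuccess = 1 - Math.pow(1 - p, retries + 1);
  // Expected number of retry attempts actually consumed (capped at `retries`).
  let expectedRetries = 0;
  for (let k = 1; k <= retries; k++) expectedRetries += Math.pow(1 - p, k);
  const fallbackCost = config.fallbackCost ?? 0;
  // Expected value: weighted benefit on success, minus expected attempt cost,
  // minus the fallback cost incurred when all attempts fail.
  const ev = pSuccess * impactWeight - scopeCost * (1 + expectedRetries) - (1 - pSuccess) * fallbackCost;
  return { ev, pSuccess, expectedRetries };
}
```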

See schemas.md for the enum definitions and numeric mapping tables.

## Return Types

All return types are defined as TypeBox schemas (for runtime validation + JSON Schema export) with corresponding static TypeScript types.

### RiskPathResult

```typescript
const RiskPathResult = Type.Object({
  path: Type.Array(Type.String()),
  totalRisk: Type.Number(),
})
```

### DecomposeResult

```typescript
const DecomposeResult = Type.Object({
  shouldDecompose: Type.Boolean(),
  reasons: Type.Array(Type.String()),
})
```

### WorkflowCostOptions

```typescript
const WorkflowCostOptions = Type.Object({
  includeCompleted: Type.Optional(Type.Boolean()),
  limit: Type.Optional(Type.Number()),
  propagationMode: Type.Optional(
    Type.Union([Type.Literal("independent"), Type.Literal("dag-propagate")])
  ),
  defaultQualityDegradation: Type.Optional(Type.Number()),
})
```

### WorkflowCostResult

```typescript
const WorkflowCostResult = Type.Object({
  tasks: Type.Array(
    Type.Object({
      taskId: Type.String(),
      name: Type.String(),
      ev: Type.Number(),
      pIntrinsic: Type.Number(),
      pEffective: Type.Number(),
      probability: Type.Number(),
      scopeCost: Type.Number(),
      impactWeight: Type.Number(),
    })
  ),
  totalEv: Type.Number(),
  averageEv: Type.Number(),
  propagationMode: Type.Union([
    Type.Literal("independent"),
    Type.Literal("dag-propagate"),
  ]),
})
```

### EvConfig / EvResult

```typescript
const EvConfig = Type.Object({
  retries: Type.Optional(Type.Number()),
  fallbackCost: Type.Optional(Type.Number()),
  timeLost: Type.Optional(Type.Number()),
  valueRate: Type.Optional(Type.Number()),
})

const EvResult = Type.Object({
  ev: Type.Number(),
  pSuccess: Type.Number(),
  expectedRetries: Type.Number(),
})
```

### RiskDistributionResult

```typescript
const RiskDistributionResult = Type.Object({
  trivial: Type.Array(Type.String()),
  low: Type.Array(Type.String()),
  medium: Type.Array(Type.String()),
  high: Type.Array(Type.String()),
  critical: Type.Array(Type.String()),
  unspecified: Type.Array(Type.String()),
})
```
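The bucketing this shape implies can be sketched directly. The `Risk` union below mirrors the result keys; routing unassessed (`null`) tasks into `unspecified` is an assumption consistent with that key's presence.

```typescript
// Sketch of riskDistribution's implied behavior: bucket task ids by risk
// level, with unassessed (null) tasks collected under "unspecified".
// Operates on a plain task array rather than a TaskGraph for illustration.
type Risk = "trivial" | "low" | "medium" | "high" | "critical";

function riskDistribution(tasks: Array<{ id: string; risk: Risk | null }>) {
  const result: Record<Risk | "unspecified", string[]> = {
    trivial: [], low: [], medium: [], high: [], critical: [], unspecified: [],
  };
  for (const t of tasks) result[t.risk ?? "unspecified"].push(t.id);
  return result;
}
```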

Full schema definitions with Static type exports are in schemas.md.

## Validation API

```typescript
// On TaskGraph instances:
validateSchema(): ValidationError[]     // TypeBox validation on input data
validateGraph(): GraphValidationError[] // Graph-level invariants (cycles, dangling refs)
validate(): ValidationError[]           // Both, for convenience
```

See errors-validation.md for error types and validation details.
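The convenience contract can be sketched as simple concatenation; the error shape and the free-function framing below are stand-ins for the instance methods.

```typescript
// Sketch of the validate() convenience contract: the concatenation of
// schema-level and graph-level checks. Error shape is a stand-in.
interface ValidationError { message: string; }

function validate(
  validateSchema: () => ValidationError[],
  validateGraph: () => ValidationError[],
): ValidationError[] {
  return [...validateSchema(), ...validateGraph()];
}
```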

## Constraints

- No write actions in analysis functions — all analysis functions are pure reads. `shouldDecomposeTask` only inspects attributes; it doesn't modify the graph.
- Throw-on-cycle for topo sort — `topologicalOrder` throws rather than returning a partial result. See ADR-003.
- Analysis functions are independent — they can be called in any order, with no prerequisites beyond a valid graph.