Complete Runtime Engine
Not a template system. A tick-based execution engine with component lifecycle, reactive state, and the Context Object Model (COM).
Other frameworks: configure an agent, call it, hope for the best.
AIDK: your component renders before every model call. The model responds, your code runs, you decide what happens next.
render(com, state) {
// This runs on EVERY tick. Not once. Every time.
const lastResponse = getLastAssistantMessage(state);
// Swap models based on what just happened
const needsUpgrade = lastResponse?.includes("I'm not sure");
return (
<>
<Model model={needsUpgrade ? gpt5 : gpt5mini} />
{needsUpgrade && (
<System>The user needs more help. Take your time. Be thorough.</System>
)}
<Timeline>{this.timeline()}</Timeline>
</>
);
}
No configuration for this. No "model fallback" setting. You just... do it.
Each tick: compile JSX → call model → execute tools → update state → your code runs again.
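That cycle can be sketched in plain TypeScript. Everything below (`compile`, `callModel`, `runTools`) is a hypothetical stand-in, not the real AIDK API; it only illustrates the compile → call → tools → state loop:

```typescript
// Hypothetical stand-ins, NOT the real AIDK API.
type State = { messages: string[] };

// "Compile" current state into a prompt (stands in for JSX compilation).
const compile = (state: State) => state.messages.join("\n");

// Fake model: keeps requesting a tool until a tool result is in context.
const callModel = (prompt: string) =>
  prompt.includes("tool_result")
    ? { done: true, text: "final answer" }
    : { done: false, text: "tool_call" };

// Fake tool execution.
const runTools = () => "tool_result";

function run(state: State): string {
  for (let tick = 0; tick < 10; tick++) {
    const prompt = compile(state);           // compile JSX → context
    const response = callModel(prompt);      // call model
    state.messages.push(response.text);      // update state
    if (response.done) return response.text; // no tool call → finished
    state.messages.push(runTools());         // execute tools → back into state
  }                                          // then the loop ticks again
  throw new Error("tick limit reached");
}
```

Here `run({ messages: ["user: check inventory"] })` takes two ticks: the first triggers the tool, the second sees its result and finishes.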
Compose agents like UI components:
render() {
const messages = this.timeline();
const cutoff = messages.length - 10;
return (
<>
<System>{this.systemPrompt()}</System>
<Grounding title="Knowledge Base">
<Document src={this.activeDoc()} />
<List title="Related">{this.relatedDocs().map(d => d.title)}</List>
</Grounding>
<SearchTool onResult={(r) => this.results.set(r)} />
<Timeline>
{messages.map((msg, i) => (
<Message key={msg.id} role={msg.role}>
{i < cutoff && msg.role === 'user' && <Meta>({formatRelative(msg.timestamp)})</Meta>}
{msg.content}
</Message>
))}
</Timeline>
</>
);
}
Route to specialized agents by rendering them:
render() {
const intent = this.detectedIntent();
if (intent === "refund") return <RefundAgent customer={this.customer()} />;
if (intent === "technical") return <TechSupportAgent />;
return <TriageAgent onIntent={(i) => this.detectedIntent.set(i)} />;
}
Tools that show the model what they know:
const InventoryTool = createTool({
name: "check_inventory",
description: "Check stock levels for a product",
input: z.object({
sku: z.string().describe("Product SKU to check"),
}),
async onMount(com) {
com.setState("inventory", await fetchInventory());
},
render(com) {
const items = com.getState("inventory") || [];
return (
<Grounding title="Current Inventory">
{items.map(i => `${i.sku}: ${i.qty} in stock`).join("\n")}
</Grounding>
);
},
handler: async ({ sku }) => { /* ... */ }
});
Intercept context before it goes to the model:
onAfterCompile(com, compiled) {
const tokens = estimateTokens(compiled);
if (tokens > 100000) {
const compacted = compactOldMessages(this.timeline(), 20);
this.timeline.set(compacted);
com.requestRecompile();
}
}
Fork parallel work, await results:
<Fork root={<FactChecker claim={claim} />} waitUntilComplete={true}
onComplete={(result) => this.verified.set(result)} />
<Fork root={<SourceFinder topic={topic} />} waitUntilComplete={true}
onComplete={(result) => this.sources.set(result)} />
{/* Both complete before the parent continues */}
Fire-and-forget background work:
<Spawn root={<AuditLogger interaction={state.current} />} />
It's just code. There's no config option to enable these things. You write what you need.
You don't have to rewrite anything. Use AIDK for the parts that need it.
Keep your existing generateText calls. Use JSX for dynamic context.
import { compile } from "aidk-ai-sdk";
import { generateText } from "ai";
const { messages, tools, system } = await compile(<MyAgent />);
const result = await generateText({
model: openai("gpt-5.2"),
messages,
tools,
system,
});
The compiler manages the tick loop. You still control model selection.
import { createCompiler } from "aidk-ai-sdk";
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
const compiler = createCompiler();
for await (const event of compiler.stream(<MyAgent />, async (input) => {
return streamText({ model: openai("gpt-5.2"), ...input });
})) {
console.log(event);
}
Real-time updates, execution tracking, state recovery.
import { createEngine } from "aidk";
import { createExpressMiddleware } from "aidk-express";
const engine = createEngine();
app.use("/agent", createExpressMiddleware({ engine, agent: TaskAgent }));
Four execution types. Use what makes sense.
SERVER - Runs on your backend
const SearchTool = createTool({
name: "search",
type: ToolExecutionType.SERVER,
handler: async ({ query }) => {
return searchService.query(query);
},
});
CLIENT - Runs in the browser
// Render UI from structured data
const ChartTool = createTool({
name: "render_chart",
type: ToolExecutionType.CLIENT,
intent: ToolIntent.RENDER,
// React renders from tool input
});
// Collect user input
const FormTool = createTool({
name: "collect_info",
type: ToolExecutionType.CLIENT,
requiresResponse: true,
});
// Client-side actions
const NavTool = createTool({
name: "navigate",
type: ToolExecutionType.CLIENT,
});
PROVIDER - Handled by the model provider
const WebSearch = createTool({
name: "web_search",
type: ToolExecutionType.PROVIDER,
// OpenAI/Anthropic handles execution
});
MCP - From MCP servers
const mcpTools = await discoverMCPTools({
config: {
serverName: "filesystem",
transport: "stdio",
connection: { command: "npx", args: [...] },
},
include: ["read_file", "write_file"],
toolPrefix: "fs_",
});
Escalate tool execution requests
const DeleteTool = createTool({
name: "delete_file",
requiresConfirmation: true, // User must approve
confirmationMessage: (input) => `Delete ${input.path}?`,
handler: async ({ path }) => fs.unlink(path),
});
const DangerousTool = createTool({
name: "execute_sql",
// Conditional confirmation
requiresConfirmation: (input) => input.query.includes("DELETE"),
handler: async ({ query }) => db.execute(query),
});
Tools aren't just execution. They contribute to what the model sees.
const TodoTool = createTool({
name: "todo",
description: "Manage tasks",
input: z.object({
action: z.enum(["add", "complete", "list"]),
task: z.string().optional(),
}),
handler: async (input) => TodoService.perform(input),
// Load state on mount
async onMount(com) {
com.setState("tasks", await TodoService.getTasks());
},
// Render current state as context
render(com) {
const tasks = com.getState("tasks") || [];
return (
<Grounding title="Current Tasks">
<List>
{tasks.map((t) => (
<ListItem key={t.id} checked={t.done}>{t.text}</ListItem>
))}
</List>
</Grounding>
);
},
});
Define channel routers. Handle events. Broadcast to rooms.
// Backend: Define a channel router
const todoChannel = new ChannelRouter<{ userId: string }>("todo", {
scope: { user: "userId" },
}).on("add_task", async (event, ctx) => {
const task = await TodoService.add(ctx.userId, event.payload.text);
return { success: true, task };
});
// Broadcast updates to user's room
todoChannel
.publisher()
.to(userId)
.broadcast({ type: "task_added", payload: task });
// Frontend: Subscribe to execution events
function TaskList() {
const [tasks, setTasks] = useState<Task[]>([]);
const { subscribe } = useExecution();
useEffect(() => {
return subscribe((event) => {
if (event.type === "channel" && event.channel === "todo") {
setTasks((prev) => [...prev, event.payload.task]);
}
});
}, [subscribe]);
return (
<ul>
{tasks.map((t) => (
<li key={t.id}>{t.text}</li>
))}
</ul>
);
}
| Package | What it does |
|---|---|
| aidk | Core framework. Engine, components, state, tools. |
| aidk-ai-sdk | Vercel AI SDK adapter. Most people start here. |
| aidk-express | Express middleware. SSE streaming, channels. |
| aidk-nestjs | NestJS module with decorators. |
| aidk-react | useEngine, useExecution, channel hooks. |
| aidk-angular | Services and components for Angular. |
| aidk-openai | Direct OpenAI adapter (no AI SDK dependency). |
| aidk-google | Google AI / Vertex adapter. |
Not a managed service
AIDK is a library. You run it on your infrastructure. Bring your own API keys.
Not a black box
No magic. You see exactly what goes to the model because you're the one constructing it.
Not opinionated about architecture
No mandatory planning steps. No required summarization. Build what you need, skip what you don't.
npm install aidk aidk-ai-sdk ai @ai-sdk/openai