
AIDK: JSX Runtime for AI Applications

ALPHA v0.1.x — APIs may change.

Your code runs between model calls.

Other frameworks: configure an agent, call it, hope for the best.

AIDK: your component renders before every model call. The model responds, your code runs, you decide what happens next.

tsx
render(com, state) {
  // This runs on EVERY tick. Not once. Every time.
  const lastResponse = getLastAssistantMessage(state);

  // Swap models based on what just happened
  const needsUpgrade = lastResponse?.includes("I'm not sure");

  return (
    <>
      <Model model={needsUpgrade ? gpt5 : gpt5mini} />

      {needsUpgrade && (
        <System>The user needs more help. Take your time. Be thorough.</System>
      )}

      <Timeline>{this.timeline()}</Timeline>
    </>
  );
}

No configuration for this. No "model fallback" setting. You just... do it.

Each tick: compile JSX → call model → execute tools → update state → your code runs again.
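That loop can be sketched as a plain driver function. Everything below is illustrative: the names `runTicks`, `compileFn`, `callModel`, `runTools` and the message shapes are assumptions for the sketch, not AIDK's actual API.

```typescript
// Illustrative sketch of the tick loop described above; names and
// signatures are assumptions, not AIDK's real API.
type Message = { role: "system" | "user" | "assistant" | "tool"; content: string };
type ToolCall = { name: string; input: unknown };
type ModelResponse = { message: Message; toolCalls: ToolCall[] };

async function runTicks(
  compileFn: (state: Message[]) => Message[],          // 1. compile JSX -> messages
  callModel: (messages: Message[]) => Promise<ModelResponse>,
  runTools: (calls: ToolCall[]) => Promise<Message[]>, // 3. execute tools -> tool messages
  maxTicks = 10,
): Promise<Message[]> {
  let state: Message[] = [];
  for (let tick = 0; tick < maxTicks; tick++) {
    const compiled = compileFn(state);          // your render code runs here, every tick
    const response = await callModel(compiled); // 2. call model
    state = [...state, response.message];       // 4. update state
    if (response.toolCalls.length === 0) break; // done when no tools requested
    const toolResults = await runTools(response.toolCalls);
    state = [...state, ...toolResults];
    // 5. loop: compile runs again with the new state on the next tick
  }
  return state;
}
```

The real engine also handles streaming, channels, and persistence; the point is only that your compile step runs again on every iteration.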


See it in action

Compose agents like UI components:

tsx
render() {
  const messages = this.timeline();
  const cutoff = messages.length - 10;

  return (
    <>
      <System>{this.systemPrompt()}</System>

      <Grounding title="Knowledge Base">
        <Document src={this.activeDoc()} />
        <List title="Related">{this.relatedDocs().map(d => d.title)}</List>
      </Grounding>

      <SearchTool onResult={(r) => this.results.set(r)} />

      <Timeline>
        {messages.map((msg, i) => (
          <Message key={msg.id} role={msg.role}>
            {i < cutoff && msg.role === 'user' && <Meta>({formatRelative(msg.timestamp)})</Meta>}
            {msg.content}
          </Message>
        ))}
      </Timeline>
    </>
  );
}

Route to specialized agents by rendering them:

tsx
render() {
  const intent = this.detectedIntent();

  if (intent === "refund") return <RefundAgent customer={this.customer()} />;
  if (intent === "technical") return <TechSupportAgent />;

  return <TriageAgent onIntent={(i) => this.detectedIntent.set(i)} />;
}

Tools that show the model what they know:

tsx
const InventoryTool = createTool({
  name: "check_inventory",
  description: "Check stock levels for a product",
  input: z.object({
    sku: z.string().describe("Product SKU to check"),
  }),

  async onMount(com) {
    com.setState("inventory", await fetchInventory());
  },

  render(com) {
    const items = com.getState("inventory") || [];
    return (
      <Grounding title="Current Inventory">
        {items.map(i => `${i.sku}: ${i.qty} in stock`).join("\n")}
      </Grounding>
    );
  },

  handler: async ({ sku }) => { /* ... */ }
});

Intercept context before it goes to the model:

tsx
onAfterCompile(com, compiled) {
  const tokens = estimateTokens(compiled);

  if (tokens > 100000) {
    const compacted = compactOldMessages(this.timeline(), 20);
    this.timeline.set(compacted);
    com.requestRecompile();
  }
}
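`estimateTokens` and `compactOldMessages` are your own helpers here. One plain-TypeScript way to write the compaction step, keeping the last N messages verbatim and folding older ones into a single synthetic message (the summary format below is just an assumption; a real implementation might summarize with a cheap model):

```typescript
type Msg = { role: string; content: string };

// Illustrative helper: keep the last `keep` messages verbatim and collapse
// everything older into one synthetic summary message.
function compactOldMessages(timeline: Msg[], keep: number): Msg[] {
  if (timeline.length <= keep) return timeline;
  const old = timeline.slice(0, timeline.length - keep);
  const recent = timeline.slice(timeline.length - keep);
  const summary: Msg = {
    role: "system",
    content: `[${old.length} earlier messages compacted] ` +
      old.map((m) => `${m.role}: ${m.content.slice(0, 80)}`).join(" | "),
  };
  return [summary, ...recent];
}
```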

Fork parallel work, await results:

tsx
<Fork root={<FactChecker claim={claim} />} waitUntilComplete={true}
      onComplete={(result) => this.verified.set(result)} />

<Fork root={<SourceFinder topic={topic} />} waitUntilComplete={true}
      onComplete={(result) => this.sources.set(result)} />

{/* Both complete before the parent continues */}

Fire-and-forget background work:

tsx
<Spawn root={<AuditLogger interaction={state.current} />} />

It's just code. There's no config option to enable these things. You write what you need.


Start where you are

You don't have to rewrite anything. Use AIDK for the parts that need it.

1. Just compile JSX to messages

Keep your existing generateText calls. Use JSX for dynamic context.

tsx
import { compile } from "aidk-ai-sdk";
import { generateText } from "ai";

const { messages, tools, system } = await compile(<MyAgent />);

const result = await generateText({
  model: openai("gpt-5.2"),
  messages,
  tools,
  system,
});

2. Let AIDK handle multi-turn

The compiler manages the tick loop. You still control model selection.

tsx
import { createCompiler } from "aidk-ai-sdk";
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const compiler = createCompiler();

for await (const event of compiler.stream(<MyAgent />, async (input) => {
  return streamText({ model: openai("gpt-5.2"), ...input });
})) {
  console.log(event);
}

3. Full engine with channels and persistence

Real-time updates, execution tracking, state recovery.

tsx
import { createEngine } from "aidk";
import { createExpressMiddleware } from "aidk-express";

const engine = createEngine();
app.use("/agent", createExpressMiddleware({ engine, agent: TaskAgent }));

Tools run where they need to

Four execution types. Use what makes sense.

SERVER - Runs on your backend

tsx
const SearchTool = createTool({
  name: "search",
  type: ToolExecutionType.SERVER,
  handler: async ({ query }) => {
    return searchService.query(query);
  },
});

CLIENT - Runs in the browser

tsx
// Render UI from structured data
const ChartTool = createTool({
  name: "render_chart",
  type: ToolExecutionType.CLIENT,
  intent: ToolIntent.RENDER,
  // React renders from tool input
});

// Collect user input
const FormTool = createTool({
  name: "collect_info",
  type: ToolExecutionType.CLIENT,
  requiresResponse: true,
});

// Client-side actions
const NavTool = createTool({
  name: "navigate",
  type: ToolExecutionType.CLIENT,
});

PROVIDER - Handled by the model provider

tsx
const WebSearch = createTool({
  name: "web_search",
  type: ToolExecutionType.PROVIDER,
  // OpenAI/Anthropic handles execution
});

MCP - From MCP servers

tsx
const mcpTools = await discoverMCPTools({
  config: {
    serverName: "filesystem",
    transport: "stdio",
    connection: { command: "npx", args: [...] },
  },
  include: ["read_file", "write_file"],
  toolPrefix: "fs_",
});
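The `include` and `toolPrefix` options amount to a filter-and-rename pass over whatever the server advertises. A standalone sketch of that logic (the `McpTool` shape and `selectMcpTools` name are assumptions for illustration, not AIDK's types):

```typescript
// Assumed shape for a discovered MCP tool (illustrative, not AIDK's type).
type McpTool = { name: string; description: string };

// Keep only the tools listed in `include`, then apply the name prefix.
function selectMcpTools(
  discovered: McpTool[],
  include: string[],
  toolPrefix = "",
): McpTool[] {
  return discovered
    .filter((t) => include.includes(t.name))
    .map((t) => ({ ...t, name: `${toolPrefix}${t.name}` }));
}
```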

Confirmation and feedback

Escalate tool execution requests

tsx
const DeleteTool = createTool({
  name: "delete_file",
  requiresConfirmation: true, // User must approve
  confirmationMessage: (input) => `Delete ${input.path}?`,
  handler: async ({ path }) => fs.unlink(path),
});

const DangerousTool = createTool({
  name: "execute_sql",
  // Conditional confirmation
  requiresConfirmation: (input) => input.query.includes("DELETE"),
  handler: async ({ query }) => db.execute(query),
});

Tools render context

Tools aren't just execution. They contribute to what the model sees.

tsx
const TodoTool = createTool({
  name: "todo",
  description: "Manage tasks",
  input: z.object({
    action: z.enum(["add", "complete", "list"]),
    task: z.string().optional(),
  }),

  handler: async (input) => TodoService.perform(input),

  // Load state on mount
  async onMount(com) {
    com.setState("tasks", await TodoService.getTasks());
  },

  // Render current state as context
  render(com) {
    const tasks = com.getState("tasks") || [];
    return (
      <Grounding title="Current Tasks">
        <List>
          {tasks.map((t) => (
            <ListItem key={t.id} checked={t.done}>{t.text}</ListItem>
          ))}
        </List>
      </Grounding>
    );
  },
});

Real-time channels

Define channel routers. Handle events. Broadcast to rooms.

tsx
// Backend: Define a channel router
const todoChannel = new ChannelRouter<{ userId: string }>("todo", {
  scope: { user: "userId" },
}).on("add_task", async (event, ctx) => {
  const task = await TodoService.add(ctx.userId, event.payload.text);
  return { success: true, task };
});

// Broadcast updates to user's room
todoChannel
  .publisher()
  .to(userId)
  .broadcast({ type: "task_added", payload: task });
tsx
// Frontend: Subscribe to execution events
function TaskList() {
  const [tasks, setTasks] = useState<Task[]>([]);
  const { subscribe } = useExecution();

  useEffect(() => {
    return subscribe((event) => {
      if (event.type === "channel" && event.channel === "todo") {
        setTasks((prev) => [...prev, event.payload.task]);
      }
    });
  }, [subscribe]);

  return (
    <ul>
      {tasks.map((t) => (
        <li key={t.id}>{t.text}</li>
      ))}
    </ul>
  );
}
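The subscribe callback above is just a filter plus an append. Pulled out as a pure reducer (event shape inferred from the example, not AIDK's exact type), it is easy to unit-test:

```typescript
type Task = { id: string; text: string };
// Assumed event shape, inferred from the subscribe example above.
type ExecutionEvent = { type: string; channel?: string; payload?: { task: Task } };

// Pure version of the subscribe callback: append the task only for
// "todo" channel events, otherwise return the list unchanged.
function reduceTasks(prev: Task[], event: ExecutionEvent): Task[] {
  if (event.type === "channel" && event.channel === "todo" && event.payload) {
    return [...prev, event.payload.task];
  }
  return prev;
}
```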

The package ecosystem

Package         What it does
aidk            Core framework. Engine, components, state, tools.
aidk-ai-sdk     Vercel AI SDK adapter. Most people start here.
aidk-express    Express middleware. SSE streaming, channels.
aidk-nestjs     NestJS module with decorators.
aidk-react      useEngine, useExecution, channel hooks.
aidk-angular    Services and components for Angular.
aidk-openai     Direct OpenAI adapter (no AI SDK dependency).
aidk-google     Google AI / Vertex adapter.

What AIDK is not

Not a managed service

AIDK is a library. You run it on your infrastructure. Bring your own API keys.

Not a black box

No magic. You see exactly what goes to the model because you're the one constructing it.

Not opinionated about architecture

No mandatory planning steps. No required summarization. Build what you need, skip what you don't.


Ready?

bash
npm install aidk aidk-ai-sdk ai @ai-sdk/openai

Released under the MIT License.