andibase

Data Model

Define data, fields, records, operations, and lifecycle hooks in andibase

What is data?

Data in andibase starts with a data definition, such as Customer, Order, Ticket, or PurchaseOrder.

Data definitions and their rows are the core units that agents and apps read, update, and act on.

Data shape

Each data definition includes a common base structure:

  • id: unique identifier for the record.
  • name: human-readable label.
  • description: optional context for people and agents.
  • attributes: the API field that stores your typed field definitions.

In the docs, "fields" refers to the domain-specific properties you define for a data definition. In API payloads, those field definitions and field values live under the attributes key.

Data definition (the schema):

{
  key: "Order",
  name: "Order",
  description: "Represents a customer purchase order",
  attributes: {
    customerId: { type: "string", required: true },
    amount: { type: "number", required: true },
    status: { type: "string", required: true },
    createdAt: { type: "datetime", required: true }
  }
}

Data row (a single record that follows that schema):

{
  id: "ord_123",
  name: "Order #123",
  attributes: {
    customerId: "cus_789",
    amount: 1499,
    status: "pending",
    createdAt: "2026-03-05T12:00:00Z"
  }
}

Data operations

Common record operations supported by andibase data (see the call sketch after this list):

  • Get by id: retrieve one data row by identifier.
  • Append many: insert multiple new data rows in bulk.
  • Upsert: create data rows if they do not exist, or update if they do.
  • Update many: update multiple data rows in one operation.
  • Delete many: remove multiple data rows in one operation.
  • Query: list data rows using filters, sorting, pagination, and other constraints.
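
As a rough sketch, calling these operations from a client might look like the following. The records client object and every method name and option shape here are assumptions for illustration, not a confirmed andibase API:

// Hypothetical client surface; only the operation semantics come from
// the list above, every name and signature here is an assumption.
declare const records: {
  getById(key: string, id: string): Promise<unknown>;
  appendMany(key: string, rows: object[]): Promise<unknown>;
  upsert(key: string, rows: object[]): Promise<unknown>;
  updateMany(key: string, rows: object[]): Promise<unknown>;
  deleteMany(key: string, ids: string[]): Promise<unknown>;
  query(key: string, options: object): Promise<unknown>;
};

// Fetch one data row by identifier.
const order = await records.getById("Order", "ord_123");

// Query pending orders, newest first, 50 per page.
const pending = await records.query("Order", {
  filter: { status: "pending" },
  sort: [{ field: "createdAt", direction: "desc" }],
  limit: 50,
});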

Lifecycle hooks

Record operations now expose lifecycle phases in the service layer so custom logic can run consistently around reads and writes.

Currently wired phases:

  • before create
  • after create
  • before update
  • after update
  • before delete
  • after delete
  • before read
  • after read

The default implementation is currently a no-op. There is no persisted hook configuration or runtime execution layer yet. The intended next step is to back these hooks with a sandboxed runtime such as QuickJS.
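
As a minimal sketch of the current state, the eight wired phases and the no-op default might be modeled like this in the service layer; the type and function names are assumptions, only the phase list comes from the docs:

// Hypothetical model of the wired lifecycle phases.
type LifecyclePhase =
  | "beforeCreate" | "afterCreate"
  | "beforeUpdate" | "afterUpdate"
  | "beforeDelete" | "afterDelete"
  | "beforeRead" | "afterRead";

// Current default: every phase is a no-op that passes the payload
// through unchanged, since no hook configuration is persisted yet.
async function runHooks<T>(phase: LifecyclePhase, payload: T): Promise<T> {
  return payload;
}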

Hook contract

Hooks are batch-oriented. Bulk create, update, delete, and list operations invoke one hook call per lifecycle phase with the full batch instead of one call per row.

The hook payload includes the relevant state for that phase:

  • Create hooks receive the pending new rows.
  • Update hooks receive both the previous row state and the next row state.
  • Delete hooks receive the previous row state that is about to be deleted.
  • Before-read hooks receive the request shape, and after-read hooks receive the resolved records.

Before hooks can rewrite the pending request payload. After hooks can inspect outcomes and, for create, update, and read, rewrite the records returned to callers.
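
A sketch of what those batch payloads could look like as types; the interface and field names below are assumptions, while the shapes mirror the contract described above:

// Hypothetical payload shapes: one hook call per phase with the full batch.
interface RecordRow {
  id: string;
  name: string;
  attributes: Record<string, unknown>;
}

interface CreateHookPayload {
  rows: RecordRow[];              // pending (before) or created (after) rows
}

interface UpdateHookPayload {
  changes: { previous: RecordRow; next: RecordRow }[];  // one entry per row in the batch
}

interface DeleteHookPayload {
  previous: RecordRow[];          // row state about to be deleted
}

interface ReadHookPayload {
  request: unknown;               // request shape (before read)
  records?: RecordRow[];          // resolved records (after read)
}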

To block an action, hook implementations should reject with a typed application error:

{
  code: "record_validation_failed",
  message: "Status transitions from archived to active are not allowed.",
  details: {
    field: "status",
    transition: "archived->active"
  }
}

Those typed errors are surfaced back through the API as structured error payloads, currently using HTTP 400, so apps can branch on the code field instead of parsing generic messages.
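
For illustration, a before-update hook enforcing the status rule from that example might reject like this; the hook signature and the row type are assumptions carried over from the sketches above:

// Hypothetical before-update hook that blocks an invalid status transition
// by rejecting with the typed application error shown above.
type Row = { attributes: Record<string, unknown> };

async function beforeUpdate(
  changes: { previous: Row; next: Row }[]
): Promise<{ previous: Row; next: Row }[]> {
  for (const { previous, next } of changes) {
    if (previous.attributes.status === "archived" && next.attributes.status === "active") {
      // Rejecting with the typed error surfaces as a structured HTTP 400 payload.
      throw {
        code: "record_validation_failed",
        message: "Status transitions from archived to active are not allowed.",
        details: { field: "status", transition: "archived->active" },
      };
    }
  }
  // Before hooks may rewrite the pending payload; this one passes it through.
  return changes;
}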

Record events

Every lifecycle phase also emits a typed record event through the central RecordEventsService.

That event stream is intended to support:

  • external webhooks
  • automation triggers
  • internal subscribers that consume record activity centrally

Auditing stays separate. The lifecycle event stream is for record-domain orchestration, not command audit storage.
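
As a closing sketch, an internal subscriber consuming that stream could look like the following. RecordEventsService is the only name taken from the docs; the event shape, the subscribe call, and the webhook helper are assumptions:

// Hypothetical subscriber on the record event stream.
interface RecordEvent {
  phase: string;                // e.g. "afterUpdate"
  dataKey: string;              // e.g. "Order"
  rowIds: string[];             // affected rows
}

declare const recordEventsService: {
  subscribe(handler: (event: RecordEvent) => void): void;
};

declare function enqueueWebhookDelivery(event: RecordEvent): void;  // hypothetical helper

recordEventsService.subscribe((event) => {
  // Example consumer: forward post-write activity to an external webhook
  // dispatcher or automation trigger.
  if (event.phase.startsWith("after")) {
    enqueueWebhookDelivery(event);
  }
});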
