# AI-friendly benchmark: CRUD admin screen
This page defines a small, repeatable benchmark for comparing the same CRUD-oriented admin requirement in a SPA architecture and a Marionette architecture.
## Scenario
The benchmark target is an operations admin screen for managing customer orders. The screen must show a table of records, filter that table by status, submit forms for workflow actions, and update only the affected region after each action.
- CRUD-style record list with detail navigation.
- Table filtering by status such as `Active`, `Review`, and `Blocked`.
- Form submissions for login and row-level workflow actions.
- Partial updates that refresh the main content instead of rebuilding a full SPA state tree.
## Comparison axes
The comparison intentionally avoids token-reduction percentages until measured token counts exist. The first version focuses on structural factors that change how much context an AI assistant must keep aligned.
| Axis | Typical SPA implementation | Marionette implementation | What to measure later |
|---|---|---|---|
| Changed files | Usually split across client routes/components, client state, API client, server route, validation, and tests. | Expected to concentrate UI rendering, action handling, and state updates in Go files close to the Marionette app. | Count files touched by the same requirement in each sample branch. |
| Languages used | Usually TypeScript/JSX for UI plus backend language for API and validation. | Primarily Go for UI, actions, and state; htmx attributes express browser-side partial updates. | List implementation languages needed to explain and review the change. |
| API schema | Often requires an explicit API schema or duplicated request/response types between frontend and backend. | No separate JSON API schema is required for this server-rendered fragment workflow unless the product adds one intentionally. | Record whether OpenAPI, generated clients, or duplicated DTOs are part of the sample. |
| Concepts to explain to AI | Route component, client state, API contract, loading/error state, optimistic update or cache invalidation, server handler, validation, and response mapping. | Page/component rendering, ActionForm, server action, shared Go state, htmx target/swap, and the fragment being returned. | Write the minimal prompt/context bundle needed for an AI assistant to safely implement the change. |
## Repeatable procedure
- Start from the Marionette admin sample: `cmd/admin-sample`.
- Create an equivalent SPA sample with the same visible requirement: table filter, row action form, detail route, validation/error display, and partial UI refresh behavior.
- Apply one identical change request to both samples, for example: add a new `Paused` order status, allow filtering by it, and show a confirmation flash after the status changes.
- For each implementation, record changed file count, implementation languages, API schema artifacts, and the list of concepts included in the AI prompt.
- Add measured token counts only after collecting prompt and completion token counts for both runs. Do not publish a token-reduction rate without those measurements.
## Current Marionette sample mapping
The current admin sample already covers the Marionette side of the benchmark: `orders/filter` updates the selected status and returns the main content fragment, while `orders/toggle-status` mutates workflow state and returns the same target region.
The row action form targets `#main-content` with `outerHTML`, and the table is rendered from Go state. These are the key Marionette constructs to compare against an equivalent SPA state/update path.