Explorer

The Explorer is an AI-powered design exploration tool built into the Studio. Enter a prompt, set how many variants to generate, and the AI produces multiple component options using your extracted design tokens, all in a single pass. Use it to explore directions quickly before committing to an implementation.

Features

Multi-variant generation

Enter a prompt, set the variant count between 2 and 6, then press Cmd+Enter to generate. The AI produces each variant independently using your extracted design tokens as context, so every output is on-brand without manual token lookup.
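
The flow can be sketched as independent parallel requests sharing one token context. This is an illustrative sketch only, assuming hypothetical `generateComponent` and `generateVariants` functions, not the product's actual API:

```typescript
// Stand-in for the real model call; the seed only distinguishes variants here.
async function generateComponent(prompt: string, tokens: string, seed: number): Promise<string> {
  return `<variant ${seed} for: ${prompt}>`;
}

// Each variant is an independent request that shares the same extracted
// token context, so every output stays on-brand without manual lookup.
async function generateVariants(prompt: string, tokens: string, count: number): Promise<string[]> {
  const n = Math.min(6, Math.max(2, count)); // variant count is clamped to 2-6
  return Promise.all(
    Array.from({ length: n }, (_, i) => generateComponent(prompt, tokens, i)),
  );
}
```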

Image upload

Attach a reference image to your prompt by pasting from the clipboard, dragging and dropping a file onto the canvas, or using the file picker. The AI uses the image as visual context alongside your prompt and design tokens, which is useful when you have a rough sketch, a screenshot of an existing UI, or a competitor reference to work from.

Variant refinement

Select any generated variant and send a follow-up prompt to refine it. Refinement is iterative. You can repeat the cycle as many times as needed without losing the other variants in the grid. Each refinement pass maintains design token adherence.

Comparison view

The A/B comparison view generates the same component twice: once with your layout.md context active, once without. It displays them side by side. This is the fastest way to see the concrete value your design system context is providing to the AI.

Promote to Library

Save any variant directly to your organisation's component library. Promoted variants automatically inherit the project's design tokens so they arrive in the library already wired up to your colour, typography, and spacing values.

AI Image Generation

When you prompt for full-page layouts, marketing pages, or any component that includes imagery, the AI automatically generates real images using Google Gemini instead of placeholder services. Images are generated in parallel after the component code is produced, then seamlessly replaced in the preview. You can control image style (photo, illustration, icon, abstract) and aspect ratio through your prompt.
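
The parallel-then-replace step can be sketched roughly as follows. Everything here is hypothetical (the `generateImage` stub and placeholder naming are made up for illustration); the real pipeline calls Gemini:

```typescript
type ImageSpec = { placeholder: string; prompt: string };

// Stand-in for the real text-to-image call; returns a fake URL here.
async function generateImage(spec: ImageSpec): Promise<string> {
  return `https://images.example/${encodeURIComponent(spec.prompt)}.png`;
}

// Kick off all image generations in parallel, then swap each placeholder
// src in the already-rendered component HTML for the generated URL.
async function fillPlaceholders(html: string, specs: ImageSpec[]): Promise<string> {
  const urls = await Promise.all(specs.map(generateImage));
  return specs.reduce(
    (out, spec, i) => out.split(spec.placeholder).join(urls[i]),
    html,
  );
}
```

Because the component code is produced first, the preview is interactive immediately and images stream in as they complete.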

Push to Figma

Select any variant and push it directly to a Figma file. The push modal lets you choose viewport sizes (mobile, tablet, desktop), optionally target an existing Figma file URL, and generates a ready-to-paste command for Claude Code or other AI agents with the Figma MCP server installed.

Push to Paper

Select any variant and push it to Paper.design as editable HTML/CSS. Paper is a design canvas with full MCP write access, so your AI agent can create artboards and place content directly. The push modal generates the MCP command, you copy it into Claude Code or another AI agent with the Paper MCP server connected, and the variant lands on your canvas as live HTML. The artboard is auto-named from your variant name.

Health Scoring

Each generated variant shows a 0 to 100 health score measuring how faithfully it uses your design system tokens. Hover the score badge to see a grouped breakdown by rule type: colour token usage, spacing compliance, typography, accessibility, and motion. Variants scoring 80 or above are ready for production use. Lower scores highlight specific issues to fix, such as hardcoded hex values or missing interactive states.
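
Conceptually, the score is a pass rate over per-rule checks grouped by type. The sketch below is an illustration of that idea, not the product's actual scoring algorithm:

```typescript
type RuleGroup = "colour" | "spacing" | "typography" | "accessibility" | "motion";
type Finding = { group: RuleGroup; passed: boolean };

// Scale passed checks to 0-100 and count failures per group
// for the hover breakdown.
function healthScore(findings: Finding[]): { score: number; byGroup: Record<string, number> } {
  const byGroup: Record<string, number> = {};
  for (const f of findings) {
    if (!f.passed) byGroup[f.group] = (byGroup[f.group] ?? 0) + 1;
  }
  const passed = findings.filter(f => f.passed).length;
  const score = findings.length === 0 ? 100 : Math.round((passed / findings.length) * 100);
  return { score, byGroup };
}

// A hardcoded hex value would register as a failed colour check:
const findings: Finding[] = [
  { group: "colour", passed: false }, // e.g. bg-[#3B82F6] instead of a colour token
  { group: "colour", passed: true },
  { group: "spacing", passed: true },
  { group: "typography", passed: true },
];
```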

Component Reuse

When codebase components are synced via layout scan --sync, the Explorer includes them in the AI generation context. This covers both React component exports and Storybook stories. The AI sees what you already have and generates code with production import comments showing exactly which components to reuse from your codebase, preventing duplicate implementations. The preview renders correctly while the code shows your real import paths.
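
The import-comment convention can be illustrated with a small sketch. The component names, paths, and comment format below are invented for illustration; the actual output follows your synced codebase:

```typescript
// Hypothetical inventory produced by the sync step: component name -> import path.
const syncedComponents: Record<string, string> = {
  Button: "@/components/ui/button",
  Card: "@/components/ui/card",
};

// Prepend comments telling the developer which real imports to use
// in place of the preview's inline implementations.
function withImportComments(code: string, reused: string[]): string {
  const lines = reused
    .filter(name => name in syncedComponents)
    .map(name => `// PRODUCTION: import { ${name} } from "${syncedComponents[name]}";`);
  return lines.length ? lines.join("\n") + "\n" + code : code;
}
```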

How to use

  1. Navigate to a project in the Studio that has already completed extraction and has a generated layout.md.

  2. Switch to Explore mode using the toggle in the top bar. The three-panel editor will be replaced by the Explorer interface.

  3. Enter a prompt describing the component or pattern you want to explore. For example: "a pricing card with three tiers" or "a navigation header with a search bar and user avatar".

  4. Optionally attach a reference image using paste, drag-and-drop, or the file picker.

  5. Set the number of variants between 2 and 6, then press Cmd+Enter to generate.

  6. Review the generated variants in the grid view. Each variant is rendered live in an isolated sandbox.

  7. Select a variant to refine it with a follow-up prompt or promote it to the component library.

Using the comparison view

The comparison view runs two parallel generation requests from the same prompt. The left pane uses your full layout.md context. The right pane sends the prompt with no design system context at all.
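
The mechanics amount to the same prompt issued twice in parallel, with the context attached to only one request. A minimal sketch, assuming a hypothetical `generate` endpoint:

```typescript
type GenRequest = { prompt: string; context?: string };

// Stand-in for the real model call; output varies with context presence.
async function generate(req: GenRequest): Promise<string> {
  return req.context
    ? `<on-brand markup for: ${req.prompt}>`
    : `<generic markup for: ${req.prompt}>`;
}

// Both requests run in parallel; only the left pane gets layout.md.
async function compare(prompt: string, layoutMd: string) {
  const [withContext, withoutContext] = await Promise.all([
    generate({ prompt, context: layoutMd }),
    generate({ prompt }),
  ]);
  return { withContext, withoutContext };
}
```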

A strong layout.md will produce a left-pane result with correct token values, consistent typography, and on-brand spacing, while the right-pane result defaults to generic Tailwind utility classes and arbitrary colours. If both panes look similar, your layout.md may need more specific token examples or stronger anti-pattern guidance.

The comparison view is the fastest way to see how much value your layout.md provides. Run it before sharing your context bundle with the team.

Tips

  • Be specific in prompts. Reference the component type, state, and content structure. "A card" is vague. "A product card with image, title, price, and a primary CTA" gives the AI a clear target.
  • Use reference images. Paste a screenshot of an existing component you want to riff on. The AI combines the visual reference with your design tokens for more accurate output.
  • Start with 3-4 variants. Generating 6 variants takes longer and the marginal value drops off. Start with 3-4 to get a range of directions, then refine the best one.
  • Promoted variants inherit tokens. You do not need to manually wire up colours or spacing after promoting. The library variant is already scoped to your project's design tokens.
  • Refine iteratively. Submit follow-up prompts on a selected variant rather than regenerating from scratch. Each iteration builds on the previous output and converges faster.
  • AI images need a Gemini key. Image generation requires a GOOGLE_AI_API_KEY environment variable. Without it, image placeholders will remain unprocessed. Self-hosted users should add this to their environment.

The Explorer requires a project with a completed extraction and a generated layout.md. If the Explore mode toggle is disabled, generate layout.md in the Editor panel first.

Push to Design System overwrites existing token values. Review the diff carefully before confirming the batch update, particularly if other team members have active projects using the same design system.

Next steps

  • Studio Guide. Learn how to extract design tokens and generate layout.md before using the Explorer.
  • layout.md Spec. Understand what makes a strong context file so the Explorer generates better output.
  • Claude Code integration. Take promoted variants into your codebase with the CLI and MCP server.