
Figma Just Opened Its Canvas to AI Agents — What This Means for Designers, Developers, and Business Owners

Figma's new use_figma MCP tool lets AI agents write directly to your canvas using your design system. Paired with a Skills system that teaches agents your team's conventions via markdown, it transforms design-to-code workflows. Here's what it does, how it works, and what it costs.

Isaac · 13 min read

On March 24, 2026, Figma announced one of the most significant updates to its platform since the introduction of Dev Mode. AI agents can now design directly on the Figma canvas — not by generating screenshots or pixel approximations, but by creating and editing real Figma components, variables, frames, and auto layout structures using your existing design system.

This is not another "AI generates a wireframe" story. This is agents reading your component libraries, understanding your spacing tokens, using your naming conventions, and producing native Figma assets that are structurally identical to what a human designer would build. And with a new Skills system, you can teach these agents your team's exact conventions using nothing more than a markdown file.

I am going to break down exactly what was announced, how the technical pieces fit together, what it costs, and why this matters if you run a business that depends on digital products.

The Problem Figma Is Solving

If you have used any AI tool to generate a UI in the last two years, you know the result. The layout is close enough to be tantalising but wrong enough to be useless. The fonts are wrong. The spacing is arbitrary. The colours are hex values that exist nowhere in your brand guidelines. The components look like they came from a different product entirely.

The reason is straightforward. AI agents have had no access to the decisions your team made. They do not know your colour tokens. They do not know your component library. They do not know that your primary buttons use 16px horizontal padding and your secondary buttons use 12px. They do not know your heading hierarchy or your grid system.

So they guess. And guessing produces generic output that needs to be rebuilt from scratch.

Figma's announcement changes this by giving agents two things they never had: direct write access to the canvas, and a structured way to encode your team's design decisions.

The use_figma Tool — What It Actually Does

The centrepiece of this announcement is a new tool called use_figma, exposed through Figma's MCP (Model Context Protocol) server. MCP is a protocol that lets AI agents connect to external tools and data sources. If you use Claude Code, Cursor, Codex, or any other MCP-compatible coding tool, you already have the infrastructure.

The use_figma tool is a general-purpose interface for creating, editing, and inspecting any object in a Figma file. That means agents can now:

  • Create new frames, components, and component variants
  • Apply variables (colour tokens, spacing tokens, typography tokens) from your existing libraries
  • Build layouts using auto layout with the correct spacing, padding, and alignment
  • Read existing components and replicate their patterns in new designs
  • Edit properties of existing elements — changing text, swapping variants, updating styles
  • Search your connected design system libraries for components, variables, and styles

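Because use_figma is exposed over MCP, a client invokes it through the protocol's standard tools/call request. A sketch of what that wire message might look like — note that everything under arguments is invented for illustration; Figma has not published the tool's parameter schema in this article:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "use_figma",
    "arguments": {
      "action": "create_frame",
      "parent": "page:settings",
      "autoLayout": { "direction": "vertical", "gap": 16 }
    }
  }
}
```

The outer envelope (jsonrpc, method: "tools/call", params.name, params.arguments) is the real MCP shape; only the argument fields are hypothetical.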
This is fundamentally different from Figma's earlier generate_figma_design tool, which was released in February 2026. That tool was a one-way code-to-canvas translator — it took a live web page and converted it into Figma layers. Useful, but limited. The new use_figma tool is bidirectional and operates natively within Figma's own data model.

A Practical Example

Say you ask Claude Code: "Create a new settings page component using our design system. It needs a header, a form with text inputs and a toggle group, and a primary action button at the bottom."

Previously, Claude would generate HTML and CSS — a flat rendering that you would then need to manually recreate in Figma. Now, with use_figma, Claude can:

  1. Search your connected Figma libraries for your existing header component, text input component, toggle group, and button component
  2. Read the variables attached to those components — your spacing tokens, colour tokens, border radius values
  3. Create a new frame in your Figma file using auto layout
  4. Compose the page from your actual library components, with the correct variants selected
  5. Apply your spacing and layout tokens so the result is pixel-identical to what a designer on your team would produce

The output is not a flat image. It is a fully structured Figma frame made of real components that are linked to your libraries. A designer can open it, adjust it, and push it through the normal design review process.

The Full MCP Server Tool Suite

The use_figma tool is the headline, but Figma's MCP server now exposes 16 tools in total. Understanding the full set helps you see how agents can operate across the entire design workflow.

| Tool | What It Does | Access |
| --- | --- | --- |
| use_figma | General-purpose: create, edit, inspect any Figma object | Remote only (beta) |
| generate_figma_design | Converts live web pages into editable Figma layers | Remote only |
| get_design_context | Extracts design data for code generation (React + Tailwind default) | Both |
| get_variable_defs | Returns variables and styles — colours, spacing, typography | Both |
| get_screenshot | Captures visual reference of a selection | Both |
| get_metadata | Returns XML with layer IDs, names, types, positions | Both |
| search_design_system | Queries connected libraries for components and tokens | Both |
| get_code_connect_map | Maps Figma node IDs to codebase components | Both |
| add_code_connect_map | Establishes connections between Figma elements and code | Both |
| get_code_connect_suggestions | Detects and suggests component mappings | Both |
| send_code_connect_mappings | Confirms suggested Code Connect mappings | Both |
| create_design_system_rules | Generates rule files for agent context | Both |
| create_new_file | Creates blank Figma Design or FigJam files | Remote only |
| generate_diagram | Creates FigJam diagrams from Mermaid syntax | Remote only |
| get_figjam | Converts FigJam diagrams to XML with screenshots | Both |
| whoami | Returns authenticated user identity and seat info | Remote only |

The combination is powerful. An agent can search your design system, read your existing conventions, create new designs, screenshot the result, compare it against the intended output, and iterate — all without a human touching Figma.

Skills — Teaching Agents Your Team's Conventions

This is the part of the announcement that I think has the most long-term impact. Tools give agents capability. Skills give agents context.

A Skill is a markdown file that tells an agent how to approach a specific workflow in Figma. It encodes your team's conventions, sequencing, and decision-making rules. Think of it as a runbook, but written for an AI agent instead of a human.

A Skill can define:

  • Which tools to use and in what order
  • Which components to reach for in specific scenarios
  • Naming conventions for layers, frames, and variants
  • Spacing and layout rules specific to your team
  • How to handle edge cases — what to do with empty states, error states, loading states
  • Quality checks to run after generating output

Why Markdown?

Figma made a deliberate choice here. Skills are plain markdown files. Not plugins. Not code. Not JSON configurations. Just text files with instructions.

This means anyone on your team can write a Skill. Your lead designer can document how new components should be structured. Your design system manager can encode naming conventions. Your accessibility specialist can write rules for colour contrast and focus states. No developer needed, no Figma plugin API knowledge required.

You add Skills to your project directory and your MCP client picks them up automatically. When an agent is working in your Figma file, it reads the relevant Skills and follows the instructions.
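To make this concrete, here is a minimal sketch of what a team Skill could look like. Every convention in it — the naming scheme, the spacing scale, the component names — is invented for illustration, not taken from Figma's shipped examples:

```markdown
<!-- our-team-conventions.md — illustrative sketch, not a shipped Figma Skill -->
# Skill: Building new screens

## Before you start
- Search the connected library before creating any new component.
- Use variables for all colour and spacing values; never hard-code them.

## Conventions
- Frame naming: `Page / <Feature> / <State>` (e.g. `Page / Settings / Default`)
- Spacing scale: 4, 8, 12, 16, 24, 32
- Primary actions use the `Button` component, variant `Primary`.

## After generating
- Screenshot the result and compare it against the brief.
- Flag any detached instances instead of leaving them in place.
```

Plain headings, plain bullets — anyone who can write a style guide can write one of these.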

Nine Launch Skills

Figma shipped nine example Skills at launch, built by a mix of internal teams and community contributors:

  • /use-figma — The foundational skill that gives agents a baseline understanding of how Figma works. Every other skill builds on top of this one.
  • /figma-generate-library — Generates component libraries from an existing codebase. If you have React components, this skill teaches the agent to create matching Figma components.
  • Token syncing (Firebender) — Syncs design tokens between code and Figma variables, with drift detection. When your code-side tokens change, the agent can identify the drift and update Figma to match.
  • Screen reader spec generation (Uber) — Built by a designer at Uber. Takes a UI design and generates a full accessibility specification for screen readers.
  • Design system application — Teaches agents to apply an existing design system correctly when building new screens.
  • Component generation — Guides agents through creating component sets with proper variants, naming, and properties.

The remaining skills cover various workflow patterns. Figma is also launching a community skill-sharing mechanism so teams can publish and discover Skills.
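The token-syncing pattern behind the Firebender skill is easy to picture as code. A minimal sketch of drift detection — the token names and values below are made up for illustration:

```typescript
// Drift detection between code-side design tokens and Figma variables.
// All token names and values here are invented for illustration.
const codeTokens: Record<string, string> = {
  "color/primary": "#2563EB",
  "spacing/md": "16px",
  "radius/card": "8px",
};

const figmaVariables: Record<string, string> = {
  "color/primary": "#1D4ED8", // drifted: code was updated, Figma was not
  "spacing/md": "16px",
  "radius/card": "8px",
};

// A token has drifted when the two sides disagree on its value.
const drift = Object.entries(codeTokens)
  .filter(([name, value]) => figmaVariables[name] !== value)
  .map(([name]) => name);

console.log(drift); // ["color/primary"]
```

A real sync skill would go one step further and instruct the agent to update the drifted Figma variables via use_figma rather than just flagging them.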

The Self-Correcting Loop

One of the more interesting technical details in the announcement is the self-correction workflow. Because agents have access to both the use_figma tool and the get_screenshot tool, they can:

  1. Generate a design on the canvas
  2. Screenshot the result
  3. Compare the screenshot against the intended output
  4. Identify discrepancies
  5. Make corrections using the same use_figma tool
  6. Repeat until the output matches the intent

Because the agent is working with real Figma objects — components, variables, auto layout — corrections are structural, not cosmetic. If the spacing is wrong, the agent adjusts the auto layout gap property. If the wrong variant is selected, the agent swaps it. This is fundamentally different from pixel-based image editing where corrections tend to create artifacts.
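The loop is generic enough to sketch. The step functions below are stand-ins for the real MCP tool calls (use_figma, get_screenshot) — nothing here is Figma's actual API, and the toy Design type models only a single auto layout gap:

```typescript
// Generic generate → screenshot → compare → fix loop.
// `Design` and every step function are illustrative stand-ins.
type Design = { gap: number };

interface Steps {
  generate: () => Design;
  screenshot: (d: Design) => Design; // in reality: render the canvas to an image
  diff: (shot: Design, intent: Design) => string[];
  fix: (d: Design, issues: string[]) => Design;
}

function selfCorrect(steps: Steps, intent: Design, maxIters = 5): Design {
  let design = steps.generate();
  for (let i = 0; i < maxIters; i++) {
    const shot = steps.screenshot(design);
    const issues = steps.diff(shot, intent);
    if (issues.length === 0) break; // output matches intent
    design = steps.fix(design, issues); // structural, not cosmetic, correction
  }
  return design;
}

// Toy run: the "agent" starts with the wrong auto layout gap and converges.
const result = selfCorrect(
  {
    generate: () => ({ gap: 8 }),
    screenshot: (d) => d,
    diff: (shot, intent) => (shot.gap === intent.gap ? [] : ["gap mismatch"]),
    fix: (_d, _issues) => ({ gap: 16 }),
  },
  { gap: 16 },
);
console.log(result.gap); // 16
```

The key property is the exit condition: the loop terminates on a structural comparison, not on "looks about right".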

Independent testing by SFAI Labs found 85 to 90 percent styling inaccuracy when translating Figma's SVG node tree into web code through the MCP server. However, the use_figma tool operates natively within Figma's format, avoiding the SVG translation step entirely. The accuracy concern applies to the design-to-code direction, not the agent-to-canvas direction.

Supported MCP Clients

The use_figma tool works with any MCP-compatible client. At launch, the following are officially supported:

  • Claude Code (Anthropic)
  • Codex (OpenAI)
  • Cursor
  • Copilot CLI (GitHub)
  • Copilot in VS Code (GitHub)
  • Augment
  • Factory
  • Firebender
  • Warp

If you are using Claude Code — which is what I use daily — the setup is straightforward. You install the Figma MCP server, authenticate with your Figma account, and the tools become available in your agent context. From there, you can prompt Claude to work with your Figma files directly.

The Figma MCP server runs remotely, so you do not need the Figma desktop app open. You just need a Figma account with the appropriate seat type.
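With Claude Code, registering a remote MCP server is a one-liner. The endpoint URL below is an assumption on my part — confirm it against Figma's current MCP documentation before relying on it:

```shell
# Register Figma's remote MCP server with Claude Code.
# The endpoint URL is illustrative — check Figma's MCP docs for the real one.
claude mcp add --transport http figma https://mcp.figma.com/mcp

# Verify the server is registered, then authenticate when prompted:
claude mcp list
```

Other clients (Cursor, Codex, the Copilot CLI) use their own config format, but the ingredients are the same: a server URL and an OAuth handshake with your Figma account.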

Code Connect — Closing the Loop

A detail that is easy to miss in this announcement is how Code Connect ties everything together. Code Connect is Figma's feature for mapping design system components to their code implementations.

When you have Code Connect set up, an agent working in Figma does not just know that you have a "Button" component. It knows that your Button component maps to the Button component at src/components/ui/Button.tsx in your codebase, and it knows the exact props, variants, and usage patterns.

This means an agent can:

  • Generate a Figma design using your components
  • Then generate the code implementation that references the exact same components
  • Maintain a 1:1 relationship between design and code

This is the design-to-code roundtrip that the industry has been chasing for years. It has always broken down because the design and the code were disconnected artifacts. Code Connect plus the use_figma tool creates a shared data layer that both sides reference.
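For context, a Code Connect mapping file is itself just a small declarative TypeScript file. The sketch below assumes the current @figma/code-connect API (figma.connect plus prop mappers like figma.string and figma.enum); the import path, prop names, and node URL are all placeholders:

```typescript
// Button.figma.tsx — a sketch of a Code Connect mapping file.
// Import path, prop names, and the node URL are placeholders.
import figma from "@figma/code-connect";
import { Button } from "./src/components/ui/Button";

figma.connect(Button, "https://www.figma.com/design/FILE_KEY/Design-System?node-id=1-23", {
  props: {
    // Map Figma component properties to code props.
    label: figma.string("Label"),
    variant: figma.enum("Variant", {
      Primary: "primary",
      Secondary: "secondary",
    }),
  },
  // The snippet an agent (or Dev Mode) surfaces for this component.
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
});
```

Once mappings like this are published, an agent resolving a Button instance in Figma gets back the real component, its real props, and a canonical usage snippet.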

Pricing and Access

Figma is using a familiar playbook here — free beta to drive adoption, followed by usage-based pricing.

During Beta (Now)

  • The use_figma tool is free to use
  • Standard rate limits apply (matching REST API tiers)
  • Write-to-canvas operations are currently exempt from rate limit caps
  • Starter plan users are limited to 6 MCP tool calls per month
  • Dev and Full seat holders get Tier 1 REST API rate limits

Post-Beta (Pricing TBA)

  • Figma has confirmed this will become a usage-based paid feature
  • Pricing details have not been announced
  • The concern for teams running heavy agent workflows: hundreds of tool calls per session could add up quickly under metered pricing
  • Figma currently charges per-seat — adding usage-based API fees on top creates a dual pricing model
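To see why metered pricing worries heavy users, run the back-of-envelope numbers. The per-call price below is purely hypothetical — Figma has announced no pricing — but the structure of the maths is the point:

```typescript
// Back-of-envelope cost model for metered MCP usage.
// The per-call price is purely hypothetical; Figma has announced no pricing.
const hypotheticalPricePerCall = 0.002; // USD per tool call, invented for illustration
const callsPerSession = 300; // a heavy agent session
const sessionsPerDay = 10; // a small team's combined usage
const workingDaysPerMonth = 22;

const monthlyCalls = callsPerSession * sessionsPerDay * workingDaysPerMonth;
const monthlyCost = monthlyCalls * hypotheticalPricePerCall;

console.log(monthlyCalls); // 66000
console.log(monthlyCost.toFixed(2)); // "132.00"
```

Even at a fraction of a cent per call, tens of thousands of calls a month becomes a line item — which is exactly why usage data gathered during the free beta is worth having.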

My take: get in during the beta. Build your Skills, test the workflows, understand the value before pricing kicks in. If your team uses this heavily during the free period, you will have real data to evaluate whether the paid tier is worth it when it arrives.

The Competitive Picture

Figma is not making this move in isolation. The entire design tool landscape is shifting toward agentic workflows.

OpenAI's Codex design lead publicly endorsed this announcement, noting the capability to "find and use all the important design context in Figma." OpenAI and Figma have been deepening their partnership — the generate_figma_design tool was a joint effort, and Codex was one of the first clients to support the new use_figma tool.

Lovable, the AI-native frontend builder, has been positioning itself as the "first agentic AI canvas." Vercel's v0 generates UI from prompts. Bolt and other tools generate entire applications.

But none of these tools solve the design system problem. They generate code from scratch every time. Figma's approach is different — it gives agents access to the decisions your team has already made. The output is not generic. It is yours.

This is a significant moat. If your design system lives in Figma and your agents can read and write to it, there is little reason to use a separate AI design tool that starts from zero every time.

What This Means for Design System Quality

"Messy systems will produce messy outputs. Mature, well-organized systems will produce something genuinely useful."

This is the line from the announcement that every design system team needs to internalise. The use_figma tool does not fix bad design systems. It amplifies whatever is already there.

If your component library has inconsistent naming, missing variants, detached styles, and hard-coded values instead of tokens — the agent will produce output that reflects all of those problems. Garbage in, garbage out.

If your library is well-structured with proper variables, consistent naming, complete variant sets, and documented usage patterns — agents will produce output that is genuinely production-ready.

This creates a concrete ROI for design system investment that was always hard to quantify. Before, the argument for a clean design system was consistency and designer efficiency. Now, the argument is: your design system directly determines the quality of AI-generated output across your entire product team.

What This Means for Small Teams and Business Owners

If you are a business owner in New Zealand running a product or a web application, here is why you should care about this.

The bottleneck in software development has always been the handoff between design and development. Designers create mockups. Developers interpret them. Things get lost in translation. The design says 16px padding; the developer eyeballs it at 14px. The design uses a specific shade of blue; the developer uses a similar but different shade. Multiply these small discrepancies across a hundred screens and you get an inconsistent product.

This update does not eliminate designers or developers. But it dramatically reduces the friction between them. If an agent can read your Figma design system and produce both the design and the code from the same source of truth, the gap between "what was designed" and "what was built" shrinks to nearly zero.

For small teams especially, this matters. You might not have a dedicated designer. You might have a developer who also does design, or a founder who uses Figma to sketch ideas that a freelancer then builds. In both cases, having an agent that understands your design system and can produce consistent output means less revision, less back-and-forth, and faster time to market.

Practical Steps to Get Started

If you want to take advantage of this now while it is free, here is what I would recommend:

  1. Set up the Figma MCP server with your preferred coding agent (Claude Code, Cursor, Codex, or any supported client). The setup takes about five minutes.
  2. Clean up your design system. Make sure your components use variables instead of hard-coded values. Name things consistently. Remove unused or duplicate components.
  3. Start with the /use-figma foundational skill. This gives your agent the baseline understanding of Figma's canvas model.
  4. Write a team-specific Skill. Start simple — a markdown file that lists your naming conventions, your spacing scale, and your component hierarchy. You can iterate from there.
  5. Test the workflow on a low-stakes project. Ask your agent to create a new page or component from your existing library. Evaluate the output.
  6. Set up Code Connect if you have not already. Mapping your Figma components to your codebase components is what makes the full roundtrip work.
  7. Iterate on your Skills. Every time the agent produces output that does not match your expectations, add a rule to your Skill file. Over time, the agent gets better because your instructions get better.

The Bigger Picture

Figma's canvas has been read-only for agents until now. Agents could inspect designs and generate code from them, but they could not contribute to the design itself. That wall is gone.

The canvas is now a shared workspace where humans and agents collaborate. Designers set the rules through the design system and Skills. Agents execute within those rules. The output is native Figma — components, variables, auto layout — not a flat rendering that needs to be rebuilt.

We are at the very beginning of this. The use_figma tool is in beta. The Skills system is new. The community skill library is just starting. But the architecture is sound and the direction is clear.

Design tools are becoming agent-native. The teams that invest in clean design systems, well-written Skills, and tight Code Connect mappings will be the ones that move fastest. Everyone else will still be copying hex codes from a screenshot.

Need Help Setting This Up?

At Tally Digital, we build on the Figma-to-code pipeline every day. We are already running the use_figma MCP tool across our client projects and writing custom Skills for teams that want to accelerate their design-to-development workflow. If you want to get this set up for your team — or if your design system needs a cleanup before agents can use it effectively — book a call and we will scope it out.
