A Deep Dive into LightSpeed’s AI Workflow

Learn about an AI-assisted workflow that links Figma, VS Code, GitHub, and Copilot into one cohesive loop, using custom instructions, prompt libraries, and MCP to give Copilot real context. Ship consistent WordPress work faster with less guesswork.

How the journey began

Earlier this year I found myself in Berlin, immersed in Figma design systems and prepping for a WordCamp talk. I spent weeks buried in design systems, chatting to engineers, and sketching ideas in cafés. A single question kept returning: What if our tools could see our design system, our docs, and our repo structure, use them as context, and then act on them? Back home in Cape Town, we started turning that spark into practice, wiring design, code and our tools together into an AI-assisted workflow until the idea became our everyday way of building.

A cartoon image of a person discovering figma mcp

The benefits of an AI-assisted workflow

AI helps us spend more time on meaningful work by reducing context‑switching and keeping us in the flow. Instead of manually copying colours or hunting through documentation, we can ask Copilot or ChatGPT to fetch relevant information.

Studies show that developers using Copilot experience significant productivity gains and greater confidence in their code. When the AI handles boilerplate, our team can focus on architecture, performance and accessibility. We notice fewer mistakes, smoother reviews and less fatigue after long coding sessions because the AI catches many errors before they reach the pull request stage.

These benefits go beyond speed. AI encourages consistent practices, knowledge sharing and team‑wide learning. When everyone uses the same instructions and prompt templates, the output is more predictable and easier to review.

Research indicates that adoption of Copilot leads to higher job satisfaction and enjoyment in coding. For us, AI acts like an assistant and a coach: it suggests improvements, writes tests and sometimes explains unfamiliar parts of our codebase. This frees us to mentor juniors, explore new tools or tackle technical debt. In short, AI transforms the work from a slog of repetitive tasks into a more collaborative and creative process.


VS Code: Our command centre

VS Code is more than our editor; it’s our command centre. We run every project inside a dev container, so no one has to struggle with mismatched PHP or Node versions. That consistency means we spend less time on environment setup and more time writing code. VS Code’s extensibility also lets us fold design context, review tools and GitHub workflows directly into the editor.
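To make that concrete, a pared-down dev container definition for one of these projects might look like the sketch below; the image, versions and extension list are placeholders rather than our exact configuration.

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch, not our exact setup)
{
  "name": "wordpress-project",
  // Pin PHP so every developer gets the same runtime (version assumed here)
  "image": "mcr.microsoft.com/devcontainers/php:8.2",
  "features": {
    // Add a pinned Node.js toolchain for block and asset builds
    "ghcr.io/devcontainers/features/node:1": { "version": "20" }
  },
  "customizations": {
    "vscode": {
      // Extensions install automatically inside the container
      "extensions": ["GitHub.copilot", "GitHub.copilot-chat"]
    }
  }
}
```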

Our extension stack remains core to this setup, giving us consistent prompts, smarter suggestions and a smooth hand-off from design to code.

A cartoon image of a man working on a laptop with code and design screens

Copilot: Our second pair of hands

GitHub Copilot began life as an autocomplete tool. Today it has grown into a family of assistants. You still get ghost-text suggestions as you type, but you can also open a chat panel to ask for explanations, generate documentation, or ask the AI to refactor code or add a feature across multiple files.

For large‑scale changes there’s a dedicated agent mode. This mode acts like an autonomous peer programmer: it can understand the context of multiple files, perform multi‑file refactorings, write and run tests, migrate legacy code, generate documentation and integrate new libraries.

In agent mode Copilot loops through a plan. It determines which files need editing, suggests code and terminal commands, compiles code, installs packages, runs tests and iterates until errors are resolved. The workflow is transparent—each tool invocation appears in the UI, and you can undo or intervene at any point. For routine, well-defined changes you can stick to the usual edit mode; for bigger, open-ended tasks, agent mode shines. You can also refine the agent’s behaviour with custom instructions to ensure it follows your coding guidelines—we’ll cover custom instructions and how we use them in more detail later.

Why we use it

Copilot shortens the path from intent to code. It applies our standards, drafts tests, and helps explain unfamiliar parts of a codebase. GitHub’s research has linked Copilot to faster delivery and improved focus, which matches our day-to-day experience.

How we use it

We use Copilot for more than snippets:

  • Code assistance: from Gutenberg blocks to Theme JSON tweaks, with our rules in-context.
  • Testing and bug checks: draft unit tests, highlight likely causes, and suggest fixes.
  • Issue write-ups: turn notes into well-scoped GitHub Issues with acceptance criteria.
  • Pull requests: propose summaries and checklists; pair with CodeRabbit for structured review.

Copilot works best when it reads our instructions and project layout. That is where custom instructions and MCP come in.

2 connected puzzle pieces with a design icon on one and a code icon on the other

Custom instructions: How we teach our tools

We treat Copilot custom instructions like onboarding for an AI teammate. They live in the repo so they travel with the code. We created organisation-wide custom instructions containing our coding and styling guidelines. For example, we have defined which WordPress coding standards Copilot should follow, recommended theme.json colour and spacing values, required the use of core blocks wherever possible, outlined the accessibility standards to consider, and specified what inline documentation is required.

We have defined contribution guidelines that map out clear workflow expectations for both Copilot and the team. The instructions define how to use GitHub Issue templates to map out our project work in a consistent manner and how the Pull Request template should be used to write consistent PRs for peer or Copilot review. We use GitHub Projects to track and display the status of issues and pull requests, with their status automatically updated via GitHub Actions.
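We won’t reproduce our full automation here, but a minimal sketch of that kind of workflow, using GitHub’s add-to-project action with a placeholder project URL and token, could look like this:

```yaml
# .github/workflows/add-to-project.yml (illustrative sketch only)
name: Add issues and PRs to the project board
on:
  issues:
    types: [opened]
  pull_request:
    types: [opened]
jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      # Official action that files new items onto a GitHub Project
      - uses: actions/add-to-project@v1.0.2
        with:
          # Hypothetical project URL: replace with your own board
          project-url: https://github.com/orgs/your-org/projects/1
          # A token with project write access, stored as a repository secret
          github-token: ${{ secrets.ADD_TO_PROJECT_TOKEN }}
```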

For every new project, we also configure repository-specific custom instructions in a custom-instructions.md file, which allows us to tune Copilot’s responses to align with the needs of that specific project or repository. These custom instructions apply automatically in VS Code, so we don’t have to repeat them with every prompt.

File-type instructions

Inside .github/instructions/ we have .instructions.md files with detailed guidance for specific file types. The coding standards instructions remind Copilot to escape all dynamic output, use meaningful names and document functions. The theme.json instructions encourage use of design tokens and discourage inline styles. We also have instructions for PHP block registration and HTML templates. When you edit a matching file in VS Code, Copilot automatically applies these rules.
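To give a flavour of the format, a trimmed-down instruction file for theme.json edits might look like the sketch below; the rules shown are illustrative rather than our full set.

```markdown
---
# .github/instructions/theme-json.instructions.md (illustrative example)
applyTo: "**/theme.json"
---

- Use the design tokens defined under settings.color.palette and settings.spacing;
  never hard-code hex values or pixel spacing in templates.
- Prefer fluid typography via settings.typography where the design allows it.
- Do not introduce inline styles; express styling through theme.json styles.
```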

Prompt templates and libraries

In .github/prompts/ we store reusable agent prompts. For example, generate-pattern.prompt.md provides a template to generate a pricing table pattern using our theme tokens. There’s also an accessibility audit prompt that asks the AI to review a template for missing alt text, heading order and contrast, and a code review prompt that instructs the AI to check naming conventions and code quality. The README explains how to add new prompts and emphasises clear filenames and YAML frontmatter.
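As an illustration, the accessibility audit prompt could be structured roughly like this; the wording is a simplified stand-in for the real file.

```markdown
---
# .github/prompts/accessibility-audit.prompt.md (simplified stand-in)
mode: agent
description: Audit a template or pattern for common accessibility issues
---

Review ${file} for accessibility problems. Check that every image has
meaningful alt text, heading levels are sequential, interactive elements
have accessible names, and colour combinations meet WCAG AA contrast.
Report the findings as a checklist with suggested fixes.
```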

Issue templates

Our .github/ISSUE_TEMPLATE folder includes patterns for bugs, feature requests and refactoring. The code refactoring template prompts contributors to outline the scope, goals, modularisation plans and performance considerations. It even includes a checklist to ensure code follows naming conventions, removes dead code and updates documentation. This structure helps Copilot’s coding agent understand tasks when we delegate work.
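A condensed, hypothetical version of such a form could look like the following; our real template carries more fields and guidance.

```yaml
# .github/ISSUE_TEMPLATE/refactor.yml (condensed, hypothetical version)
name: Code refactoring
description: Scope and plan a refactor so a teammate or Copilot's coding agent can pick it up
labels: [refactor]
body:
  - type: textarea
    attributes:
      label: Scope and goals
      description: Which files or modules are affected, and what should improve?
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Review checklist
      options:
        - label: Naming follows our coding standards
        - label: Dead code removed
        - label: Documentation updated
```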

Instructions remove ambiguity. Prompts reduce rework. Templates improve delegation. Together, they help Copilot follow our way of working, not the internet’s average.

A developer opens a pattern file. Copilot sees the file-type instruction and suggests code that aligns with our naming, accessibility rules, and token usage. If the task is larger, the developer grabs a prompt template, fills in the blanks, and uses Chat to plan steps.
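To make that concrete, a suggestion shaped by those rules might look something like this sketch; the slug, text domain and spacing token are placeholders rather than our actual design system.

```php
<?php
/**
 * Illustrative sketch of a block pattern registered the way our
 * instructions ask for: a namespaced slug, translatable strings,
 * escaped output and theme.json presets instead of raw values.
 */
add_action( 'init', function () {
	register_block_pattern(
		'lightspeed/pricing-table', // Hypothetical namespace/slug.
		array(
			'title'      => __( 'Pricing table', 'lightspeed' ),
			'categories' => array( 'call-to-action' ),
			'content'    => '
				<!-- wp:group {"style":{"spacing":{"padding":{"top":"var:preset|spacing|40"}}}} -->
				<div class="wp-block-group" style="padding-top:var(--wp--preset--spacing--40)">
					<!-- wp:heading {"level":2} -->
					<h2 class="wp-block-heading">' . esc_html__( 'Simple pricing', 'lightspeed' ) . '</h2>
					<!-- /wp:heading -->
				</div>
				<!-- /wp:group -->',
		)
	);
} );
```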

Tips and tricks for better custom instructions

Through trial and error we’ve refined our use of instructions and prompts. Here are some insights:

  • Write instructions like a mentor – Our custom-instructions.md reads like onboarding notes, reminding the AI (and new devs) about WordPress coding standards, design tokens and accessibility. Keep instructions declarative and project‑agnostic so they apply across repos.
  • Use file-type contexts – Copilot automatically matches .instructions.md files to file patterns via the applyTo field. For example, instructions for **/*.php include translation, security and pattern registration guidelines. This ensures that when you edit a pattern file, Copilot suggests using register_block_pattern() and the correct naming conventions.
  • Create task‑specific prompts – Our prompt library includes generation, audit and review prompts. For instance, the accessibility audit prompt lists common issues to check (missing alt attributes, heading order, contrast). We treat prompts as templates: copy, paste, customise specifics (e.g., file names), then run. When making new prompts, we include YAML front matter with a descriptive title and mode.
  • Leverage issue templates – When delegating tasks to Copilot’s coding agent, we use issue templates to define scope, goals and acceptance criteria. Our refactoring template asks for scope and goals, suggests using automated tools like linters and Copilot, and includes a review checklist. Clear issues lead to better AI results.

  • Update instructions as standards evolve – As WordPress evolves or our design system changes, we update the instruction files. Because Copilot reads them in real time, new standards propagate quickly.

A man conversing with a robot

MCP: the context bridge your IDE was missing

The Model Context Protocol (MCP) is an open standard that lets an AI client securely talk to trusted data and tools. Think of it as an adapter: servers expose structured resources and safe actions; clients request what they need and show their work. MCP reduces guessing by standardising how assistants pull real context—design specs, docs, tests—into the IDE (see the GitHub Blog).

How we use it

We run Figma’s Dev Mode MCP server on the desktop. In VS Code, Copilot can then fetch exact variables, component names, and styles from a selected frame. That context combines with our repo instructions, so generated code uses our tokens and semantics rather than hard-coded guesses. The result is less tweaking and fewer design mismatches.
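On the VS Code side this is a small piece of configuration. The sketch below assumes the local SSE endpoint Figma’s desktop app has exposed for its Dev Mode MCP server; check Figma’s current documentation for the exact URL before relying on it.

```jsonc
// .vscode/mcp.json (minimal sketch; confirm the URL against Figma's docs)
{
  "servers": {
    "figma-dev-mode": {
      // The Figma desktop app serves its Dev Mode MCP server locally;
      // the port and path here are assumptions to verify for your version.
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```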

What it unlocks for us

MCP also meshes with Copilot’s agent workflows. In the IDE, agent mode plans steps, edits files, installs packages, runs tests, and loops until checks pass—while keeping each action visible. On GitHub, the coding agent can pick up a well-scoped GitHub issue, work in a protected space, and open a reviewable pull request. Together, MCP and agent mode bring design context, project rules, and execution into one flow.

A robot with a design screen on one hand and a development screen on the other

ChatGPT in our workflow

ChatGPT complements Copilot. Copilot lives in the editor; ChatGPT shines around it. We use it for quick research, first-draft writing, and structured summaries. When working on a blog post or internal playbook, we paste in notes and let ChatGPT organise them into a coherent draft. It turns long Slack or email threads into action lists, highlights what matters in lengthy chains, converts small datasets—like Figma design variables—into clean theme.json snippets, and refines the language in our instruction files.

What really elevates these tasks is ChatGPT’s agent mode. In agent mode, ChatGPT uses its own virtual computer, combining a visual browser with a text browser and a terminal, to navigate websites, run code and assemble deliverables like slides or spreadsheets; it can also connect to apps like Gmail and GitHub. We lean on it for short, scoped missions: gathering context from our repos and documentation to draft configuration files such as an initial instructions.md or other .github settings, assembling a concise brief from specific docs or code files, browsing articles and summarising key points when researching a post, or triaging a connected inbox to extract the important bits and answer questions from the relevant threads. The agent narrates each step and asks for confirmation before taking an action, so we review outputs before anything lands in GitHub.

A few common micro-workflows:

  • Turn a messy thread into tasks and a draft update.
  • Ask targeted questions about a repo and get linked answers.
  • Review Figma docs and produce developer notes.
  • Transform variable lists into theme.json, as in the sketch below.
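For example, a short list of colour variables pulled from Figma might come back as a theme.json fragment like this one; the slugs and hex values are invented for illustration.

```json
{
  "version": 3,
  "settings": {
    "color": {
      "palette": [
        { "slug": "primary", "color": "#1a1a2e", "name": "Primary" },
        { "slug": "accent", "color": "#e94560", "name": "Accent" },
        { "slug": "surface", "color": "#f5f5f5", "name": "Surface" }
      ]
    }
  }
}
```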

This keeps research, writing, and light automation close to the work, without pulling developers out of flow.

4 circles with icons depicting a workflow

Models and choosing the right model

Copilot gives you a menu of models, each with different strengths. Model choice affects response quality, latency, and how much code context the model can juggle. GitHub’s comparison groups them by task—general-purpose coding and writing, fast help for simple edits, deep reasoning and debugging, and working with visuals like diagrams or screenshots. Pick for the task, not the brand name. 

Our practice mirrors that guidance. We default to a general-purpose model for everyday coding and Chat. We switch to deeper-reasoning or larger-context models when a change spans many files or architectural edges. For quick lint fixes, refactors, or test scaffolds, we use a faster model to reduce latency. When a task involves UI screenshots or diagram reasoning, we choose a vision-capable model. We also keep an eye on usage: some models consume more of your monthly allowance due to premium request multipliers. 

Our rule of thumb:

  • Use the default Copilot model for everyday coding and Chat.
  • Switch to a deeper-reasoning model when you must reason across many files.
  • Prefer faster models for quick refactors, lint fixes, or test scaffolds.
  • Pick a vision-capable model for diagrams, screenshots, or UI analysis.
  • Watch premium multipliers so heavy tasks don’t burn your allowance.

2 connected squares with rounded corners, one depicting design, the other development

Continuously learning

We treat learning as part of the job, not an after-hours extra. Each week we block time on learn.microsoft.com and focus on GitHub and Copilot skills. The portal’s structured paths, sandboxes, and quizzes make progress steady and practical.

We fold lessons back into our workflow the same day. Notes become updates to custom-instructions.md, prompt templates, and issue checklists. We test new techniques in Codespaces, pair them with Copilot, and capture what worked in PR descriptions.

If you want a starting point, pick a module or two from the GitHub and Copilot learning paths on learn.microsoft.com; they map directly to daily tasks like writing prompts, reviewing code, and planning refactors.

I have set the objective for our developer team to work towards getting GitHub Certified. We have already seen significant benefits in the team’s GitHub skills as a result of preparing for the GitHub Certification exams. Our team feel more confident in their abilities, are producing higher-quality work and have demonstrated improved productivity. Combined with a GitHub Copilot Business subscription for each developer, it is one of the best investments we have ever made.


What’s next for LightSpeed

We’re excited about the next steps for LightSpeed. We’re wiring tests into the loop via MCP, and we plan to integrate Playwright and Jest with BrowserStack’s AI agents so that accessibility and regression checks run as the agent writes code. We’re also adding measurement: using GitHub’s Copilot Metrics API, we’ll track suggestion acceptance and time saved on patterns, refactors, and bug fixes to see where AI delivers the most value. Finally, we’re broadening integrations—because MCP is open, we’re exploring servers for our CMS to expose content schemas, as well as hooks for testing frameworks, turning the tools we rely on into first-class context for the agent.


Final thoughts

AI has not replaced our craft. It has removed friction that used to slow us down. By wiring Figma, VS Code, GitHub, Copilot, and clear instructions together, we have built a workflow that feels modern and humane. MCP gives Copilot real context. Instructions capture our standards. ChatGPT fills the gaps around research and writing. The result is simple: less guessing, more building. If you are starting this journey, begin by capturing what your team already knows—your standards, your tokens, your patterns—and let your tools use that knowledge.

If you’re thinking about introducing AI to your workflow, start now by capturing and organising your team’s knowledge and content — coding standards, design tokens, workflow rules — into dedicated GitHub Copilot Spaces, which will empower your team to do the following:

  • Get more relevant, specific answers from Copilot.
  • Enable your team to stay in flow by collecting what they need for specific tasks in one place.
  • Reduce your repeated questions by sharing knowledge within your team.
  • Support onboarding of new staff and ongoing reuse with self-service context that lives beyond chat history.

We have already seen how Copilot Spaces have reduced questions to our lead developer and sysadmin. The payoff is not just faster code and shared knowledge, but a happier, more focused team.

A silhouette of a man with a lightbulb above his head, with the Figma logo on one side of it and a code symbol on the other