Coding for designers

Something not making sense? For help or questions, ping @mikestilling in Slack.

Course / Lesson 5 / Do more with AI
30m

Augment your new skillset using AI

Now that you understand the workflow, tools, and code fundamentals, let's go full send. In this final lesson, we'll use AI to 100x all the things. 😂

AI in the workflow

Disclaimer: While the first four lessons were written by Mike, this lesson was generated entirely by AI, based on those four lessons. He'll be cleaning it up and refining it over time.

If you've made it here, you've already done the hard part. You understand how the workflow fits together—editor, GitHub, deploy. You've written HTML, styled with CSS, animated with JavaScript, and even vibe coded a WebGL effect. That foundation changes everything about how useful AI is going to be for you.

Think of it this way: AI is a multiplier. If you know nothing, 100x of nothing is still nothing. But now that you understand enough—the structure, the terminology, the feedback loop—AI takes that and cranks it way, way up.

What you know: the workflow, the tools, HTML, CSS, JS basics
× AI assistance: generates, explains, and refines code for you
= You, 100x'd: build things you couldn't have imagined building before

This lesson is where it all comes together. We're going to cover:

  • Prompting effectively in Cursor so AI actually does what you want
  • Pulling in Figma assets so your designs make it into code
  • Vibe coding an entire page from scratch using nothing but prompts
  • Tips and gotchas so you don't waste hours on avoidable mistakes

Let's go full send.


Write effective prompts

In Lesson 4, we got our first taste of prompting when we vibe coded a WebGL effect. Now let's dig into what makes a prompt actually good—because the difference between a mediocre prompt and a great one is the difference between "this is broken garbage" and "holy cow, that's exactly what I wanted."

Cursor's Agent mode

Cursor has several ways to interact with AI, but the one we'll focus on is Agent mode. When you open Cursor's chat panel (the sidebar on the right, or ⌘L), you can type prompts that tell the AI what to do. In Agent mode, the AI can:

  • Read your files to understand your codebase
  • Write and edit code directly in your files
  • Run Terminal commands like npm start
  • Create new files when needed

It's like having a junior developer sitting next to you who's read every piece of documentation ever written—but who still needs you to tell them what to build and where to put it.

Give AI context with @ references

The single most impactful thing you can do to improve AI output is give it context. In Cursor, you do this with @ references. These let you point the AI at specific parts of your codebase so it understands your patterns and conventions.

Here are the most useful ones:

  • @filename — reference a specific file (e.g. @first-page.webc)
  • @foldername — reference an entire folder (e.g. @src/assets/css/)
  • @codebase — let AI search your entire codebase for relevant context

For example, if you want AI to create a new page that matches your existing style, referencing an existing page gives it a working template to follow.

Good prompts vs. bad prompts

The quality of AI output is directly proportional to the quality of your input. Vague prompts produce vague results. Specific prompts produce specific results.

Vague prompt: "Make a nice landing page"
AI has to guess everything.

vs.

Specific prompt: "In @first-page.webc, add a new section below the existing one. Use helm-section and helm-container. Create a 3-column grid of cards with rounded-6 borders, border-edge color, and bg-highlight backgrounds. Each card should have an icon placeholder, a bold title, and a description in text-neutral-600."
AI knows exactly what to do.

The specific prompt works better because:

  1. It tells AI which file to edit
  2. It references existing components in the codebase
  3. It describes the layout (3-column grid of cards)
  4. It specifies styling details using utility classes AI already has context for
  5. It outlines the content structure (icon, title, description)

You don't need to specify every single class—just enough for AI to understand the vibe. It'll fill in the gaps. And if it doesn't get it right, that's what iteration is for.

Iterative prompting

Here's the thing most people get wrong: they try to nail it in one prompt. That almost never works. Instead, treat AI like you'd treat a conversation with a collaborator. Start broad, then refine.

A typical flow looks like this:

  1. First prompt: describe the general structure and layout
  2. Review the output: save, check in the browser
  3. Follow-up prompt: fix what's off
    "make the gap between cards 24px instead of 16px" or "change the headline to text-48"
  4. Repeat until it looks right

This iterative loop is the same feedback loop from Lesson 3—write a little, save, preview, adjust. The only difference is that AI is writing the code for you now.

Pro-tip: Prompting is closer to being a creative director than an engineer. You're describing the vision, reviewing the work, and giving feedback. The AI is the one actually typing the code. The better you are at articulating what you want, the better the output. Sound familiar?

A practical example

Let's say we want to add a new section to first-page.webc with a centered headline, a subhead, and a row of three feature cards below it. Here's how I'd prompt this in Cursor:

In @first-page.webc, add a new helm-section and helm-container below the existing section. Center a headline and subhead at the top, then add a 3-column grid of cards below. Each card should have a rounded-6 border with border-edge, bg-highlight background, and padding of 24. Inside each card, add a bold title and a short description in text-neutral-600. Use the existing styling patterns from this file.

Notice how I'm referencing the file, using component names AI can find in my code, and describing the design using the same utility class language we learned in Lesson 3. This is exactly why learning the fundamentals first matters. You can now speak the language.
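
To make the review step concrete, here's a rough sketch of the kind of markup a prompt like that might produce. This is hypothetical—the component names and utility classes come from the prompt itself, and the card copy is placeholder text. Your actual output will differ:

```html
<!-- Hypothetical sketch of output for the prompt above. -->
<!-- helm-section / helm-container and the utility classes are the ones named in the prompt; -->
<!-- headings and descriptions are placeholders. -->
<helm-section>
  <helm-container>
    <div class="text-center">
      <h2 class="font-bold">Headline</h2>
      <p class="text-neutral-600">Subhead goes here.</p>
    </div>
    <div class="grid grid-cols-3 gap-16">
      <div class="rounded-6 border border-edge bg-highlight p-24">
        <div><!-- icon placeholder --></div>
        <h3 class="font-bold">Card title</h3>
        <p class="text-neutral-600">A short description.</p>
      </div>
      <!-- …two more cards with the same structure -->
    </div>
  </helm-container>
</helm-section>
```

Having a mental sketch like this makes the "review the diff" step much easier—you can quickly spot when AI has invented a class or nested something strangely.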


Pull in design assets

So you know how to prompt and you've got a feel for the code. But what about when you have a specific design in Figma that you want to recreate? Let's cover two approaches: describing designs in prompts, and feeding Figma directly into Cursor.

Approach 1: Describe it and export assets

The most straightforward way to get a Figma design into code is to describe it to AI in your prompt and manually export any images or assets you need.

Since you now know the basics of HTML/CSS and utility classes, you can describe a Figma design using the same language:

  • "A 2-column grid with 16px gap" instead of "two things next to each other"
  • "text-14 text-neutral-600 with 160% line-height" instead of "small gray text"
  • "rounded-6 border border-edge" instead of "rounded rectangle with a border"

For images and assets, export them from Figma (select the layer, then use the Export settings in the right-hand panel) at 2x resolution as .png or .jpg. Drop them into your project's /src/assets/images/ folder and reference them in your prompt:

In @first-page.webc, add an image inside the visual div. The image file is at /assets/images/util/my-export.png. Make it fill the container with w-full h-auto and add the appropriate width and height attributes.

Pro-tip: Keep your exported assets organized. Create subfolders in /src/assets/images/ for different pages or features. It'll save you from a cluttered mess later.
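
For reference, the markup that prompt should produce looks something like this. The path matches the example export above; the width and height values here are illustrative (since the asset was exported at 2x, the attributes should be half its pixel dimensions):

```html
<!-- Hypothetical output: an exported Figma asset placed in the page. -->
<!-- width/height (half the 2x export's pixel size) let the browser reserve space -->
<!-- and avoid layout shift; w-full h-auto makes the image scale to its container. -->
<div class="visual">
  <img
    src="/assets/images/util/my-export.png"
    alt="Describe the visual for screen readers"
    width="640"
    height="480"
    class="w-full h-auto"
  />
</div>
```

If the image looks soft on screen, check that you actually exported at 2x—a 1x export stretched to fill a container is the usual culprit.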

Approach 2: Feed Figma into Cursor

If you'd rather skip the manual description and let AI see the design, you have a couple options.

Screenshots: The simplest method. Take a screenshot of your Figma design and drag it directly into Cursor's chat panel. AI can analyze the image and generate code that matches the layout, colors, and spacing it sees.

Figma MCP: For a tighter integration, you can connect Figma to Cursor using an MCP (Model Context Protocol) server. This lets AI pull design data—like component structures, styles, and layer names—directly from your Figma file without screenshots. Setting up MCP is a bit more involved, but once it's configured, it's a little more reliable.

Your design (Figma) → Get it to Cursor (screenshot into chat, describe the design, or use the Figma MCP integration) → AI writes the code (Cursor)

My recommendation: start with screenshots. They require zero setup and work great for 80% of what you'll need. As you get more comfortable, you can explore MCP if you want a tighter Figma-to-code pipeline.


Vibe code a full page

Time to put everything together. We're going to vibe code an entire page in protohelm using only prompts. No hand-writing code. Pure vibes.

The goal here isn't to produce a pixel-perfect page—it's to show you how quickly you can go from zero to something tangible using AI and the foundations you've built.

Step 1: Create the page

Our first prompt will set up the new page with the right boilerplate. Since AI has access to our codebase, we can reference existing files for it to follow:

Create a new page at @src/vibes.webc. Use the same front matter structure as @src/first-page.webc but update the title, seoTitle, and ogTitle to "Vibes page". Include helm-nav, a main element, and helm-footer.

After AI creates the file, save it and check the browser at localhost:8080/vibes/ — you should see a blank page with the nav and footer. That's our canvas.

Step 2: Build the hero section

Now let's add a hero section. We'll describe the layout using the terminology and components we've learned:

In @src/vibes.webc, inside the main element, add a hero section using helm-section and helm-container. Center-align the content with text-center and add py-96 px-16 with border-x border-edge on the container. Add a large headline (text-48/110, tracking-tight, text-balance, max-w-640 mx-auto), a subhead below it in text-18 text-neutral-600 with a max-width of 480px and mx-auto, and a call-to-action link styled as a button with bg-brand-600 text-neutral-0 rounded-full px-24 py-12 below the subhead.

Save, preview. If the spacing feels off or the text sizing isn't right, just follow up:

Add 16px of margin between the headline, subhead, and button. Also bump the headline up to text-56.

See how this works? Broad structure first, then dial in the details. This is exactly how we designed in Lesson 3, just faster.
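
If you're curious what the result of those two prompts might look like in the file, here's a rough sketch. It's hypothetical—the structure comes from the first prompt, and the 16px margins and text-56 headline come from the follow-up—but your generated markup will differ:

```html
<!-- Hypothetical hero markup after both prompts above. -->
<!-- Classes are the ones named in the prompts; headline and copy are placeholders. -->
<helm-section>
  <helm-container class="text-center py-96 px-16 border-x border-edge">
    <h1 class="text-56/110 tracking-tight text-balance max-w-640 mx-auto">
      Big bold headline
    </h1>
    <p class="text-18 text-neutral-600 max-w-480 mx-auto mt-16">
      A short supporting subhead.
    </p>
    <a href="#" class="inline-block bg-brand-600 text-neutral-0 rounded-full px-24 py-12 mt-16">
      Call to action
    </a>
  </helm-container>
</helm-section>
```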

Step 3: Add more sections

Keep going. Add a feature grid, a testimonial, a full-bleed image—whatever you want. Each new section is just another prompt:

Below the hero, add another helm-section with a helm-container. Create a 3-column grid (gap-16) of feature cards. Each card should have bg-foreground, border border-edge, rounded-6, and p-24. Inside each card put a short bold title and a 2-line description in text-14 text-neutral-600. Give me 6 cards total (so 2 rows). Use border-x border-edge on the container.

Step 4: Add animation

Remember the GSAP animation from Lesson 4? Let's have AI add that, too:

In @src/vibes.webc, add a page load animation using GSAP. Animate the hero headline, subhead, and button in with a stagger. Also animate the feature cards in with a stagger when the page loads. Use the same pattern from @src/first-page.webc — hide the sections with invisible/noscript:visible classes and use SplitText for the text elements.
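
Under the hood, the pattern AI is being asked to reproduce looks roughly like this. This is a sketch, not the exact code it will generate—the selectors, durations, and eases here are illustrative, and the real values should come from the pattern in first-page.webc:

```html
<script>
  // Hypothetical sketch of the page-load animation described above.
  // Selectors and timing values are placeholders.
  gsap.registerPlugin(SplitText);

  // Split the headline into lines so each line can animate in separately.
  const split = new SplitText(".hero h1", { type: "lines" });

  const tl = gsap.timeline({
    onStart: () => {
      // Reveal the sections that were hidden with the `invisible` class.
      document.querySelectorAll(".invisible").forEach((el) =>
        el.classList.remove("invisible")
      );
    },
  });

  tl.from(split.lines, {
      yPercent: 100,
      opacity: 0,
      stagger: 0.1,
      duration: 0.8,
      ease: "power3.out",
    })
    // Overlap the subhead/button entrance with the end of the headline animation.
    .from(".hero p, .hero a", { y: 16, opacity: 0, stagger: 0.1 }, "-=0.4")
    .from(".card", { y: 24, opacity: 0, stagger: 0.08 }, "-=0.3");
</script>
```

Knowing the shape of this pattern helps you debug: if the page stays blank, the `invisible` class probably never got removed; if text pops in without the line-by-line effect, SplitText likely isn't registered.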

At this point, you've built a multi-section, animated page entirely through prompts. Save your work, preview it, and push it to GitHub.

1. Prompt: "Add a hero with a headline and CTA..."
2. AI writes code: <helm-section> ...
3. Preview: save and check localhost
4. Refine: "Make the gap 24px..."

This loop—prompt, review, refine—is your new superpower. It works for simple tweaks and entire pages alike.


Tips and gotchas

Before you go off and start building everything with AI, here are some hard-won tips that'll save you time and frustration.

1. Expect iteration

AI rarely gets everything perfect on the first try. That's normal. Think of the first output as a rough draft. Two to three follow-up prompts usually get you where you want to be. If you're on round ten and it's still not right, try a different approach.

2. Review what AI generates

Don't blindly accept every change. Cursor shows you diffs—the before and after of every edit AI makes. Skim through them. You'll start to recognize when something looks off, even if you can't articulate exactly why. Trust your design eye.

3. Know when to hand-code

Sometimes AI is overkill. Changing text-14 to text-16? Just do it yourself. Swapping a color from neutral-600 to neutral-500? Faster by hand. Save AI for the stuff that would take you more than a minute or two to figure out.

4. Commit to GitHub often

Git is your unlimited undo button. Commit after every meaningful chunk of work. If AI makes a mess of your code and you can't figure out how to fix it, you can always roll back to the last commit. This is genuinely the most important safety net you have.

5. Reference existing code

The single best habit you can build: always reference existing files in your prompts. When AI can see how your codebase is already structured, it follows the same conventions. Without that context, it'll make something up—and it probably won't match.

6. Watch for hallucinations

AI sometimes invents things that don't exist—CSS classes that aren't real, JavaScript APIs that don't work, or component names that aren't in your codebase. If something isn't rendering correctly, this is often why. A quick sanity check of the generated code usually reveals the issue.

Notice: AI models can confidently generate code that looks correct but references nonexistent utility classes, properties, or APIs. If your output looks broken in the browser, double-check the generated class names and attributes against Tailwind's docs or your own codebase.

Course wrap up

If you've actually gotten through all five of these lessons—seriously, congratulations. This stuff is legitimately hard to learn, especially the first time around.

And you just did it.

Let's take a second to appreciate everything you've picked up:

  1. Set up a full development environment from scratch
  2. Learned how Git, GitHub, and deployment workflows fit together
  3. Operated a real JavaScript project with Node.js and npm
  4. Designed and styled interfaces in code using HTML, CSS, and Tailwind
  5. Built animations with CSS keyframes and GSAP
  6. Vibe coded a WebGL effect with AI
  7. Learned to write effective prompts and pull in design assets
  8. Built an entire page from scratch using nothing but AI

That is a massive amount of ground to cover. It's okay if some of it still feels fuzzy. The point was never to become an expert—it was to understand enough so that AI can handle the rest. And now you do.

Keep building
The best way to solidify everything you've learned is to keep making things. Pick a real project—a landing page for something you care about, a portfolio, a weird art experiment—and just start prompting. You'll be amazed how quickly it starts to click.
If you get stuck, have questions, or want to show off what you've made, reach out to @mikestilling in Slack. I genuinely want to see what you create with this stuff.

In 2026, the bar for making incredible things has never been lower. You don't need a CS degree. You don't need years of experience. You just need to understand enough to direct the machine. Go make something cool.

That's all for now