AI-Assisted Product Design Sprint (WIP)

An 11-day sprint exploring how AI coding tools are changing the way product designers prototype and test ideas.

The goal was to understand how designers can move from concept to a functional prototype using AI-assisted tools, and what role these tools might play in the future design workflow.

As a vehicle for the experiment, I built MoveBetter, a reimbursement-based fitness benefit wallet that allows employees to submit receipts against a monthly movement budget.

During the sprint I defined the product concept, flows, and information architecture, and built the working prototype using AI coding tools. The experiment also included tool exploration and quick usability tests with three participants.

Role: Product Designer
Duration: 11 days

The Question I Wanted to Explore

Over the past year, AI tools have started to change how digital products can be built. Platforms like Replit, Cursor, and Lovable promise that designers can move beyond static prototypes and create functional applications without engineering support.

At the same time, it is still unclear how these tools will fit into a designer’s everyday workflow.

For the past decade, tools like Figma have shaped how designers prototype and collaborate. With AI-assisted coding entering the picture, I became curious about what the next evolution of that workflow might look like.

Rather than focusing on a specific tool, I wanted to understand how this shift might change the design process itself.

I explored three questions:

  • How quickly can a designer move from concept to a working prototype using AI coding tools?

  • How does prompting influence product structure and interaction design?

  • Where does product thinking still play a critical role in an AI-assisted workflow?

To explore this, I ran an 11-day design sprint, using AI coding tools as my primary interface for building a small product from scratch. The structure was based on a 10-day learning sprint generated with ChatGPT, which I adapted as the experiment evolved.

The goal was not simply to build a prototype, but to better understand how these tools affect the way designers work.


The Product Vehicle

To explore these questions, I needed a small but realistic product concept that could be built within the constraints of the sprint.

I designed MoveBetter, a reimbursement-based fitness benefit wallet. Employees receive a monthly movement budget from their employer and can submit receipts for activities such as fitness classes, equipment, coaching, races, and recovery services.

The concept mirrors real workplace benefit structures and provided a realistic scenario for designing flows and edge cases.

The goal was not to build a full product, but to create a practical test environment for exploring AI-assisted product design workflows.

Constraints

The experiment was intentionally designed as a short sprint to simulate the kind of rapid exploration designers often do when evaluating new tools. The constraints were:

  • 11-day sprint — moving quickly from concept to working prototype

  • AI coding tools as the primary interface — building directly instead of designing static screens

  • No engineering support — all exploration and implementation happened within AI tools

  • Learning while building — understanding how prompting shapes outcomes

  • Free-tier tool access — runtime and token limits shaped the pace of development

  • Quick user tests with three participants

Process

With the constraints defined, the next step was to see how far a product concept could be pushed using AI coding tools within a short sprint.

Rather than following a traditional design workflow, the sprint combined product thinking, prompting, and rapid iteration inside AI coding environments.


Phase 1: Defining the Product Slice

Before building anything, I defined a small but realistic product scope.

Instead of designing a full application, I focused on a single slice: a reimbursement flow where users can:

  • view their movement budget

  • submit receipts for eligible activities

This phase focused on defining the concept, mapping the information architecture, and identifying the minimum interactions needed for the prototype to feel believable.
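To give the slice a concrete shape, the budget-and-receipts structure described above can be sketched as a small data model. This is an illustrative sketch, not the prototype's actual code; all names and shapes are assumptions.

```typescript
// Hypothetical data model for the MoveBetter slice (names are illustrative).
type ReceiptStatus = "submitted" | "approved" | "reimbursed" | "rejected";

interface Receipt {
  id: string;
  amount: number;   // receipt total, in the budget's currency
  category: string; // e.g. "fitness class", "equipment", "coaching"
  status: ReceiptStatus;
}

interface Wallet {
  monthlyBudget: number; // employer-funded movement budget
  receipts: Receipt[];
}

// Remaining budget counts only receipts that were approved or reimbursed;
// pending submissions do not yet reduce the visible balance.
function remainingBudget(wallet: Wallet): number {
  const spent = wallet.receipts
    .filter(r => r.status === "approved" || r.status === "reimbursed")
    .reduce((sum, r) => sum + r.amount, 0);
  return Math.max(0, wallet.monthlyBudget - spent);
}
```

Even a model this small forces the edge-case questions the phase was about: when does a receipt count against the budget, and what happens when it is rejected?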

Phase 2: Learning How to Work With the AI

Working with AI coding tools required a different approach than traditional design tools.

Instead of designing screens, I translated product decisions into clear prompts that guided both interface and logic.

Part of this phase involved testing different AI coding environments to understand how they behaved when building a real product.

A few patterns emerged quickly:

  • AI defaults to generic solutions.
    Broad prompts produced familiar product patterns.

  • Smaller instructions produced better results.
    Breaking work into precise steps helped maintain product structure.

  • Different tools excel at different things.

For example:

  • Replit maintained stronger context and often generated more complete flows.

  • Lovable produced quick UI outputs but required additional prompting to maintain structure.

  • Framer felt better suited for landing pages than product flows.

This phase became less about generating screens and more about learning how to steer the AI so the product stayed coherent.

Phase 3: Making the Prototype Believable

AI-generated interfaces can look convincing, but they often lack realistic behavior.

To make the prototype usable, I introduced what I called a “believability layer.”

Instead of connecting to a backend, I created simulated states so the interface could reflect:

  1. new user

  2. uploading receipts

  3. reimbursement statuses

This required significant debugging and experimentation. It quickly became clear that believable prototypes need more than good-looking screens — they need to behave like real products.
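The simulated states above can be sketched as a set of named scenarios the interface reads from instead of a backend. This is a minimal illustration of the idea; the scenario names, fields, and copy are assumptions, not the prototype's actual implementation.

```typescript
// Illustrative "believability layer": named scenarios that stand in for
// backend responses so the UI can render realistic product states.
type ScenarioName = "newUser" | "receiptUploaded" | "reimbursementPending";

interface SimulatedState {
  budgetRemaining: number; // movement budget left this month
  receiptCount: number;    // receipts the user has submitted
  banner: string;          // short status message the UI renders
}

const scenarios: Record<ScenarioName, SimulatedState> = {
  newUser: {
    budgetRemaining: 100,
    receiptCount: 0,
    banner: "Welcome! Your movement budget is ready.",
  },
  receiptUploaded: {
    budgetRemaining: 100,
    receiptCount: 1,
    banner: "Receipt uploaded - pending review.",
  },
  reimbursementPending: {
    budgetRemaining: 70,
    receiptCount: 1,
    banner: "Your reimbursement is on the way.",
  },
};

// The prototype reads whichever scenario is active, so screens behave
// like a live product without any server.
function loadScenario(name: ScenarioName): SimulatedState {
  return scenarios[name];
}
```

Switching the active scenario is what lets a tester experience "new user", "just uploaded", and "pending reimbursement" as distinct, believable moments.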

During this phase I also conducted quick usability tests to observe how people interpreted the flows.

Phase 4: Product Reframing

As the prototype evolved, it became clear that the product had been framed incorrectly.

Early ideas treated the concept like a fitness app, including elements such as goal tracking.

But the real driver of behavior is financial reimbursement, not activity tracking.

Once that became clear, the product was simplified:

  • goal tracking was removed

  • the interface emphasized budget visibility

  • the main interaction became uploading receipts

This shift clarified the product and made the experience significantly simpler.

Key Product Decisions

Removing the Goal Feature

Early iterations included goal tracking.

However, usability tests showed that participants consistently interpreted the product as a reimbursement tool, not a fitness tracker.

One participant summarized it clearly:

“If my company is paying for this, I just want to know how much I have left and how to claim it.”

The goal feature added complexity without supporting the product’s core value.

It was removed to keep the experience focused on simple reimbursement.

Simplifying the Home Screen

Early versions of the home screen contained several competing elements.

The final structure focused on three elements:

  • Movement budget

  • Eligible activity categories

  • Receipt upload

This made the interface easier to scan and aligned it with the product’s main task.

Prioritizing Believable Product States

The prototype included simulated states for budgets, receipts, and reimbursement status.

This allowed the interface to behave like a real product and made usability testing significantly more meaningful.

Final Prototype

By the end of the sprint, the prototype evolved into a simplified reimbursement flow centered around one core task: submitting receipts against an available movement budget.

The home screen focuses on:

  • the available movement budget

  • eligible activity categories

  • a clear receipt upload action

Although the prototype uses simulated states rather than a backend, it behaves like a functioning product. Users can view changing budgets, upload receipts, and see reimbursement activity reflected in the interface.

Because the prototype runs like an actual app rather than a static mockup, participants could open it directly on their phones and interact with it naturally.

What I Learned

  1. AI tools dramatically speed up prototyping.
    Moving from idea to a functioning prototype is now possible in a very short time.

  2. AI defaults to generic solutions.
    Without clear constraints, systems fill gaps with familiar patterns.

  3. Breaking work into small slices works best.
    Incremental instructions produced more reliable results than large feature requests.

  4. Believable states make prototypes far more useful.
    Simulated budgets and reimbursement states made usability testing much more meaningful.

  5. Design and implementation are starting to merge.
    Instead of designing screens and handing them off, it is now possible to shape the product directly in a working environment.

  6. AI accelerates execution, but design thinking still drives the outcome.

This sprint started as a way to explore how AI coding tools might fit into a product designer’s workflow.

Building a functional prototype quickly showed how fast ideas can move from concept to something people can actually use. At the same time, it reinforced that the quality of the outcome still depends on how clearly the product is defined and structured.

The tools can accelerate execution, but they don’t replace product thinking — if anything, they make it more important.

The future of design may not be about choosing the right tool, but about learning how to think and build within a new kind of design environment.