About This Episode
This episode of Founder Stack explores how AI is fundamentally reshaping the future of product design and UX. The conversation focuses on the rapid growth of the design field and the shift from human-only workflows to collaborative, agent-driven systems where AI is no longer just a tool, but a co-creator inside the design process.
The episode breaks down how platforms like Figma are enabling AI agents to access not just visual layouts but the underlying design code through Model Context Protocol (MCP), redefining the relationship between designers, developers, and machines. It introduces the concept of agentic design — a framework for consciously delegating work to AI while preserving human agency, learning, and judgment.
From empathy and user research to prototyping and testing, the discussion maps how AI is being adopted across every stage of the design thinking process. It highlights the gains in speed and cost efficiency, but also the growing tensions around trust, ethics, privacy, and bias.
The core theme is clear: the future of design isn’t about automation replacing creativity, but about co-creation — where AI scales the mechanical work and humans stay responsible for empathy, ethics, and storytelling. This episode is a guide for founders, product leaders, and designers who want to understand how to build faster without losing the human core of great user experiences.
Topics Covered
- How AI is changing the role of designers
- What “agentic design” means in practice
- AI inside research, prototyping, and testing
- Trust and ethics in AI-driven design
- Building AI-native creative workflows
Listen to the Full Episode
Check Out Other Episodes
Composable by Design: How MCPs Are Changing the Way We Build Digital Products
Podcast: Founder Stack
DEVELOPMENT

Episode Transcript
Emily: Hey everyone, and welcome to Founder Stack, the podcast where founders and product leaders meet modern tech strategy. I'm Emily, your host for Responsive.
Rob: And I'm Rob, engineering lead here at Responsive.
Emily: We really have to kick this off by just acknowledging the scale of what's happening in product design. I mean, the growth in UX alone, it's kind of mind-blowing.
Rob: Absolutely. You look at the numbers, what was it, maybe a million UX pros back in 2017?
Emily: Yeah, something like that. And the projections are talking like a hundred million by 2050.
Rob: That kind of jump. It just guarantees massive change. And well, the catalyst is obviously AI. Artificial intelligence isn't just tweaking things anymore.
Emily: No, it feels like it's rewriting the rule book for design itself. And you really see that with the big platforms, right?
Rob: Definitely. Which brings us to Figma, the news about them making their tools way more accessible to AI agents. That feels like a really big deal.
Emily: It does. It feels less like just another feature and more like AI getting integrated right into the core, the operating system of design, basically.
Rob: Exactly right. It's not just surface stuff. This is a deep technical change they're making, specifically expanding their Model Context Protocol server, the MCP.
Emily: OK, so for those of us maybe not living in the engineering stack day to day, what does that MCP server actually do? Why is it such a game changer for AI?
Rob: Think of it like a super translator, or maybe a bridge. The crucial part is it lets AI models look directly at the actual code behind the designs and prototypes.
Emily: Wait, not just the image? Like the visual output?
Rob: No, not just the picture it renders. Before, AI was mostly looking at a flat image, a screenshot, and kind of guessing the structure underneath. But now, because the MCP server understands and indexes the code itself, AI agents or even dev environments like VS Code, they can see the real hierarchy, how it's all built.
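To make that bridge a little more concrete: MCP is a JSON-RPC protocol, so an agent first asks the server which tools it exposes and then calls one to fetch structured design data instead of scraping a screenshot. The sketch below is a minimal TypeScript illustration; the local server URL and the get_design_context tool name are placeholder assumptions for this example, not Figma's actual endpoints.

```typescript
// Minimal sketch of talking to a (hypothetical) local MCP server over JSON-RPC.
// The URL and the tool name are assumptions for illustration, not Figma's real API.

type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

async function callMcp(method: string, params?: Record<string, unknown>) {
  const body: JsonRpcRequest = { jsonrpc: "2.0", id: Date.now(), method, params };
  const res = await fetch("http://127.0.0.1:3845/mcp", { // placeholder local server URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}

async function main() {
  // Ask the server which tools it exposes (standard MCP method).
  const tools = await callMcp("tools/list");
  console.log(tools);

  // Ask for the structured code behind a selected frame (hypothetical tool name).
  const context = await callMcp("tools/call", {
    name: "get_design_context",
    arguments: { nodeId: "1:23" },
  });
  console.log(context);
}

main().catch(console.error);
```

The transport details vary, but the takeaway is the one Rob describes: the agent receives real node and code structure, not a flat image.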
Emily: Okay, so if the AI can read the code that makes the design, what does that mean for, say, the classic front-end developer role? Or that whole design-to-dev handoff nightmare?
Rob: Well, it definitely changes where the friction points are and maybe introduces some new tensions, actually.
Emily: How so?
Rob: It highlights that core conflict we see in the research, right? You've got the AI community, often very technology-driven. Think velocity, efficiency, automate everything.
Emily: Right, get it done fast.
Rob: Yeah, yeah. And then you have the UX world, which is deeply rooted in this human-centered philosophy. And the thing is, just automating things blindly can actually get in the way of building real empathy, which is, you know, the whole point of a good UX.
Emily: So it's this push and pull. We need a middle path, then. Something like human-centered AI, HCAI.
Rob: Exactly. The goal has to be augmentation. How does AI make humans better at what they do, not just replace the human element entirely?
Emily: Which leads us to that term you mentioned: agentic design.
Rob: Yes, agentic design. It's about creating a framework, really, a way to consciously delegate tasks to an AI agent.
Emily: Delegate based on?
Rob: Based on factors like the designer's motivation, how difficult the task is, the level of risk involved, and, crucially, trust in the AI. But the key is preserving the designer's ability to learn and their overall agency.
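As a toy illustration of that delegation framework, and not a formula from the episode, you could imagine scoring each task on the factors Rob lists (difficulty, risk, the designer's motivation to do it themselves, and trust in the AI) and only handing it off when the balance clearly favors the agent. The weights and thresholds below are arbitrary.

```typescript
// Toy sketch of an "agentic design" delegation decision.
// Weights and thresholds are arbitrary illustrations, not a published model.

interface TaskAssessment {
  difficulty: number;  // 0..1, how hard the task is to do manually
  risk: number;        // 0..1, cost of the AI getting it wrong
  motivation: number;  // 0..1, how much the designer wants to do it themselves
  trustInAI: number;   // 0..1, confidence in the agent for this task type
}

type Decision = "delegate to AI" | "co-create" | "keep human";

function decide(task: TaskAssessment): Decision {
  // Delegation looks attractive when a task is hard or tedious and the AI is trusted,
  // but risk and the designer's own motivation push it back toward the human.
  const score = task.difficulty + task.trustInAI - task.risk - task.motivation;

  if (score > 0.5) return "delegate to AI";
  if (score > 0) return "co-create";
  return "keep human";
}

// Example: applying a predefined tagging scheme to thousands of survey rows.
console.log(decide({ difficulty: 0.8, risk: 0.2, motivation: 0.1, trustInAI: 0.7 })); // "delegate to AI"

// Example: the first exploratory read of raw interview data.
console.log(decide({ difficulty: 0.6, risk: 0.7, motivation: 0.9, trustInAI: 0.4 })); // "keep human"
```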
Emily: So the designer stays in charge, making the calls, even if the AI is doing some of the heavy lifting.
Rob: Precisely. The conductor, not someone pushed aside by the orchestra.
Emily: OK, that makes sense. So how does this actually play out? We can look at it through the lens of the classic design thinking phases, right? Empathize, define.
Rob: Ideate, prototype, and test. Yeah, let's walk through that. Starting with empathize, the phase you'd think is the most human.
Emily: Right, I'd expect AI adoption to be lowest here.
Rob: That's what's so surprising. The data we looked at shows 97% of UX pros surveyed are already using AI in the empathize phase.
Emily: 97%? Okay, wow, that's not niche. That's basically standard practice now. Why so high there, in the messy human part?
Rob: Well, think about the sheer volume of data you get in empathy work. AI is brilliant at processing that quickly. They're using things like ChatGPT, Google Bard, even specialized tools like CoCo.ai.
Emily: To do what, exactly?
Rob: To analyze mountains of, say, user interview transcripts or open-ended survey responses, recognizing patterns in feedback, synthesizing maybe hundreds of interviews into key themes. The AI gives them that first-pass summary fast.
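Here is one hedged sketch of what that first-pass synthesis can look like in code, using the OpenAI Node SDK; the model name and prompt wording are assumptions, and the consent and privacy questions raised later in the episode apply to anything you feed in.

```typescript
// Sketch: first-pass theme extraction from user interview transcripts.
// Model name and prompt wording are assumptions for illustration.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function summarizeThemes(transcripts: string[]): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; use whatever your team has approved
    messages: [
      {
        role: "system",
        content:
          "You are a UX research assistant. Identify the top recurring themes " +
          "across these interview excerpts, with one short supporting quote each.",
      },
      { role: "user", content: transcripts.join("\n---\n") },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Usage: feed in anonymized excerpts, then have a human researcher review the output.
summarizeThemes([
  "I gave up on checkout because it kept asking me to create an account...",
  "The search results never seem to match what I typed...",
]).then(console.log);
```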
Emily: Ah, okay. So it handles the volume, letting the humans focus on the nuance, the deeper meaning.
Rob: You got it. It takes away some of the tedious data processing.
Emily: All right. Moving to the next phase, defining. This is where you take those insights and shape them into personas, problem statements, journey maps. What's the AI adoption look like here?
Rob: Still pretty high, 76%.
Emily: Yeah.
Rob: AI is being used to, for instance, automatically generate quite detailed user personas if you feed it the right demographic and behavioral data, or to create real-time user journey maps based on actual usage patterns.
Emily: All right, but you mentioned tension earlier. I feel like this is where it might really show up.
Rob: You nailed it. This is where that HCAI principle gets tested. There's a definite line being drawn.
Emily: What's the line?
Rob: Automating the mechanical stuff, like applying a predefined qualitative codebook to thousands of responses. Designers seem okay with that; it saves time.
Emily: Right, just tagging things based on rules you already set.
Rob: Exactly. But automating the creation of that codebook in the first place. That initial dive into the raw, messy data to figure out what the important themes even are. That's where there's pushback.
Emily: Because that's where the designer builds their own understanding and empathy.
Rob: Precisely. Taking that away risks creating designs that are fast, maybe, but potentially shallow, lacking real insight.
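To make the "mechanical" side of that line concrete, here is a minimal sketch of applying a human-authored codebook to open-ended responses with simple keyword matching; the codes and keywords are invented for illustration, and the key point is that a researcher defined the categories.

```typescript
// Sketch: applying a predefined qualitative codebook to open-ended responses.
// The codebook below is invented for illustration; a researcher defines the real one.

const codebook: Record<string, string[]> = {
  onboarding_friction: ["sign up", "account", "verification"],
  pricing_concerns: ["price", "expensive", "subscription"],
  navigation_confusion: ["can't find", "menu", "where is"],
};

function applyCodebook(response: string): string[] {
  const text = response.toLowerCase();
  return Object.entries(codebook)
    .filter(([, keywords]) => keywords.some((k) => text.includes(k)))
    .map(([code]) => code);
}

const responses = [
  "The sign up flow asked for verification twice before I could even start.",
  "Honestly it's just too expensive for what the subscription gives you.",
];

for (const r of responses) {
  console.log(applyCodebook(r), "<-", r);
}
```

Swapping keyword matching for an LLM classifier doesn't change the principle: the codebook, the judgment about what matters, stays with the designer.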
Emily: Yeah, that really highlights the difference between just outputting something and actually achieving something meaningful. OK, what about the more creative phases, ideating and prototyping?
Rob: Here, the adoption rates are basically identical and quite high, 79% for both. This is where generative AI really comes into its own.
Emily: Makes sense, generating ideas, mockups.
Rob: Yeah, using LLM visual AI tools. You can go from a text prompt to generating a bunch of creative concepts or even functional design elements incredibly quickly.
Emily: And I imagine this is where founders and product leads really see the dollar signs, right? The speed benefit.
Rob: Absolutely. The sources confirm it. Delegating things like early stage throwaway prototyping to AI leads to significantly faster iteration cycles and lower costs. It just speeds everything up.
Emily: Which is why tools like Figma, Uizard, Framer, they're all racing to bake AI deeper into their platforms.
Rob: Exactly. Velocity is king in many product environments.
Emily: OK, last stage, testing. Given the empathy focus we just discussed, I'm guessing adoption dips here.
Rob: You're spot on. It's the lowest rate, down at 61%. It seems UX pros still heavily favor traditional human-to-human testing methods for getting that rich qualitative feedback on their designs.
Emily: So if adoption's lower, what is AI doing in testing? Where does it fit in?
Rob: It's mostly focused on the more quantitative visual aspects, things like predicting visual saliency, where a user's eye is likely to land first.
Emily: Oh, like heat maps, but predicted.
Rob: Kind of, yeah. Also, aesthetic analysis, like scoring visual appeal based on certain principles. And importantly, automated visual error detection, catching inconsistencies or bugs.
Emily: So tools like Maze or Hotjar might use AI for that kind of analysis under the hood.
Rob: Exactly. They're integrating it to track attention patterns or flag visual glitches automatically, often before a human tester even sees it.
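For the automated visual error detection piece, the simplest version of the idea is a pixel diff between a baseline screenshot and a new build. The sketch below uses the pixelmatch and pngjs libraries; the file paths and thresholds are placeholders.

```typescript
// Sketch: flag visual regressions by diffing a baseline screenshot against a new one.
// File paths and thresholds are placeholder assumptions.
import fs from "node:fs";
import { PNG } from "pngjs";
import pixelmatch from "pixelmatch";

const baseline = PNG.sync.read(fs.readFileSync("baseline/checkout.png"));
const current = PNG.sync.read(fs.readFileSync("current/checkout.png"));

const { width, height } = baseline;
const diff = new PNG({ width, height });

// Count pixels that differ beyond a small per-pixel color threshold.
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1,
});

fs.writeFileSync("diff/checkout.png", PNG.sync.write(diff));

if (mismatched > width * height * 0.005) {
  console.warn(`Possible visual regression: ${mismatched} pixels changed`);
}
```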
Emily: Taking a step back, then: all this AI integration, plus things like design systems. It feels like the whole job description of a designer is shifting.
Rob: Dramatically. Design systems already started moving designers away from being purely pixel pushers, automating components, styles.
Emily: Yeah, less time spent drawing the same button 50 times.
Rob: Right. And AI accelerates that shift and frees designers from a lot of that static UI generation. The focus moves up a level.
Emily: To what? What's the new core focus?
Rob: It's becoming more about system curation, managing those component libraries, ensuring consistency across a complex product, and importantly, directing the overall narrative of the user experience across many screens, not just one.
Emily: Like being the director, ensuring the story flows, rather than just painting one scene.
Rob: That's a great analogy. But there's a catch. For AI to really help with that holistic cross-screen view, it needs to understand the flow, the connections.
Emily: And current AI struggles with that.
Rob: It's a major technical limitation right now. A lot of the big datasets AI learns from, like the Rico dataset, are mostly just collections of individual static screenshots from apps.
Emily: So they see the screens in isolation, but not how a user actually moves between them?
Rob: Exactly. They often miss that crucial sequential logic, the interaction flow, which limits how well AI can currently support the high-level, systems-thinking UX evaluation the designer is now responsible for.
Emily: So that gap, understanding the journey, is still firmly in the human designer's court. OK, let's bring this back to, say, founders or team leads who aren't designers. Why should they care about all this internal design world evolution?
Rob: Oh, it boils down to two things they definitely care about, velocity and cost. Simple as that.
Emily: Faster, cheaper.
Rob: Pretty much. If AI helps your design team iterate more quickly, test ideas more cheaply, and automate the boring stuff, your whole product development process speeds up. Time to market shrinks, burn rate potentially drops. It's a clear competitive advantage.
Emily: Right. So for the designers listening, maybe feeling a bit overwhelmed by this wave, what's the practical first step? How do they stay relevant in this agentic world?
Rob: The main message from the pros already doing this is: keep learning, actively. Don't wait. What's the best way? The resources that working designers found most useful were online courses specifically about AI and design, and getting involved in AI design communities. About 76% pointed to those two things.
Emily: So actively learning the new tools and how they fit into the workflow and talking to others doing the same.
Rob: Exactly. It's not just about knowing what a tool does, but how to integrate it effectively and ethically.
Emily: Okay. Ethics. That feels like the big elephant in the room. As designers hand over more tasks to AI, the responsibility increases, right?
Rob: Absolutely critical. And the concerns highlighted by UX pros fall into a couple of main buckets. User privacy is huge.
Emily: Makes sense. Feeding user data into AI models.
Rob: Right. 68% were worried about getting proper user consent for that data use, and 63% had concerns about data security. Big numbers.
Emily: And what about the technical side of the ethics beyond privacy?
Rob: The biggest technical headache, mentioned by 68%, is just the sheer difficulty and effort needed to gather and prepare accurate, representative training data for the AI in the first place. Garbage in, garbage out.
Emily: And even if you get good data in.
Rob: Then you have the ongoing challenge, which 56% were concerned about: constantly monitoring the AI's outputs to make sure they're fair and unbiased. You don't want automation accidentally reinforcing harmful stereotypes or creating inequitable experiences.
Emily: Wow. Okay, so it really comes full circle. We're heading towards a future, especially with platforms like Figma leading the way, that's all about co-creation.
Rob: I think that's the right term. Machines handle the heavy lifting on data analysis, maybe the repetitive parts of prototyping, augmenting what humans can do.
Emily: While the humans focus on the core human stuff, the empathy, the ethical judgment, shaping the overall story and experience.
Rob: It's potentially a very powerful combination, but it does leave us with a pretty big question.
Emily: Which is?
Rob: Well… If AI gets really good at automating the mechanics of understanding users, the analysis, the persona drafts, the pattern finding, does that truly free up the designer to cultivate deeper empathy? Or is there a risk they lose something valuable? That deep understanding that maybe only comes from wrestling directly with the raw, messy human data yourself?
Emily: More time for empathy or a potential loss of the very process that builds it. That's definitely something to think about.
Build smarter. Launch faster. Convert better.
Book a free 30-minute call with our UX experts to uncover quick wins that drive growth—without compromising on quality.
BOOK A DISCOVERY CALL