Architecting a “Remote Execution Platform” with AI-augmented UX workflows - a case study
A case study in product thinking with AI as a creative partner
I was asked to design a product in a sophisticated domain, with no user access, a complex specification, and a tight deadline.
So I didn’t follow a typical UX process. I built a repeatable, AI-augmented design process that let me move fast, think clearly, and design responsibly.
Note: This project is a work in progress. What is shared here reflects the current state of research, systems thinking, and AI-powered UX design, not a finalized product. Some domain-specific details and images have been abstracted or redacted to respect confidentiality agreements.
The setup
The product involved remote workflows, protocols, and physical execution by on-ground executors.
It had two completely different users.
One, a primary user responsible for designing complex remote procedures (structured instructions), and
The other, an on-site executor executing those procedures step by step, using tools or instruments.
In this case study, I refer to primary user-authored workflows as ‘procedures’ for abstraction, though the original system may use different terminology.
The vision: Primary users upload a procedure remotely, have it carried out on-site by executors, and receive traceable results.
The design challenge?
No direct access to users
A steep learning curve on the domain
Dense, technical product specifications
A tightly scoped Phase 1 release
And an accelerated delivery timeline
💡I had to lean into a very different strategy.
I used ChatGPT and v0.dev to design the solution in “modular AI-powered blocks” right from research to prototype. This involved using
- smart iterative prompts,
- a repeatable system of prompts,
- validations, and
- human-in-the-loop checkpoints.
Building a “Design Engine”, not just deliverables
Now, here’s the thing about working with tools like ChatGPT. Most people just “use ChatGPT.” They treat it like a conversation. Honestly, that doesn’t work.
To get real, tangible results, I had to engineer a system around this.
1. Workflow architecture - Modular windows, not endless threads
Running long threads in ChatGPT doesn’t work. It drifts. It slows down and gives you bad results.
Instead,
The project I am working on is a “Project” in ChatGPT, and every stage (or UX artifact) of the project has its own context window or conversation: Research, Archetypes, JTBDs, IA, UI Layouts.
Before starting each conversation, I would give ChatGPT whatever context might be useful: product specs, the PRD, sources from the web. For me this is particularly important. It’s important to prime the model and make it smart before you ask questions.
At the end of each stage or conversation, I asked ChatGPT to summarize everything it learned into a `.txt` file. These `.txt` files became portable memory blocks, allowing me to carry over understanding without repeating myself. The memory blocks could be used in any conversation whenever needed (we could always lean on ChatGPT’s built-in memory, but I find this handier).
It was a lot of prompt engineering, yes. But it was also a meticulous workflow architecture.
I could load up another window/conversation, hand off the necessary file, and ChatGPT would pick up right where I left off, often validating its memory against mine.
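To make the handoff concrete: I did all of this manually in the ChatGPT app, but the same memory-block pattern can be scripted. Here is a minimal sketch against the OpenAI Python API; the model name, file paths, and prompts are illustrative placeholders, not the real project artifacts.

```python
# Illustrative sketch of the "memory block" pattern via the OpenAI Python API.
# I did this by hand in the ChatGPT app; names here are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_stage(transcript: str, path: str) -> None:
    """Compress a finished stage (e.g. Research) into a portable .txt memory block."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize everything learned in this conversation as dense, self-contained notes."},
            {"role": "user", "content": transcript},
        ],
    )
    with open(path, "w") as f:
        f.write(response.choices[0].message.content)

def start_stage(memory_paths: list[str], first_prompt: str) -> str:
    """Open a fresh 'window' seeded with earlier memory blocks instead of one long thread."""
    memory = "\n\n".join(open(p).read() for p in memory_paths)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context carried over from earlier stages:\n{memory}"},
            {"role": "user", "content": first_prompt},
        ],
    )
    return response.choices[0].message.content

# e.g. start_stage(["research.txt", "archetypes.txt"], "Let's draft the JTBDs.")
```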
2. Researching like a machine-human hybrid
I had zero familiarity with the on-site technical workflows. The product spec from the client was helpful, but I had to dig deeper.
So I used ChatGPT as my research assistant:
I uploaded sections of the PRD in parts
Used structured prompts to understand the domain, market, product and competitors
Asked GPT to carefully consider constraints and scope, if any, in the product spec documents.
Example prompts (a scripted version of this research loop is sketched after the list):
“What is this space all about? Who operates here, and what’s the big picture?”
“What are the most misunderstood or complex ideas in this space?”
“What types of companies or customers face this challenge?”
“What are some emerging market trends or behaviors I should be aware of?”
“What are users trying to accomplish in this space? What does their current process look like? Any tools, steps, workarounds?”
“What tools or platforms try to solve this problem today? What are their strengths, and what do they do well?”
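As promised, here is the chunk-and-question loop as a hedged sketch. I ran this loop by hand in ChatGPT; the code only makes the structure explicit. The file name, chunk size, and trimmed question list are assumptions for illustration.

```python
# Hedged sketch: feeding a dense PRD to the model in parts and asking the
# same structured questions of each part. File names, chunk size, and the
# question list are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    "What is this space all about? Who operates here, and what's the big picture?",
    "What are the most misunderstood or complex ideas in this space?",
    "What constraints or scope limits does this excerpt of the spec impose?",
]

def chunks(text: str, size: int = 6000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

prd = open("prd.txt").read()  # hypothetical path
notes = []
for part in chunks(prd):
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a domain research assistant. Answer only from the excerpt provided."},
                {"role": "user", "content": f"PRD excerpt:\n{part}\n\nQuestion: {question}"},
            ],
        )
        notes.append(response.choices[0].message.content)

open("research_synthesis.txt", "w").write("\n\n".join(notes))  # becomes a memory block
```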
3. Simulating users without speaking to them
Since I couldn’t talk to users, I simulated them with structure.
What I had:
Product specs
Some documents/articles for reference (provided by client)
Research synthesis (via GPT)
Contextual pain points from the client
Competitor research
What I did:
Defined two archetypes:
Primary user - who builds structured procedures
Executor - who executes the procedures
Mapped out responsibilities, pain points, and behavioral traits based on the nature of the target user and patterns from competitor research
Prompted ChatGPT to generate Jobs to Be Done (JTBDs) per archetype, grounded in scope. This was important.
Why this worked:
I validated each JTBD by mapping it to an actual feature in the scope doc.
The client had also validated many of the JTBDs that I had generated, stating those were precisely what users had told them as well.
If a JTBD implied a feature we couldn’t build, ChatGPT flagged it as “out of scope” and told me where it came from. (The sketch after the example prompts below shows this check in miniature.)
Example prompts:
“I would want to create the user archetypes and their JTBDs in this conversation. But before that, could we just list the types of users we may have?”
“I have a set of features that are in scope for this phase. Would that be helpful for this exercise?”
“Could you list the user archetypes and their Goals, Responsibilities, and Key pain points, and also help me know which was inferred from our research and which was a reasonable assumption?”
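The scope check itself can be expressed as a tiny loop. This is a hedged sketch with invented feature names and JTBDs (the real lists are under NDA), again using the OpenAI API rather than the ChatGPT app.

```python
# Minimal sketch of the JTBD scope check: every generated JTBD must map to
# an in-scope feature or get flagged. Features and JTBDs below are invented
# for illustration; the real ones are confidential.
from openai import OpenAI

client = OpenAI()

in_scope_features = [
    "Procedure builder with step templates",
    "Step-by-step executor view",
    "Result capture and traceability log",
]

jtbds = [
    "When I author a procedure, I want reusable step templates so I can build faster.",
    "When I finish a step on-site, I want to log the outcome so results stay traceable.",
]

scope_text = "\n- ".join(in_scope_features)
for jtbd in jtbds:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Validate a JTBD against a fixed feature scope. Reply with the matching feature, or 'OUT OF SCOPE' plus where the JTBD came from."},
            {"role": "user", "content": f"In-scope features:\n- {scope_text}\n\nJTBD: {jtbd}"},
        ],
    )
    print(jtbd, "->", response.choices[0].message.content)
```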
4. Generating flows and IA with full memory context
Once I had JTBDs, I moved on to flows and IA.
Because ChatGPT had memory of:
The product spec
My user roles
My scope file
It gave me flows that were scoped, logical, and real.
I could then move on to another window, pass the memory block (`.txt` file) to it, check that it aligned with the model’s memory, and then prompt it to use the user flows to create a detailed Information Architecture outline and sitemap.
ChatGPT could now give me text-based flows and IA, as well as Mermaid diagrams, which I would translate into FigJam.
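As a sketch of that last step, here is how the flow-to-Mermaid handoff could be scripted. The flow description is an abstracted stand-in for the real (NDA’d) one, and the file name is a placeholder.

```python
# Illustrative sketch: asking the model for a Mermaid flowchart of a user
# flow and saving it for translation into FigJam. The flow text is an
# abstracted placeholder, not the real procedure flow.
from openai import OpenAI

client = OpenAI()

flow = (
    "Primary user uploads a procedure -> system validates the steps -> "
    "on-site executor runs the steps -> results sync back with a trace log"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Output only Mermaid flowchart code, with no commentary or code fences."},
        {"role": "user", "content": f"Turn this user flow into a Mermaid flowchart:\n{flow}"},
    ],
)

open("procedure_flow.mmd", "w").write(response.choices[0].message.content)
```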
5. Competitor UX/UI forensics with ChatGPT
Our competitors were few and gated.
So I did UX forensics:
Scraped YouTube demos of competitors
Downloaded product walkthroughs (through blogs/YouTube transcripts)
Transcribed everything into clean text
All this was fed to ChatGPT.
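The transcript step is the easiest part to automate. A minimal sketch, assuming the `youtube-transcript-api` package (its classic `get_transcript` interface) and fake video IDs; in practice I mixed scripted pulls with manual cleanup.

```python
# Hedged sketch of the transcript pipeline. Assumes the youtube-transcript-api
# package (classic get_transcript interface); the video IDs are fake.
from youtube_transcript_api import YouTubeTranscriptApi

video_ids = ["abc123xyz", "def456uvw"]  # hypothetical competitor demo videos

for vid in video_ids:
    segments = YouTubeTranscriptApi.get_transcript(vid)
    text = " ".join(segment["text"] for segment in segments)
    with open(f"transcript_{vid}.txt", "w") as f:
        f.write(text)  # clean text, ready to feed to ChatGPT
```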
Based on what I learnt of the competitor UX/UI, I asked ChatGPT:
“I find the interface non-intuitive, complex and dense. How could this be improved?”
“How could the experience be improved for this screen (attached image)?”
and so on.
This was a detailed step with many iterative prompts, and it gave me insights I would’ve otherwise missed.
I then asked:
“Based on what we’ve discussed, how should we design our version to avoid these pitfalls? Let’s brainstorm.”
We co-wrote layout principles.
I then did an inspiration audit:
“Considering the type of UI we might need for our product, which apps would you find good inspiration from?”
ChatGPT would go on to build a detailed table of the UI requirements for all the screens we might need, and where (which app) we could find inspiration for each requirement.
I also asked for a component-wise breakdown of the UI and which of the references would be best for each component.
All of this was baked into a memory block and used for all subsequent layout prompts, turning GPT into a real UX design partner.
6. UI Layouts: Prompt → v0 → Visual Feedback → Loop
For every major flow, I used this 4-step pattern:
Prompted ChatGPT with full IA context and layout needs
Received a structured, text-based UI layout block in Tailwind-style terms
Fed it into v0.dev to get a mid-fidelity interactive prototype
Reviewed the prototype, took screenshots, and gave visual feedback back to ChatGPT
Sometimes, I’d even screenshot the v0 prototype and ask GPT to critique it based on our goals and constraints.
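That screenshot critique loop is also scriptable. A hedged sketch, assuming a vision-capable model via the OpenAI API and a local screenshot file; the names are placeholders, since I did this by pasting screenshots into ChatGPT.

```python
# Illustrative sketch of the visual-feedback loop: sending a v0 prototype
# screenshot back to the model for critique. Model and file names are
# placeholders.
import base64

from openai import OpenAI

client = OpenAI()

with open("v0_prototype.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Critique this mid-fidelity layout against our IA, scope, and layout principles. What breaks first, and what should we fix?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        },
    ],
)

print(response.choices[0].message.content)
```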
This step was purely about prototyping and mid-fidelity design (something akin to wireframes, but better).
This could then be used as a visual reference for the final UI.
My idea is to also feed the design language of the app into GPT and try making high-fidelity prototypes that could come close to 70-80% of our requirements. These could be used as a visual reference that I could build on top of.
A massive advantage here is how this speeds up the manual process. With the high-fidelity design, I have a map of all the potential components I may need for the design system, and a great idea of how I might want the end UI to look. I am not starting from a blank slate.
What this case study is really about
This project was (and still is) high-stakes, in an unfamiliar domain, with tight constraints.
But what this really taught me was:
💡The design process itself is a product and it’s worth optimizing.
AI didn’t do the work for me. It worked with me as a collaborator.
It helped me:
Synthesize dense product specs and research (this was the biggest return on investment for me, with GPT handling the data analysis, putting two and two together, and helping me connect the dots, work that would otherwise take a long time).
Simulate users with logic
Map flows that align with real-world scope
Prototype faster than my hands could sketch
But I was still the designer in the loop.
I pushed back, challenged assumptions, and validated every artifact before it made its way into the system.
This was the first project where, apart from designing a solution, I also designed a repeatable design engine.
And I’ll be using it for a long time.
NDA + Status Note
This is an active, ongoing project. The case study reflects AI-driven workflows and mid-fidelity outputs used during the early design phase. Some names, flows, and data have been abstracted to respect confidentiality agreements.