Designing for User Autonomy


Unlocking FlairX's Growth Ceiling:
From Copy-Pasting Across 4+ Tools to a 20-Minute AI-Integrated Workflow

CONTEXT

What FlairX does

FlairX is an Interview-as-a-Service platform. Companies use it to conduct technical interviews at scale, either with AI interviewers or expert human interviewers from FlairX's network.

The platform handles the entire interview workflow: creating custom questionnaires, scheduling candidates, conducting interviews, and providing detailed feedback to hiring teams.

Team

2 Designers, 3 Engineers, PM, CEO

Timeline

3 weeks

Domain

B2B SaaS, HRTech

My Responsibilities

HCI Research, Iterative Design Cycle

THE PROBLEM

Scaling limitations

FlairX had demand. Companies wanted in. But they couldn't scale past 3-4 active clients at a time.

The bottleneck: questionnaire creation. Each custom questionnaire took their 3-person ops team 1.5-2 hours to create.

They were using four separate tools: Google Docs and spreadsheets for job descriptions, ChatGPT for generating questions, Sheets for organizing questions and collecting approvals from the client and the founder, and finally FlairX's platform, where everything was entered manually.

Impact

  • Business impact: Growth was capped; revenue was constrained by ops capacity. Taking on a new client meant hiring, which delayed expansion.

  • User experience: The ops team felt like human copy-paste machines rather than domain experts. Repetitive, tedious, error-prone.

"We can't take on more than 3-4 clients at once without hiring more people. We're spending more time creating questionnaires than conducting interviews."

-Founder/CEO, initial conversation

THE DISCOVERY

What I learned by participatory shadowing

I had two weeks to deliver, so I spent the first two days watching the CEO and ops team create questionnaires.

01

Template avoidance

FlairX had hundreds of organized templates. The CEO opened them, scrolled for ten seconds, closed them, and started fresh.

02

Quality bar

Clients expected custom-crafted, high-difficulty questions. Generic prompts made them lose trust and request changes.

03

No intelligent system

No tagging, hierarchy, or smart recommendations. Ops team fell back on manual curation every time.

04

Inconsistent quality

No AI assistance to craft unique questions or guide difficulty levels. Time-consuming manual effort; inconsistent results.

Critical insight

Users weren't avoiding speed; they were avoiding losing control. Every "fast" solution (templates, full automation) took away their ability to shape the output. So they rejected it.

The challenge was making the process faster while keeping users in control.

COMPETITIVE LANDSCAPE

What others were building

Platforms analyzed

HireVue

Karat

BrightHire

BarRaiser

Intervue

Metaview

Glider

What they had

  • AI note-taking

  • Interview recording

  • Automated scoring

  • Static templates

What was missing

  • Question generation

  • Smart refinement

  • Human-in-the-loop

  • Adaptive workflow

Our opportunity

Build an AI-assisted flow that balances automation with editorial control: questions aligned to job descriptions, easy to personalize, with users keeping strategic control.

MAJOR DESIGN DECISIONS

Decisions that shaped the solution

The following core decisions determined whether users would adopt this or reject it like they'd rejected templates.

AI assistant panel

Iter 1

Open-ended conversational chat (Discarded)

Free-text AI chat where users could ask anything about the selected question.

[Iteration 1 screenshot]

Iter 2

Structured list actions + free-text input (Discarded)

Vertical list of actions (Make Unique, Humanize, Regenerate…) with a text input below and inline AI suggestions above.

[Iteration 2 screenshot]

Iter 3

2×2 button grid + custom prompt input (Refined)

Four preset action buttons in a grid with a freeform text input below. Clearer, but hierarchy still weak.

[Iteration 3 screenshot]

Iter 4

Curated pill buttons + inline contextual actions (Chosen)

Four pill-shaped action buttons derived from observed usage. AI output cards carry inline Replace / Add as New / Refine controls.

[Iteration 4 screenshot]

The forcing function

Watching the CEO use the tool for an hour revealed the same 4 prompts repeated over and over; that constraint drove curation instead of infinite options.

What was traded off

Natural language flexibility dropped. But the 4 curated actions cover ~95% of real use cases observed in testing.

Net outcome

Workflow time dropped to 18 minutes in testing, versus the open-ended chat version. No learning curve: the buttons show what's possible at a glance.


Job Description Parsing + Skills Step

The problem: Manually extracting skills from JDs, typing them into ChatGPT. Time-consuming, error-prone. Questions often misaligned.

What we built: Upload JD → auto-parse skills → review/edit → set weightage/time → generate questions.

This happens before any questions generate. Users shape direction first.
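The flow above can be sketched as a small pipeline. Everything here is hypothetical (the function names, the toy keyword extractor, the skill list); in a real system the extraction step would be an LLM call, but the shape is the same: parse, let the user review and reweight, then generate.

```python
# Minimal sketch of the JD-parsing step, under assumed names/types.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    weight: int = 1    # relative importance, editable by the ops team
    minutes: int = 10  # interview time allotted to this skill

def extract_skills(jd_text: str) -> list[Skill]:
    """Toy extractor: keyword match stands in for an LLM call."""
    known = ["Python", "SQL", "System Design", "REST APIs"]
    return [Skill(s) for s in known if s.lower() in jd_text.lower()]

def review(skills: list[Skill], edits: dict[str, int]) -> list[Skill]:
    """The review step: users adjust weightage before generation."""
    for s in skills:
        if s.name in edits:
            s.weight = edits[s.name]
    return skills

jd = "Looking for a backend engineer strong in Python and SQL."
skills = review(extract_skills(jd), {"SQL": 3})
print([(s.name, s.weight) for s in skills])  # → [('Python', 1), ('SQL', 3)]
```

The key property is ordering: `review` runs before any question generation, so users shape direction first rather than correcting output after the fact.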

Why this decision

Business impact: Questions aligned to job requirements. Less rework, fewer complaints.

User experience: Control at the right moment. Users loved reviewing skills before committing. Felt strategic, not reactive.

Tradeoff: Added a step. But users found it helpful, not burdensome; it saved time downstream.

Why it's best: Turned out to be most valuable. Strategic control + automated extraction.


Structured Flow (Not Canvas)

Initial idea: Notion-style canvas. Drag-and-drop, build anything, full flexibility.

PM's pushback: "Months of development. We have two weeks."

The compromise: Structured approach. Pre-defined sections, guided flow, clear progression.

Why this decision

Business impact: Shipped in 2 weeks vs months. Got to market fast, started learning immediately.

User experience: Lost "build anything" feeling. Gained clarity, users knew exactly where they were. No decision paralysis.

Tradeoff: Gave up flexibility for speed and simplicity. Less magical, more learnable.

Why it's best: Shipped fast, users understood it immediately.


Question Bank Library

The business need: Preserve competitive edge through vetted, high-quality questions.

What we built: Searchable question bank with tagging, filtering by skill/difficulty/type. Browse, favorite, and add to questionnaires.

My honest take: I'm skeptical this will get much use. Usage will decline, and people won't update it. They'll lean on AI generation because it's faster and personalized.

Why this decision

Business impact: Competitive differentiator on paper. Shows expert-curated questions, not just AI content.

User experience: A fallback for users who don't fully trust AI yet, or who need a specific question they've seen before.

Tradeoff: Maintenance effort, potentially low usage.

Why it's included: Business priority, not product necessity. Leadership wanted it. Sometimes you build for strategy despite doubts.

THE SOLUTION

How it works

IMPACT

What changed

For the business:
Same 3-person ops team now handles 12+ clients (was capped at 3-4). No new hires. Growth unconstrained.

For users:
Two weeks after launch, ops team stopped using ChatGPT for questionnaires. Stopped opening Google Docs for this workflow. Just used the assistant.

Time: 2 hours → 20 minutes average.

Adoption: The entire ops team. No training. No mandates. They chose it.

What made the difference: The JD parsing step. Users loved reviewing skills before generating questions. Control at the right moment.

What I'm skeptical about: The question bank. Usage will probably decline as users grow to trust AI generation more.

MY TAKEAWAYS

Reflections

Watch what users do, not what they say

The CEO said she wanted "better templates." Watching her revealed she avoided templates and repeated the same ChatGPT prompts. The solution came from observation, not asking.

Constraints force better solutions

Engineering said "no open-ended chat." I resisted, then realized curating most-used actions beat infinite flexibility. Limits reveal what matters.

Control drives adoption more than speed

Users rejected fast solutions that removed control. We shipped the fastest version that kept them in control. That's why they adopted it.

Should've pushed harder for inline editing

We put AI actions in a sidebar. It works, but inline editing would've been more intuitive and saved space. I didn't fight for it. Next time, I'd explore deeper before compromising.