An Agile Product Development Workflow for Pizza Squads!

A comprehensive framework for building, delivering, and iterating on products through collaborative teamwork and continuous improvement

What is This Workflow?

This agile workflow is a lightweight framework based on Scrum, Kanban and DevOps principles that helps product teams work together to deliver valuable products iteratively. It encourages teams to learn through experiences, self-organize while working on problems, and reflect on their wins and losses to continuously improve.

Team Roles

In an agile team, several responsibilities must be covered to keep a sprint flowing. These are roles rather than job titles, and depending on the team's needs a single team member may fill more than one role.

Product Owner

Maximizes the value of the product by managing the Product Backlog, ensuring the team works on the most valuable items, and representing stakeholder interests.

Scrum Master

Serves the team by facilitating team events, removing impediments, and coaching the team on agile practices and principles to improve effectiveness.

Development Team

Self-organizing professionals who do the actual work of delivering a potentially shippable product increment at the end of each Sprint.

Team Ceremonies

High-performing teams communicate effectively and efficiently. The following ceremonies help teams build common understanding and tackle problems together.

Refinement

1-2 hours

The team discusses epics and stories, breaks them down into manageable pieces, estimates effort, and marks items as ready for the upcoming sprint.

Sprint Planning

2-4 hours

The team selects user stories from the refined backlog for the upcoming 2-week sprint and commits to delivering them.

Daily Standup

15 minutes

Team members synchronize their activities and create a plan for the next 24 hours. Everyone shares progress and obstacles.

Sprint Review

1-2 hours

The team demonstrates completed work, reviews sprint metrics, and discusses DORA metrics to measure delivery performance and identify improvements.

Sprint Retrospective

1-1.5 hours

The team reflects on the past sprint and identifies improvements for processes, relationships, and tools used during the sprint.

The Sprint Cycle

A Sprint (or Cycle) is a repeatable time box in which the team picks up and works on work items.

The Sprint cycle repeats these five steps:

1. Refinement: Discuss, break down, estimate & mark ready
2. Planning: Select stories from backlog for sprint
3. Daily Standup: 15-minute daily sync meeting
4. Review: Demo, metrics, DORA
5. Retrospective: Reflect and plan improvements

Key Artifacts

Product Backlog

An ordered list of everything that is known to be needed in the product. It's dynamic, constantly changing to identify what the product needs to be competitive and useful.

Sprint Backlog

The set of Product Backlog items selected for the Sprint, plus a plan for delivering them. It's a highly visible, real-time picture of the work the team plans to accomplish.

Product Increment

The sum of all Product Backlog items completed during a Sprint, combined with all previous Sprints. It must be in a useable condition regardless of whether the Product Owner decides to release it.

Writing Work Items: Best Practices

Effective work item creation is crucial for clear communication and successful sprint execution. Each level serves a specific purpose in the product development hierarchy.

Epic

What is an Epic?

A large body of work that can be broken down into smaller stories. Epics typically span multiple sprints and represent significant features, initiatives, or technical improvements.

Best Practices:

  • Focus on business value and outcomes
  • Keep them broad but with clear boundaries
  • Include success criteria and measurable goals
  • Represent significant features or initiatives

User-Facing Epic:
Title: Mobile App Experience
Description: Develop a native mobile application that enables customers to access core services on iOS and Android devices, providing a seamless on-the-go experience.

Success Criteria:
  • App available on iOS and Android app stores
  • 50% of active users adopt mobile app within 6 months
  • Mobile user satisfaction score above 4.5/5

Technical Epic:
Title: Microservices Architecture Migration
Description: Transition from monolithic architecture to microservices to improve scalability, deployment flexibility, and team autonomy.

Success Criteria:
  • Core services decomposed into 8-10 microservices
  • Independent deployment capability for each service
  • System maintains 99.9% uptime during migration

User Story

What is a User Story?

A small, deliverable piece of functionality that delivers value. Stories should be completable within a single sprint and can represent both user-facing features and technical work.

Best Practices:

  • Keep stories concise with clear value proposition
  • Follow INVEST criteria (Independent, Negotiable, Valuable, Estimable, Small, Testable)
  • Include clear acceptance criteria
  • Keep them small enough for one sprint
  • Focus on outcomes and value delivered

User-Facing Story:
Title: Password Reset via Email
Description: Enable users to reset their password through email verification to regain account access when forgotten.

Acceptance Criteria:
  • User receives reset email within 5 minutes
  • Reset link expires after 24 hours
  • Password must meet security requirements
  • User is notified of successful password change

Technical Story:
Title: Migrate Database to PostgreSQL 15
Description: Upgrade the production database from PostgreSQL 13 to PostgreSQL 15 to gain performance improvements and security updates.

Acceptance Criteria:
  • Zero data loss during migration
  • All existing queries function correctly
  • Performance benchmarks meet or exceed current levels
  • Rollback plan tested and documented

Task

What is a Task?

Technical work items that break down a story into actionable steps. Tasks represent the "how" of implementing a story.

Best Practices:

  • Be specific and actionable
  • Keep scope manageable and focused
  • Assign to a single person
  • Include technical details and approach
  • Can be created during sprint planning or execution

Examples:
  • Create password reset API endpoint
  • Design email template for reset link
  • Implement token generation and validation
  • Write unit tests for password reset flow

Sub-task

What is a Sub-task?

Very small, specific pieces of work that break down a task further. Used when a task needs additional detail or tracking.

Best Practices:

  • Extremely specific and granular
  • Keep scope very small and focused
  • Use when task complexity requires it
  • Keep the hierarchy shallow (avoid over-nesting)

Examples for "Create password reset API endpoint":
  • Set up route handler
  • Add input validation
  • Integrate with email service
  • Add error handling

Work Item Hierarchy

  • Epic: Large initiative
  • Story: Deliverable unit
  • Task: Implementation step
  • Sub-task: Granular step

Technical Documentation: PRD & RFC

Product Requirement Documents (PRDs) and Requests for Comments (RFCs) are essential tools for aligning teams, documenting decisions, and ensuring quality in product and technical development.

PRD (Product Requirement Document)

What is a PRD?

A comprehensive document that defines the product requirements, user needs, success metrics, and scope for a feature or initiative. PRDs are typically written before development begins and serve as the source of truth for what will be built.

When to Write a PRD:

  • Before starting work on significant new features
  • When launching new products or major initiatives
  • For features requiring cross-team coordination
  • When the scope and requirements need formal documentation

Best Practices:

  • Start with the problem: Clearly articulate the user problem or business need being addressed
  • Define success metrics: Establish measurable goals and KPIs upfront
  • Be specific about scope: Clearly state what's in scope and what's explicitly out of scope
  • Include user stories: Provide concrete examples of how users will interact with the feature
  • Add mockups/wireframes: Visual representations help align understanding
  • Document assumptions: List key assumptions and dependencies
  • Consider edge cases: Think through error states and boundary conditions
  • Keep it living: Update the PRD as requirements evolve

Typical PRD Structure:

1. Overview & Context: Background, problem statement, and goals
2. User Personas & Stories: Target users and their use cases
3. Requirements: Functional and non-functional requirements
4. Success Metrics: KPIs and measurement criteria
5. Design & Mockups: Visual representation of the solution
6. Out of Scope: What will NOT be included
7. Timeline & Dependencies: Key milestones and blockers

RFC (Request for Comments)

What is an RFC?

A technical design document that proposes a solution, architecture, or approach for solving a problem. RFCs invite feedback and discussion from the team before implementation, promoting collaborative decision-making and knowledge sharing.

When to Write an RFC:

  • Before making significant architectural decisions
  • When introducing new technologies or frameworks
  • For changes that impact multiple teams or systems
  • When multiple solution approaches are possible
  • To document and socialize technical decisions

Best Practices:

  • State the problem clearly: Explain what problem you're solving and why it matters
  • Propose concrete solutions: Present one or more specific approaches with trade-offs
  • Compare alternatives: Evaluate different options objectively
  • Include diagrams: Architecture diagrams, flow charts, and sequence diagrams clarify complex concepts
  • Address concerns proactively: Anticipate questions about performance, security, scalability
  • Set a review timeline: Give reviewers a clear deadline for feedback
  • Invite diverse perspectives: Seek input from different roles and teams
  • Document the decision: Record what was decided and why

Typical RFC Structure:

1. Summary: High-level overview in 2-3 sentences
2. Problem Statement: What problem are we solving and why
3. Proposed Solution: Detailed technical approach
4. Alternative Approaches: Other options considered and why not chosen
5. Trade-offs & Risks: Downsides and mitigation strategies
6. Implementation Plan: Rollout strategy and phases
7. Open Questions: Areas where feedback is specifically needed

PRD vs RFC: When to Use Each

PRD

  • Focus: What to build and why
  • Audience: Product managers, stakeholders, entire team
  • Perspective: User and business needs
  • Detail Level: Requirements and expected behavior
  • Example: "Build a two-factor authentication feature"

RFC

  • Focus: How to build it technically
  • Audience: Engineers, architects, technical teams
  • Perspective: Technical implementation
  • Detail Level: Architecture, systems, technologies
  • Example: "Use TOTP protocol for 2FA implementation"

Tip: Many initiatives benefit from both a PRD (defining requirements) and an RFC (defining technical approach)

Continuous Improvement Metrics

Measuring the right metrics helps teams identify opportunities for improvement, track progress, and make data-driven decisions. These metrics provide visibility into delivery speed, quality, and flow efficiency to drive continuous improvement.

DORA Metrics

What are DORA Metrics?

DORA (DevOps Research and Assessment) metrics are four key measures that indicate the performance of software delivery teams. Research shows these metrics are strong predictors of organizational performance.

Deployment Frequency

How often does your team deploy code to production?

  • Elite: Multiple deployments per day
  • High: Between once per day and once per week
  • Medium: Between once per week and once per month
  • Low: Less than once per month

Lead Time for Changes

How long does it take from code commit to code running in production?

  • Elite: Less than one hour
  • High: Between one day and one week
  • Medium: Between one week and one month
  • Low: More than one month

Change Failure Rate

What percentage of deployments cause a failure in production requiring a hotfix or rollback?

  • Elite: 0-15%
  • High: 16-30%
  • Medium: 31-45%
  • Low: 46-100%

Time to Restore Service

How long does it take to recover from a failure in production?

  • Elite: Less than one hour
  • High: Less than one day
  • Medium: Between one day and one week
  • Low: More than one week

Why These Matter: DORA metrics measure both throughput (deployment frequency, lead time) and stability (change failure rate, time to restore). High-performing teams excel at both, showing that speed and stability are not trade-offs.
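
As an illustration, two of the tiers above can be expressed as small classifier functions. The thresholds mirror the tables in this section; the function names and the deploys-per-month approximation of "multiple per day" are our own, not part of any official DORA tooling.

```python
def deployment_frequency_tier(deploys_per_month: float) -> str:
    """Map deployment frequency to a DORA tier (approximated per month)."""
    if deploys_per_month > 30:    # multiple deployments per day
        return "Elite"
    if deploys_per_month >= 4:    # between once per day and once per week
        return "High"
    if deploys_per_month >= 1:    # between once per week and once per month
        return "Medium"
    return "Low"                  # less than once per month


def change_failure_rate_tier(failed: int, total: int) -> str:
    """Map the percentage of failed deployments to a DORA tier."""
    rate = 100 * failed / total
    if rate <= 15:
        return "Elite"
    if rate <= 30:
        return "High"
    if rate <= 45:
        return "Medium"
    return "Low"


print(deployment_frequency_tier(60))    # Elite: roughly 2 deploys per day
print(change_failure_rate_tier(2, 20))  # Elite: 10% failure rate
```

In practice these numbers come from your deployment pipeline and incident tracker, not manual counts.
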

Theory of Constraints Metrics

What is Theory of Constraints?

Theory of Constraints (ToC) focuses on identifying and managing the bottleneck (constraint) in your system. The constraint determines the maximum throughput of your entire delivery process.

Key Principles:

  • Every system has at least one constraint limiting its performance
  • Improving anything other than the constraint is an illusion
  • The constraint determines overall system throughput

Throughput

The rate at which the system generates value (e.g., completed stories per sprint, features shipped per month).

Example: If your team completes 20 story points per sprint consistently, your throughput is 20 points/sprint.

Work In Progress (WIP)

The number of items currently being worked on. High WIP often indicates a constraint or bottleneck.

Watch for: Work piling up before certain stages (e.g., code review, QA, deployment) indicates a constraint at that stage.

Constraint Utilization

How effectively is the bottleneck being used? Any idle time at the constraint wastes total system capacity.

Goal: Keep the constraint at 100% utilization. Protect it from starvation by ensuring upstream work is always ready.

How to Apply: Identify your constraint (where work queues up), ensure it's never idle, subordinate everything else to the constraint, elevate the constraint's capacity, and repeat as the constraint moves.
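
The "watch for work piling up" heuristic above is easy to automate if you track WIP per stage. A minimal sketch; the stage names and counts are invented for illustration:

```python
def find_constraint(wip_by_stage: dict) -> str:
    """Return the stage with the most work piling up -- the likely constraint."""
    return max(wip_by_stage, key=wip_by_stage.get)


wip = {"In Progress": 3, "Code Review": 9, "QA": 2, "Deploy": 1}
print(find_constraint(wip))  # Code Review
```

In practice you would look at queue sizes over time rather than a single snapshot, since a one-off spike is not necessarily a constraint.
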

Kanban Flow Metrics

What are Kanban Metrics?

Kanban metrics focus on flow efficiency and predictability. They help teams understand how work moves through their system and identify opportunities for improvement.

Cycle Time

The total time from when work starts until it's completed and delivered. This is your primary measure of speed.

Measure: From "In Progress" to "Done"
Track: Average, median, and percentiles (85th, 95th)

Lead Time

The total time from when a request is made until it's delivered. Includes time waiting to start.

Measure: From "Backlog" to "Done"
Customer perspective: This is what users experience

Work In Progress (WIP)

The number of items actively being worked on. Lower WIP typically leads to faster cycle times.

Little's Law: Average Cycle Time = WIP / Throughput
Action: Set WIP limits per column to reduce context switching

Throughput

The number of items completed per unit of time. Measure of delivery rate.

Track: Items per week/sprint
Use for: Capacity planning and forecasting

Flow Efficiency

The percentage of time work is actively being worked on vs. waiting. Most teams have 10-20% flow efficiency.

Formula: (Active Time / Total Lead Time) × 100
Improve by: Reducing wait times, handoffs, and blockers

Blocked Time

How much time items spend blocked or waiting on external dependencies.

Track: Number of blocked items, average block duration
Use to: Identify systemic issues and external dependencies

Cumulative Flow Diagram (CFD): Visualize WIP, throughput, and cycle time trends over time. A stable CFD shows consistent flow; expanding bands indicate growing WIP or bottlenecks.
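
Little's Law and the flow-efficiency formula above take only a few lines to compute; the numbers here are invented for illustration:

```python
def avg_cycle_time(wip: float, throughput: float) -> float:
    """Little's Law: average cycle time = WIP / throughput."""
    return wip / throughput


def flow_efficiency(active_time: float, total_lead_time: float) -> float:
    """Percentage of lead time spent actively working on an item."""
    return 100 * active_time / total_lead_time


# 12 items in progress, 4 completed per week -> 3-week average cycle time
print(avg_cycle_time(wip=12, throughput=4))                # 3.0
# 2 active days inside a 10-day lead time -> 20% flow efficiency
print(flow_efficiency(active_time=2, total_lead_time=10))  # 20.0
```

Note how Little's Law makes the case for WIP limits directly: at constant throughput, halving WIP halves average cycle time.
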

Sprint Metrics

What are Sprint Metrics?

Sprint metrics help teams plan capacity, track delivery consistency, and improve estimation accuracy. These metrics support velocity-based planning and predictable delivery.

Velocity

The average number of story points completed per sprint. Used for capacity planning and forecasting.

Calculate: Sum story points completed over last 3-5 sprints, then divide by number of sprints
Use for: Planning how much work to commit to in upcoming sprints
Note: Velocity is team-specific and should never be compared across teams

Commitment vs Completion

Comparison between story points committed at sprint planning vs actually completed by sprint end.

Ideal: 90-100% completion rate consistently
If consistently low: Team is over-committing or stories are poorly estimated
If consistently 100%: Team may be under-committing and could take on more
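
Both velocity and the completion rate above are simple averages; a sketch with invented sprint data:

```python
def velocity(points_per_sprint: list) -> float:
    """Average story points completed over the last few sprints."""
    return sum(points_per_sprint) / len(points_per_sprint)


def completion_rate(committed: float, completed: float) -> float:
    """Percentage of committed story points actually completed."""
    return 100 * completed / committed


print(velocity([18, 22, 20, 19, 21]))     # 20.0
print(round(completion_rate(22, 20), 1))  # 90.9
```

A team with this history would plan around 20 points for the next sprint, not the 22 it optimistically committed last time.
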

Sprint Goal Success Rate

Percentage of sprints where the sprint goal was achieved, regardless of all stories being completed.

Track: Did we achieve what we set out to accomplish?
Why it matters: Focuses on outcomes over output
Target: 80%+ sprint goal achievement

Story Point Distribution

Breakdown of completed stories by size (e.g., 1, 2, 3, 5, 8, 13 points).

Watch for: Too many large stories may indicate insufficient refinement
Healthy mix: Majority of stories should be small to medium (1-5 points)
Action: Break down 8+ point stories during refinement

Scope Change

Number of stories added or removed mid-sprint after planning.

Track: How often does scope change during the sprint?
Frequent changes indicate: Poor planning, unclear priorities, or external disruptions
Goal: Minimize scope changes to maintain team focus

Estimation Accuracy

How closely actual effort matches estimated story points over time.

Improve through: Regular refinement sessions and retrospective discussions
Watch for: Consistently over or under-estimated story sizes
Remember: Story points are relative sizing, not precise time estimates

Sprint Burndown

Visual chart showing remaining work (story points) throughout the sprint.

Ideal pattern: Steady downward slope from sprint start to end
Flat line: Work isn't being completed, may indicate blockers
Spike up: Scope was added mid-sprint
Use daily: Quick visual check during standup
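
The flat-line and spike patterns described above can be flagged automatically from daily remaining-points readings; a toy sketch with invented data:

```python
def burndown_signals(remaining: list) -> list:
    """Flag days where remaining points stayed flat or went up."""
    signals = []
    for day, (prev, curr) in enumerate(zip(remaining, remaining[1:]), start=1):
        if curr > prev:
            signals.append(f"day {day}: scope added (+{curr - prev} points)")
        elif curr == prev:
            signals.append(f"day {day}: flat line, check for blockers")
    return signals


print(burndown_signals([30, 26, 26, 31, 24]))
# ['day 2: flat line, check for blockers', 'day 3: scope added (+5 points)']
```

Either signal is a prompt for a standup conversation, not an automatic problem: a flat day may simply mean work is in progress but not yet done.
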

Velocity Best Practices: Don't use velocity as a performance metric or to compare teams. Velocity naturally varies as team composition, technology, and story point calibration differ. Use it solely for the team's own capacity planning and to identify trends in their delivery patterns.

Using Metrics Together

📊 Plan Capacity

Use Sprint metrics (velocity, commitment) to plan realistic sprint commitments based on team capacity and historical performance.

🎯 Understand Flow

Use Kanban metrics to understand your delivery flow, identify where work gets stuck, and establish baseline cycle times.

🔍 Find Constraints

Apply Theory of Constraints thinking to identify your bottleneck and focus improvement efforts where they'll have the most impact.

📈 Measure Outcomes

Track DORA metrics to measure overall DevOps performance and ensure process improvements translate to better delivery outcomes.

🔄 Iterate Continuously

Review all metrics regularly in retrospectives. Use data to drive experiments and validate that changes actually improve performance.

Remember: Metrics are tools for learning and improvement, not goals in themselves. Focus on trends and continuous improvement rather than hitting specific numbers. Never use velocity or story points to compare teams or measure individual performance.