
Metrics for Scrum Teams



Photo by Luke Chesser on Unsplash

1. Velocity

What: Total story points for work finished and accepted.

How to calculate:

  • Add up the story points for tasks marked “Done” and approved by the Product Owner.

Example:

  • 3 tasks were finished: 5 pts, 8 pts, and 3 pts.

  • Velocity = 5 + 8 + 3 = 16
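
If you keep the accepted story points in a list, the calculation is a one-line sum. A minimal Python sketch (the variable names are just illustrative):

# Story points for items marked "Done" and accepted by the Product Owner
accepted_points = [5, 8, 3]
velocity = sum(accepted_points)
print(velocity)  # 16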


2. Work Capacity

What: Total effort the team puts in (even if not all work is finished).

How to calculate:

  • Add up story point "votes" each day during Daily Stand-Ups.

Example:

  • Mon: 5 pts, Tue: 6 pts, Wed: 4 pts, Thu: 3 pts, Fri: 2 pts

  • Work Capacity = 5 + 6 + 4 + 3 + 2 = 20
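
The same one-line sum works here, just over the daily votes instead (again, names are illustrative):

# Story point "votes" reported at each Daily Stand-Up, Mon through Fri
daily_votes = [5, 6, 4, 3, 2]
work_capacity = sum(daily_votes)
print(work_capacity)  # 20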


3. Focus Factor

What: % of effort that became real value.

How to calculate:

  • Focus Factor = Velocity ÷ Work Capacity

Example:

  • Velocity = 16, Work Capacity = 20

  • Focus Factor = 16 ÷ 20 = 0.8 or 80%
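
Building on the two numbers above, a quick Python check:

velocity = 16
work_capacity = 20
focus_factor = velocity / work_capacity
print(f"{focus_factor:.0%}")  # 80%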


4. Adopted Work

What: Work added after Sprint Planning.

How to calculate:

  • Add up story points for the work added mid-Sprint.

  • Divide by the original Sprint plan (Forecast).

Example:

  • Forecast = 20 pts, Adopted mid-Sprint = 5 pts

  • Adopted Work = 5 ÷ 20 = 0.25 or 25%
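
In Python, with the same numbers:

forecast = 20   # points planned at Sprint Planning
adopted = 5     # points added mid-Sprint
adopted_work = adopted / forecast
print(f"{adopted_work:.0%}")  # 25%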


5. Found Work

What: Hidden work discovered during the Sprint.

How to calculate:

  • Add up extra points needed (above the original estimate).

  • Divide by the original Forecast.

Example:

  • Forecast = 20 pts

  • A task was estimated at 5 pts but took 9 pts → 4 extra

  • Total Found Work = 4 pts

  • Found Work = 4 ÷ 20 = 0.2 or 20%
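
A small sketch that collects the overage per task; the (estimated, actual) pairs are assumed inputs from your tracking tool:

forecast = 20
# (estimated, actual) points per task; only overages count as Found Work
tasks = [(5, 9)]
found_points = sum(max(actual - estimated, 0) for estimated, actual in tasks)
found_work = found_points / forecast
print(found_points, f"{found_work:.0%}")  # 4 20%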


6. Velocity Increase

What: How much faster the team is now.

How to calculate:

  • Velocity Increase = Current Velocity ÷ Original Velocity

Example:

  • Sprint 1 Velocity = 10

  • Sprint 4 Velocity = 20

  • Velocity Increase = 20 ÷ 10 = 2x or 200%
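
The same division in Python:

original_velocity = 10   # Sprint 1
current_velocity = 20    # Sprint 4
velocity_increase = current_velocity / original_velocity
print(f"{velocity_increase:g}x")  # 2x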

7. Estimation Accuracy

What: How close your estimates are to reality.

How to calculate (easy version):

  • Compare estimated vs. actual for each task.

  • If most are close, your accuracy is good.

Example (simplified):

  • Estimated 5 pts → actual 6 pts → off by 1

  • Estimated 3 pts → actual 4 pts → off by 1

  • Estimated 8 pts → actual 8 pts → spot on

  • Pretty good accuracy.
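
One simple way to put a number on "pretty good" is the average absolute miss per task; a Python sketch of that easy version:

# (estimated, actual) points per task
tasks = [(5, 6), (3, 4), (8, 8)]
errors = [abs(actual - estimated) for estimated, actual in tasks]
print(errors)                     # [1, 1, 0]
print(sum(errors) / len(errors))  # average miss: ~0.67 points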


8. Forecast Accuracy

What: Did we finish what we planned?

How to calculate:

  • Forecast Accuracy = Forecast ÷ (Forecast + Adopted + Found)

Example:

  • Forecast = 20 pts, Adopted = 5, Found = 5

  • Forecast Accuracy = 20 ÷ (20 + 5 + 5) = 20 ÷ 30 = 66.7%
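
In Python:

forecast, adopted, found = 20, 5, 5
forecast_accuracy = forecast / (forecast + adopted + found)
print(f"{forecast_accuracy:.1%}")  # 66.7%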


9. Happiness Metric

What: How happy is your team?

How to calculate:

  • Each team member gives a score (1–5) for:

    1. How happy they are with their job.

    2. How happy they are with the company.

  • Average the scores across the team.

Example:

  • Team of 4 scores their job: 4, 5, 3, 4 → (4+5+3+4)/4 = 4.0

  • Team of 4 scores the company: 3, 4, 4, 5 → (3+4+4+5)/4 = 4.0

  • Result: Job = 4.0, Company = 4.0
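
Averaging the two score lists in Python (the lists stand in for whatever survey tool you actually use):

job_scores = [4, 5, 3, 4]
company_scores = [3, 4, 4, 5]
job_avg = sum(job_scores) / len(job_scores)              # 4.0
company_avg = sum(company_scores) / len(company_scores)  # 4.0
print(job_avg, company_avg)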


10. Kaizen Investment

What: How much effort do we spend improving ourselves?

How to calculate:

  • Kaizen Investment = Work spent on improvements ÷ Total Work Capacity

Example:

  • Work Capacity = 20 story points

  • Work on improvement (e.g. automation, retro actions) = 4 points

  • Kaizen Investment = 4 ÷ 20 = 0.2 or 20%
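
As a quick Python check:

work_capacity = 20
improvement_points = 4   # e.g. automation, retrospective action items
kaizen_investment = improvement_points / work_capacity
print(f"{kaizen_investment:.0%}")  # 20%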


11. Honoring Priority

What: Did we work in the right order?

How to calculate (simple version):

  • Count how many tasks were started out of order from the Sprint priority list.

  • Divide by total possible priority conflicts.

Example:

  • 5 tasks in Sprint backlog.

  • Team worked on task #3 before #1 → 1 priority violation

  • Total possible pairings: 5 tasks → 5 × (5 - 1) = 20

  • Honoring Priority = 1 ÷ 20 = 0.05 or 5% out-of-order

A lower percentage = better honoring of priority!
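
A sketch of the simple version in Python; counting the actual violations from your board is left as an assumed input:

violations = 1   # e.g. task #3 was started before task #1
n_tasks = 5      # tasks in the Sprint backlog
possible_pairings = n_tasks * (n_tasks - 1)   # 5 × 4 = 20
honoring_priority = violations / possible_pairings
print(f"{honoring_priority:.0%} out of order")  # 5% out of order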


12. Success at Scale

What: How well do we finish big user stories?

How to calculate:

  • Success at Scale = Big items completed ÷ Big items started

Example:

  • Started 4 tasks with 13+ points each (large stories)

  • Completed 3 of them

  • Success at Scale = 3 ÷ 4 = 0.75 or 75%
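
In Python:

big_started = 4     # stories of 13+ points begun this Sprint
big_completed = 3
success_at_scale = big_completed / big_started
print(f"{success_at_scale:.0%}")  # 75%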


13. Customer Happiness

What: How happy are your customers with the product or service delivered by the team?

How to calculate:

  • Each customer gives a score (1–5) for:

    1. How satisfied they are with the product or increment delivered.

    2. How well the product meets their expectations or solves their problem.

  • Average the scores across all customers who give feedback.

Example:

  • 4 customers rate product satisfaction: 4, 5, 3, 4 → (4+5+3+4)/4 = 4.0

  • 4 customers rate expectation fit: 3, 4, 4, 5 → (3+4+4+5)/4 = 4.0
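
This mirrors the team Happiness Metric above; the same averaging in Python:

satisfaction = [4, 5, 3, 4]
expectation_fit = [3, 4, 4, 5]
print(sum(satisfaction) / len(satisfaction))        # 4.0
print(sum(expectation_fit) / len(expectation_fit))  # 4.0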


Reference: S. Downey and J. Sutherland, “Scrum metrics for hyperproductive teams: How they fly like fighter aircraft,” 46th Hawaii International Conference on System Sciences, pp. 4870–4878, 2013.

 
 
 
