Team competitions

Run retrospectives on visible Jira work, not guesswork.

SeeCodes turns recent Jira-visible activity into a retrospective board that highlights contributors, solved tasks, and a weighted relative-effort score across logic, architecture, and UI/spec signals.
Current + last sprint • Solved task board • Visibility-aware retrospective

A retrospective board that stays grounded in Jira visibility

The current Team Competition view combines scoped telemetry, visible issue metadata, and contributor sorting into a single Jira-native retrospective page.

Retrospective comparison, not surveillance

The current app explicitly frames effort as a subjective, productivity-style composite for healthy retrospectives, not as strict time tracking or payroll logic.

Team Competition

AI-generated retrospective board
Time scope: Current + Last Sprint • Last Month • Last Half Year
Sort: Effort • Refresh Metrics

  • Visible contributors: 3 (Jira visibility enforced)
  • Visible solved tasks: 11 (Done tasks in scope)
  • Relative effort produced: 764 (composite effort units in scope)
  • Monthly baseline: 100 (typical visible completed task)

“Effort” is a productivity-oriented composite of active minutes, changed LOC, files changed, and architecture / logic / UI-specification signals.

Raw effort = active minutes × 1.8 + LOC changed × 0.18 + files changed × 3, plus bonuses for architecture (+24), logic (+14), and UI / specification (+10). A relative score of 100 means roughly “typical for this month.”

Visible tasks in scope: 22. Hidden by Jira visibility: 3. Activity records in scope: 144. Relative task effort is normalized so that 100 behaves like a monthly index baseline, not like an hour target or a payroll number.
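
As a concrete illustration, here is a minimal TypeScript sketch of the scoring shape described above, applied to PLAT-221 from the solved task board below. The interface and function names are illustrative assumptions, not the app's API; the coefficients are the ones quoted on this page.

// Minimal sketch of the published scoring shape; names are illustrative.
interface TaskSignals {
  activeMinutes: number;
  locChanged: number;
  filesChanged: number;
  architecture: boolean;
  logic: boolean;
  uiSpec: boolean;
}

function rawEffort(t: TaskSignals): number {
  return (
    t.activeMinutes * 1.8 +      // focused-work component
    t.locChanged * 0.18 +        // change magnitude (churn)
    t.filesChanged * 3 +         // diffusion across files
    (t.architecture ? 24 : 0) +  // semantic bonuses
    (t.logic ? 14 : 0) +
    (t.uiSpec ? 10 : 0)
  );
}

// PLAT-221 from the solved task board below: 23 active minutes, 78 LOC changed,
// 3 files changed, architecture + logic signals.
const plat221 = rawEffort({
  activeMinutes: 23,
  locChanged: 78,
  filesChanged: 3,
  architecture: true,
  logic: true,
  uiSpec: false,
});
// 23 × 1.8 + 78 × 0.18 + 3 × 3 + 24 + 14 ≈ 102.4, matching the board's "Raw 102.4".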

Competition Board

Sort by relative effort, solved tasks, logic, architecture, UI/spec, or contributor name.

#1

Avery Chen

5 solved • 7 contributed tasks

Dominant areas: Architecture • Logic
Team share: 41%

Relative effort 312 • Active minutes 184 • LOC changed 1210 • Files changed 47 • Avg solved-task effort 128

#2

Priya Shah

4 solved • 6 contributed tasks

Dominant areas: Logic • UI / Spec
Team share: 33%

Relative effort 254 • Active minutes 162 • LOC changed 990 • Files changed 39 • Avg solved-task effort 117

#3

Jordan Miles

3 solved • 5 contributed tasks

Dominant areas: UI / Spec • Logic
Team share: 26%

Relative effort 198 • Active minutes 141 • LOC changed 740 • Files changed 31 • Avg solved-task effort 109

Solved Task Board

Top solved tasks in scope with relative effort and transparent raw-effort drivers.

PLAT-221

Rework auth token rotation

Effort 142 • Raw 102.4 • 3 contributors • Story
Architecture +24 • Logic +14

Assignee: Avery Chen • 23 active min • 78 LOC changed • 3 files changed

PAY-84

Stabilize billing guardrails

Effort 124 • Raw 89.6 • 2 contributors • Bug
Logic +14 • UI / Spec +10

Assignee: Priya Shah • 22 active min • 78 LOC changed • 4 files changed

WEB-97

Refine logout and error messaging

Effort 108 • Raw 78.2 • 2 contributors • Task
Logic +14 • UI / Spec +10

Assignee: Jordan Miles • 19 active min • 61 LOC changed • 3 files changed

How the page reads effectiveness from effort

Strictly speaking, the formula measures relative effort. The page becomes a useful effectiveness view only when that score is read next to solved work and average solved-task effort.

Important nuance: the formula is an effort proxy

On this page, what people casually call “effectiveness” is really a combination of relative effort, solved work, and task difficulty. The formula itself is intentionally subjective and is designed to support healthier retrospectives, not to claim a universal truth about individual productivity.
Current scoring logic
Raw effort = active minutes × 1.8
           + LOC changed × 0.18
           + files changed × 3
           + (architecture signal ? 24 : 0)
           + (logic signal ? 14 : 0)
           + (UI / spec signal ? 10 : 0)

Relative score ≈ raw effort / monthly typical raw effort × 100
100 ≈ "typical for this month"

The strongest scientific case is for the choice of inputs and for the monthly normalization step. The exact coefficients are still product heuristics chosen so that time, churn, diffusion, and semantically heavier work all stay visible in one interpretable index.
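
To make the normalization step tangible, here is one possible reading as a TypeScript sketch. The page does not pin down how "typical" is computed, so using the median raw effort of the month's visible completed tasks is an assumption for illustration, not the app's rule.

// Assumption for illustration: "typical" = median raw effort of the month's
// visible completed tasks. The real aggregation may differ.
function monthlyTypicalRawEffort(rawEfforts: number[]): number {
  const sorted = [...rawEfforts].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

function relativeScore(raw: number, monthlyTypical: number): number {
  return (raw / monthlyTypical) * 100; // 100 ≈ typical visible completed task this month
}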

How to read a score of 100

  • 100 ≈ a typical visible completed task in the current month.
  • 120 means roughly 20% above that month’s visible baseline, not 20% 'better' than another team in another repository.
  • 60 means lighter than that month’s typical visible completed task, not low value or weak performance.
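
As a worked example from the demo figures above (the baseline is back-calculated from those figures, so treat it as illustrative): PLAT-221 shows raw effort ≈ 102.4 and relative effort 142, which implies a monthly typical raw effort of roughly 102.4 / 1.42 ≈ 72. A task with raw effort 86 in the same month would then read as 86 / 72 × 100 ≈ 119, i.e. about 19% above that month's visible baseline.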

Where “effectiveness” actually shows up

  • Solved count shows whether effort is turning into finished work.
  • Average solved-task effort shows whether someone is closing lighter or heavier completed tasks.
  • Dominant-area labels show whether contribution skewed toward architecture, logic, or UI/spec work.
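
Taken together, those three reads can be rolled up per contributor. Below is a minimal sketch of such a rollup, assuming illustrative field names and a simple most-frequent-area rule; it is not the app's internal logic.

interface SolvedTask {
  assignee: string;
  relativeEffort: number;
  areas: Array<"architecture" | "logic" | "uiSpec">;
}

function contributorSummary(tasks: SolvedTask[], who: string) {
  const mine = tasks.filter((t) => t.assignee === who);
  const solved = mine.length;
  const totalEffort = mine.reduce((sum, t) => sum + t.relativeEffort, 0);

  // Count how often each area label appears across this contributor's solved tasks.
  const areaCounts = new Map<string, number>();
  for (const t of mine) {
    for (const a of t.areas) {
      areaCounts.set(a, (areaCounts.get(a) ?? 0) + 1);
    }
  }
  const dominantAreas = [...areaCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 2)
    .map(([area]) => area);

  return {
    solved,                                                 // is effort turning into finished work?
    avgSolvedTaskEffort: solved ? totalEffort / solved : 0, // lighter vs heavier completed tasks
    dominantAreas,                                          // where contribution skewed
  };
}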

Active minutes × 1.8

Minutes with detected work activity give the score a focused-work component, but the page explicitly frames them as a proxy for retrospective context rather than payroll time.

Changed LOC × 0.18

Code churn captures implementation magnitude. The lower coefficient prevents raw line volume from overpowering every other signal on the board.

Files changed × 3

Cross-file changes usually imply broader reasoning, more coordination, and more places where side effects can appear, so the model gives diffusion visible weight.

Semantic bonuses

Architecture (+24), logic (+14), and UI / specification (+10) bonuses stop the model from pretending that every edit has the same blast radius or product meaning.
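
Put together, these weights mean that two changes with identical churn can score quite differently once diffusion and semantic signals are applied. A small illustration reusing the rawEffort() sketch from earlier; the numbers are made up.

// Same active minutes and LOC changed, different diffusion and semantic signals.
const concentratedUiTweak = rawEffort({
  activeMinutes: 30, locChanged: 200, filesChanged: 2,
  architecture: false, logic: false, uiSpec: true,
}); // 30 × 1.8 + 200 × 0.18 + 2 × 3 + 10 = 106

const diffuseArchitectureChange = rawEffort({
  activeMinutes: 30, locChanged: 200, filesChanged: 9,
  architecture: true, logic: false, uiSpec: false,
}); // 30 × 1.8 + 200 × 0.18 + 9 × 3 + 24 = 141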

Why this formula shape is scientifically defensible

Empirical software-engineering research supports the dimensions of the score more strongly than it supports any single universal coefficient. That is exactly why the page keeps calling the metric subjective.

Research-informed structure; heuristic coefficients

Self-reported productivity is still the gold standard for subjective effectiveness. This board is a low-friction proxy built from observable activity, code churn, diffusion, and semantic change signals, then normalized into a monthly index.

Activity belongs, but never alone

Developer-productivity research consistently treats activity and flow as useful dimensions of work, while warning that activity-only measurement is easy to misuse.

LOC changed is a defensible size signal

Empirical software-engineering research shows that relative code churn is informative for change magnitude and defect risk, which makes changed LOC a reasonable ingredient in a composite score.

Files changed captures diffusion

Just-in-time defect-prediction studies repeatedly find that distributed changes across more files or subsystems are harder to reason about and more likely to be risky.

Architecture deserves extra credit

Architecture and modifiability research shows that semantic transformations and change propagation affect cost beyond raw structural counts, which supports a larger architecture bonus than a UI / spec bonus.

Research anchors behind the rationale

  • SPACE framework: productivity is multidimensional, and activity metrics should not be used in isolation.
  • Relative code churn research: changed LOC becomes useful when interpreted as contextual change magnitude rather than as a vanity metric.
  • Just-in-time quality assurance research: files touched and change diffusion are among the most useful signals for risky software changes.
  • Architecture modifiability research: semantic architectural transformations affect change cost beyond raw structural metrics alone.
  • Computer-activity studies: active minutes can correlate with perceived productivity, but extremes and context still matter.

How to use the score safely

  • Treat the score as a retrospective index, not as a universal truth about individual productivity.
  • Compare contributors only inside similar scope, visibility, and time windows rather than across unrelated months or teams.
  • Read high effort together with solved tasks, risk, and review quality; effort alone is not effectiveness.
  • Never turn the score into payroll, surveillance, or a single-number performance system.

Where teams use it

This page works best as a retrospective and planning aid, especially when a team wants to celebrate contribution without losing nuance.

  • Run retrospectives with a shared view of solved work, contributor mix, and relative task effort.
  • Celebrate specialists and generalists without forcing the conversation into raw ticket counts.
  • Spot work that needed multiple contributors or carried unusually high relative effort.
  • Find follow-up topics for pairing, knowledge sharing, or planning quality in the next sprint.

Best used for retrospectives and coaching

This board is ideal for looking back, spotting patterns, and celebrating contribution. It should support discussion and learning, not replace human judgment with a single score.