
ambiguity-score

Ambiguity Score Calculation — Measure spec clarity through 5-dimension quantitative evaluation

| Item | Content |
| --- | --- |
| Invoke | /spec:ambiguity |
| Category | spec-clarity |
| Complexity | medium |

/spec:ambiguity#

Quantitatively evaluates the Seed spec's ambiguity across 5 dimensions and calculates a score.

Usage#

/spec:ambiguity                      # Evaluate the latest Seed spec
/spec:ambiguity --file {path}        # Evaluate a specific file
/spec:ambiguity --verbose            # Include detailed per-dimension analysis

Parameters#

| Parameter | Description | Default |
| --- | --- | --- |
| --file | Target file path | Latest docs/seed-spec-*.md |
| --verbose | Detailed analysis output | false |

Execution Flow#

1. Load Target File#

  • Read Seed spec file
  • Parse each section

2. 5-Dimension Evaluation#

Evaluate each dimension per AMBIGUITY_RUBRIC.md:

2a. Lexical Precision (weight 0.25)

  • Scan for prohibited words ("fast", "appropriate", "etc.", "if needed" ...)
  • Check whether specific values/criteria have been substituted
  • Score: 0.0 (0 prohibited words) ~ 1.0 (pervasively ambiguous)
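The prohibited-word scan above can be sketched as follows. The word list and the hit-density cap are illustrative assumptions; the authoritative list lives in AMBIGUITY_RUBRIC.md:

```python
# Hypothetical subset of the rubric's prohibited-word list.
PROHIBITED = ["fast", "appropriate", "etc.", "if needed"]

def lexical_precision_score(text: str) -> float:
    """Return 0.0 (no prohibited words) .. 1.0 (pervasively ambiguous)."""
    words = text.split()
    if not words:
        return 0.0
    # Naive substring count; a production scanner would use word boundaries.
    hits = sum(text.lower().count(term) for term in PROHIBITED)
    # Assumed normalization: 1 hit per 20 words counts as fully ambiguous.
    return min(1.0, hits / max(1.0, len(words) / 20))

print(lexical_precision_score("Respond fast and retry if needed."))  # 1.0
print(lexical_precision_score("Respond within 200 ms."))             # 0.0
```

The density normalization keeps a single stray "fast" in a long spec from saturating the score.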

2b. Reference Clarity (weight 0.20)

  • Scan for pronouns/demonstratives ("this", "that", "said")
  • Check whether abbreviations are defined
  • Check for implicit subjects ("is processed" → missing agent)
  • Score: 0.0 (all references clear) ~ 1.0 (key references unclear)

2c. Completeness (weight 0.20)

  • Check whether AC (acceptance criteria) exist for all features
  • Check whether edge cases are identified
  • Check whether error scenarios are defined
  • Score: 0.0 (all complete) ~ 1.0 (most missing)

2d. Consistency (weight 0.20)

  • Check terminology unity
  • Check for numeric/logical contradictions between sections
  • Check priority consistency
  • Score: 0.0 (fully consistent) ~ 1.0 (severe contradictions)
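One of the consistency checks, numeric contradictions between sections, can be sketched as below. The `term = value` pattern is an assumption for illustration; the rubric's real check also covers logical and priority contradictions:

```python
import re
from collections import defaultdict

def numeric_contradictions(sections: dict[str, str]) -> list[str]:
    """Flag terms assigned different numeric values in different sections,
    e.g. 'timeout = 30' in one section vs 'timeout = 60' in another."""
    seen = defaultdict(set)  # term -> {(section, value), ...}
    pattern = re.compile(r"(\w+)\s*(?:=|:)\s*(\d+(?:\.\d+)?)")
    for name, text in sections.items():
        for term, value in pattern.findall(text):
            seen[term.lower()].add((name, value))
    # A term is contradictory if it carries more than one distinct value.
    return [t for t, pairs in seen.items() if len({v for _, v in pairs}) > 1]

spec = {
    "api": "timeout = 30 seconds per request",
    "ops": "timeout = 60 for all upstream calls",
}
print(numeric_contradictions(spec))  # ['timeout']
```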

2e. Testability (weight 0.15)

  • Check whether each requirement has Pass/Fail criteria
  • Check automation feasibility
  • Check reproducibility
  • Score: 0.0 (all testable) ~ 1.0 (most require subjective judgment)

3. Final Score Calculation#

score = (lexical × 0.25) + (reference × 0.20) + (completeness × 0.20)
      + (consistency × 0.20) + (testability × 0.15)
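The weighted sum above translates directly to code (weights taken from the dimension list in this document):

```python
# Per-dimension weights; they must sum to 1.0.
WEIGHTS = {
    "lexical": 0.25,
    "reference": 0.20,
    "completeness": 0.20,
    "consistency": 0.20,
    "testability": 0.15,
}

def ambiguity_score(dims: dict[str, float]) -> float:
    """Weighted sum of the five per-dimension scores (each in 0.0..1.0)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[k] * dims[k] for k in WEIGHTS), 4)

print(ambiguity_score({
    "lexical": 0.2, "reference": 0.1, "completeness": 0.3,
    "consistency": 0.0, "testability": 0.4,
}))  # 0.19
```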

4. Verdict#

| Grade | Score | Result |
| --- | --- | --- |
| CLEAR | <= 0.2 | Pass |
| WARNING | 0.2 ~ 0.3 | Conditional pass; improvement recommendations provided |
| AMBIGUOUS | 0.3 ~ 0.5 | Fail; required fix items presented |
| UNCLEAR | > 0.5 | Reject; rewrite required |
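The verdict table maps to a simple threshold chain. The cutoffs below mirror the table; the actual values are configured in config/thresholds.yaml:

```python
def verdict(score: float) -> tuple[str, str]:
    """Map a final ambiguity score to (grade, result) per the verdict table."""
    if score <= 0.2:
        return "CLEAR", "Pass"
    if score <= 0.3:
        return "WARNING", "Conditional"
    if score <= 0.5:
        return "AMBIGUOUS", "Fail"
    return "UNCLEAR", "Reject"

print(verdict(0.19))  # ('CLEAR', 'Pass')
```

Note that the boundaries are closed on the upper end of each band, matching CLEAR's "<= 0.2" and UNCLEAR's "> 0.5".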

5. Seed Spec Update#

  • Update the ambiguity score in metadata

Output#

Evaluation results are printed to the console. With --verbose, a detailed per-dimension analysis is included, and the ambiguity score in the Seed spec's metadata is updated.

Related Commands#

  • /spec:seed — Create Seed spec
  • /spec:verify — Verification + lock
  • /spec:evaluate — 3-stage integrated evaluation

References#

  • references/AMBIGUITY_RUBRIC.md — Detailed 5-dimension rubric
  • config/thresholds.yaml — Threshold settings