This paper critiques the current state of software metrics, arguing that their limited practical use stems from a lack of grounding in metrology principles. It highlights the disconnect between existing metrics and the decision-making needs of software engineers, such as refactoring or test sufficiency. The authors propose that future software metrics research should prioritize metrological rigor to enhance the utility and relevance of these measurements.
Software metrics are failing engineers because they're built on shaky measurement science, not real-world decisions.
Most engineers use measurements to make decisions. However, measurements are rarely used for decisions about constructing software products. While many approaches to measuring attributes of software (``metrics'') have been developed, they are rarely used to answer useful questions such as ``Do I need to refactor this class?'' or ``Are these integration tests sufficient?'' Practitioners therefore question the value of software metrics. We argue that this situation arose because software metrics were developed without understanding metrology (the science of measurement) and suggest directions software metrics research should take.