Tuesday, March 12, 2019

Software Engineering Metrics: Myths and Realities


INTRODUCTION
For some time now, it has been a difficult task to quantify and present the performance and the profits of purchasing or developing a software product to the layman, who in this case could be a customer or stakeholder who has invested in that software. This is true not only in the business world but also in academic research, where artificial intelligence techniques are making data analysis and prediction faster and easier.

A typical misconception
The metrics are available; the problem lies in how they are used. Different purposes need different metrics, and it is quite difficult to identify the specific metrics that make performance testing fulfil its purpose. To reduce the negative impact of this issue, I have researched ten main software engineering metrics that I found to be common to every process, such as profit analysis, business analysis and the longevity analysis of a product.


These ten metrics can be grouped under major types such as Agile Process Metrics, Production Analytics, Function Metrics, Delivery Metrics and Security Metrics. These major groupings cover each stage of software development, which implies that at each step of the way, at least one metric can be used to help stakeholders know, quantitatively, where their investments stand and where their profits are coming from.

·         Lead time: This is generally how long it takes you to go from idea to delivered software. If you want to be more responsive to your customers, work to reduce your lead time, typically by making sure that decision-making is simple and free of bureaucracy, and by reducing wait time.

·         Cycle time: This measures how long it takes a software developer to make a change to a software system and deliver that change into production. Teams using continuous delivery can have cycle times measured in minutes or even seconds instead of months.

·         Team velocity: This measures how many “units” of software the team typically develops or tries to complete in an iteration (a.k.a. “sprint”). This number should only be used to plan iterations, and it is usually available when using the Scrum system of software development.

·         Open/close rates: This measures how many production issues are reported and closed within a specific time period, and it can also be built into the “sprint” cycle. The general trend matters more than the specific numbers, as it can be represented on graphs and models that make sense to stakeholders more easily.

·         Code churn: This represents the number of lines of code that were modified, added or deleted in a specific time period. If code churn increases, it could be a sign that the software development project needs attention.
These first five metrics are agile process metrics. They do not measure success or value added, and they have nothing to do with the objective quality of the software, but they are still important in their own way for making things better and easier during the process. A high open rate and a low close rate across a few iterations, for example, may mean that production issues are currently a lower priority than new features, that the team is focused on reducing technical debt to fix entire classes of issues, or that there is a personnel shortage. These are all viable reasons that should be discussed with the team and in the Scrum, so that this knowledge can be used to implement better policies for the next sprint cycle.
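As a rough illustration, several of the agile metrics above reduce to simple arithmetic over work-item records. The record fields, dates and counts below are invented for the example, not taken from any real tracker:

```python
from datetime import datetime

# Hypothetical work items; every field name and date here is illustrative.
items = [
    {"idea": "2019-01-02", "dev_start": "2019-01-20", "delivered": "2019-01-25"},
    {"idea": "2019-01-05", "dev_start": "2019-02-01", "delivered": "2019-02-04"},
]

def days_between(start, end):
    """Whole days between two ISO-formatted dates."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Lead time: idea -> delivered.  Cycle time: development start -> delivered.
lead_times = [days_between(i["idea"], i["delivered"]) for i in items]
cycle_times = [days_between(i["dev_start"], i["delivered"]) for i in items]

avg_lead = sum(lead_times) / len(lead_times)      # (23 + 30) / 2 = 26.5 days
avg_cycle = sum(cycle_times) / len(cycle_times)   # (5 + 3) / 2 = 4.0 days

# Open/close rate per iteration: issues closed versus issues reported.
iterations = [{"opened": 12, "closed": 10}, {"opened": 15, "closed": 7}]
close_rates = [it["closed"] / it["opened"] for it in iterations]
```

A real team would pull these records from its issue tracker and version control system rather than hard-coding them; the point is only that each metric is a trend over timestamps and counts the team already has.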

·         Application crash rate: another crucial software engineering metric, calculated by dividing how many times an application fails (F) by how many times it is used (U).
ACR = F/U

·         Mean time between failures (MTBF): This simply means how long a software system can run before experiencing a failure. It is a metric based on prior observations, designating the software’s average time between failures. An MTBF value can be defined by the following equation:
        MTBF = total operational uptime between failures / number of failures
·         Mean time to failure (MTTF): This measures failure rates for a software product or a component of the development. Unlike MTBF, MTTF is only used to designate failure rates for replaceable (non-repairable) items, such as keyboards and motherboards. MTTF formulas generally use the same equations as MTBF, but they record only one data point for each failed item: a replaceable component cannot be repaired, so its first failure is its only failure and it must be replaced.

·         Mean time to repair (MTTR): This metric represents the average time to repair or replace a failed product or a subsystem of a product. In previous research, purchasing software that tracks MTBF, MTTF and MTTR history by individual product in your data center can help improve your data center and service desk performance. That is why it is very necessary for a software engineer to keep tabs on these metrics and possibly include them in the final product.
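The crash rate, MTBF and MTTR formulas above are all simple ratios. Here is a minimal sketch; the failure counts, uptime and repair hours are invented figures, not measurements from any real system:

```python
# Hypothetical figures for one service over an observation window.
failures = 4             # F: observed failures
sessions = 2000          # U: times the application was used in the window
uptime_hours = 1440.0    # total operational uptime between failures
repair_hours = [2.0, 1.0, 3.0, 2.0]  # time spent restoring service, per failure

acr = failures / sessions                     # crash rate F/U = 0.002
mtbf = uptime_hours / failures                # 360.0 hours between failures
mttr = sum(repair_hours) / len(repair_hours)  # 2.0 hours per repair
```

In practice these numbers would come from monitoring and incident logs; tracking them over time is what makes the trend useful to stakeholders.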

·         Defect removal efficiency (DRE): This metric measures the development team’s ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found, and it is typically measured before and at the moment of release. To be able to calculate this metric, it is important that your defect tracking system records:
1.       the affected version, i.e. the version of software in which the defect was found;
2.       the release date, i.e. the date when that version was released.

Simply put, DRE = number of defects resolved by the development team / total number of defects at the moment of measurement.
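Using the formula above, DRE is a single division; the defect counts below are made up for illustration:

```python
# Invented counts at the moment of release measurement.
defects_resolved = 45
defects_found = 50  # total defects found up to the measurement point

dre = defects_resolved / defects_found  # 0.9, i.e. 90% removal efficiency
```

A DRE close to 1.0 at release suggests most known defects were caught before customers could hit them.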
After all is said and done, it is important to note that the only way to make optimal use of these metrics is to know when, where and how to use them. IT is not as advanced as science, which rests on tried and tested hypotheses that have become scientific approaches. The hope and belief is that technology can also get there by learning from the mistakes of science, which should let it reach that goal faster than science did.
