Tuesday, March 12, 2019

Software Engineering Metrics: Myths and Realities


INTRODUCTION
For some time now, it has been difficult to quantify and present the performance and the profits of purchasing or developing a software product to the layman, who in this case could be the customers or stakeholders who have invested in that software. This is the case not only in the business world but also in the academic research world, where artificial intelligence techniques are making data analysis and prediction faster and easier.

A typical misconception
The metrics are available; the problem lies in how they are used. Different purposes need different metrics, and it is quite difficult to identify the specific metrics that would truly make performance testing fulfil its purpose. To reduce the negative impact of this issue, I have researched ten main software engineering metrics that I have found to be common to every process, such as profit analysis, business analysis and the longevity analysis of a product.


These 10 metrics can be grouped under major types such as Agile Process Metrics, Production Analytics, Function Metrics, Delivery Metrics and Security Metrics. These major groupings cover each stage of software development, which implies that at each step of the way at least one metric can be used to quantitatively help the stakeholders know where their investments go and where their profits come from.

·         Lead time: This is generally how long it takes to go from idea to delivered software. If you want to be more responsive to your customers, work to reduce your lead time, typically by making sure that decision-making is simple and free of bureaucracy, and by reducing wait time.

·         Cycle time: This measures how long it takes a software developer to make a change to a software system and deliver that change into production. Teams using continuous delivery can have cycle times measured in minutes or even seconds instead of months.

·         Team velocity: This measures how many “units” of software the team typically develops or tries to complete in an iteration (a.k.a. “sprint”). This number should only be used to plan iterations, and it is usually available when using the Scrum approach to software development.

·         Open/close rates: This measures how many production issues are reported and closed within a specific time period, and it can also be built into the “sprint” cycle. The general trend matters more than the specific numbers, as it can be represented on graphs and models that make easier sense to stakeholders.

·         Code churn: This represents the number of lines of code that were modified, added or deleted in a specific time period. If code churn increases, it could be a sign that the software development project needs attention.
These first five metrics are agile process metrics. They do not measure success or value added, and have nothing to do with the objective quality of the software, but they are still important in their own way for making things better and easier during the process. A high open rate and a low close rate across a few iterations, for example, may mean that production issues are currently a lower priority than new features, that the team is focused on reducing technical debt to fix entire classes of issues, or that there is a personnel shortage. These are all viable reasons that should be discussed with the team in the Scrum ceremonies so that this knowledge can be used to implement better policies during the next sprint cycle.
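To make the agile process metrics a little more concrete, here is a minimal Python sketch of how open/close rates and average cycle time could be computed from nothing more than issue timestamps. The Issue class and its field names are hypothetical stand-ins for whatever your tracker actually exports, not any particular tool's API.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

@dataclass
class Issue:
    # Hypothetical record; real trackers (Jira, GitHub Issues, ...) expose far richer fields.
    opened: datetime
    closed: Optional[datetime] = None

def open_close_rates(issues: List[Issue], start: datetime, end: datetime) -> Tuple[int, int]:
    """Count how many issues were opened and how many were closed within one sprint window."""
    opened = sum(1 for i in issues if start <= i.opened < end)
    closed = sum(1 for i in issues if i.closed and start <= i.closed < end)
    return opened, closed

def average_cycle_time(issues: List[Issue]) -> timedelta:
    """Mean time from starting a change to delivering it, over closed issues only."""
    done = [i for i in issues if i.closed]
    if not done:
        return timedelta(0)
    return sum((i.closed - i.opened for i in done), timedelta()) / len(done)

if __name__ == "__main__":
    sprint_start, sprint_end = datetime(2019, 3, 1), datetime(2019, 3, 15)
    issues = [
        Issue(datetime(2019, 3, 2), datetime(2019, 3, 5)),
        Issue(datetime(2019, 3, 4), datetime(2019, 3, 12)),
        Issue(datetime(2019, 3, 10)),  # still open
    ]
    print(open_close_rates(issues, sprint_start, sprint_end))  # (3, 2)
    print(average_cycle_time(issues))                          # 5 days, 12:00:00

In practice these numbers would be pulled from the tracker's export or API rather than hard-coded, and plotted per sprint so that the trend, rather than any single value, is what gets shown to stakeholders.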

·         Application crash rate (ACR): Another crucial software engineering metric, it is calculated by dividing how many times an application fails (F) by how many times it is used (U). A short computational sketch of this and the reliability metrics below is given after the list.
ACR = F/U

·         Mean time between failures (MTBF): This simply means how long a piece of software can run before experiencing a failure. It is a metric based on prior observations, designating the software’s average time between failures. An MTBF value can be defined by the following equation:
        MTBF = total operational uptime between failures / number of failures
·         Mean time to failure (MTTF): This measures failure rates for a software product or a component of the development. Unlike MTBF items, MTTFs are only used to designate failure rates for replaceable (non-repairable) products, such as keyboards and motherboards. MTTF formulas generally use the same equations as for an MTBF product, but they only record one data point for each failed item. In other words, because replaceable components cannot be repaired, a component’s first failure is its only failure and it must be replaced.

·         Mean time to repair (MTTR): This metric represents the average time to repair or replace a failed product or subsystem of a product. Previous research suggests that purchasing software which tracks MTBF, MTTF and MTTR history by individual product in your data center can help improve your data center and service desk performance. That is why it is very necessary for a software engineer to keep tabs on these metrics and possibly include them in the final product.

·         Defect removal efficiency (DRE): This metric gives a measurement of the development team’s ability to remove defects prior to release. It is calculated as the ratio of defects resolved to the total number of defects found, and it is typically measured prior to and at the moment of release. To be able to calculate this metric, it is important that your defect tracking system records:
1.       the affected version, i.e. the version of the software in which the defect was found;
2.       the release date, i.e. the date when that version was released.

Simply put, DRE = number of defects resolved by the development team / total number of defects at the moment of measurement.
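All of these production and reliability metrics boil down to simple ratios. The following Python sketch shows one way to compute them, assuming the raw counts and durations have already been collected from your crash reporting, monitoring and defect tracking tools; the function names and sample numbers are purely illustrative.

from typing import List

def application_crash_rate(failures: int, uses: int) -> float:
    """ACR = F / U: application failures divided by the number of times it was used."""
    return failures / uses

def mtbf(uptimes_between_failures_hours: List[float]) -> float:
    """MTBF = total operational uptime between failures / number of failures."""
    return sum(uptimes_between_failures_hours) / len(uptimes_between_failures_hours)

def mttf(lifetimes_hours: List[float]) -> float:
    """MTTF: mean lifetime of non-repairable items, one data point per failed item."""
    return sum(lifetimes_hours) / len(lifetimes_hours)

def mttr(repair_durations_hours: List[float]) -> float:
    """MTTR: average time taken to repair or replace a failed product or subsystem."""
    return sum(repair_durations_hours) / len(repair_durations_hours)

def defect_removal_efficiency(resolved: int, total_found: int) -> float:
    """DRE = defects resolved by the team / total defects known at the moment of measurement."""
    return resolved / total_found if total_found else 1.0

if __name__ == "__main__":
    print(f"ACR:  {application_crash_rate(12, 4800):.2%}")   # 0.25%
    print(f"MTBF: {mtbf([200.0, 150.0, 250.0]):.0f} h")      # 200 h
    print(f"MTTF: {mttf([900.0, 1100.0, 1000.0]):.0f} h")    # 1000 h
    print(f"MTTR: {mttr([2.0, 3.0, 1.0, 2.0]):.1f} h")       # 2.0 h
    print(f"DRE:  {defect_removal_efficiency(45, 50):.0%}")  # 90%

As with the agile metrics, the trend of these values over time usually tells stakeholders more than any single measurement.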
After all is said and done, it is important to note that the only way to make optimal use of these metrics is to know when, where and how to use them. IT is not as advanced as science, which basically has tried and tested hypotheses that have become scientific approaches. The hope and belief is that technology can also get there, and by learning from the mistakes of science it may reach that goal faster than science did.

From Deming's Cycle to ITIL 4


INTRODUCTION
ITIL, formerly an acronym for Information Technology Infrastructure Library, is a set of detailed best practices and processes for IT service management (ITSM) that focuses on aligning IT services with the needs of the business.

ITIL describes processes, procedures, tasks, and checklists which are neither specific to one organization nor specific to one type of technological development. It can be applied by an organization to establish integration with the organization's strategy, deliver value, and maintain a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement. There is no formal independent third-party compliance assessment available for ITIL compliance in an organization; certification in ITIL is only available to individuals.

Since July 2013, ITIL has been owned by AXELOS, a joint venture between Capita and the UK Cabinet Office. AXELOS licenses organisations to use the ITIL intellectual property, accredits licensed examination institutes, and manages updates to the framework. Organizations that wish to implement ITIL internally do not require this license.

ITIL has been adopted by thousands of organizations worldwide, including NASA, Microsoft and HSBC. There have been case studies with The Walt Disney Company and Müller Dairy in which the ITIL framework was used to make improvements to their businesses.
Current studies in Turkey show that most companies are employing the ITIL process and implementing it in their digital transformation and IT departments. Experts such as Prof. Mehmet Demir, who is not only a lecturer at the Computer Engineering Faculty of İstanbul University-Cerrahpaşa but also a co-founder of the chief digital officer platform for Turkey and founder of Netax Tech, are at the forefront of ITIL implementation in companies in Turkey. In just a few years, great accomplishments have been chalked up by companies here in Turkey that implement the ITIL processes well. Companies such as Flo are at the forefront of this digital transformation.

HISTORY
In the 1980s, the United Kingdom Government's Central Computer and Telecommunications Agency (CCTA) developed a set of recommendations after it recognized that, without standard practices, government agencies and private sector contractors had started independently creating their own IT management practices, which were not necessarily best practices and were often duplicated. A simple product was developed very differently by different organizations, which usually led to inconsistencies and difficulty in cross-platform integration. It also put unnecessary strain on the budgets of the companies involved in the development process of such products. The recommendation was a flexible and general set of best standards which every organization could use in developing whichever products they were working on.

Deming's Cycle
The IT Infrastructure Library originated as a collection of books, each covering a specific practice within IT service management. ITIL was built around a process model-based view of controlling and managing operations, often credited to W. Edwards Deming and his plan-do-check-act (PDCA) cycle, which is sometimes referred to as the “Deming cycle”.
PDCA was made popular by W. Edwards Deming, who is considered by many to be the father of modern quality control; however, he always referred to it as the "Shewhart cycle". Later in Deming's career, he modified PDCA to "Plan, Do, Study, Act" (PDSA) because he felt that "check" emphasized inspection over analysis.


The concept of PDCA is based on the scientific method, as developed from the work of Francis Bacon (Novum Organum, 1620). The scientific method can be written as "hypothesis–experiment–evaluation" or as "plan–do–check". This could arguably be one of the first standards that directly followed a scientific approach.
A fundamental principle of the scientific method and PDCA/PDSA is iteration—once a hypothesis is confirmed (or negated), executing the cycle again will extend the knowledge further. Repeating the PDCA cycle can bring its users closer to the goal, usually a perfect operation and output.
Another fundamental function of PDCA is the proper separation of each phase, for if the phases are not properly separated, measurements of effects due to various simultaneous actions (causes) risk becoming confounded. Deming continually emphasized iterating towards an improved system, hence PDCA should be repeatedly implemented in spirals of increasing knowledge of the system that converge on the ultimate goal, each cycle closer than the previous.
Continuity Diagram
In 1950, Japanese businessmen turned to Deming to help them rebuild an economy shattered by World War II. Deming taught Japan’s manufacturers how to produce top-quality products economically through his cycle, and the Japanese used that knowledge to turn the global economy on its head and beat U.S. industry at its own game.
Companies such as Toyota Motor Corp. and Sony Corp. adopted Deming’s concepts and became world-class producers in their fields, helping Japan become one of the planet’s dominant economic powers.
MODERN-DAY
ITIL is currently evolving from ITIL v3 to ITIL 4. ITIL 4 expands on previous versions by providing a practical and flexible basis to support organizations on their journey to the new world of digital transformation. It provides an end-to-end IT/digital operating model for the delivery and operation of tech-enabled products and services and enables IT teams to continue to play a crucial role in wider business strategy.
The development of ITIL 4 is a community- and industry-led initiative that has been working with a team of industry experts based around the globe, including 150 content writers, reviewers and contributors from the wider IT industry. AXELOS also created the ITIL Development Group, now at 2,000+ members, which has helped steer the development of ITIL 4 and continues to do so. Anyone who would like to contribute to ITIL 4 can do so by joining the ITIL Development Group.