Thursday, December 10, 2009

Virtual Functions in C++

The word "virtual" refers to something that exists in effect but not in reality. A virtual function is a function that does not really exist but nevertheless affects some parts of the program. Let us see why we would need virtual functions.

Consider a program to draw different shapes such as triangles, circles, squares, and ellipses. Suppose each of these classes has a member function called "draw()" by which the object is drawn. To be able to put all these objects together in one picture, we need a convenient way to call each object's draw() method. Let us look at the class declarations for a base class "Shapes"
and derived classes "Circle" and "Square".

#include &lt;iostream&gt;
using namespace std;

class Shapes{
    // ...
public:
    void draw(){
        cout << "Draw Base" << endl;
    }
};

class Circle : public Shapes{
private:
    int radius;
public:
    Circle(int r) : radius(r) {}
    void draw(){
        cout << "Draw Circle" << endl;
    }
};

class Square : public Shapes{
private:
    int length;
public:
    Square(int l) : length(l) {}
    void draw(){
        cout << "Draw Square" << endl;
    }
};

int main(){
    Circle c(5);
    Square s(4);
    Shapes *ptr;

    ptr = &c;
    ptr->draw();
    ptr = &s;
    ptr->draw();
    return 0;
}

When pointers are used to access objects, the arrow operator (->) is used to access members of the class. We have assigned the address of a derived class object to a pointer of the base class type, as seen in the statement:
ptr=&c; //where 'c' is an object of class Circle

Now, when we call the draw() method using:
ptr->draw();
we expect the "draw()" method of the derived class to be called. However, the output of the program is as follows:

Draw Base

Draw Base

Instead, we want to be able to use the function call to draw a square or any other object depending on which object "ptr" pointed to. This means, different draw() methods should be executed by the same function call.

For us to achieve this all the different classes of shapes must be derived from a single base class "Shapes". The "draw()" function must be declared to be "virtual" in the base class. The "draw()" function can then be redefined in each derived class. The "draw()" function in the base class declaration would be changed as follows:

class Shapes{
    // ...
public:
    virtual void draw(){
        cout << "Draw Base" << endl;
    }
};

The keyword "virtual" indicates that the function draw() can have different versions in the derived classes. A virtual function allows derived classes to replace the implementation provided by the base class. The replacement is selected at run time, so it is called whenever the object in question is actually of the derived class.

The function must be declared virtual in the base class; the keyword need not be repeated in the derived classes, because a function that overrides a virtual function is itself automatically virtual. The return type of the member function follows the keyword "virtual". If a hierarchy of derived classes is used, the virtual function must be declared at the top-most level at which it is needed; in the example above, the top-most class is "Shapes", so the function is declared virtual there.

A virtual function must be defined for the class in which it is first declared. Then, even if no class is derived from that class, the virtual function can be used by the class itself. A derived class that does not need a special version of a virtual function need not provide one; it simply inherits the base class version.

To override a virtual function in a derived class, you must provide a function with exactly the same parameter list (the same number of parameters and the same types); otherwise the compiler treats it as an overload that hides the virtual function rather than overriding it. The return type, however, does not have to match exactly: if the virtual function in the base class returns a pointer to the base class, the overriding function in the derived class may return a pointer to the derived class instead. Such "covariant" return types are possible only with virtual member functions.



Tuesday, December 1, 2009

Software Quality Metrics - Slides Description by Me

In today's software industry there is intense competition, so software companies strive to deliver efficient, defect-free software at optimal cost. As a result, software development companies have shifted their focus to effective testing, treating it as a critical aspect of software development.

So, test metrics play an important role in providing relevant information on test effectiveness to the management.

According to the dictionary, the word "metrics" means "a science of meter". In software terminology, then, metrics are standards of measurement: quantitative measures of the degree to which a system, component, or process possesses a given attribute. Test metrics are used to measure the effectiveness of software testing. For example, test metrics are used to estimate the number of tests that need to be applied to a software project.

Metrics are the best way to know whether a process is under control and performing as desired. For example, on a large or complex project, test metrics can be used to monitor the quality of the overall project and to measure the effectiveness of the processes applied during development.

Test metrics are also used to plan the number and types of tests to run on a software product, and to estimate quantities such as the test coverage of given components.
The testing teams must be aware of why test metrics are gathered and how they help. The various purposes for which test metrics are used in an organization are:
1. Basis for estimation - Test metrics can serve as a basis for estimation for types of tests and the test effort required for the software.
2. Status reporting - The testing status of a software development project can be measured based upon various components such as the number or percentage of test cases written or executed, requirements tested, modules tested, and business functions tested.
3. Flag actions - Some test metrics guide the management on when to flag certain actions in the software development process. These metrics are established to signal actions that need to occur when a threshold is met. For example, some organizations permit entry into system testing only once the test group demonstrates that the application is complete.
4. Identification of risky areas - Test metrics gathered from various parts of a software development project indicate the parts of the project that require greater care during testing. Test metrics, by measuring the relative defect density of the modules, provide a realistic basis for planning the test cases of a module.

Gathering test metrics with the correct framework is crucial for a successful test metrics collection program. While gathering test metrics, the following must be ensured-
1. Allocation of dedicated resources - The test metrics collection program needs a dedicated team. The measurement program is destined to fail if resources are not dedicated to it.
2. Preferably use automation tools - Metrics gathering and reporting is a time-consuming activity. In many organizations, people who perform measuring activities spend a good portion of their workweek creating and distributing metrics. Automated measurement tools are available in the market to reduce this effort.
3. Focus the test tracking processes on the goals of testing - The focus of the testing processes must always be on the goals set by the management of the organization. Metrics are used to track progress towards those goals.

There are various types of test metrics that can be gathered in a software development project. Unfortunately, there is no standard list of test metrics that need to be gathered, as the measurement needs of each project are usually different. At the very least, most projects need measures of quality, resources, time, and size to analyze the product and process effectiveness.

There are different metrics that are unique to testing activities. The common metrics unique to testing include Defect Removal Efficiency (DRE), Defect Density (DD), and Mean Time to Failure (MTTF).

Defect Removal Efficiency (DRE): This is a measure of the defects removed before the delivery of software. DRE is a metric that provides benefits at both the project level and the process level of the software development process.

DRE is calculated as a percentage of the defects identified and corrected internally with respect to the total defects in the complete project life cycle. Thus, DRE is the percentage of defects eliminated by measures, such as reviews, inspections, and tests.

DRE = (Total number of defects found before delivery) / (Total number of defects found before delivery + Total number of defects found after delivery)

When the value of DRE is 1, no defects were found after delivery. This is the ideal situation. However, in real-life scenarios, the number of defects found after delivery will usually be greater than zero.

The DRE indicates the defect filtering ability of the testing processes. DRE encourages the software testing team to find as many defects as possible before delivery.

Defect Density (DD): This is the measure of all the found defects per size of software entity being measured. In other words, DD is the number of defects that have been identified in a software product divided by the size of the software component. This measure is expressed in the standard measurement terms for the software product.
These can be in lines of code (LOC) or function points (FPs). The size is used to normalize the measurements to allow comparisons between different software entities, such as the defects found in a module or release of a software product.

DD = Number of Known Defects / Size (in LOC or FP)

DD helps the management in comparing the relative number of defects found in the components of the software product. This helps the management to focus on components for additional inspection, testing, re-engineering, or replacement.

DD is often used in the software development community for another purpose as well. With DD, it is possible to compare subsequent releases of a software product. This measure can track the impact of defect reduction and quality improvement activities in a software development project. Normalizing the defects found in a release by the release's size allows releases of varying sizes to be compared. Differences between products or product lines can also be compared in this manner.

Mean Time to Failure (MTTF): This is a measure of reliability, giving the average time from one failure to the next during test operation of a software product. It is the average time the product is expected to operate at a given stress level before failing. For this metric, the time refers to system operating time, so the metric is not applicable before the system is put together for system test. A closely related metric is the mean time between failures (MTBF).

A way for computing the MTTF is given below:
1. Define the period over which the MTTF is to be calculated as starting time, t2 and ending time, t3.
2. Locate the latest failure earlier than t2 and find the time, t1, at which this failure occurred.
3. Locate the earliest failure later than t3 and find the time, t4, at which it occurred. These are the two failures nearest to the period but just outside it.
4. Count the number of failures, n, including these two and all failures in between them.
5. The MTTF for the period t2 to t3 is then given by (t4-t1)/(n-1).

For example, suppose the failures in a software product were recorded on 2 May, 5 June, 4 July, 28 July, 4 August, and 28 August. Calculate the MTTF for the months of June, July, and August, and over the four months.

The MTTF for the month of June will be computed as:
first failure, t1 = 2 May
last failure, t4 = 4 July
Therefore, t4-t1 = 63 days
n = 3
Therefore, the MTTF for June is 63/2 = 31.5 days.

The MTTF for the month of July will be computed as:
first failure, t1 = 5 June
last failure, t4 = 4 August
Therefore t4-t1 = 60 days
n = 4
Therefore, the MTTF for July is 60/3 = 20 days.

The MTTF for August will be computed as:
first failure, t1 = 28 July
last failure, t4 = 28 August
Therefore, t4-t1 = 31 days
n = 3
Therefore, the MTTF for August is 31/2 = 15.5 days.

The MTTF for the 4 months will be computed as:
first failure, t1 = 2 May
last failure, t4 = 28 August
Therefore, t4-t1 = 118 days
n = 6
Therefore, MTTF for the four months is 118/5 = 23.6 days

Complexity metrics are component-level design metrics. Component-level design metrics focus on the internal characteristics of the software at the component level. Complexity metrics are very useful when testing software modules, as they provide detailed information about the software. This information is used to trace the modules and identify areas of potential instability.

Software Quality Metrics- An Example!

A definition of software quality metrics is - A measure of some property of a piece of software or its specifications.

Basically, as applied to the software product, a software metric measures (or quantifies) a characteristic of the software. Some common software metrics are:-

•Source lines of code.
•Cyclomatic complexity, is used to measure code complexity.
•Function point analysis (FPA), is used to measure the size (functions) of software.
•Bugs per lines of code.
•Code coverage, measures the code lines that are executed for a given set of software tests.
•Cohesion, measures how well the source code in a given module works together to provide a single piece of functionality.
•Coupling, measures the degree to which two software components depend on each other, i.e. how independent they are.

The above list is only a small set of software metrics, the important points to note are:-

•They are all measurable, that is they can be quantified.
•They are all related to one or more software quality characteristics.

The last point, relating metrics to software characteristics, is important for software process improvement. Metrics, for both process and software, tell us to what extent a desired characteristic is present in our processes or our software systems.

Maintainability is a desired characteristic of a software component and is referenced in all the main software quality models (including ISO 9126). One good measure of maintainability would be the time required to fix a fault. This gives us a handle on maintainability, but another measure, one that relates more to the cause of poor maintainability, would be code complexity. A method for measuring code complexity was developed by Thomas McCabe, and with this method a quantitative assessment of any piece of code can be made.

Code complexity can be specified and known by measurement, whereas time to repair can only be measured after the software is in support. Both time to repair and code complexity are software metrics, and both can be applied to software process improvement.

We now see the importance of measurement (metrics) for the SDLC and SPI. It is metrics that indicate the value of the standards, processes, and procedures that SQA assures are being implemented correctly within a software project. SQA also collects relevant software metrics to provide input into an SPI program (such as a CMMi continuous improvement initiative). This exercise of constantly measuring the outcome, then looking for a causal relationship to standards, procedures, and processes, makes SQA and SPI pragmatic disciplines.

The following section tries to pull the ideas of quality metrics, quality characteristics, SPI, SQC, and SQA together, with some examples by way of clarifying the definitions of these terms.

The Software Assurance Technology Center (SATC), NASA, Software Quality Model includes Metrics

The table below cross references Goals, Attributes (software characteristics) and Metrics. This table is taken from the Software Assurance Technology Center (SATC) at NASA.

The relationship with Goals lends itself to giving examples of how this could be used in CMMi. If you look at the other quality models, they focus on what comes under the Product (Code) Quality goal of the SATC model.

The reason SQA.net prefers the SATC quality model is:-


•The standard quality models, including the new ISO 9126-1, describe only the system's behavioral characteristics.
•The SATC model includes goals for processes, (i.e. Requirements, Implementation and Testing).
•The SATC model can be used to reference all of the CMMi process areas, for example Requirements Management (which includes traceability).
•If desired, the SATC model can be expanded to accommodate greater risk mitigation in the specified goal areas, or other goal areas can be created.
•Demonstrating the relationship of metrics to quality characteristics and SPI (CMMI) is well served by the SATC quality model.

If you need to do this work in practice, you will need to select a single reference point for the software quality model, and then research the best metric for evaluating each characteristic. The importance of being able to break a model of characteristics down into measurable components indicates why these models all have a hierarchical form.