Computing Technologies
Thursday, March 18, 2010
Hibernate Tutorial
In today's programming world, most large applications are written using object-oriented programming, which makes J2EE a natural choice for developers. Because J2EE is powered by Java, object orientation comes naturally: object systems are faster to write, easier to design and divide into tasks, and usually easier to modify. At the same time, relational databases remain practically irreplaceable because they are very effective, even though they are built on a different model of storing data.
The Java language comes with a feature called JDBC (Java Database Connectivity), which lets us load data from a database any number of times, transfer it to an application, and save data modified by the application back to the database.
Now imagine that one day we suddenly need to switch databases, for example from PostgreSQL to MySQL. Some commands and constructions that work with PostgreSQL might not work with MySQL, which forces us to rework large parts of the code. This is where Hibernate comes in.
What is Hibernate
Hibernate is an object-relational mapping (ORM) library for the Java language, providing a framework for mapping an object-oriented domain model to a traditional relational database. Hibernate solves object-relational impedance mismatch problems by replacing direct persistence-related database accesses with high-level object handling functions.
Hibernate's primary feature is mapping from Java classes to database tables (and from Java data types to SQL data types). Hibernate also provides data query and retrieval facilities.
Hibernate generates the SQL calls and relieves the developer from manual result set handling and object conversion, keeping the application portable to all supported SQL databases, with database portability delivered at very little performance overhead.
About Object Relational Mapping
ORM stands for Object-Relational Mapping. Almost all contemporary databases are relational and based on the SQL language, while most programs that use them are written in an object-oriented way. Some kind of mismatch therefore appears at the meeting point of a database and the software, and we need appropriate tools to translate data from the relational world into objects and back again.
Working with raw JDBC becomes extremely troublesome in more complex applications.
Tasks of ORM
Hibernate is currently one of the most popular ORM solutions. Its popularity and effectiveness are illustrated by the fact that, for example, the implementation of CMP in EJB3 on the JBoss server is based on Hibernate, and many of its ideas were copied when the EJB3 specification was designed.
Other common ORM solutions include CMP and JDO. Hibernate, as opposed to them, is not a standard but a concrete implementation.
Advantages of ORM
Lightness: there is no need for special containers; it is enough to attach the appropriate library, and it can be used in both web and client applications. There is no need to manually generate additional code. As opposed to, for example, JDO, Hibernate requires only the presence of configuration files; the additional classes it uses are generated while the application runs.
Features of Hibernate
Hibernate is a solution for object-relational mapping and a persistence management solution, or persistence layer. That description probably does not mean much to anybody just starting to learn Hibernate.
Picture it like this: you have an application with some functions (business logic) and you want to save its data in a database. When you use Java, all the business logic normally works with objects of different class types, but your database tables are not objects at all.
Hibernate provides a solution for mapping database tables to classes. It copies the database data into objects of a class, and in the other direction it can save objects to the database; in this process an object is transformed into rows of one or more tables.
Saving data to storage is called persistence, and the copying of tables to objects and vice versa is called object-relational mapping.
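To make the idea concrete, here is a minimal sketch of such a mapping in Java using JPA annotations (which Hibernate supports alongside its classic hbm.xml mapping files). The Customer class and its table and column names are invented purely for illustration:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity: each Customer object corresponds to one row of the CUSTOMER table.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

    @Id
    @GeneratedValue            // let Hibernate choose an identifier generation strategy
    private Long id;

    @Column(name = "NAME")     // maps this field to the NAME column
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

Hibernate reads this mapping and then knows how to move data between Customer objects and rows of the CUSTOMER table in both directions.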
Practical View Point
Hibernate's most important feature is mapping from Java classes to database tables (and from Java data types to SQL data types), but it also provides data query and retrieval facilities.
It is important that Hibernate generates the SQL calls itself, keeping the application portable to all supported SQL databases, with database portability delivered at very little performance overhead. This can significantly reduce the development time a programmer would otherwise have to spend on manual data handling in SQL and JDBC.
You can use Hibernate in standalone Java applications or in Java EE applications using servlets or EJB session beans.
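As a rough sketch of what that looks like in a standalone application (reusing the hypothetical Customer entity above and assuming a hibernate.cfg.xml on the classpath that describes the database connection):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class SaveCustomerExample {
    public static void main(String[] args) {
        // Build the SessionFactory once from hibernate.cfg.xml; it is expensive to create.
        SessionFactory factory = new Configuration().configure().buildSessionFactory();

        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();

        Customer customer = new Customer();   // the hypothetical entity sketched earlier
        customer.setName("John");
        session.save(customer);               // Hibernate generates and executes the INSERT

        tx.commit();                          // flush the change to the database
        session.close();
        factory.close();
    }
}

Note that no SQL is written by hand here; switching the dialect and connection settings in hibernate.cfg.xml is enough to move the same code between, say, PostgreSQL and MySQL.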
Thursday, December 10, 2009
Virtual Functions in C++
Consider a program to draw different shapes such as triangles, circles, squares, and ellipses. Suppose each of these classes has a member function called "draw()" by which the object is drawn. To be able to put all these objects together in one picture, we need a convenient way to call each object's draw() method. Let us look at the class declarations for a class "Shapes" and derived classes "Circle" and "Square".
#include <iostream>
using namespace std;

class Shapes{
    ....
    ....
public:
    void draw(){
        cout << "Draw Base" << endl;
    }
};
class Circle : public Shapes{
private:
    int radius;
public:
    Circle(int r){ radius = r; }
    void draw(){
        cout << "Draw Circle" << endl;
    }
};
class Square : public Shapes{
private:
    int length;
public:
    Square(int l){ length = l; }
    void draw(){
        cout << "Draw Square" << endl;
    }
};
int main(){
    Circle c(5);
    Square s(5);
    Shapes *ptr;
    ptr = &c;
    ptr->draw();
    ptr = &s;
    ptr->draw();
    return 0;
}
When pointers are used to access objects, the arrow operator (->) can be used to access members of the class. We have assigned the address of a derived class object to a pointer of the base class, as seen in the statement:
ptr=&c; //where 'c' is an object of class Circle
Now, when we call the draw() method using:
ptr->draw();
we expect the "draw()" method of the derived class to be called. However, the result of the program is as follows:
Draw Base
Draw Base
Instead, we want the function call to draw a circle, a square, or any other object depending on which object "ptr" points to. This means different draw() methods should be executed by the same function call.
For us to achieve this all the different classes of shapes must be derived from a single base class "Shapes". The "draw()" function must be declared to be "virtual" in the base class. The "draw()" function can then be redefined in each derived class. The "draw()" function in the base class declaration would be changed as follows:
class Shapes{
    ....
    ....
public:
    virtual void draw(){
        cout << "Draw Base" << endl;
    }
};
With this single change (draw() being virtual in "Shapes" and redefined in "Circle" and "Square"), the same main program prints "Draw Circle" followed by "Draw Square". The keyword "virtual" indicates that the function draw() can have different versions for the derived classes. A virtual function allows derived classes to replace the implementation provided by the base class, and the compiler makes sure the replacement is always called whenever the object in question is actually of the derived class.
The virtual function should be declared in the base class; derived classes that override it do not have to repeat the keyword "virtual", because the function stays virtual throughout the hierarchy. The return type of a member function must follow the keyword "virtual". If a hierarchy of derived classes is used, the virtual function has to be declared at the top-most level; here, in the above example, the top-most class is "Shapes" and the virtual function is declared there.
A virtual function must be defined for the class in which it was first declared. Then, even if no class is derived from that class, the virtual function can be used by that class. A derived class that does not need a special version of a virtual function need not provide one; it simply inherits the base class version.
To redefine a virtual function in a derived class you need to provide a function exactly matching the virtual function. It must have the same arguments (in terms of the number of parameters and their data types); otherwise the compiler thinks you want to overload the virtual function. The return type must also match, with one exception: if the virtual function of the base class returns a pointer (or reference) to its own class, the redefined function in a derived class may return a pointer (or reference) to the derived class instead of the base class. This covariant return type is possible only with virtual member functions.
Tuesday, December 1, 2009
Software Quality Metrics - Slides Description by Me
Test metrics play an important role in providing the management with relevant information about test effectiveness.
According to the dictionary, the word "metrics" means "a science of meter", so in software terminology we can say that metrics are standards of measurement. Metrics are quantitative measures of the degree to which a system, component, or process possesses a given attribute. Test metrics are used to measure the effectiveness of software testing; for example, they are used to accurately estimate the number of tests that need to be applied to a software project.
Metrics are the best way to know whether a process is under control and is performing as desired. For example, if there is a large or complex project, test metrics can be used to monitor the quality of the overall project and measure the effectiveness of processes applied for development.
Test metrics are used to plan the number and types of tests on a software product. Test metrics are used to estimate various components like test coverage for given components of a software project.
The testing teams must be aware of why test metrics are gathered and how they help. The various purposes for which test metrics are used in an organization are:
1. Basis for estimation - Test metrics can serve as a basis for estimation for types of tests and the test effort required for the software.
2. Status reporting - The testing status of a software development project can be measured based upon various components such as the number or percentage of test cases written or executed, requirements tested, modules tested, and business functions tested.
3. Flag actions - Some test metrics guide the management on when to flag certain actions in the software development process. These metrics are established to signal actions that need to occur when a threshold is met. For example, some organizations define entry criteria for system testing that are met only when the test group demonstrates that the application is complete.
4. Identification of risky areas - Test metrics gathered from various parts of a software development project indicate the parts of the project that require greater care during testing. Test metrics, by measuring the relative defect density of the modules, provide a realistic basis for planning the test cases of a module.
Gathering test metrics with the correct framework is crucial for a successful test metrics collection program. While gathering test metrics, the following must be ensured-
1. Allocation of dedicated resources - The test metrics collection program needs a dedicated team. The measurement program is destined to fail if resources are not dedicated to it.
2. Preferably use automation tools - Metrics gathering and reporting is a time-consuming activity. In many organizations, people who perform measuring activities spend a good portion of their workweek creating and distributing metrics. Automated measurement tools are available in the market to reduce this effort.
3. Focus the test tracking processes on the goals of testing - The focus of the testing processes must always be on the goals set by the management of the organization. Metrics are used to track progress towards those goals.
There are various types of test metrics that can be gathered in a software development project. Unfortunately, there is no standard list of test metrics that need to be gathered, as the measurement needs of each project are usually different. At the very least, most projects need measures of quality, resources, time, and size to analyze the product and process effectiveness.
There are different metrics that are unique to testing activities. The common metrics unique to testing include Defect Removal Efficiency (DRE), Defect Density (DD), and Mean Time to Failure (MTTF).
Defect Removal Efficiency (DRE): This is a measure of the defects removed before the delivery of the software. DRE is a metric that provides benefits at both the project level and the process level of the software development process.
DRE is calculated as the percentage of defects identified and corrected internally with respect to the total defects over the complete project life cycle. Thus, DRE is the percentage of defects eliminated by measures such as reviews, inspections, and tests.
DRE = A / (A + B), where A is the total number of defects found during development (before delivery) and B is the total number of defects found after delivery.
When the value of DRE is 1, it implies that no defects were found after delivery. This is the ideal situation. However, in real-life scenarios, the total number of defects found after delivery will usually be greater than zero.
The DRE indicates the defect filtering ability of the testing processes. DRE encourages the software testing team to find as many defects as possible before delivery.
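As a throwaway illustration of the arithmetic (the defect counts below are invented, not taken from any real project), the calculation can be written in Java as:

public class DreExample {

    // DRE = defects found before delivery / (defects before delivery + defects after delivery)
    static double defectRemovalEfficiency(int defectsBeforeDelivery, int defectsAfterDelivery) {
        return (double) defectsBeforeDelivery / (defectsBeforeDelivery + defectsAfterDelivery);
    }

    public static void main(String[] args) {
        // e.g. 95 defects caught internally and 5 reported after delivery -> DRE = 0.95
        System.out.println(defectRemovalEfficiency(95, 5));
    }
}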
Defect Density (DD): This is a measure of the total defects found per unit size of the software entity being measured. In other words, DD is the number of defects identified in a software product divided by the size of the software component, expressed in the standard measurement terms for that product.
The size can be in lines of code (LOC) or function points (FPs). The size is used to normalize the measurements so that different software entities, such as modules or releases of a software product, can be compared.
DD = Number of Known Defects / Size (in LOC or FP)
DD helps the management in comparing the relative number of defects found in the components of the software product. This helps the management to focus on components for additional inspection, testing, re-engineering, or replacement.
DD is often used in the software development community for another purpose as well. With DD, it is possible to compare subsequent releases of a software product, and the measure can track the impact of defect reduction and quality improvement activities in a software development project. Normalizing the defects found in a release by the release's size allows releases of varying sizes to be compared, and differences between products or product lines can be compared in the same way.
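A similarly small sketch of the DD calculation (the defect count and module size are invented):

public class DefectDensityExample {

    // DD = number of known defects / size of the component (here, size in lines of code)
    static double defectDensity(int knownDefects, int sizeInLoc) {
        return (double) knownDefects / sizeInLoc;
    }

    public static void main(String[] args) {
        // e.g. 30 known defects in a 12,000 LOC module -> 0.0025 defects per line,
        // which is commonly reported as 2.5 defects per KLOC
        System.out.println(defectDensity(30, 12000));
    }
}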
Mean Time to Failure: This is a measure of reliability, giving the average time between failures; in other words, it is the average time from one failure to the next during test operation of a software product. It is the average time that the product is expected to operate at a given stress level before failing. For this metric, the time refers to system operating time. The metric is not applicable before the system has been put together for a system test. It is often referred to as mean time to failure (MTTF) or mean time between failures (MTBF).
A way of computing the MTTF is given below:
1. Define the period over which the MTTF is to be calculated as starting time, t2 and ending time, t3.
2. Locate the latest failure earlier than t2 and find the time, t1, at which this failure occurred.
3. Locate the earliest failure later than t3 and find the time, t4, at which it occurred. (t1 and t4 are thus the two failures nearest to the period but lying just outside it.)
4. Count the number of failures, n, including these two and all the failures in between them.
5. The MTTF for the period t2 to t3 is then given by (t4-t1)/(n-1). For example, suppose the failures in a piece of software were recorded on 2 May, 5 June, 4 July, 28 July, 4 August, and 28 August. Calculate the MTTF for the months of June, July, and August, and over the 4 months.
The MTTF for the month of June will be computed as:
first failure, t1 = 2 May
last failure, t4 = 4 July
Therefore, t4-t1 = 63 days
n = 3
Therefore, the MTTF for June is 63/2 = 31.5 days.
The MTTF for the month of July will be computed as:
first failure, t1 = 5 June
last failure, t4 = 4 August
Therefore t4-t1 = 60 days
n = 4
Therefore, the MTTF for July is 60/3 = 20 days.
The MTTF for August will be computed as:
first failure, t1 = 28 July
last failure, t4 = 28 August
Therefore, t4-t1 = 31 days
n = 3, so MTTF for August is 31/2 = 15.5 days
The MTTF for the 4 months will be computed as:
first failure, t1 = 2 May
last failure, t4 = 28 August
Therefore, t4-t1 = 118 days
n = 6
Therefore, MTTF for the four months is 118/5 = 23.6 days
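The same procedure is easy to mechanize. The sketch below reproduces the June figure; since the original example does not state a year, 2009 is assumed purely so that concrete dates can be used:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class MttfExample {

    // MTTF over a period, given the failure just before it (t1), the failure just
    // after it (t4), and n = the number of failures from t1 to t4 inclusive.
    static double mttfInDays(LocalDate t1, LocalDate t4, int n) {
        long days = ChronoUnit.DAYS.between(t1, t4);   // t4 - t1 in days
        return (double) days / (n - 1);
    }

    public static void main(String[] args) {
        // June is bracketed by the failures on 2 May and 4 July, with 5 June in between, so n = 3.
        System.out.println(mttfInDays(LocalDate.of(2009, 5, 2),
                                      LocalDate.of(2009, 7, 4), 3));   // prints 31.5
    }
}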
Complexity metrics are component-level design metrics. Component-level design metrics focus on the internal characteristics of the software at the component level. Complexity metrics are very useful while testing software modules because they provide detailed information about the software, which is used to trace the modules and identify areas of potential instability.
Software Quality Metrics- An Example!
Basically, as applied to the software product, a software metric measures (or quantifies) a characteristic of the software. Some common software metrics are:-
•Source lines of code.
•Cyclomatic complexity, used to measure code complexity.
•Function point analysis (FPA), used to measure the size (functionality) of software.
•Bugs per lines of code.
•Code coverage, which measures the lines of code that are executed by a given set of software tests.
•Cohesion, which measures how well the source code in a given module works together to provide a single function.
•Coupling, which measures how strongly two software components are related through data, i.e. how dependent they are on each other.
The above list is only a small set of software metrics; the important points to note are:-
•They are all measurable, that is they can be quantified.
•They are all related to one or more software quality characteristics.
The last point, related to software characteristics, is important for software process improvement. Metrics, for both process and software, tell us to what extent a desired characteristic is present in our processes or our software systems. Maintainability is a desired characteristic of a software component and is referenced in all the main software quality models (including the ISO 9126). One good measure of maintainability would be time required to fix a fault. This gives us a handle on maintainability but another measure that would relate more to the cause of poor maintainability would be code complexity. A method for measuring code complexity was developed by Thomas McCabe and with this method a quantitative assessment of any piece of code can be made. Code complexity can be specified and can be known by measurement, whereas time to repair can only be measured after the software is in support. Both time to repair and code complexity are software metrics and can both be applied to software process improvement.
We now see the importance of measurement (metrics) for the SDLC and SPI. It is metrics that indicate the value of the standards, processes, and procedures that SQA assures are being implemented correctly within a software project. SQA also collects relevant software metrics to provide input into an SPI initiative (such as a CMMi continuous improvement initiative). This exercise of constantly measuring the outcome, then looking for a causal relationship to standards, procedures, and processes, makes SQA and SPI pragmatic disciplines.
The following section tries to pull together the ideas of quality metrics, quality characteristics, SPI, SQC, and SQA, with some examples, by way of clarifying the definitions of these terms.
The Software Assurance Technology Center (SATC), NASA, Software Quality Model includes Metrics
The table below cross references Goals, Attributes (software characteristics) and Metrics. This table is taken from the Software Assurance Technology Center (SATC) at NASA.
The relationship with Goals lends itself to giving examples of how this could be used in CMMi. If you look at the other quality models they have a focus on what comes under the Product (Code) Quality goal of the SATC model.
The reason SQA.net prefers the SATC quality model is:-
•The standard quality models, including the new ISO 9126-1, describe only the system's behavioral characteristics.
•The SATC model includes goals for processes, (i.e. Requirements, Implementation and Testing).
•The SATC model can be used to reference all of the CMMi, for example Requirements Management (which includes traceability).
•If desired, the SATC model can be expanded to accommodate greater risk mitigation in the specified goal areas, or other goal areas can be created.
•Demonstrating the relationship of metrics to quality characteristics and SPI (CMMI) is well served by the SATC quality model.
If you need to do this work in practice, you will need to select a single reference point for the software quality model, then research the best metric for evaluating each characteristic. The importance of being able to break a model of characteristics down into measurable components indicates why these models all have a hierarchical form.
Thursday, November 19, 2009
Fractional Number Conversion
To convert a fraction into binary, we can repeatedly multiply by 2, keeping the integer part.
After each multiplication, the integer part becomes the next binary digit (left to right), and
the fractional part gets multiplied by 2 again. We can continue until the fractional part is
zero, or until we have as many digits as we desire. For example, to convert 0.59375 (base 10)
into binary we multiply repeatedly by 2, keeping the integer part of each result as the next digit:

.59375 * 2 = 1.1875   keep the 1
.1875  * 2 = 0.375    keep the 0
.375   * 2 = 0.75     keep the 0
.75    * 2 = 1.5      keep the 1
.5     * 2 = 1.0      keep the 1

Reading the kept digits from top to bottom gives .10011. Since the fractional part of the last multiplication is 0, we can stop, and we have an exact answer. (Any additional digits would simply be 0.) So our answer is 0.59375 (base 10) = 0.10011 (base 2).

We can check this by converting the binary back to decimal. The fractional place values in binary are 1/2^1, 1/2^2, 1/2^3, 1/2^4, 1/2^5, ... that is, .5, .25, .125, .0625, .03125, ... So 0.10011 (base 2) is:

1 * .5     = .5
0 * .25    = .0
0 * .125   = .0
1 * .0625  = .0625
1 * .03125 = .03125
             -------
             .59375 (base 10)

So it double-checks. Notice that most fractions that terminate in decimal will be repeating fractions in binary. For example, let's convert the fraction 0.6 (base 10) to binary:

.6 * 2 = 1.2   keep the 1
.2 * 2 = 0.4   keep the 0
.4 * 2 = 0.8   keep the 0
.8 * 2 = 1.6   keep the 1
.6 * 2 = 1.2   keep the 1
.2 * 2 = 0.4   keep the 0
.4 * 2 = 0.8   keep the 0
.8 * 2 = 1.6   keep the 1

Notice that the second four lines repeat the first four, and the next four would repeat them, and so on. Thus, our answer is: 0.6 (base 10) = 0.10011001... (base 2)
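The multiply-by-two procedure is also easy to mechanize. Here is a small Java sketch (the digit limit and the test values are chosen arbitrarily):

public class FractionToBinary {

    // Repeatedly multiply the fraction by 2; each integer part becomes the next binary digit.
    static String toBinaryFraction(double fraction, int maxDigits) {
        StringBuilder bits = new StringBuilder("0.");
        for (int i = 0; i < maxDigits && fraction != 0.0; i++) {
            fraction *= 2;
            if (fraction >= 1.0) {
                bits.append('1');
                fraction -= 1.0;
            } else {
                bits.append('0');
            }
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinaryFraction(0.59375, 20));  // 0.10011 (terminates exactly)
        System.out.println(toBinaryFraction(0.6, 12));      // 0.100110011001 (the 1001 pattern repeats)
    }
}

One caveat: 0.6 itself cannot be stored exactly in a double, so with very many digits the printed pattern eventually drifts from the exact repeating expansion; for the first dozen or so digits it matches the hand calculation above.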
Binary Conversion
Consider the decimal number 247. The '2' in 247 represents two hundred because it is a two in the hundreds position (two times a hundred is two hundred). In similar fashion, the '4' in 247 represents forty because it is a four in the tens position (four times ten is forty). Finally, the '7' represents seven because it is a seven in the units position (seven times one is seven). In a decimal number, the actual value represented by a digit is determined by the numeral and the position of the numeral within the number.
It works the same way with a binary number. The right-most position in a binary number is units; moving to the left, the next position is twos; the next is fours; the next is eights; then sixteens; then thirty-twos ... Notice that these numbers are all powers of two - 2^0, 2^1, 2^2, 2^3, 2^4, 2^5. (The units, tens, hundreds, thousands, ten thousands of the decimal system are all powers of ten: 10^0, 10^1, 10^2, 10^3, 10^4).
So, to convert the binary number 1001 (don't read that as one thousand one - read it as one zero zero one) to decimal, you determine the actual value represented by each '1' and add them together. The right-most '1' has a decimal value of 1 (it is in the 2^0, or units, position) and the left-most '1' has a decimal value of 8 (it is in the 2^3, or eights, position). So the binary number 1001 is equal to
decimal 9. Here's another way to look at it:
1 0 0 1
^ ^ ^ ^
| | | |_________> 1 x 2^0 = 1 x 1 = 1
| | |___________> 0 x 2^1 = 0 x 2 = 0
| |_____________> 0 x 2^2 = 0 x 4 = 0
|_______________> 1 x 2^3 = 1 x 8 = 8
---
9
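For completeness, the same positional arithmetic can be checked in Java; this quick sketch shows both the standard library call and the loop it replaces:

public class BinaryToDecimal {
    public static void main(String[] args) {
        // The standard library does the positional arithmetic for us.
        System.out.println(Integer.parseInt("1001", 2));    // prints 9

        // Or by hand: sum digit * 2^position, working from right to left.
        String binary = "1001";
        int value = 0;
        for (int i = 0; i < binary.length(); i++) {
            int digit = binary.charAt(binary.length() - 1 - i) - '0';
            value += digit << i;                             // digit * 2^i
        }
        System.out.println(value);                           // prints 9
    }
}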
The decimal number system is also referred to as "base ten" since each position in a decimal number represents a power of ten - a number that can be written as 10^n, where n is an integer. The binary number system is also referred to as "base two" since each position in a binary number represents a power of two - a number that can be written as 2^n, where n is an integer. The hex number system is also referred to as "base sixteen" since each position in a hexadecimal number
represents a power of sixteen - a number that can be written as 16^n, where n is an integer.
The right-most position in a hexadecimal number is units; moving to the left, the next position is sixteens; the next is two hundred fifty-sixes; the next is four thousand ninety-sixes, and so on - all powers of sixteen - 16^0, 16^1, 16^2, 16^3.
To convert a binary number to its hex equivalent, notice that four binary digits together can have a value from 0 to 15 (decimal), which is exactly the range of one hex digit. So four binary digits will always convert to one hex digit!
For example:
10110111 = B7 (hex)
The right-most four digits of the binary number (0111) equal seven, so the hex digit is '7'. The remaining left-most four digits of the binary number (1011) equal eleven, so the hex digit is 'B'. Here is another way of looking at it:
1 0 1 1 0 1 1 1 from right to left, make four-digit groups
\ /\ /
\ / \ /
eleven seven determine the decimal equivalent of each
| | group
V V
B 7 write the equivalent hexadecimal digit
What is the decimal equivalent of B7 hex?
B 7
^ ^
| |_________> 7 x 16^0 = 7 x 1 = 7
|___________> 11 x 16^1 = 11 x 16 = 176
---
183 decimal
Check that against the decimal equivalent of 10110111 binary:
1 0 1 1 0 1 1 1
^ ^ ^ ^ ^ ^ ^ ^
| | | | | | | |_________> 1 x 2^0 = 1 x 1 = 1
| | | | | | |___________> 1 x 2^1 = 1 x 2 = 2
| | | | | |_____________> 1 x 2^2 = 1 x 4 = 4
| | | | |_______________> 0 x 2^3 = 0 x 8 = 0
| | | |_________________> 1 x 2^4 = 1 x 16 = 16
| | |___________________> 1 x 2^5 = 1 x 32 = 32
| |_____________________> 0 x 2^6 = 0 x 64 = 0
|_______________________> 1 x 2^7 = 1 x 128 = 128
---
183 decimal
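If you want to double-check these conversions programmatically, a quick Java sketch:

public class HexBinaryCheck {
    public static void main(String[] args) {
        int fromBinary = Integer.parseInt("10110111", 2);      // binary string -> int
        int fromHex    = Integer.parseInt("B7", 16);           // hex string -> int

        System.out.println(fromBinary);                        // 183
        System.out.println(fromHex);                           // 183
        System.out.println(Integer.toHexString(fromBinary));   // b7
        System.out.println(Integer.toBinaryString(fromHex));   // 10110111
    }
}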