Sunday, 28 June 2015

MT - 42 - What is component testing?

Let us first see what a component is in software. A component may be a single class or a cluster of tightly coupled classes, but conceptually it is always a unit. So let's talk about component testing.
Component testing, similar to unit testing but with a higher level of integration, is a testing method that searches for defects in, and verifies the functioning of, software components. Here we check individual modules, programs, objects and classes at the unit level. Component testing may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and to simulate the interfaces between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls the component to be tested. Stubs and drivers thus stand in for components of the application that have not yet been developed. (A minimal example appears after the list of component types below.)

There are basically three types of components that we can test :

Reusable components - Components intended for reuse should be tested over a wider range of values than a component intended for a single focused use.

Domain components - Components that represent significant domain concepts should be tested both for correctness and for the faithfulness of the representation.

Commercial components - Components that will be sold as individual products should be tested not only as reusable components but also as potential sources of liability.
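
To make this concrete, here is a minimal sketch in Python of testing one component in isolation. The `OrderCalculator` class and its tax-service collaborator are hypothetical, invented purely for illustration; the collaborator is replaced with a stub so the component can be exercised on its own.

```python
import unittest
from unittest.mock import Mock

# Hypothetical component under test: a small class that is conceptually one unit.
class OrderCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service          # collaborator we will replace with a stub

    def total(self, net_amount):
        rate = self.tax_service.get_rate()      # call into the (possibly missing) component
        return round(net_amount * (1 + rate), 2)

class OrderCalculatorTest(unittest.TestCase):
    def test_total_includes_tax(self):
        tax_stub = Mock()                       # stub stands in for the real tax component
        tax_stub.get_rate.return_value = 0.20
        calculator = OrderCalculator(tax_stub)
        self.assertEqual(calculator.total(100), 120.0)

if __name__ == "__main__":
    unittest.main()
```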

Tuesday, 23 June 2015

MT - 41 - Positive and Negative testing !

- Positive testing is the process of validating the system against valid input data, while negative testing is the process of validating the system against invalid input data.

- In positive testing the tester checks only valid sets of values and verifies that the application behaves as expected with its expected inputs. A negative test checks whether the application behaves as expected (typically by rejecting or handling the input gracefully) when given invalid inputs.

- The main intention of positive testing is to verify that the application does not show an error when it is not supposed to. The main intention of negative testing is to verify that the application does show an error when it is supposed to, i.e. when it receives invalid input.

- Positive testing is carried out from a positive point of view and executes only the positive scenarios. Negative testing is carried out from a negative point of view and executes test cases only for invalid sets of input data.

- Positive testing always tries to prove that a given product meets the requirements and specifications; it covers normal, day-to-day scenarios and checks the expected behaviour of the application. Negative testing is a process to identify how the system responds to inputs it was not designed for, or to unhandled inputs, by providing a variety of invalid data. The main reason for negative testing is to check the stability of the application against different kinds of incorrect or unexpected data.

- Positive testing can be performed on the system by providing valid data as input. Negative testing can be performed on the system by providing invalid data as input.

- Positive testing is done with the intention of the test case passing, while negative testing is done with the intention of making it fail.

- In terms of BVA (boundary value analysis), positive testing uses test data whose values lie inside the boundary, while negative test cases use values outside the boundary.


Positive testing (valid) vs negative testing (invalid):

1. Positive testing means testing the application or system by giving valid data; negative testing means testing it by giving invalid data.
2. In positive testing the tester checks only valid sets of values; in negative testing the tester checks only invalid sets of values.
3. Positive testing is done from a positive point of view, for example checking a mobile number field by entering digits only, such as 9999999999; negative testing is done from a negative point of view, for example entering a mix of digits and letters, such as 99999abcde.
4. Positive testing verifies a known set of test conditions; negative testing tries to break the product with unexpected test conditions.
5. Positive testing checks how the product behaves when given valid data; negative testing covers scenarios the product was not designed or coded for, by providing invalid data.
6. The main purpose of positive testing is to prove that the product works as per the requirements and specifications; the main purpose of negative testing is to try to break the application by providing invalid data.
7. Positive testing tries to prove that the product meets the client's requirements and specifications; negative testing tries to prove that the product does things that are not stated in the client's requirements.
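
As a small illustration of the mobile number example above, the following sketch pairs one positive and one negative test. The `is_valid_mobile` validator is a hypothetical stand-in for whatever field validation the real application performs.

```python
import re
import unittest

def is_valid_mobile(value: str) -> bool:
    """Hypothetical validator: exactly 10 digits."""
    return bool(re.fullmatch(r"\d{10}", value))

class MobileFieldTest(unittest.TestCase):
    def test_positive_valid_number_is_accepted(self):
        # Positive test: valid input, expect acceptance.
        self.assertTrue(is_valid_mobile("9999999999"))

    def test_negative_letters_are_rejected(self):
        # Negative test: invalid input, expect graceful rejection.
        self.assertFalse(is_valid_mobile("99999abcde"))

if __name__ == "__main__":
    unittest.main()
```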

MT - 40 - Exhaustive testing - Is it possible in reality?

Exhaustive testing is an approach in which every possible input is tested against the system. It covers all combinations and permutations of input data, so every element of the code is exercised. This way we can gauge the software's endurance and its ability to handle extreme situations. In short, exhaustive testing means testing absolutely everything, including all usage scenarios and random inputs, to make sure the product cannot be crashed or broken by any input.

Is it possible ?
So here I'd like to say that it totally depends on the type and size of the application. For a very small application the answer may be yes, but for a large system it is impossible, because the number of input combinations grows explosively. Although it would provide complete assurance of the application's correctness, it is rarely, if ever, done in real practice.
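
A rough back-of-the-envelope sketch makes the point; the field length and execution rate below are assumptions, not figures from any real project.

```python
# Rough estimate of the input space for a single 10-character alphanumeric field.
alphabet_size = 26 + 26 + 10          # a-z, A-Z, 0-9
field_length = 10                     # assumed field length
combinations = alphabet_size ** field_length

tests_per_second = 1_000              # assumed (optimistic) automated execution rate
seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{combinations:.2e} combinations, about {years:.1e} years of non-stop testing")
# Roughly 8.4e17 combinations, i.e. tens of millions of years for a single field.
```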

MT - 39 - Verification and validation (V&V), or static and dynamic testing

In software testing, verification and validation are two independent procedures used to check whether the product (or any work product) meets its requirements and specifications.

Validation (dynamic testing):

Validation is the process of evaluating the final product to check whether the software meets the business needs. In simple words, the test execution we do in our day-to-day work is actually a validation activity; it includes smoke testing, functional testing, regression testing, system testing, etc.

Verification (static testing):

Verification is the process of evaluating the intermediate work products of the software development lifecycle to check whether we are on the right track towards the final product.


There are two aspects of V&V tasks:

Conforms to requirements (producer's view of quality)
Fit for use (consumer's view of quality)

The producer's view of quality, in simpler terms, means the developer's perception of the final product.
The consumer's view of quality means the user's perception of the final product.
When we carry out V&V tasks, we have to concentrate on both of these views of quality.

Methods of Verification

1. Walkthrough
2. Inspection
3. Review

Methods of Validation

1. Different phases of testing
2. End Users


Difference : 


1. Verification is a static practice of checking documents, design, code and program; it is done without executing the actual code. Validation is a dynamic mechanism of validating and testing the actual product; it is done by executing the code.
2. Verification ensures that the product is being built according to the requirements and design specifications, i.e. that work products meet their specified requirements. Validation ensures that the product actually meets the user's needs and that the specifications were correct in the first place, i.e. that the product fulfils its intended use when placed in its intended environment.
3. Verification is a discussion-based check of documents, code, the RTM and other files. Validation is the actual testing of the system through execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs and desk-checking. Validation uses methods like black-box (functional) testing, grey-box testing and white-box (structural) testing.
5. Verification checks whether the software conforms to its specification and makes sure the product is designed to deliver all functionality to the customer. Validation checks whether the software meets the customer's expectations and requirements.
6. Verification asks "Are we building the product right?"; validation asks "Are we building the right product?"
7. The targets of verification are the requirements specification, application and software architecture, high-level and complete design, database design, etc. The targets of validation are the actual product: a unit, a module, a set of integrated modules, and the final product.
8. Verification is done by the QA team to ensure that the software matches the specifications in the SRS document. Validation is carried out with the involvement of the testing team.
9. Verification generally comes first and is done before validation; validation follows verification.




Advantages of Software Verification:

Verification helps in lowering the defect count in the later stages of development.
Verifying the product at the starting phase of development helps in understanding the product better.
It reduces the chances of failure in the software application or product.
It helps in building the product as per the customer's specifications and needs.

Saturday, 20 June 2015

MT - 38 - Risk analysis and risk based testing !

Depending on the time constraints and the type of software being tested, there may be a need for risk analysis, where we focus more on the risky areas of the system. For example, in an online shopping application the riskiest area from the user's point of view would be the payment page: end users are most concerned about the transactions because they are paying money directly. So we need to do risk analysis here. Risk-based testing is the idea that we can organize our testing effort in a way that reduces the residual level of product risk when the system is deployed. We also take this approach when time is short: the riskiest areas are prioritized first and testing is performed on them.

Risk-based testing is done with the following steps:

(1) Risk analysis is done through meetings and planning sessions. The areas carrying the highest risk are identified; for this we need a clear understanding of the specifications. (A minimal risk-scoring sketch follows these steps.)

(2) Proper testing is done to explore and address each risk. We can choose to brainstorm with the stakeholders.

(3) Correcting one risk during execution may give rise to new risks, so proper mitigation plans need to be made for these.

(4) Risk-based testing also includes a measurement process that shows how well we are doing at finding and removing faults in key areas.
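
A minimal sketch of the risk-scoring idea in step (1), assuming hypothetical modules and likelihood/impact ratings agreed in the planning meetings; the score is simply likelihood multiplied by impact.

```python
# Hypothetical risk ratings (1 = low, 5 = high) gathered from planning meetings.
areas = {
    "payment page": {"likelihood": 4, "impact": 5},
    "search":       {"likelihood": 3, "impact": 2},
    "user profile": {"likelihood": 2, "impact": 2},
}

# Risk score = likelihood x impact; test the highest-scoring areas first.
ranked = sorted(areas.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

for name, rating in ranked:
    score = rating["likelihood"] * rating["impact"]
    print(f"{name}: risk score {score}")
```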





Advantages/Benefits of risk-based testing:

- Improved quality – all of the critical functions of the application are tested, with a real-time, clear understanding of project risk.

- More focus is placed on the risks to the business rather than only on the functionality of the information system.

- Provides a negotiating instrument for the client and the test manager when resources are limited.

- Associating product risks with the requirements identifies gaps. During testing, reporting always takes place in a language (risks) that all stakeholders understand.

- Testing concentrates on the most important matters first, giving optimal test delivery when time, money and qualified resources are limited. With the time and resources available we can rarely complete 100% testing, so we need a way to focus our testing effort while still managing the risk of the application under test. Effort is not wasted on non-critical or low-risk functions.

- Improved customer satisfaction – due to customer involvement and good reporting and progress tracking.

MT - 37 - Latent and masked defect ?

A latent defect is one that has existed in the system for a long time but was never detected, because the particular set of conditions needed to expose it was never met. It may have survived pre-release testing and even several releases without being caught.

A masked defect is an existing defect that has not yet caused a failure because another defect hides it: the first defect prevents the code path containing the second one from being exercised, so the masked defect is only detected once the hiding defect is fixed. Masked defects are often the result of latent defects elsewhere in the system.
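
A contrived Python illustration (not taken from any real system) of how one defect can mask another: the typo in the first condition keeps the discount branch from ever executing, so the wrong-sign defect inside it stays hidden until the typo is fixed.

```python
def checkout_summary(cart):
    # Defect 1: typo in the key ("discunt") means this branch never executes,
    # so discounted carts silently fall through to the full-price path.
    if "discunt" in cart:
        # Defect 2 (masked): the discount is added instead of subtracted, but
        # this line is unreachable while defect 1 exists, so testing never sees it.
        return cart["total"] + cart["discount"]
    return cart["total"]
```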

MT - 36 - Exploratory testing.

As the name says, exploratory testing means learn, explore and test. It is a concept of software testing where learning and testing are done simultaneously. It is the tester's responsibility to optimize the testing of the application through self-learning and by deciding on the risk areas. It is essentially a black-box testing technique.

- In the exploratory testing approach, testers do minimal planning and maximum test execution.

- The planning involves the creation of a test charter: a short declaration of the scope of a short, time-boxed test effort, the objectives, and the possible approaches to be used.

- Test design and test execution are performed in parallel, without formal documentation of test conditions, test cases or test scripts. However, this does not imply that other, more formal testing techniques will not be used.

Note: There may be some confusion between ad-hoc and exploratory testing. In exploratory testing the product is learned, explored and tested simultaneously, while ad-hoc testing is an informal, improvisational approach to assessing the viability of a product.

MT - 35 - What is gold plating in software testing ?

To explain this term, let me take you through a real-world example. Suppose you are testing an application and you find something additional in it: a feature that is not mentioned in any test case or requirement document, but which the client happens to like. This is called gold plating in software. To summarize, gold plating is adding an extra feature to the software that was not mentioned in the BRD, in order to delight the customer (a kind of surprise). Gold plating is not a bargain: it can increase operation and maintenance costs and reduce quality. It refers to continuing to work on a project or task well past the point where the extra effort is worth the value it adds (if any). After having met the requirements, the developer works on further enhancing the product, thinking the customer will be delighted to see additional or more polished features rather than just what was asked for. The customer might be disappointed with the results, and the extra effort by the developer might be futile.
Now this question can be twisted as: "What will you do if you find an additional feature in the software which is not mentioned in the BRD?"

In response we can simply say that this condition is termed gold plating; we can discuss it with the client and stakeholders, and if it is approved we can get the BRD, RTM and test case templates updated.

Gold plating will not always be appreciated by the client. In project management terms it is not considered good practice, because the new functionality may introduce risk and cause client dissatisfaction.

Thursday, 18 June 2015

MT - 34 - Entry and Exit Criteria in software testing!

Entry and exit criteria are sets of conditions that define when testing can begin and when it can be stopped. By defining entry and exit criteria you define your boundaries. For instance, you can define an entry criterion that the customer must provide the requirement document or acceptance plan; if this criterion is not met, you do not start the project. On the other end, you can also define exit criteria for your project; a common exit criterion is that the customer has successfully executed the acceptance test plan.

Entry criteria:
1) All source code is unit tested.
2) All QA resources have enough functional knowledge of the application and the tools.
3) Hardware and software are in place and ready to be used in testing.
4) Test plans and test cases have been reviewed and signed off.
5) A proper environment is in place to support the entire system test process.
6) All test hardware platforms have been successfully installed, configured and are functioning properly.
7) All standard software tools, including testing tools, have been successfully installed and are functioning properly.
8) All documentation and the design of the architecture are available.
9) All personnel involved in the system test effort have been trained in the tools to be used during the testing process.
10) A separate QA environment (with its own web server, database and application server instance) is available.
11) All the necessary documentation, design and requirements information is available, allowing testers to operate the system and judge its correct behaviour.
12) Proper test data is available.


Exit criteria:

1) No new defects are found over a defined period of time or testing effort.
2) Planned deliverables are ready.
3) High-severity defects are fixed.
4) The application has been satisfactorily completed before exiting the system test stage and can be declared complete.
5) The application provides the required services.
6) All application documentation has been completed and is up to date.
7) 100% of all high-priority bugs are resolved.
8) The application covers all the requirements.
9) All high-risk areas have been fully tested, with only minor residual risks left outstanding.
10) All scripts have passed with zero backlog.



Note: If testing is being done in phases, then the exit criteria of one phase may become the entry criteria of the next.

Wednesday, 17 June 2015

MT - 33 - Stubs and drivers - What and Why ?

These two terms refer to pieces of code used to complete the flow of an application when part of it is not yet developed. By using stubs and drivers, testers can test one part of the application even if other parts are not developed yet. They are basically dummy code modules, often used in integration testing, that replace the missing software and simulate the interfaces between software components in a simple manner.

A stub is called from the software component to be tested. Stubs are used in the top-down approach, where the highest-level components are created first, and they act as called functions. A stub contains only enough functionality to be successfully called by a higher-level component; it simulates the behaviour of a lower-level component.






Drivers are used in the bottom-up approach, where the lower-level components are created first, and they act as calling functions. Temporary components called drivers are written as substitutes for the missing calling code; the lowest-level components can then be tested using the test driver.

Stubs and drivers are used in conditions like the following (a small sketch follows these points):

• Suppose we want to test the interface between modules A and B, and only module A has been developed. We cannot fully test module A on its own, but if a dummy module standing in for B is prepared, we can use it to test module A.
• Module B cannot send or receive data from module A directly in such cases, so we have to transfer data from one module to the other via some external piece of code. When that external code calls the component under test, it is called a driver.
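
Here is a minimal sketch of both ideas, assuming hypothetical modules A (developed) and B (not yet written): the stub stands in for B when module A calls downward, while the driver is the dummy calling code used to exercise a lower-level component from above.

```python
# Module A is developed; module B is not, so we simulate both directions.

def module_a_process(order, module_b):
    """Module A: the component actually under test (top-down)."""
    price = module_b.lookup_price(order["item"])   # call into the missing module B
    return price * order["quantity"]

class ModuleBStub:
    """Stub: a dummy stand-in for the not-yet-developed module B.
    It is *called by* the component under test."""
    def lookup_price(self, item):
        return 10.0                                # canned answer, no real logic

def driver_for_module_b(real_module_b):
    """Driver: dummy calling code used in bottom-up testing.
    It *calls* the component under test with prepared inputs;
    it can be run once the real module B exists."""
    result = real_module_b.lookup_price("book")
    print("module B returned:", result)

# Top-down: test module A in isolation using the stub.
print(module_a_process({"item": "book", "quantity": 3}, ModuleBStub()))   # 30.0
```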

MT - 32 - What is Test Harness?

A test harness is a collection of software and test data required to test an application by running it under different testing conditions (such as stress, load or data-driven testing) and monitoring its behaviour and outputs. It may be used to observe behaviour or to analyse results under different sets of conditions.

A test harness consists of the following main parts:

- Test execution engine / tool
- Test script repository / database
- Results folder for analysis

Automation testing is the use of a tool to control the execution of tests and compare the actual results with the expected results; it also involves setting up test pre-conditions. In the automation testing world, a test harness refers to the framework and software systems that contain the test scripts, the parameters (in other words, data) necessary to run them, and the facilities to gather test results, compare them where necessary, and monitor the outcome.

The typical objectives of a test harness are to:

- Automate the testing process.
- Execute test suites of test cases.
- Generate associated test reports.
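
As a toy illustration of those three objectives, the sketch below strings together a tiny execution engine, an in-code "script repository" and a results report. The test functions are hypothetical, and a real harness (QTP/UFT, a unit-test framework, etc.) does far more.

```python
# Tiny test harness: an execution engine, a script "repository" and a results report.

def test_login_accepts_valid_user():
    assert ("admin", "secret") == ("admin", "secret")

def test_login_rejects_blank_password():
    assert "" != "secret"

TEST_SUITE = [test_login_accepts_valid_user, test_login_rejects_blank_password]

def run_suite(suite):
    results = []
    for test in suite:                       # execution engine: run each script
        try:
            test()
            results.append((test.__name__, "PASS"))
        except AssertionError:
            results.append((test.__name__, "FAIL"))
    return results

if __name__ == "__main__":
    for name, outcome in run_suite(TEST_SUITE):   # generate the test report
        print(f"{name}: {outcome}")
```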

A test harness may provide some of the following benefits:

- Increased productivity due to automation of the testing process.
- Increased probability that regression testing will occur.
- Increased quality of software components and application.
- Ensure that subsequent test runs are exact duplicates of previous ones.
- Testing can occur at times that the office is not staffed (e.g. at night)
- A test script may include conditions and/or uses that are otherwise difficult to simulate (load, for example)

Example:
If I were talking about a project that uses QTP for functional testing, with ALM linked to organize and manage all the scripts, runs and results, and test data picked from an MS Access DB, then the following would be the test harness for this project:
  • The QTP (UFT) software itself
  • The scripts and the physical location where they are stored
  • The Test sets
  • MS Access DB to supply parameters, data or the different conditions that are to be supplied to the test scripts
  • HP ALM
  • The test results and the comparative monitoring attributes
As you can see, software systems (automation, test management, etc.), data, conditions and results all become an integral part of the test harness – the only exclusion being the AUT itself.

MT - 31 - Memory Leak in software!

A memory leak is a condition in which a program, application or other piece of software does not release memory it has allocated. Leaked memory consists of blocks of allocated memory that the program no longer references. It is a kind of bug and should always be fixed. Leaks waste space by filling pages of memory with inaccessible data and waste time through extra paging activity. Leaked memory eventually forces the system to allocate additional virtual memory pages for the application, an allocation that could have been avoided by reclaiming the leaked memory. When allocated memory is never released back to the OS, the application can eventually throw an "out of memory" error, and the system may become slow and sluggish. The best practice is to release memory as soon as the activity that required it is complete.

It is relatively easy to detect a memory leak: run a series of tests on the application and then return it to its initial state. The memory should be released by the system; if that does not happen, free memory keeps getting consumed. In object-oriented programming, a memory leak may happen when an object is stored in memory but can no longer be accessed by the running code.
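
A hedged Python illustration of the pattern: in a garbage-collected language the "leak" usually takes the form of an ever-growing structure that keeps objects referenced, so they can never be reclaimed. The report cache below is hypothetical.

```python
# A module-level cache that is written to but never cleared: every request adds
# an entry and nothing removes them, so memory use grows for the life of the process.
_report_cache = {}

def render_report(request_id, rows):
    html = "<table>" + "".join(f"<tr><td>{r}</td></tr>" for r in rows) + "</table>"
    _report_cache[request_id] = html      # leak: the reference is kept forever
    return html

# Fix sketch: release the memory once the activity is done, e.g.
# _report_cache.pop(request_id, None), or use a bounded cache such as
# functools.lru_cache / an LRU dictionary.
```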

Tuesday, 16 June 2015

MT - 30 - Pesticide Paradox !

The pesticide paradox is, in a sense, the flip side of defect clustering. In defect clustering we saw that a module containing a large number of defects gets tested again and again; defects are removed from that module, but defects may remain in other modules. The pesticide paradox describes what happens when the same set of tests is run over a module again and again: a time comes when those tests find no new defects in that module, while the other modules remain untouched.

The name comes from a real-world analogy: using the same pesticide repeatedly makes insects immune to it; similarly, executing the same test cases again and again stops finding new bugs in the system.

So in order to find new bugs we need to change the test cases: old regression scripts/suites need to be updated, and new and different tests need to be written to exercise different parts of the software or system and potentially find more defects.
Now we have two choices.

1. Write a whole new set of test cases to exercise different parts of the software.
2. Prepare new test cases and add them to the existing ones.

In the first case, we will find more potential defects in areas we did not focus on earlier, or areas where the developer was not especially cautious because the tester had not been raising defects there. But by neglecting the earlier-identified defect cluster, we take the risk of giving less importance to an area that was very productive in finding defects in earlier iterations of testing.


In the second case, we can find new potential defects in the new areas while still focusing on the earlier-identified defect cluster. On the other hand, the number of test cases becomes so large that it increases the testing time and, in turn, the cost of testing. Too many redundant tests can become an overhead.

MT - 29 - Defect Clustering !

While testing a large system we often notice that a large number of defects come from a small number of modules; a majority of defects revolve around a few modules. This concept is called defect clustering in software testing. It is also known as the Pareto principle or the 80-20 rule and is usually seen in larger applications: roughly 80% of the defects in the application are found in 20% of the modules. This gives a good indication that when a defect is found in one area of the application, there are probably more defects in that particular area, so it is worth investing more time testing that part of the application to find as many defects as possible. The reason is often the complexity of the code in that particular area. We can also choose such risky modules for our regression suite.

Sunday, 14 June 2015

MT - 28 - Authorization and authentication !

These two similar-looking terms mean quite different things. Let's look at the difference between them:

Authentication :

An authentication system is how you identify yourself to the computer. The goal of an authentication system is to verify that users actually are who they claim to be. There are many ways of authenticating a user; any combination of the following are good examples.

- Password-based authentication

- Device-based authentication

- Biometric authentication (e.g. retina scanners, hand scanners)


Authorization: 

Once the system knows who the user is through authentication, authorization is how the system decides what that user can do. It determines what an individual is allowed to do in the system after he or she (or it) has been authenticated.

A good example of this is the use of group permissions, or the difference between a normal user and the superuser on a Unix system.

There are many types of authorization (a small code sketch follows this list):

- ACL(Access Control Lists)

- Group or Role Membership

- Privilege Ownership

- Permissions
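
A minimal sketch, with hypothetical users, roles and permissions, showing authentication happening first and authorization second (see also the summary below); real systems would use salted, slow password hashes rather than plain SHA-256.

```python
import hashlib

# Hypothetical user store: username -> (sha256 of password, role).
USERS = {"alice": (hashlib.sha256(b"wonderland").hexdigest(), "admin"),
         "bob":   (hashlib.sha256(b"builder").hexdigest(),    "user")}

# Group/role membership mapped to permissions.
ROLE_PERMISSIONS = {"admin": {"read", "write", "delete"},
                    "user":  {"read"}}

def authenticate(username, password):
    """Authentication: who are you? Verify the claimed identity."""
    record = USERS.get(username)
    return record is not None and record[0] == hashlib.sha256(password.encode()).hexdigest()

def authorize(username, action):
    """Authorization: what may you do? Check the authenticated user's privileges."""
    role = USERS[username][1]
    return action in ROLE_PERMISSIONS[role]

if authenticate("bob", "builder"):        # authentication succeeds
    print(authorize("bob", "delete"))     # authorization fails -> False
```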

Summary :


1. Authentication verifies who you are; authorization verifies what you are allowed to do.
2. There are different ways to authenticate a user, such as password-based, device-based and biometric authentication; authorization is typically granted through mechanisms such as group permissions, for example a normal user versus the superuser on a Unix system.
3. Authentication establishes identity; authorization decides what privileges are given to a person or program.