Saturday, 30 May 2015

MT - 27 - Requirement Traceability Matrix (RTM)

Before going to the RTM, let me give a brief overview of the traceability matrix. A traceability matrix is a document that correlates any two documents that require a many-to-many relationship, in order to check the completeness of that relationship. It is used to track requirements and to check that the current project requirements are met.

Requirement Traceability Matrix
The Requirement Traceability Matrix (RTM) captures all requirements proposed by the client or the development team, along with their traceability, in a single document delivered at the conclusion of the life cycle.

In other words, it is a document that maps and traces user requirements to test cases. The main purpose of the RTM is to verify that every requirement is covered by test cases so that no functionality is missed during testing.

It also helps to trace a test case back to its requirement and vice versa, so if we find a bug we can trace it directly to the requirement: it is bi-directional.

The RTM is also used as a test planning tool to help determine how many tests are required, what types of tests are required, whether tests can be automated or must be manual, and whether any existing tests can be re-used. Using the RTM in this way helps ensure that the resulting tests are as effective as possible.

Also, in case of any change in the application or its requirements, we can directly identify the affected test cases and update them.

Requirement Traceability Matrix – Parameters include

Requirement ID
Risks
Requirement Type and Description
Trace to design specification
Unit test cases
Integration test cases
System test cases
User acceptance test cases
Trace to test script
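
As a minimal illustration, one row of an RTM might look like this (all IDs and names below are hypothetical):

Requirement ID: REQ-001
Requirement Type and Description: Functional - user can log in with email and password
Risks: High
Trace to design specification: DS-4.2
Unit test cases: UT-101, UT-102
Integration test cases: IT-055
System test cases: ST-012
User acceptance test cases: UAT-003
Trace to test script: TS_Login_01

If REQ-001 changes, this row immediately tells us which design section and which test cases and scripts have to be revisited.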

MT - 26 - Mutation testing.

The main concept of mutation testing is to observe the behaviour of the application after injecting a defect. This practice of injecting defects is called defect seeding: a defect is intentionally introduced into a product to check the detection rate. It is mostly done during unit testing, where a small change is made in the code to see how the application behaves. We do it to check that our test cases are smart enough to catch the defect.

In this technique we mutate (change) certain statements in the source code and check whether the test code is able to find the errors. These mutants are run with input data from a given test set. If the test set can distinguish a mutant from the original program, i.e. it produces a different execution result, the mutant is said to be killed. Otherwise, the mutant is called a live mutant.

A mutant remains live either because it is equivalent to the original program, i.e. it is functionally identical to it, or because the test data is inadequate to kill it. Inadequate test data can be improved by adding test cases that kill the live mutants. A test set that can kill all non-equivalent mutants is said to be adequate; this is measured by the mutation score: mutation score = (killed mutants / total non-equivalent mutants) × 100%.

Following are the steps to execute mutation testing:

Step 1: Faults are introduced into the source code of the program by creating many versions called mutants. Each mutant should contain a single fault, and the goal is to cause the mutant version to fail, which demonstrates the effectiveness of the test cases.

Step 2: Test cases are applied to the original program and also to the mutant program. A test case should be adequate, i.e. tweaked so that it can detect faults in the program.

Step 3: Compare the results of the original and mutant programs.

Step 4: If the original program and the mutant program generate different outputs, the mutant is killed by the test case: the test case is good enough to detect the change between the original and the mutant program.

Step 5: If the original program and the mutant program generate the same output, the mutant is kept alive. In such cases, more effective test cases need to be created that kill all mutants.
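
To make the steps concrete, here is a minimal sketch in Python; the function and the mutant are invented for illustration. The mutant applies a single decision mutation (">=" changed to ">"), and a boundary-value test case kills it because the two versions produce different results for the same input:

# Original unit under test: decides whether an order gets a discount.
def gets_discount(amount):
    return amount >= 100  # original condition

# Mutant: one seeded fault, ">=" mutated to ">".
def gets_discount_mutant(amount):
    return amount > 100

# A test input from our test set; 100 is the boundary value.
test_input = 100

if gets_discount(test_input) != gets_discount_mutant(test_input):
    print("Mutant killed: the test set detects the seeded fault.")
else:
    print("Mutant survived: add a test case (e.g. the boundary value 100).")

A test set without the boundary value 100 would leave this mutant alive, which is exactly the signal that the test cases need to be improved.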


Mutation Testing Types:

Value Mutations: An attempt to change values to detect errors in the program. We usually change one value to a much larger value or to a much smaller one. The most common strategy is to change constants.

Decision Mutations: Decisions/conditions are changed to check for design errors. Typically, arithmetic operators are changed to locate defects, and we can also mutate all relational operators and logical operators (AND, OR, NOT).

Statement Mutations: Changes to statements, such as deleting or duplicating a line, which mimics the errors that might arise when a developer copy-pastes code from somewhere else.


Mutation testing is based on two hypotheses:
Competent programmer hypothesis: This hypothesis states that most software faults introduced by experienced programmers are due to small syntactic errors. It states that programmers are competent, which implies that they tend to develop programs close to the correct version. As a result, although there may be faults in a program delivered by a competent programmer, we assume that these faults are merely a few simple faults which can be corrected by a few small syntactical changes. Therefore, in mutation testing, only faults constructed from simple syntactical changes are applied, representing the faults made by "competent programmers".

Coupling effect: The coupling effect asserts that simple faults can cascade, or couple, to form other emergent faults. The coupling effect hypothesis is that complex faults are coupled to simple faults in such a way that a test data set which detects all simple faults in a program will also detect a high percentage of the complex faults.

Friday, 29 May 2015

MT - 25 - Use-case and Test-case !!

A use case is a high-level scenario that specifies the functionality of the application from a business perspective. It describes how a user uses a specific piece of functionality, covering the entire flow of interaction the user has with the system/application. A use case describes the behaviour of a business system from the business user's point of view: it should describe, in plain business terms, how the user interacts with the system (assuming it is an online use case) and what the system does in response. It does not determine how the system works internally; that is, it does not define the implementation.


A test case is the implementation of the high-level scenario (use case), giving a detailed, step-by-step account of the procedure for testing a particular piece of functionality of the application. Things get a lot more technical here. A test case describes a test condition to apply to the application in order to validate it. Test cases are written on the basis of use cases: they check whether the various functionalities the user relies on to interact with the system work correctly or not.

Differences:

1. Use Case: prepared by a business analyst or client representative.
   Test Case: prepared by a test engineer; in small companies it is sometimes prepared by a quality analyst.

2. Use Case: cannot be derived from test cases.
   Test Case: derived from use cases.

3. Use Case: describes step-by-step instructions for how to use the functionality.
   Test Case: verifies that the functionality behaves as per the instructions mentioned in the use case.

4. Use Case: not designed to be executed; it describes the end-to-end flow of the system.
   Test Case: designed, and later executed.

5. Use Case: derived from the BRS (Business Requirement Specification).
   Test Case: derived from the use case.

6. Use Case: a pictorial representation of client (customer) requirements.
   Test Case: not represented diagrammatically; documented in an Excel sheet or, in big companies, in a test case management tool.

7. Use Case: a document that describes the flow of events of an application.
   Test Case: a document that contains the action, event, and expected result for a particular feature of an application.

8. Use Case: written by a BA (business analyst) on the basis of client or customer requirements.
   Test Case: written by a test engineer or quality analyst on the basis of the use case document.

9. Use Case: tells the story of how people interact with a software system to achieve a goal.
   Test Case: verifies that the goal is achieved as per the instructions of the use case.

MT - 24 - What does a Bug report contain?

While assigning a bug we need to give its details. Here are a few points that help in writing an effective bug report:

1) Clearly specified bug number:
Always assign a unique number to each bug report; it identifies the bug record. If you are using an automated bug-reporting tool, this unique number is generated automatically each time you report a bug. Note the number and a brief description of each bug you report. If it is done manually, use an incremental sequence.

2) Reproducible:
Before assigning any bug, make sure it is reproducible. Clearly mention the steps to reproduce the bug, and do not assume or skip any step. A bug described step by step is easy to reproduce and fix.

3) Be specific:
Do not try to merge more than one bug into one report. Be specific and to the point about the bug you are reporting, and try to summarize the problem in minimum words yet in an effective way.

4) Business impact:
Always try to show the impact of the bug on the application (AUT). Mention the functionality that is impacted and the issues faced because of it.


Some details about the bug:

Bug Name: mention the title of the bug.

Bug Id: mention the unique bug ID.

Reporter: your name and email address.

Reported On: date.

Product: the product in which you found this bug.

Version: the product version, if any.

Component: the major sub-module of the product.

Platform: mention the hardware platform on which you found this bug ('PC', 'Mac', 'HP', 'Sun', etc.).

Operating system: mention all operating systems on which you found the bug (Windows, Linux, Unix, SunOS, Mac OS). Mention the different OS versions as well, if applicable (Windows NT, Windows 2000, Windows XP, etc.).

Priority:
When should the bug be fixed? Priority is generally set from P1 to P5: P1 means "fix the bug with highest priority" and P5 means "fix when time permits".

Severity:
This describes the impact of the bug. It may be assigned by the tester, team lead, test manager, or triage team.

Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, you can specify that developer's email address; otherwise keep it blank, and the bug will be assigned to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.

Actual and expected result: always mention the actual and expected results for the bug - what you are getting and what the expected outcome was.

URL:
The page URL on which the bug occurred.

Traces: provide traces or logs wherever the bug hits a system error.

Summary:
A brief summary of the bug, mostly in 60 words or fewer. Make sure the summary reflects what the problem is and where it is.

Description:
A detailed description of the bug. Use the following fields in the description:

Reproducing steps: mention the steps clearly so the bug can be reproduced. You can also provide screenshots.


Some more tips for writing a good bug report:

1) Report the problem immediately: If you find a bug while testing, do not wait to write a detailed bug report later. Write it immediately; this ensures a good and reproducible bug report. If you decide to write the report later, the chances are high that you will miss important steps.

2) Reproduce the bug three times before writing the report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If the bug is not reproducible every time, you can still file it, mentioning its intermittent nature.

3) Test the same bug occurrence in other, similar modules:
Sometimes developers use the same code for different, similar modules, so the chances are high that a bug in one module occurs in the others as well. You can even try to find a more severe version of the bug you found.

4) Write a good bug summary:
The bug summary helps developers quickly analyze the nature of the bug. A poor-quality report unnecessarily increases development and testing time. Communicate well through your summary, and keep in mind that the summary is used as a reference when searching the bug inventory.

5) Read the bug report before hitting the Submit button:
Read all the sentences, wording, and steps used in the report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.

6) Do not use abusive language:
It's nice that you did good work and found a bug, but do not use this credit to criticize the developer or attack any individual.

MT - 23 - Alpha and Beta(field) testing !

 -> Alpha Testing
Alpha testing is a type of acceptance testing performed to identify all possible issues/bugs before releasing the product. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out by professional testers. It is a final round of testing before release to check the application from the end user's perspective.

-> Beta Testing (Field Testing)
Beta testing of a product is performed by "real users" of the software application in a "real environment" and can be considered a form of external user acceptance testing. Beta testing reduces the risk of product failure and increases the quality of the product through customer validation. It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of beta testing, and it exercises the product in a real-time environment. There are basically two types of beta versions:

Closed beta versions are released to a select group of individuals for a user test and are invitation-only.

Open betas go out to a larger group: the general public and anyone interested. The testers report any bugs they find and sometimes suggest additional features they think should be available in the final version.


MT - 22 - Bug Leakage and Bug Release

Bug leakage occurs when a bug is detected that should have been detected in an earlier build/version of the application. A defect that exists during testing yet goes unfound by the tester and is eventually found by the end user is also called bug leakage. It is a missed defect that could have been caught during testing.

A bug release is when a particular version of software is released with a set of known bugs/defects. These bugs are usually of low severity and/or low priority. It is done when the company can afford the existence of the bug in the released software rather than the time/cost of fixing it in that particular version. These bugs are usually mentioned in the release notes.

In summary, bug leakage is a defect that was missed in the application, while a bug release is a final build of the application in which we knowingly leave bugs.

MT - 21 - BUG life cycle

To make it more realistic, let's start with the process flow of the bug life cycle:

-> When a tester finds a bug, it is assigned the status NEW.

-> The bug is assigned to the development project manager or project lead, who analyzes it and checks whether it is a valid defect. If it is not valid, the bug is rejected; its status is now REJECTED.

-> If it is valid, the defect is next checked for scope. When a bug is not part of the current release, it is POSTPONED/DEFERRED.

-> Next, the developer checks whether a similar defect was raised earlier. If yes, the defect is assigned the status DUPLICATE and it is CLOSED/CANCELLED.

-> When the bug is assigned to a developer, it is given the status IN-PROGRESS.

-> Once the code is fixed, the defect is assigned the status FIXED.

-> Next, the tester re-tests the code. If the test case passes, the defect is CLOSED.

-> If the test case fails again, the bug is RE-OPENED and assigned back to the developer. That's all there is to the bug life cycle.

This is the whole process of the life cycle. Now, there are different statuses within it. Let's see each of them:

New: When a defect is logged and posted for the first time, its state is "new".

Assigned: After the tester has posted the bug, the test lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is "assigned".

Open: At this state the developer has started analyzing and working on the defect fix.

Fixed: When the developer makes the necessary code changes and verifies them, he/she can mark the bug status as "fixed", and the bug is passed to the testing team.

Pending retest: After fixing the defect, the developer hands the particular code back to the tester for retesting. The testing is pending on the tester's end, hence the status "pending retest".

Retest: At this stage the tester retests the changed code the developer has handed over, to check whether the defect is fixed or not.

Verified: The tester tests the bug again after it has been fixed by the developer. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "verified".

Reopen: If the bug still exists even after being fixed by the developer, the tester changes the status to "reopened", and the bug goes through the life cycle once again.

Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "closed". This state means that the bug is fixed, tested, and approved.

Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to "duplicate".

Rejected: If the developer feels that the bug is not genuine, he rejects it; the state of the bug is then changed to "rejected".

Deferred: A bug changed to the deferred state is expected to be fixed in a later release. Many factors can lead to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

Not a bug: The state is given as "not a bug" when there is no change in the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as a change of colour of some text, then it is not a bug but just a change in the application's appearance.

MT - 20 - QA and QC

Quality Assurance (QA): A set of planned and systematic activities designed to ensure that the development and/or maintenance process is adequate for a system to meet its objectives. It is the function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented. When statistical tools and techniques are applied to processes (process inputs and operational parameters), this is called Statistical Process Control (SPC), and it is part of quality assurance. QA is basically on the process side of the project.

Quality Control (QC): The function of software quality that checks that the project follows its standards, processes, and procedures, and that the project produces the required internal and external (deliverable) products. When statistical tools and techniques are applied to finished products (process outputs), this is called Statistical Quality Control (SQC), and it comes under quality control. QC is on the product side.

Differences:

- Quality Assurance is the part of the quality management process that concentrates on providing confidence that quality requirements will be fulfilled. Quality Control is the part that concentrates on fulfilling the quality requirements in the end product.

- Quality Assurance is a set of activities for ensuring quality in the processes by which products are developed. Quality Control is a set of activities for ensuring quality in the products themselves; these activities focus on identifying defects in the actual products produced.

- Quality Assurance is the process of managing for quality. Quality Control is used to verify the quality of the output.

- The goal of Quality Assurance is to prevent defects from being introduced into the software application, which helps improve the development and testing processes. The goal of Quality Control is to identify defects in the software application after it is developed.

- QA is proactive: it identifies weaknesses in the processes. QC is reactive: it identifies defects and also corrects them.

- QA tracks outcomes and adjusts the process to meet expectations. QC finds defects and suggests improvements.

- Everyone involved in developing the software application is responsible for quality assurance. The testing team is responsible for quality control.

- Quality Assurance is process oriented. Quality Control is product oriented.

- Quality Assurance aims at the prevention of defects to improve quality. Quality Control aims at the detection of defects to improve quality.

- QA identifies weaknesses in processes to improve them. QC identifies defects to be fixed.

- QA checks that the correct product is being built. QC checks that the product built is correct.

- QA is a staff function. QC is a line function.

- QA is done before Quality Control. QC is done only after the Quality Assurance activity is completed.

- Quality Assurance means planning how a process will be carried out. Quality Control means acting on the process by executing it.

Wednesday, 27 May 2015

MT - 19 - Black-box Testing.

Often asked in testing interviews: does black-box testing mean just testing the functionality? As a definition, we can say that in BB testing we consider the system as a black box and are not concerned with its internal structure. We just give input and observe the response of the system. BB testing is used at all levels of testing (integration, system, etc.). Here the requirements of the application are analysed and then testing is performed by the tester.

Test design techniques
Typical black-box test design techniques include:

Equivalence partitioning: This divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. For each partition the behaviour of the system remains the same, so it avoids repetition of cases: for all the data in a given range, the behaviour of the system will be the same.

Boundary value analysis: This technique is used to check the boundary values of the system. Boundary values are the values at which the behaviour of the software changes, so the possibility of errors is greatest at the boundaries.
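
As a small sketch of both techniques in Python, assuming a hypothetical rule that a valid age is between 18 and 60 inclusive: equivalence partitioning picks one representative per partition, and boundary value analysis adds the values at and around the edges.

# Hypothetical rule under test: a valid age is 18-60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition
# (below range, inside range, above range).
partition_cases = {5: False, 35: True, 70: False}

# Boundary value analysis: values at and just around each boundary.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for cases in (partition_cases, boundary_cases):
    for age, expected in cases.items():
        result = is_valid_age(age)
        print(f"age={age}: expected {expected}, got {result}, "
              f"{'PASS' if result == expected else 'FAIL'}")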

Cause–effect graph: A cause–effect graph is a directed graph that maps a set of causes to a set of effects. The causes may be thought of as the input to the program, and the effects as the output. Usually the graph shows the nodes representing the causes on the left side and the nodes representing the effects on the right side. There may be intermediate nodes in between that combine inputs using logical operators such as AND and OR. It is basically used to design test cases from the functionality.

[Figure: cause-effect flow diagram]


Error guessing: A test method in which the test cases used to find bugs are based on experience from prior testing. The scope of the test cases usually relies on the software tester involved, who uses past experience and intuition to determine what situations commonly cause software failure or may cause errors to appear. Typical errors include division by zero, null pointers, or invalid parameters.

Decision table testing: This is also a kind of cause–effect technique in which we create a decision based on certain combinations of inputs. Each condition corresponds to a variable, relation, or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to.
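
A hedged sketch of the technique in Python, for an invented login rule: each row of the decision table is one combination of condition alternatives, and the entry is the expected action.

# Invented rule: access is granted only when the account is active AND
# the password is correct; a locked (inactive) account shows a locked error.
def login_action(active, password_ok):
    if not active:
        return "show_locked_error"
    return "grant_access" if password_ok else "show_password_error"

# Decision table: one row per combination of conditions.
decision_table = [
    # (active, password_ok) -> expected action
    ((True,  True),  "grant_access"),
    ((True,  False), "show_password_error"),
    ((False, True),  "show_locked_error"),
    ((False, False), "show_locked_error"),
]

for (active, password_ok), expected in decision_table:
    actual = login_action(active, password_ok)
    assert actual == expected, f"rule broken for {(active, password_ok)}"
print("All decision-table cases pass.")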


Types of Black Box Testing
There are many types of black box testing, but the following are the prominent ones:

Functional testing - This black box testing type is related to the functional requirements of a system; it is done by software testers.

Non-functional testing - This type of black box testing is not related to testing specific functionality, but to non-functional requirements such as performance, scalability, and usability.

Regression testing - Regression testing is done after code fixes, upgrades, or any other system maintenance to check that the new code has not affected the existing code.

MT - 18 - All about Agile!!

Agile methodology is a way of managing a software project in a better way. The motive of Agile is to bring agility to project management and the way work is handled. It promotes adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages rapid and flexible response to change. It is based on four values:

-> Individuals and interactions over processes and tools
-> Working software over comprehensive documentation
-> Customer collaboration over contract negotiation
-> Responding to change over following a plan


-> Active user involvement should be there to get feedback and have the requirements clearly communicated to the team, so that the software is built in accordance with the needs of the end user. The user representative reviews the requirements and the product on a daily basis and discusses them with the team.

-> The project team must be empowered to make decisions, in order to ensure that it is their responsibility to deliver the product and that they have complete ownership. Any interference with the project team is disruptive and reduces their motivation to deliver.

-> In Agile development projects, requirements are allowed to evolve, but the timescale is fixed. So to include a new requirement, or to change a requirement, the user or product owner must remove a comparable amount of work from the project in order to accommodate the change.

-> In Agile development we have the flexibility to develop and test even a small feature with few requirements, and we can update or enhance the functionality later. XP is one of the Agile methods that is extensively followed. We break the requirement into small pieces and work on them.

-> Analyse, develop, test - yes, this is what we follow in Agile. We take one feature, do requirement analysis on it, then develop it, and then we go for testing.

Advantages of this iterative approach to software development include:
Reduced risk: clear visibility of what’s completed to date throughout a project

Increased value: delivering some benefits early; being able to release the product whenever it’s deemed good enough, rather than having to wait for all intended features to be ready

More flexibility/agility: can choose to change direction or adapt the next iterations based on actually seeing and using the software

Better cost management: if, like all-too-many software development projects, you run over budget, some value can still be realized; you don’t have to scrap the whole thing if you run short of funds

Frequent delivery of product: Agile believes in frequent delivery of the product, and feedback is then taken to improve it in iterative cycles. Iterative cycles are also called sprints.

-> Completeness of one feature in one go: each sprint consists of a feature, and once the sprint is completed, the development of that feature is complete. So in agile development, make sure that each feature is fully developed, tested, styled, and accepted by the product owner before counting it as "DONE!". And if there's any doubt about which activities should or shouldn't be completed within the sprint for each feature, "DONE!" should mean shippable.

-> Integration of testing into the development life cycle: we start testing early in the development phase to keep the software in accordance with the requirements.

There are various agile methodologies. The famous ones among them are:

Scrum (an agile development method) concentrates particularly on how to manage tasks within a team-based development environment. Scrum is the most popular and most widely adopted agile method - I think because it is relatively simple to implement and addresses many of the management issues that have plagued IT development teams for decades.

XP (Extreme Programming) is a more radical agile methodology, focusing more on the software engineering process and addressing the analysis, development and test phases with novel approaches that make a substantial difference to the quality of the end product.

Friday, 22 May 2015

MT - 17 - Smoke and sanity testing.


Let's look at the diagram below to understand the basic difference between smoke and sanity testing:

[Diagram: sanity vs. smoke testing]

Smoke Testing:

Smoke testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are run on the build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.

In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
For example, typical smoke tests would be: verify that the application launches successfully, check that the GUI is responsive, etc.

The term smoke testing came from hardware testing: when you get new hardware and power it on, if smoke comes out, you do not proceed with testing. It is done to check the basic health of the build and to make sure it is possible to continue testing, and it happens at the beginning of the software testing cycle. A subset of the most basic and important test cases is selected and run to make sure that the most crucial functions of the software are working fine.
It follows a shallow-and-wide approach, in which you cover all the basic functionality of the software.
A smoke test is scripted, i.e. you have either manual test cases or automated scripts for it.
In some organizations smoke testing is also known as a Build Verification Test (BVT), as it ensures that the new build is not broken before the actual testing phase starts. A minimal sketch of such a script follows.
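
As a rough sketch of an automated BVT (the base URL and paths are hypothetical placeholders, and only Python's standard library is used), the script probes a few critical pages and rejects the build if any of them fail:

# Minimal smoke test sketch: shallow checks of critical functionality only.
import sys
import urllib.request

BASE_URL = "http://localhost:8080"           # hypothetical app under test
CRITICAL_PATHS = ["/", "/login", "/health"]  # most important pages only

failures = []
for path in CRITICAL_PATHS:
    try:
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
            if resp.status != 200:
                failures.append(f"{path}: HTTP {resp.status}")
    except OSError as exc:
        failures.append(f"{path}: {exc}")

if failures:
    print("Smoke test FAILED - reject the build:", failures)
    sys.exit(1)   # a badly broken build never reaches detailed testing
print("Smoke test passed - build accepted for further testing.")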

Sanity Testing -

After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.

The objective is not to verify the new functionality thoroughly, but to determine that the developer has applied some rationality (sanity) while producing the software. For instance, if your scientific calculator gives the result 2 + 2 = 5, there is no point in testing advanced functionality like sin 30 + cos 50. When there are minor issues with the software and a new build is obtained after fixing them, a sanity test is performed on that build instead of complete regression testing. You can say that sanity testing is a subset of regression testing.
Sanity testing is done after thorough regression testing is over, to make sure that defect fixes or changes made after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase. It follows a narrow-and-deep approach, with detailed testing of some limited features. It is like specialized testing used to find problems in particular functionality, and it is done with the intent of verifying that end-user requirements are met.

Thursday, 21 May 2015

MT - 16 - Build and Release.

BUILD: a number given to installable software that is handed to the testing team by the development team. Build numbers are incremental and sequential; a new build is created whenever the development team makes a change.

RELEASE: a number given to installable software that is handed over to the customer by the developer or tester.
The build, release, and version information is displayed on the software's help page; using it, the customer support team can tell which release, version, and build the customer is using.
e.g. "9.4.123.2" (ReleaseNumber.VersionNumber.BuildNumber.PatchNumber)

A build is given by the dev team to the test team, while a release is the executable given to the customer. A build that has been tested and verified by the test team is given as a release to the customer. A build can be rejected by the test team, while a release can't. A release may contain many builds.

MT - 15 - Monkey testing

Monkey testing is a type of black box testing used mostly at the unit level, in which the system under test is exercised randomly: the input data is generated randomly and keyed into the system, and the tester checks that the software does not crash. It is often adopted to complete the testing when there is a resource/time crunch. In this testing we use smart monkeys and dumb monkeys.

Smart monkeys are used for load and stress testing; they help in finding bugs, but they are very expensive to develop.

Dumb monkeys are important for basic testing. They help in finding bugs of high severity, and they are less expensive to develop than smart monkeys.
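
A dumb-monkey sketch in Python (the unit under test is invented, including its hidden bug): the monkey feeds the function random inputs and records anything that crashes in an unexpected way.

import random
import string

def parse_quantity(text):
    # Hypothetical unit under test, with a hidden bug: it crashes with an
    # IndexError (not a clean ValueError) when given an empty/blank string.
    text = text.strip()
    sign = -1 if text[0] == "-" else 1
    return sign * int(text.lstrip("-"))

random.seed(42)  # reproducible monkey run
crashes = []
for _ in range(1000):
    length = random.randint(0, 10)
    data = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_quantity(data)
    except ValueError:
        pass  # input rejected cleanly: expected behaviour
    except Exception as exc:  # anything else is a potential defect
        crashes.append((repr(data), repr(exc)))

print(f"{len(crashes)} unexpected crashes found; first few:")
for data, exc in crashes[:3]:
    print(" ", data, "->", exc)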

MT - 14 - Bug Triage

Bug triage is nothing but holding a meeting and examining the bugs in order to fix them: we prioritize the bugs in the order in which they need a fix. Bug triage is the process of evaluating defect reports to determine their course of action.

The kinds of bugs we discuss and prioritize:
-- Some bugs need a hot fix (high priority to close the bug)
-- Some bugs are to be fixed later (medium)
-- Some bugs need not be fixed (low)


Triaging a bug involves:
Making sure the bug has enough information for the developers and makes sense.
Making sure the bug is filed in the correct place.
Making sure the bug has sensible "Severity" and "Priority" fields.

MT - 13 - AD-Hoc and Random Testing

Ad-hoc testing: Conducting testing without applying any testing techniques to derive test cases is called ad-hoc testing. Ad-hoc testing is a commonly used term for software testing performed without planning or documentation (though it can be applied to early application studies). It is performed without a plan of action, any actions taken are not typically documented, and testers may not have detailed knowledge of the product requirements. Ad-hoc testing is also referred to as random testing and monkey testing.

Random testing: Randomly selecting some of the test cases from the existing test cases for the current execution is called random testing (e.g. randomly selecting 25 test cases from 100 existing test cases).

MT - 12 - What may be the reasons for risks/challenges in testing?

Communication: lack of communication leads to errors in code.

Schedule and timelines: an unrealistic schedule in which we are expected to test a large piece of functionality in very little time.

Changing requirements: if the requirements are not stable and keep on changing, there is a risk to testing.

Unrealistic plans: if test planning is not done with the available skills and resources in mind.

Resources: if there are too few resources compared to the work.

Tools: if there is no proper tool or methodology for what we are following.

Improper bug management: when there are no proper mitigation plans for the defects and risks.

Lack of skill: sometimes unskilled testers are also responsible for risks/defects in a product.

Priority of executing test cases: sometimes we execute the lowest-priority test cases first and the most important cases later.

MT - 11 - Test Strategy and Test-Plan

These are two different things, but people often make mistakes in differentiating between them. Let's see the details:

-> A test strategy document is a high-level document, normally developed by the project manager. The test plan document is usually prepared by the test lead or test manager.

-> The test strategy is normally derived from the Business Requirement Specification document. The test plan document, on the other hand, is derived from the product description.

-> The test strategy document is a static document, meaning that it is not updated too often. The test plan is largely static as well, but it may be updated more often.

Note: Some companies include the "Test Approach" or "Strategy" inside the test plan, which is fine, and it is usually the case for small projects. However, for larger projects there is one test strategy document and a number of test plans, one for each phase or level of testing.


COMPONENTS OF THE TEST STRATEGY DOCUMENT

  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan

COMPONENTS OF THE TEST PLAN DOCUMENT

  • Test Plan id
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Features pass or fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule

Wednesday, 13 May 2015

MT - 10 - Writing a good test case?

The following are the attributes of a good test case:

- A good test has a high probability of finding an error. To find the maximum number of errors, the tester and developer should have a complete understanding of the software and attempt to check all the conditions under which the software might fail.

- A good test is not redundant. Every test should have a different purpose from the others; we must avoid repetition.

- A good test should be neither too simple nor too complex. In general, each test should be executed separately. If we combine more than one test into one test case, it might be very difficult to execute, and combining tests may hide some errors.

- There must be diversity in the test data for each case.

- The intent of each case should be to challenge and break the system.

- It should be designed in a way that allows it to be reused.

- Each case must have detailed steps so that any person without system knowledge can understand them in one go.

- Independent, unique, and traceable to a requirement.

- Each case must contain the following details:
Test case id:
Unit to test: What to be verified?
Assumptions:
Test data: Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:
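
As an illustration, a filled-in case might read as follows (all values are hypothetical):

Test case id: TC_LOGIN_001
Unit to test: Login page - verify login with valid credentials
Assumptions: The user account already exists and the application is reachable
Test data: username = testuser01, password = Valid@123
Steps to be executed: 1. Open the login page. 2. Enter the username and password. 3. Click "Login".
Expected result: The user lands on the home page and the welcome message shows the username
Actual result: (filled in during execution)
Pass/Fail: (filled in during execution)
Comments: None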

MT - 9 - Desktop application Testing:

As discussed in the previous blog about web-server and client-server application testing, I am now going to discuss desktop application testing. Desktop application testing is different from those types, and easier in comparison. A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application broadly in categories like GUI, functionality, load, and backend, i.e. the database.

1. The application runs in a single memory space (front end and back end in one place)
2. Single user only

Here all the resources are on the user's system and are easily accessible. While testing, we can check for the following things:

User Interface Testing (GUI Testing):
a. Content wording used in the pages should be correct.

b. Wrap-around should occur properly.

c. Instructions used in the application should be correct (i.e. if you follow each instruction does the expected result occur?)

d. Image spacing – To verify that images are displaying properly with text.

Functional Testing
a. Check for broken links (a broken link is a hyperlink which does not work).

b. Warning messages and edits: user input should be verified at the system level according to business rules, and error/warning messages should be flashed to the user for incorrect inputs.

c. Resolution change effect on the application: ensure that the application's functionality and design are compatible with different resolutions.

d. Theme change: ensure the application launches successfully after a theme change.

e. Installation testing (upgrade/downgrade): verify the application is included in the Programs and Features list after installation, and that it is removed from the list after uninstallation. Keep in mind that an older version of the application should not install over the latest version.

f. Testing with multiple user accounts: open Control Panel > User Accounts and add two user accounts (standard and admin) to the system. With the application running, press Start, then Switch User to the account just created.
Verify the application launches and runs correctly on the newly created user account. Switch back and forth between the user accounts and use the application in both. Watch for any performance decrease and check the functionality.

g. Sleep: while the application is running, put the system to sleep (S3). Wake the system up after two minutes.
a) Verify the application is still running.
b) Verify there is no distortion or error.

h. Cache and cookies:
- Delete the application's cache, launch the application, and verify that it works properly.
- Delete the application's cache while the application is running and verify that it continues to work properly.

 Compatibility Testing
a. Test on different operating systems: some functionality in your application may not be compatible with all operating systems. New technologies used in development, such as graphics designs and interface calls like different APIs, may not be available in all operating systems.
Test your application on different operating systems like Windows (XP, Vista, Win7, etc.), Unix, Mac, Linux, and Solaris, with different OS flavors.

Performance testing
a. Long period of continuous use: is the application able to run for a long period without downtime?

b. Memory: note down the average memory usage and check for memory leaks.

Recovery testing
a. Here we check the behaviour of the application after a sudden change in the environment.

b. On restart, the application must be able to recover and must not break down.

MT - 8 - Priority or severity - which one is more important?

The priority of a defect relates directly to its business impact on the software application, while severity is the measure of the level of impact a bug has on the application. Priority defines the sequence in which we are supposed to correct defects. Most people assume that a highly critical or severe bug has the highest priority, but it is not always true that a highly severe defect has high priority, or vice versa. Everything depends on time and business impact. The biggest example I can take is the client logo: it is the easiest thing to fix, but if it is coming out wrong it has high priority even though the severity is low.

We can further classify severity into the following types:

Critical: This type of defect may lead to a system crash, may cause loss of data, or may create dissatisfaction for the end user. There is no workaround for this type of severity.

Major: A defect that results in the termination of the complete system, or of one or more components of the system, and causes extensive corruption of data. The failed function is unusable, but there exists an acceptable alternative method to achieve the required results; the severity is then stated as major.

Moderate: A defect that does not result in termination, but causes the system to produce incorrect, incomplete, or inconsistent results; the severity is then stated as moderate.

Minor: A defect that does not result in termination and does not damage the usability of the system, where the desired results can easily be obtained by working around the defect; the severity is then stated as minor.

Cosmetic: A defect related to the enhancement of the system, where the changes concern the look and feel of the application; the severity is then stated as cosmetic.


Priority can be of the following types:

Low: This type of defect has the least priority, and it can also be deferred.

Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.

High: The defect must be resolved as soon as possible because it is affecting the application or the product severely. The system cannot be used until the repair has been done; it is handled on an urgent basis. The turnaround time for such defects is the lowest.

Who assigns them:
Priority and severity can be given while logging the defect, depending on time and impact. Usually the priority and severity are assigned by someone who has application and business knowledge.

Tuesday, 12 May 2015

MT - 7 - What is the difference between retest and regression testing?

- When any bug/defect is found, it is assigned to a developer to analyse and correct the code. Once the code is corrected, retesting is done by the tester. Also known as confirmation testing, retesting re-runs the test cases that failed the last time they were run, in order to verify the success of the corrective actions taken on the defect found. Regression testing, on the other hand, is testing a previously tested program after modifications to make sure no new defects have been introduced; in other words, it helps uncover defects in the unchanged areas of the software. Regression testing is testing your software application when it undergoes a code change, to ensure that the new code has not affected other parts of the software.

- Retesting is done after each defect fix, while regression testing is done once all the defects are fixed.

- In regression testing we ensure that previously passing cases still pass, while in retesting we ensure that a case which was failing now passes.

- Regression testing can be automated, but retesting can't.

- In regression testing, test cases are extracted from functional test cases to ensure that no new defects are introduced and to check that the original features and functionality work as expected. Once the regression test suite is created, you can automate the test cases using an automation tool, but the same is not applicable to retesting.
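
Since a regression suite is the natural target for automation, here is a minimal pytest-style sketch with an invented function: the same suite is re-run after every code change to confirm that previously passing behaviour still passes.

# regression_suite.py - re-run after every change, e.g. with:
#   pytest regression_suite.py
def apply_discount(price, percent):
    # Invented unit under test.
    return round(price * (1 - percent / 100), 2)

# Previously passing behaviour that must keep passing after any change.
def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0

def test_half_price():
    assert apply_discount(80.0, 50) == 40.0

def test_rounding():
    assert apply_discount(9.99, 10) == 8.99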

MT - 6 - Compatibility testing - Why and What ?

Compatibility testing is a kind of non-functional testing, like load and volume testing, which is carried out on a software component, an application, or the entire software setup to evaluate the compatibility of the application with different computing environments. The environment can be servers, other software, the computer operating system, different web browsers, or the hardware itself. Basically it is carried out on different operating systems with different hardware configurations to make sure there is no crash or unwanted behaviour of the system. It is the testing of the application or product against the computing environment it is built for, and it is done to ensure customer satisfaction: to determine whether your software application or product is proficient enough to run on different browsers, databases, hardware, operating systems, mobile devices, and networks. The application could also be impacted by different versions, resolutions, internet speeds, configurations, etc.
Types of software compatibility testing:

-> Browser compatibility testing
-> Hardware
-> Networks
-> Mobile devices
-> Operating system
-> Versions

MT - 5 - What is the difference between client-server testing and web based testing?

CLIENT / SERVER TESTING
In a client-server application you have two different components to test. The application is loaded on the server machine, while the application exe is installed on every client machine. You test broadly in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in LAN networks. You are aware of the number of clients and servers and their locations in the test scenario. This type of testing is performed on a 2-tier (client-server) architecture, where the client sends the request and the server processes it. The structure is simple, but from a security perspective the architecture is insecure. Here we have a front end and a backend: forms and reporting at the front end, where monitoring and manipulation are done (built using VB, VC++, core Java, C, C++, D2K, PowerBuilder, etc.), and a database server at the backend for data storage and retrieval (using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.).

The application launched on the front end has forms and reports for monitoring and manipulating data.

The backend for these applications is a database from which the data is fetched; as examples we can take MS Access, SQL Server, Oracle, Sybase, MySQL, and Quadbase.

The tests performed on these types of applications would be:
– User interface testing
– Manual support testing
– Functionality testing
– Compatibility testing and configuration testing
– Cross-browser testing
– Client-server interaction testing


WEB TESTING
A web application is a bit different and more complex to test, as the tester doesn't have that much control over the application. The application is loaded on a server whose location may or may not be known, and no exe is installed on the client machine; you have to test it on different web browsers. Here we access web servers using web browsers. Web applications are supposed to be tested on different browsers and OS platforms, so broadly a web application is tested mainly for browser compatibility and operating-system compatibility, error handling, static pages, backend testing, and load testing.
This is done for 3-tier applications (developed for the Internet / intranet / extranet).
Here we have a browser, a web server, and a DB server.

The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, etc. (we can monitor through these applications).

Applications on the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all the manipulation is done on the web server with the help of these programs).

The DB server would run Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database available on the DB server).

The tests performed on these types of applications would be
– User interface testing
– Functionality testing
– Security testing
– Browser compatibility testing
– Load / stress testing
– Interoperability testing/intersystem testing
– Storage and data volume testing

MT - 4 - Debugging & Testing - Is there a difference?

Both terms look alike, but there are lots of differences. The main intention of testing is to find bugs and check that the built system is in accordance with the client's requirements, while the intent of debugging is to locate the bug. When a defect is logged by a tester, the developer debugs the particular code or module to locate the exact bug. In debugging we analyse, locate, and fix the causes of failures in the software. Testing, on the contrary, is a static or dynamic process of finding bugs in the system; in testing we check the sync between the requirement and the end product.

Sunday, 3 May 2015

MT - 3 - What is difference between client side and server side verification?

For web-based applications we may have two types of verification: one at the front end (client side) and one at the back end (server side). Looking at the client-server architecture, the client refers to the web browser, while the server is a common machine that serves data on request to all clients.
Server-side verification is required when we need server resources to validate the input given by the user, while client-side verification does not require server resources.
For example, suppose we are trying to log in to an application or tool where our email id and password are required, and the condition is that the password should be a minimum of 8 characters.
So let's suppose we give the credentials as:
Username: pratyush37.gmail.com
Password: prats12

Here we don't need server-side verification, because the email format is wrong and the password contains only 7 characters, so we need not send any request to the server for this.
Here we create a small script to check the user input; such scripts are usually written in JavaScript or VBScript.

For server-side verification we send the request to the server to validate the input given by the user; here a server-side scripting language like PHP or ASP.NET is used. These scripts check the input against the database on the server side.

Client-side verification happens at the browser level, and it reduces traffic because we do not send requests to the server for invalid data. Client-side verification is also faster than server-side verification. A small sketch of the two layers follows.
The client verification is at browser level and it reduces the traffic as we are not sending request to server for wrong data.Also client verification is fast than the server.