Monday, 24 July 2017

MT - 84 - Cost of conformance and cost of non-conformance !

Cost of conformance and cost of non-conformance are components of Cost of quality.

Cost of conformance is the total cost of ensuring that a product is of good quality. It includes costs of quality assurance activities such as standards, training, and processes, and costs of quality control activities such as reviews, audits, inspections, and testing. This is money spent to avoid failures. These activities each consume resource hours and thus need to be included in your project schedule, resource assignments, and cost estimates.

Costs of non-conformance can include things such as warranty payments, rework or scrap, and/or damage to reputation. This is money spent because of failures.
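For example (hypothetical figures): if a project spends 15,000 on training, reviews, and testing for a release, that 15,000 is its cost of conformance; if the same release later causes 4,000 of rework and 6,000 in warranty payments, that 10,000 is its cost of non-conformance, and the total cost of quality for the release is 25,000.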

Sunday, 16 July 2017

MT - 82 - Different Roles and their work during a review !

Moderator/Leader/Facilitator - The moderator plays a key role in the review process. The moderator is responsible for selecting a team of reviewers, scheduling the code review meeting, conducting the meeting, and working with the author to ensure that the review process is followed and that the other reviewers perform their responsibilities throughout the review process.

Author/Producer: The Author wrote the code that is being reviewed. The author is responsible for starting the code review process by finding a Moderator. The role of Author must be separated from that of Moderator, Reader, or Recorder to ensure the objectivity and effectiveness of the code review. However, the Author serves an essential role in answering questions and making clarifications during the review, and in making corrections afterwards: he is responsible for fixing any defects found during the review.

Reader: The Reader presents the code during the meeting by paraphrasing it in his own words. It is important to separate the role of Reader from Author, because it is too easy for an author to explain what he meant the code to do instead of explaining what it actually does. The reader's interpretation of the code can reveal ambiguities, hidden assumptions, poor documentation and style, and other errors that the Author would not be likely to catch on his own.

Scribe/Recorder: The Scribe records all issues raised during the code review. Separating the role of Scribe from the other roles allows the other reviewers to focus their entire attention on the code. He records and classifies all the defects/issues raised at the meeting and assists the moderator/leader/facilitator in preparing any reports or minutes.

Reviewer/Inspector: Reviews the code and identifies defects and issues in the work product. Reviewers are experienced professionals who check the code for defects and issues.

Whole Process:
The moderator is responsible for ensuring that the item to be inspected has met the entry criteria for inspection readiness. If these criteria are met, the moderator plans the inspection. This involves selecting the other participants, arranging for meeting rooms, ensuring the inspection data package is prepared and distributed, and ensuring sufficient time for participant preparation. The recorder is responsible for logging the potential defects during the inspection meeting. This should be done as rapidly as possible, ensuring the essence of each comment is recorded without logging it verbatim. The author or producer submits their portion of a work product for inspection and additionally provides the rest of the reference material that makes up the inspection package distributed to each inspector. The reader is the participant responsible for leading the inspection team through the work product during the inspection meeting. The moderator may also act as the reader to help control the pace of the meeting.

Phases of Inspection :
Inspection Planning
Inspection Overview – optional
Inspection Preparation
Inspection Meeting
Work Product Rework
Inspection Follow-up

Saturday, 15 July 2017

MT - 81 - What is Binary Portability Testing ?

Binary portability testing is used to test the portability of software by executing it on different platforms and environments. It is used to confirm conformance to an Application Binary Interface (ABI) specification. The Application Binary Interface (ABI) defines a system interface for compiled application programs and differs for different types of hardware architecture. Since the binary specification includes information specific to the computer processor architecture for which it is intended, it is not possible to write a single specification document for all possible systems. Hence, the ABI is a family of specifications rather than a single one.

Software Platforms
Binary portability testing should be carried out on different types of software platforms, for example:

Windows (x86, x86-64)

Linux

Mac OS

Java

Solaris

Android
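As an illustration, a minimal cross-platform smoke test can first detect the operating system and processor architecture it is running on and then exercise the same build on each of the platforms above. The following is only a sketch in Python; the binary path and the --version flag are hypothetical placeholders for whatever artifact you are checking.

import platform
import subprocess

# Hypothetical path to the binary under test; replace with your own build artifact.
BINARY = "./myapp"

def describe_platform():
    """Return the OS name and CPU architecture the test is running on."""
    return platform.system(), platform.machine()   # e.g. ("Linux", "x86_64")

def smoke_test():
    os_name, arch = describe_platform()
    print(f"Running binary portability smoke test on {os_name} ({arch})")
    # The same check is run unchanged on Windows, Linux, Mac OS, Solaris, Android, etc.
    result = subprocess.run([BINARY, "--version"], capture_output=True, text=True)
    assert result.returncode == 0, f"Binary failed to start on {os_name}/{arch}"
    print("Reported version:", result.stdout.strip())

if __name__ == "__main__":
    smoke_test()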

Thursday, 13 July 2017

MT - 80 - Emulator and Simulator - Difference ?

Simulation
A simulation is a system that behaves similarly to something else but is implemented in an entirely different way. It provides the basic behaviour of a system but may not necessarily adhere to all of the rules of the system being simulated. It is there to give you an idea of how something works. A simulator's objective is to model the internal state of an object as closely as possible to the internal state of the real object. A simulator is often just a partial re-implementation of the original software.


Example
1) Think of a flight simulator as an example. It looks and feels like you are flying an airplane, but you are completely disconnected from the reality of flying the plane, and you can bend or break those rules as you see fit.

2) Apple’s iOS Simulator


Emulation
An emulation is a system that behaves exactly like something else and adheres to all of the rules of the system being emulated. It is effectively a complete replication of another system, right down to being binary compatible with the emulated system's inputs and outputs, but operating in a different environment from that of the original system. The rules are fixed and cannot be changed, or the system fails. The goal of an emulation is to replace hardware or software components with functional equivalents when the original modules aren't available. An emulator aims to mimic the outer behaviour of an object as closely as possible, and often comes as a complete re-implementation of the original software.

Example
Google’s Android SDK
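To make the difference concrete, here is a deliberately toy Python sketch; the "legacy adder" device, its saturation quirk, and both classes are invented purely for illustration. The simulator only approximates the behaviour, while the emulator reproduces the exact outer behaviour, quirk included.

# Hypothetical "legacy adder" device that, due to a hardware quirk,
# saturates its 8-bit result instead of overflowing.

class SimulatedAdder:
    """Simulation: gives an idea of the behaviour, ignores the quirk."""
    def add(self, a, b):
        return a + b            # good enough to understand "it adds numbers"

class EmulatedAdder:
    """Emulation: reproduces the exact outer behaviour, quirk included."""
    def add(self, a, b):
        return min(a + b, 255)  # binary-compatible with the real device's output

if __name__ == "__main__":
    sim, emu = SimulatedAdder(), EmulatedAdder()
    print(sim.add(200, 100))  # 300 -- plausible, but not what the device does
    print(emu.add(200, 100))  # 255 -- exactly what the real hardware returns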

Tuesday, 11 July 2017

MT - 79 - Peer Review (Or Deskcheck)

Peer review is the technical name given to the informal review process in testing. It's a common practice whose purpose is to validate the work and correct defects in the software. By doing peer reviews, defects are caught early and are not propagated further into the system. Generally, the goal of all peer review processes is to verify whether the work satisfies the specifications for review, identify any deviations from the standards, and provide suggestions for improvements.

- It is a static technique which is followed from the very beginning of the software life cycle.

- A peer review doesn't involve management participation, unlike some other review techniques.

- It is done by a trained moderator in the presence of the author.

- It is documented, and a report related to the issues found is created. Peers and technical specialists are involved in the process.

Sunday, 9 July 2017

MT - 78 - What is Pareto Chart and when to use it ?


A Pareto chart is a type of chart that contains both bars and a line graph, where individual values are represented in descending order by bars, and the cumulative total is represented by the line. Pareto charts are extremely useful for analysing what problems need attention first because the taller bars on the chart, which represent frequency, clearly illustrate which variables have the greatest cumulative effect on a given system.

The left vertical axis is the frequency of occurrence, but it can alternatively represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure.

The purpose of the Pareto chart is to highlight the most important among a set of factors. In quality control, it often represents the most common sources of defects, the highest occurring type of defect, the most frequent reasons for customer complaints, and so on.



When to use a Pareto chart?
- When analysing data about the frequency of problems or causes in a process.
- When there are many problems or causes and you want to focus on the most significant.
- When analysing broad causes by looking at their specific components.
- When communicating with others about your data.
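As an illustration, the following Python sketch builds a Pareto chart with matplotlib from hypothetical defect counts; the cause names and numbers are invented for the example.

import matplotlib.pyplot as plt

# Hypothetical defect counts per cause, for illustration only.
causes = {"UI defects": 42, "Logic errors": 30, "Config issues": 12, "Docs": 9, "Other": 7}

# Sort descending and compute the cumulative percentage line.
items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
counts = [count for _, count in items]
total = sum(counts)
cumulative = [100.0 * sum(counts[:i + 1]) / total for i in range(len(counts))]

positions = range(len(labels))
fig, ax1 = plt.subplots()
ax1.bar(positions, counts)                       # left axis: frequency of occurrence
ax1.set_xticks(list(positions))
ax1.set_xticklabels(labels, rotation=30, ha="right")
ax1.set_ylabel("Frequency")

ax2 = ax1.twinx()                                # right axis: cumulative percentage
ax2.plot(positions, cumulative, marker="o", color="red")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

plt.title("Pareto chart of defect causes (hypothetical data)")
plt.tight_layout()
plt.show()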


MT - 77 - What is Cost of Quality (COQ) ?

Cost of quality is a methodology that allows an organisation to determine the extent to which its resources are used for activities that prevent poor quality, that appraise the quality of the organisation's products or services, and that result from internal and external failures.

Cost of quality refers to the sum of costs incurred:
1) to prevent non-conformance from happening, and
2) to correct non-conformance which occurs in the products (also called cost of poor quality). This is the cost incurred when a poor-quality product that doesn't meet the quality requirements is delivered.


Quality cost has two major categories:

1) Cost of achieving good quality:
i) Prevention Cost - Prevention costs are incurred to prevent or avoid quality problems. These costs are associated with the design, implementation, and maintenance of the quality management system. They are planned and incurred before actual operation. Eg: design reviews, surveys, quality planning and assurance, training, requirements analysis.

ii) Appraisal Cost - Appraisal costs are associated with measuring and monitoring activities related to quality. These costs are associated with the suppliers’ and customers’ evaluation of purchased materials, processes, products, and services to ensure that they conform to specifications. Eg: verification, quality audits, inspections.


2) Cost of poor quality:
i) Internal Failure Costs - Internal failure costs are incurred to remedy defects discovered before the product or service is delivered to the customer. These costs occur when the results of work fail to reach design quality standards and the failure is detected before the work is transferred to the customer. Eg: scrap, waste, rework, rectification, failure analysis, downtime.

ii) External Failure Costs - External failure costs are incurred to remedy defects discovered by customers. These costs occur when products or services that fail to reach design quality standards are not detected until after transfer to the customer. Eg: repairs, warranty claims, returns, complaints, price concessions.
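Putting the four categories together, the cost of quality is simply the sum of prevention, appraisal, internal failure, and external failure costs. A minimal Python sketch with hypothetical figures:

# Hypothetical monthly figures, for illustration only (any currency unit).
prevention = 5000         # training, quality planning, design reviews
appraisal = 3000          # inspections, audits, verification
internal_failure = 8000   # rework, scrap, downtime
external_failure = 12000  # warranty claims, returns, repairs

cost_of_conformance = prevention + appraisal
cost_of_non_conformance = internal_failure + external_failure
cost_of_quality = cost_of_conformance + cost_of_non_conformance

print("Cost of conformance:    ", cost_of_conformance)      # 8000
print("Cost of non-conformance:", cost_of_non_conformance)  # 20000
print("Total cost of quality:  ", cost_of_quality)          # 28000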




Thursday, 6 July 2017

MT - 76 - Usability Testing !

As the name says, this testing is done from the perspective of the end user to see how easy the product is to use. This type of testing is done to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions.

This testing is performed by end users and observed by the testers. The users are asked to perform a task, and the testers observe what difficulties the end users face while doing it.

Goals of Usability Testing : 
- Useful
- Findable
- Accessible
- Desirable
- Usable

Types of Usability Testing

Comparative Usability Testing

Used to compare the usability of one website with another. Comparative tests are commonly used to compare a website against peer or competitor sites; however, they can also be used to compare two designs to establish which provides the best user experience.

Exploratory Usability Testing

Before a new product is released, exploratory usability testing can establish what content and functionality a new product should include to meet the needs of its users. Users are given realistic scenarios to complete across a range of different services, which helps to highlight any gaps in the market that can be taken advantage of and illustrates where to focus design effort.


Advantages

There are many advantages of usability testing including:

- Issues and potential problems are highlighted before the product is launched
- It increases the likelihood of usage and repeat usage
- It minimises the risk of the product failing
- Users are better able to reach their goals, which results in the business meeting its targets
- It helps uncover usability issues before the product is marketed.
- It helps improve end-user satisfaction
- It makes your system highly effective and efficient
- It helps gather true feedback from your target audience who actually use your system during the usability test. You do not need to rely on "opinions" from random people.


Methods of Usability Testing
There are two methods available to do usability testing -
   
Laboratory Usability Testing: This testing is conducted in a separate lab room in the presence of observers. The testers are assigned tasks to execute. The role of the observer is to monitor the behaviour of the testers and report the outcome of the testing. The observer remains silent during the course of testing. In this testing, both observers and testers are present in the same physical location.

Remote Usability Testing: In this testing, observers and testers are remotely located. Testers access the system under test remotely and perform the assigned tasks. The tester's voice, screen activity, and facial expressions are recorded by automated software. Observers analyse this data and report the findings of the test.

Wednesday, 5 July 2017

MT - 75 - Burn Down Vs Burn Up Chart

These two types of charts are used to track the work in a project. A burn down chart shows how much work remains to be done, while a burn up chart shows progress in terms of the work completed and the total amount of work. These charts are mostly used in the Agile and Scrum methodologies.

         
Burn Down Chart: A single line shows the work remaining in the project on each day. The vertical axis is the amount of work and the horizontal axis is time. It can be used to see the project velocity: we can compare the actual velocity against the velocity required to meet the deadline and calculate the exact percentage of work completed. The straight dotted line shows the ideal velocity of the project. We can also see when a team is ahead of or behind schedule.




There is a problem with the burn down chart: it assumes that the total amount of work does not change. So the chart doesn't show realistic data when additional work items come up or when work items are removed to meet the deadline. For this reason the burn up chart was introduced.

Burn Up Chart:
The dotted line shows the ideal completion rate. The blue line shows the tasks actually completed, and the red line shows the total amount of work after tasks are added or removed.
Here the addition or deletion of any task can be depicted easily.



A burn down chart is ideal for projects where the amount of work is fixed, while a burn up chart can be used in any project.
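As an illustration, the following Python sketch plots a burn down chart and a burn up chart side by side from hypothetical sprint data; all the numbers are invented for the example, with 10 units of scope added on day 5.

import matplotlib.pyplot as plt

# Hypothetical 10-day sprint, 100 units of planned work, with scope added on day 5.
days = list(range(11))
remaining = [100, 92, 85, 80, 74, 78, 65, 50, 35, 18, 0]   # burn down: work left
completed = [0, 8, 15, 20, 26, 32, 45, 60, 75, 92, 110]    # burn up: work done
total_scope = [100] * 5 + [110] * 6                        # scope grew by 10 on day 5
ideal = [100 - 10 * d for d in days]                       # ideal burn down line

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(days, remaining, marker="o", label="Remaining work")
ax1.plot(days, ideal, linestyle="--", label="Ideal")
ax1.set_title("Burn down")
ax1.set_xlabel("Day")
ax1.set_ylabel("Work remaining")
ax1.legend()

ax2.plot(days, completed, marker="o", label="Completed work")
ax2.plot(days, total_scope, color="red", label="Total scope")
ax2.set_title("Burn up")
ax2.set_xlabel("Day")
ax2.set_ylabel("Work")
ax2.legend()

plt.tight_layout()
plt.show()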

Tuesday, 4 July 2017

MT - 74 - Quality Function Deployment !

Quality function deployment (QFD) is a quality management technique that translates the needs of the customer into technical requirements for software. Quality professionals refer to QFD by many names, including matrix product planning, decision matrices, and customer-driven engineering. Whatever you call it, QFD is a focused methodology for carefully listening to the voice of the customer and then effectively responding to those needs and expectations. QFD identifies three types of requirements:

Normal requirements. The objectives and goals that are stated for a product or system during meetings with the customer. If these requirements are present, the customer is satisfied. (Minimal functional and performance requirements.)

Expected requirements. These requirements are implicit to the product or system and may be so fundamental that the customer does not explicitly state them. Their absence will be a cause for significant dissatisfaction. (Important implicit requirements, such as ease of use.)

Exciting requirements. These features go beyond the customer’s expectations and prove to be very satisfying when present.(highly prized and valued)

Functional deployment is used to determine the value of each function that is required for the system. Information deployment identifies both the data objects and events that the system must consume and produce. These are tied to the functions. Finally, task deployment examines the behaviour of the system or product within the context of its environment. Value analysis is conducted to determine the relative priority of requirements determined during each of the three deployments.

Function Deployment : Determines value of required Function

Information Deployment : Focuses on data objects and events produced or consumed by system

Task Deployment : Product Behaviour and implied operating environment
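For example, a very simplified value-analysis step can be sketched as a weighted decision matrix: each customer need gets an importance weight, each technical requirement gets a relationship score against each need, and the weighted sums give the relative priorities. The needs, requirements, weights, and scores in the Python sketch below are all invented for illustration.

# Hypothetical customer needs with importance weights (1-5).
needs = {"Easy to use": 5, "Fast response": 4, "Works offline": 2}

# Hypothetical technical requirements and their relationship scores (0, 1, 3, 9)
# against each customer need, as in a House of Quality relationship matrix.
requirements = {
    "Simplified navigation": {"Easy to use": 9, "Fast response": 1, "Works offline": 0},
    "Response caching":      {"Easy to use": 1, "Fast response": 9, "Works offline": 3},
    "Local data store":      {"Easy to use": 0, "Fast response": 3, "Works offline": 9},
}

# Weighted priority of each technical requirement.
priorities = {
    req: sum(needs[need] * score for need, score in scores.items())
    for req, scores in requirements.items()
}

for req, value in sorted(priorities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{req:25s} priority = {value}")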

MT - 73 - Operational Analysis Principle !

Requirement analysis is one of the major steps in deciding the operations in software testing: the requirements are mapped to test scenarios, and based on this the operational behaviour of the software is characterised. It decides how the actual system should work based on the requirements.
Software requirements analysis may be divided into five areas of effort:
(1)     problem recognition,
(2)     evaluation and synthesis,
(3)     modelling,
(4)     specification, and
(5)     review.

There are various models which help in deciding the operations of a software. Investigators have identified analysis problems and their causes and have developed a variety of notations and corresponding sets of heuristics to overcome them. Each analysis method has a unique point of view.

- The information domain of a problem must be represented and understood.
- The functions that the software is to perform must be defined.
- The behaviour of the software must be represented.
- The models that depict information, function, and behaviour must be partitioned in a manner that uncovers details in a layered fashion.
- The analysis process should move from essential information toward implementation detail.

In addition to these operational analysis principles, there are guiding principles for requirements engineering:
- Understand the problem before you begin to create the analysis model.
- Develop prototypes that enable a user to understand how human/machine interaction will occur.
- Record the origin of and the reason for every requirement.
- Use multiple views of requirements.
- Rank requirements.
- Work to eliminate ambiguity.