SOFTWARE TESTING TUTOR

SOFTWARE TESTING


PART-1

 Key Players and Their Roles:
·        Business sponsor(s) and partners           
1.     Provides funding
2.     Specifies requirements and deliverables
3.     Approves changes and some test results
·        Project manager                                         
1.     Plans and manages the project 
·        Software developer(s)                                 
1.     Designs, codes, and builds the application
2.     Participates in code reviews and testing
3.     Fixes bugs, defects, and shortcomings
·        Testing Coordinator(s)      
1.     Creates test plans and test specifications based on the requirements and the functional and technical documents
·        Tester(s)                                                       
1.     Executes the tests and documents results

Service-based companies and Product-based companies
Service-based companies:
They provide services and develop software for other companies.
The software they provide is specified as per the client company's requirements; they never keep ownership of the developed product's code and do not provide the software to any company other than the client.
Ex – Wipro, Infosys, TCS, Accenture
Product-based companies:
They develop software products and sell them to the many companies that may need the software, making a profit for themselves.
They are the sole owners of the product they develop and of the code used, and they sell it to other companies that may need the software.
Ex – Oracle, Microsoft
Software Testing FrameWork
Software testing answers questions that development testing and code reviews can’t. 
·        Does it really work as expected?
·        Does it meet the users' requirements? 
·        Is it what the users expect?
·        Do the users like it?
·        Is it compatible with our other systems?
·        How does it perform?
·        How does it scale when more users are added?
·        Which areas need more work?
·        Is it ready for release? 

What can we do with the answers to these questions? 
·        Save time and money by identifying defects early 
·        Avoid or reduce development downtime
·        Provide better customer service by building a better application
·        Know that we’ve satisfied our users’ requirements
·        Build a list of desired modifications and enhancements for later versions
·        Identify and catalogue reusable modules and components
·        Identify areas where programmers and developers need training
Software testing has three main purposes: verification, validation, and defect finding. 
·        The verification process confirms that the software meets its technical specifications.  A “specification” is a description of a function in terms of a measurable output value given a specific input value under specific preconditions.  A simple specification may be along the lines of “a SQL query retrieving data for a single account against the multi-month account-summary table must return these eight fields <list> ordered by month within 3 seconds of submission.”
·        The validation process confirms that the software meets the business requirements.  A simple example of a business requirement is “After choosing a branch office name, information about the branch’s customer account managers will appear in a new window.  The window will present manager identification and summary information about each manager’s customer base: <list of data elements>.”   Other requirements provide details on how the data will be summarized, formatted and displayed.
·        A defect is a variance between the expected and actual result.  The defect’s ultimate source may be traced to a fault introduced in the specification, design, or development (coding) phases.
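To make the verification idea concrete, a check for a specification like the SQL example above could be automated along these lines (a minimal sketch in Python; the account_summary table, the field count and the 3-second limit are taken from the illustrative specification above, not from any real system):

    import sqlite3
    import time

    EXPECTED_FIELDS = 8   # the eight fields named in the specification's <list>
    MAX_SECONDS = 3.0     # response-time limit from the specification

    def verify_account_summary_query(connection, account_id):
        """Verification check: the query must return the specified number of
        fields, ordered by month, within the specified time."""
        start = time.time()
        cursor = connection.execute(
            "SELECT * FROM account_summary WHERE account_id = ? ORDER BY month",
            (account_id,),
        )
        rows = cursor.fetchall()
        elapsed = time.time() - start
        assert len(cursor.description) == EXPECTED_FIELDS, "wrong number of fields returned"
        assert elapsed <= MAX_SECONDS, f"query took {elapsed:.2f}s, limit is {MAX_SECONDS}s"
        return rows

A validation test, by contrast, would exercise the new window described in the business requirement and judge it against what the user actually asked for.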







PART-2
Manual vs Automation
The very first thing you do when you join an organization is configure your mail ID in Outlook (if you join as an experienced hire you are expected to do it yourself; otherwise the Admin will do it). The first 20 days or so are spent on processing and verification of your data, and you will be assigned to a project within about a month. Freshers are given training based on the company's requirements and are then assigned to a project.
You check out to download the necessary documents from the Repository/Database/VSS/CVS (whichever is maintained by your company) to your local working folder on your local machine, and check in to upload the documents you have worked on back to the Repository. The Repository contains all the necessary documents, both configurable items (PMP, SRS, FRS, Test Plan, templates such as TS and TC, and status reports such as DSR, WSR, DDR and the Retesting Report) and non-configurable items (MOM, status reports, etc.), along with the work history of all resources allotted to that particular project.

Explain Software Development Life Cycle Models?
Test methodology determines how an application will be tested and what will be tested. Examples of methodologies: waterfall, agile, etc.
STLC & Defect Life Cycle are a part of SDLC.
1.    What is Waterfall Model?
The waterfall model is a popular version of the systems development life cycle model for software engineering. Often considered the classic approach to the systems development life cycle, the waterfall model describes a development method that is linear and sequential. Waterfall development has distinct goals for each phase of development. Imagine a waterfall on the cliff of a steep mountain. Once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, the development proceeds to the next phase and there is no turning back.

Drawbacks of Waterfall Model
In the waterfall model, backtracking is not possible, i.e., we cannot go back and change the requirements once the design stage has been reached. A change in requirements leads to a change in design, which lets bugs enter the design, which in turn leads to changes in code and still more bugs. The requirements are therefore frozen once the design of the product is started.
Drawback of requirement freezing: the customer may not be satisfied if the changes he requires are not incorporated into the product. The end result of the waterfall model is not a flexible product.
Major drawback of the waterfall model: testing is a small phase done only after coding. The requirements are not tested and the design is not tested, so if there is a bug in the requirements it travels all the way to the end and leads to a lot of rework.
Advantage of the waterfall model: the requirements do not change, nor do the design and code, so we get a stable product.
Applications of waterfall model
Used in,
·        Developing a simple application
·        Short term projects
·        Whenever we are sure that the requirements will not change

2.    Explain V – MODEL / V & V MODEL (Verification and Validation Model)?
This model came up in order to overcome the drawback of the waterfall model: here testing starts from the requirement stage itself.
The V & V model is shown below.
1)   In the first stage, the client sends the CRS (Customer Requirement Specification) to both the developers and the testers. The developers translate the CRS into the SRS (Software Requirement Specification).
The testers do the following tests on CRS,
               1. Review CRS
                        a. Conflicts in the requirements
                        b. Missing requirements
                        c. Wrong requirements
               2. Write Acceptance Test plan
               3. Write Acceptance Test cases
The testing team reviews the CRS, identifies mistakes and defects, and sends them to the development team for correction. The development team updates the CRS and continues developing the SRS simultaneously.
2) In the next stage, the SRS is sent to the testing team for review and the developers start building the HLD of the product. The testers do the following tests on SRS,
               1.  Review SRS against CRS
                        a. Check that every CRS requirement has been converted into the SRS
                        b. Find CRS requirements that were not converted properly into the SRS
               2. Write System Test plan
               3. Write System Test case
The testing team reviews every detail of the SRS to check whether the CRS has been converted into the SRS properly.
3) In the next stage, the developers start building the LLD of the product. The testers do the following tests on HLD,
               1. Review HLD
               2. Write Integration test plan
               3. Write Integration test case
4) In the next stage, the developers start with the coding of the product. The testing team carries out the following tasks,
               1. Review LLD
               2. Write Functional test plan
               3. Write Functional Test case
After coding, the developers themselves carry out unit testing, also known as white box testing. Here the developers check each and every line of code to see whether it is correct. After white-box testing, the software product is sent to the testing team, which carries out functional testing, integration testing, system testing and acceptance testing, and finally the product is delivered to the client.
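For illustration, a developer-level unit test might look like the following (a minimal sketch; calculate_discount is a hypothetical function invented for the example, not part of any product discussed here):

    import unittest

    def calculate_discount(amount):
        """Hypothetical business rule: 10% discount on orders of 1000 or more."""
        if amount < 0:
            raise ValueError("amount cannot be negative")
        return amount * 0.9 if amount >= 1000 else amount

    class CalculateDiscountTest(unittest.TestCase):
        def test_discount_applied_at_threshold(self):
            self.assertEqual(calculate_discount(1000), 900)

        def test_no_discount_below_threshold(self):
            self.assertEqual(calculate_discount(999), 999)

        def test_negative_amount_rejected(self):
            with self.assertRaises(ValueError):
                calculate_discount(-1)

    if __name__ == "__main__":
        unittest.main()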
How to handle requirement changes in V&V?
Whenever there is change in requirement, the same procedure continues and the documents will be updated.

Advantages of V&V model
1) Testing starts in the very early stages of product development, which avoids the downward flow of defects and in turn reduces a lot of rework
2) Testing is involved in every stage of product development
3) Deliverables are parallel/simultaneous – while the developers are building the SRS, the testers are testing the CRS and also writing the ATP (Acceptance Test Plan) and ATC (Acceptance Test Cases), and so on. Thus, by the time the developers hand the finished product to the testing team, the team is ready with all the test plans and test cases, and the project is completed fast
4) Total investment is less – as there is no downward flow of defects, there is little or no rework

Drawbacks of V&V model
1) Initial investment is more – because a testing team is needed right from the beginning
2) More documentation work – because of the test plans, test cases and all the other documents
Applications of V&V model
We go for V&V model in the following cases,
1) For long term projects
2) For complex applications
3) When customer is expecting a very high quality product within stipulated time frame because every stage is tested and developers & testing team are working in parallel

3.   Explain Spiral Model?
In the Spiral model, the software product is developed in small modules. Let us consider developing a software product X that is built by integrating modules A, B, C and D.
Module A – the requirements of the module are collected first and the module is designed. The coding of module A is done, after which it is tested for defects and bugs.
Module B – once module A has been built, we start the same process for module B. But while testing module B, we test for 3 conditions – a) test module B b) test the integration of module B with A c) retest module A.
Module C – after building modules A and B, we start the same process for module C. Here we test for the following conditions – a) test modules C, B and A b) test the integration of C and B, C and A, and A and B.
And thus the cycle continues for the different modules. In the above example, module B can be built only after module A has been built correctly, and similarly for module C.

For spiral model, the best example that we can consider is the MS-Excel application.
The MS-Excel sheet consists of a number of cells that are the components of Excel sheet.
Here we have to create the cells first (module A). Then we can do operations on the cells, like merging two cells or splitting a cell in half (module B). Then we can draw graphs on the Excel sheet (module C).

Advantages of Spiral Model:
Requirement changes are allowed.
·        Only after we develop one feature/module of the product do we go on to develop the next module of the product.
·        Whenever the customer requests major changes to the requirements of a particular module, we change only that module and carry out both unit testing and integration testing of the units. Such a requirement change is taken up in a separate cycle dedicated to the changes.
·        Whenever the customer requests minor changes in the product, the software team makes them alongside the new module being developed, within a single cycle. We do not give a minor change a separate cycle of the spiral model, due to time and resource constraints.
·        The document produced by business analysts during the requirement collection stage is known as the CRS (Customer Requirement Specification), also called the BRS (Business Requirement Specification) or BS (Business Specification). In this document the client explains how their business works and what software they need. The BA gathers the CRS from the client and translates it into the SRS (Software Requirement Specification). The SRS describes how the software should be developed and is given by the BA to the developers. For a more detailed explanation of how to go about developing the software, the BA/developer produces another document, the FS (Functional Specification), which explains how each and every component should work.

Drawbacks of Spiral Model
·        It is a traditional model, in which the developers themselves also did the testing.
Applications of Spiral Model
·        Whenever there is dependency in building the different modules of the software, then we use Spiral Model.
·        Whenever the customer gives the requirements in stages, we develop the product in stages.

4.    Explain PROTOTYPE DEVELOPMENT MODEL?
The requirements are collected from the client in a textual format. The prototype of the s/w product is developed. The prototype is just an image / picture of the required s/w product. The customer can look at the prototype and if he is not satisfied, then he can request more changes in the requirements.
Prototype testing means the developers/testers check whether all the components mentioned in the requirements exist.
The difference between prototype testing and actual testing – in prototype testing we check whether all the components exist, whereas in actual testing we check whether all the components work.
From “REQUIREMENT COLLECTION” to “CUSTOMER REVIEW”, the textual format is converted into an image format. This is simply an extended requirement collection stage. The actual design starts from the “DESIGN” stage.
Prototype development was earlier done by developers, but now it is done by web designers/content developers, who build the prototype of the product using simple ready-made tools. A prototype is simply an image of the actual product to be developed.

Advantages of Prototype model
1) We set the client's expectations right at the beginning.
2) There is clear communication between the development team and the client as to the requirements and the final outcome of the project.
3) The major advantage is that the customer gets the opportunity, right at the beginning, to ask for changes in the requirements, as it is easier to change requirements in a prototype than in a real application. Thus costs are less and expectations are met.

Drawbacks of Prototype model
1) There is a delay in starting the real project.
2) To improve the communication, an investment is needed in building the prototype.

Applications
We use this model when,
1) The customer is new to software
2) The developers are new to the domain
3) The customer is not clear about his own requirements
There are 2 types of prototype,

Static Prototype – the entire prototype of the requirement is stored in a Word document, with explanations, snapshots and instructions on how to go about building the software, how the finished product will look, how it will work, and so on.

Dynamic Prototype – similar to the application running in a browser, but we cannot enter any information. Only the features are available, without entering data. It is like a dummy page, made out of HTML with tags and links to different pages representing the features of the project.

Folder Structure in Manual Testing


From the above picture we can understand that automation is only a small part of the Software Testing Life Cycle.

Test Execution Folder Structure is as below
Within V1.0 (i.e., within each version) there are multiple builds, as below

What is Manual Testing?
        Manual testing is the oldest and most rigorous type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of Test automation. 
        Manual testing is a laborious activity that requires the tester to possess a certain set of qualities:
        To be patient
        Observant
        Speculative
        Creative
        Innovative
        Open-minded
        Resourceful
        Un-opinionated
        Skillful.


Software Development Life Cycle:

Each of these stages has definite entry and exit criteria, activities and deliverables associated with it.

In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but in practice this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages in the SDLC. Let us look into them in detail.

1.  Requirements Collection
·        Done by Business Analysts and Product Analysts
·        Gathering requirements
·        Translates business language into software language
·        For ex, let us consider the example of banking software.

Feasibility Study
·        Done by a software team consisting of project managers, business analysts, architects, finance, HR and developers, but not testers
Architect – the person who tells whether the product can be developed and, if yes, which technology is best suited to develop it.
Here we check for,
·        Technical feasibility
·        Financial feasibility
·        Resource feasibility

2.  Design

There are 2 stages in design,
                                                HLD – High Level Design
                                                LLD – Low Level Design
HLD – gives the architecture of the software product to be developed and is done by architects and senior developers
LLD – done by senior developers. It describes how each and every feature in the product should work and how every component should work. Here, only the design will be there and not the code.
For ex, let us consider the example of building a house.

3.  Coding / Programming
·        Done by all developers – seniors, juniors, freshers
·        This is the process where we start building the software and start writing the code for the product.
4.  Testing
·        Done by test engineers
·        It is the process of checking for all defects and rectifying them.

Software Testing Life Cycle (STLC)
a.)    Requirement Analysis:
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements could be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also analysed in this stage.
Activities
  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus.
  • Prepare Requirement Traceability Matrix (RTM).
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required).
Deliverables 
  • RTM
  • Automation feasibility report. (if applicable)
  
b.)    Test Plan
This phase is also called Test Strategy phase. Typically, in this stage, a Senior QA manager will determine effort and cost estimates for the project and would prepare and finalize the Test Plan.
Activities
  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities.
  • Training requirement
Deliverables 
  • Test plan /strategy document.
  • Effort estimation document.
c.)  Test Scenario

Test scenario is a logical grouping of test cases and it mentions the sequence in which the test cases are to be executed.

d.) Test Case 

Test case is a unit level document describing the inputs, steps of execution and the expected result of each test condition for every requirement from the BRD. Testers determine whether the application is working correctly or not based on the test case being executed. A test case is marked as "Pass" if the application works as expected and as "Fail" otherwise. Test cases also aid in generating test status metrics.
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.
Activities
  • Create test cases, automation scripts (if applicable)
  • Review and baseline test cases and scripts 
  • Create test data (If Test Environment is available)
Deliverables 
  • Test cases/scripts 
  • Test data
 e.) Test Environment Setup
Test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment; a minimal smoke-test sketch follows the deliverables below.
Activities 
  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Setup test Environment and test data 
  • Perform smoke test on the build
Deliverables 
  • Environment ready with test data set up 
  • Smoke Test Results.
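As an illustration, a readiness check of this kind could be scripted along the following lines (a minimal sketch in Python; the base URL and page list are hypothetical placeholders, and the third-party requests library is assumed to be installed):

    import requests

    BASE_URL = "http://test-env.example.com"               # hypothetical test environment
    CRITICAL_PAGES = ["/login", "/dashboard", "/reports"]  # placeholder endpoints

    def smoke_test():
        """Fail fast if any critical page of the build is unreachable."""
        for page in CRITICAL_PAGES:
            response = requests.get(BASE_URL + page, timeout=10)
            assert response.status_code == 200, f"{page} returned {response.status_code}"
        print("Smoke test passed: build is ready for detailed testing")

    if __name__ == "__main__":
        smoke_test()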

f.) Test Execution
 During this phase test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction and retesting will be performed.
Activities 
  • Execute tests as per plan
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure
Deliverables 
  • Completed RTM with execution status 
  • Test cases updated with results 
  • Defect reports
g.) Test Cycle Closure
Testing team will meet, discuss and analyze testing artifacts to identify strategies that have to be implemented in future, taking lessons from the current test cycle. The idea is to remove the process bottlenecks for future test cycles and share best practices for any similar projects in future.
Activities
  • Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity.
Deliverables 
  • Test Closure report 
  • Test metrics
Finally, a summary of the STLC along with Entry and Exit Criteria:

Requirement Analysis
Entry Criteria:
  • Requirements document available (both functional and non-functional)
  • Acceptance criteria defined
  • Application architectural document available
Activities:
  • Analyse business functionality to know the business modules and module-specific functionalities
  • Identify all transactions in the modules
  • Identify all the user profiles
  • Gather user interface/authentication, geographic spread requirements
  • Identify types of tests to be performed
  • Gather details about testing priorities and focus
  • Prepare Requirement Traceability Matrix (RTM)
  • Identify test environment details where testing is supposed to be carried out
  • Automation feasibility analysis (if required)
Exit Criteria:
  • Signed off RTM
  • Test automation feasibility report signed off by the client
Deliverables:
  • RTM
  • Automation feasibility report (if applicable)

Test Planning
Entry Criteria:
  • Requirements documents
  • Requirement Traceability Matrix
  • Test automation feasibility document
Activities:
  • Analyze various testing approaches available
  • Finalize the best suited approach
  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection
  • Test effort estimation
  • Resource planning and determining roles and responsibilities
Exit Criteria:
  • Approved test plan/strategy document
  • Effort estimation document signed off
Deliverables:
  • Test plan/strategy document
  • Effort estimation document

Test Case Development
Entry Criteria:
  • Requirements documents
  • RTM and test plan
  • Automation analysis report
Activities:
  • Create test cases, automation scripts (where applicable)
  • Review and baseline test cases and scripts
  • Create test data
Exit Criteria:
  • Reviewed and signed test cases/scripts
  • Reviewed and signed test data
Deliverables:
  • Test cases/scripts
  • Test data

Test Environment Setup
Entry Criteria:
  • System design and architecture documents are available
  • Environment set-up plan is available
Activities:
  • Understand the required architecture and environment set-up
  • Prepare hardware and software requirement list
  • Finalize connectivity requirements
  • Prepare environment setup checklist
  • Setup test environment and test data
  • Perform smoke test on the build
  • Accept/reject the build depending on smoke test result
Exit Criteria:
  • Environment setup is working as per the plan and checklist
  • Test data setup is complete
  • Smoke test is successful
Deliverables:
  • Environment ready with test data set up
  • Smoke test results

Test Execution
Entry Criteria:
  • Baselined RTM, test plan, test cases/scripts are available
  • Test environment is ready
  • Test data set up is done
  • Unit/integration test report for the build to be tested is available
Activities:
  • Execute tests as per plan
  • Document test results, and log defects for failed cases
  • Update test plans/test cases, if necessary
  • Map defects to test cases in RTM
  • Retest the defect fixes
  • Regression testing of application
  • Track the defects to closure
Exit Criteria:
  • All tests planned are executed
  • Defects logged and tracked to closure
Deliverables:
  • Completed RTM with execution status
  • Test cases updated with results
  • Defect reports

Test Cycle Closure
Entry Criteria:
  • Testing has been completed
  • Test results are available
  • Defect logs are available
Activities:
  • Evaluate cycle completion criteria based on time, test coverage, cost, software quality, critical business objectives
  • Prepare test metrics based on the above parameters
  • Document the learning out of the project
  • Prepare test closure report
  • Qualitative and quantitative reporting of quality of the work product to the customer
  • Test result analysis to find out the defect distribution by type and severity
Exit Criteria:
  • Test closure report signed off by client
Deliverables:
  • Test closure report
  • Test metrics






Requirement traceability matrix 

RTM is a matrix tying up requirements with test cases. It is a way of making sure that every requirement has a corresponding test case that will be tested, thereby ensuring complete requirements coverage.

5.  Installation
·        Done by installation engineers
·        To install the product at the client's site for use, after the software has been developed and tested.
For ex, consider the example of software to be developed and installed at Reliance petrol bunk.

6.  Maintenance
Here as the customer uses the product, he finds certain bugs and defects and sends the product back for error correction and bug fixing.


SDLC vs. STLC


SDLC (Software Development Life Cycle) is a systematic approach to developing software; STLC (Software Test Life Cycle) is the process of testing software in a well planned and systematic way. Their phases correspond as follows:

SDLC: Requirements gathering or documents gathering
STLC: Requirements analysis is done in this phase; the software requirements are reviewed by the test team

SDLC: Design
STLC: Test planning, test analysis and test design are done in this phase; the test team reviews the design documents and prepares the test plan

SDLC: Coding or development
STLC: Test construction and verification are done in this phase; testers write test cases and finalize the test plan

SDLC: Testing
STLC: Test execution and bug reporting are done in this phase; manual testing and automation testing are carried out, and defects found are reported. Retesting and regression testing are also done in this phase

SDLC: Deployment
STLC: Final testing and implementation are done in this phase, and the final test report is prepared

SDLC: Maintenance
STLC: Maintenance testing is done in this phase




Bug Life Cycle or Defect Life Cycle in Software Testing
Defect life cycle is the cycle a defect goes through during its lifetime. It starts when the defect is found and ends when the defect is closed, after ensuring it is not reproduced. The defect life cycle is related to the bugs found during testing.

The bug has different states in the Life Cycle. The Life cycle of the bug can be shown diagrammatically as follows:

        New:  When a defect is logged and posted for the first time, its state is given as New.
        Assigned:  After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is given as Assigned.
        Open:  At this state the developer has started analyzing and working on the defect fix.
        Fixed:  When the developer makes the necessary code changes and verifies the changes, he/she can mark the bug status as "Fixed" and the bug is passed to the testing team.
        Pending retest:  After fixing the defect, the developer gives that particular code to the tester for retesting. The testing is still pending on the tester's end, hence the status is Pending Retest.
        Retest:  At this stage the tester retests the changed code which the developer has given to him, to check whether the defect is fixed or not.
        Verified:  The tester tests the bug again after it has been fixed by the developer. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "Verified".
        Reopen:  If the bug still exists even after being fixed by the developer, the tester changes the status to "Reopened". The bug goes through the life cycle once again.
        Closed:  Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "Closed". This state means that the bug is fixed, tested and approved.
        Duplicate:  If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to "Duplicate".
        Rejected:  If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to "Rejected".
        Deferred:  A bug changed to the Deferred state is expected to be fixed in a later release. Many factors can lead to this decision: the priority of the bug may be low, there may be a lack of time for the release, or the bug may not have a major effect on the software.
        Not a bug:  The state is given as "Not a Bug" if there is no change to the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as a change in the colour of some text, then it is not a bug but just a change in the application's looks.
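These states and the moves between them amount to a small state machine, which can be sketched in code as follows (the state names follow the list above; the transition map is illustrative rather than a rule imposed by any particular bug-tracking tool):

    from enum import Enum

    class BugState(Enum):
        NEW = "New"
        ASSIGNED = "Assigned"
        OPEN = "Open"
        FIXED = "Fixed"
        PENDING_RETEST = "Pending Retest"
        RETEST = "Retest"
        VERIFIED = "Verified"
        REOPEN = "Reopen"
        CLOSED = "Closed"

    # Illustrative transition map for the main path described above
    # (Duplicate, Rejected, Deferred and Not a Bug branch off from New/Assigned).
    ALLOWED = {
        BugState.NEW: {BugState.ASSIGNED},
        BugState.ASSIGNED: {BugState.OPEN},
        BugState.OPEN: {BugState.FIXED},
        BugState.FIXED: {BugState.PENDING_RETEST},
        BugState.PENDING_RETEST: {BugState.RETEST},
        BugState.RETEST: {BugState.VERIFIED, BugState.REOPEN},
        BugState.VERIFIED: {BugState.CLOSED},
        BugState.REOPEN: {BugState.ASSIGNED},
    }

    def move(current, target):
        """Reject a status change that the life cycle does not allow."""
        if target not in ALLOWED.get(current, set()):
            raise ValueError(f"cannot move from {current.value} to {target.value}")
        return target

    move(BugState.NEW, BugState.ASSIGNED)    # fine
    # move(BugState.NEW, BugState.CLOSED)    # would raise ValueError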

WHAT IS DEFECT TRACKING?

Developer develops the product – test engineer starts testing the product – he finds a defect – now the TE must send the defect to the development team.
He prepares a defect report – and sends a mail to the Development lead saying “bug open”.
Development lead looks at the mail and at the bug – and by looking at the bug he comes to know which development engineer developed the feature that has the bug – and sends the defect report to that particular developer, saying "bug assigned".
The development engineer fixes the bug – and sends a mail to the test engineer saying "bug fixed" – he also sends a "cc mail" to the development lead.
Now the TE takes the new build in which the bug is fixed – and if the bug is really fixed – then he sends a mail to the developer saying "bug closed" and also a "cc mail" to the development lead.
Every bug will have a unique number.
If the defect is still there – it will be sent back as “bug reopen”.

We should also send a copy of the defect report to the TL. Why do we do this? Because,
·        He should be aware of all the issues that are there in the project
·        To get visibility (i.e, he should know that we are working)

In 90% of projects, we don't take permission from the Test Lead to send bugs to the development team.

In around 10% of projects, we take permission because,
·        The customer is new – for ex, Reliq has a testing team which is testing a product developed by Vodafone developers. We can't send all sorts of major, minor and critical bugs to their development team, so the test lead first approves the defect and then sends it to the development team saying it's a valid bug.
·        When we are new to the project

When should we send defects to the development team? As soon as we catch a defect, we send it to the development team.
Why do we send it immediately?
·        Otherwise someone else may send the same defect (for common features)
·        The development team will have sufficient time to fix the bug if we send it as soon as possible.

Defect Reporting Template
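The exact template varies from organization to organization, but a defect report typically captures fields along these lines (an illustrative sketch only; every value shown is hypothetical):

    defect_report = {
        "Defect ID": "D-1024",                     # every bug gets a unique number
        "Summary": "Login fails with valid credentials",
        "Steps to Reproduce": ["Open the login page",
                               "Enter a valid user name and password",
                               "Click Login"],
        "Expected Result": "User lands on the home page",
        "Actual Result": "Error page is shown",
        "Severity": "Critical",
        "Priority": "High",
        "Status": "Open",
        "Detected By": "Test engineer",
        "Assigned To": "Development engineer",
        "Build/Version": "V1.0, Build 3",
        "Environment": "Windows 10, Chrome",
        "Attachments": ["screenshot.png", "server.log"],
    }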

How to check for duplicate bugs?

When the developer changes the status to Duplicate, the TE should check whether the previous bug and the newly sent bug are the same or not.
To check whether it is a duplicate or not, click on Advanced Search and get it confirmed. If it is not a duplicate, then the TE should give proper justification.

To avoid duplicate bugs, go to Advanced Search.
Whenever we catch a bug – before logging the bug for fixing – first go and check whether it has been logged before or not. To do so, click on Advanced Search, enter the data in the text field and click on Search. You will get the bug ID(s). If you enter 'password' and search, it will give you the different bug IDs containing the text 'password'. We must go and check them, and log the bug for fixing only if it has not been logged before.
A defect found by the testing team should never be closed just like that by the development team. The TE looks at the product from the customer's point of view – so if a developer says it's a minor bug, the testing team may still consider it a major bug.



Advantages of Manual Testing:
        Manual testing is eyeball testing.
        It suits applications with short life cycles.
        It suits applications whose GUIs change constantly.
        It requires less time and expense to begin productive manual testing.
        Automation cannot replace human intuition, inference, and inductive reasoning.
        Automated testing cannot change course in the middle of a test run to examine something that had not been previously considered.
        Manual QA testing can be used in both small and big projects.
        We can easily update our test cases according to the project's movement.
        It is covered at limited cost.
        It is easy to learn for new people who are entering testing.
        Manual QA testing is more reliable than automation in many cases (automation will not cover all cases).

Disadvantages of Manual Testing:
        GUI object size differences, colour combinations, etc. are not easy to find out in manual testing.
        Load testing and performance testing are not feasible in manual testing.
        Running tests manually is a very time consuming job.
        Regression test cases are time consuming if run manually.
                          
Comparison to Automated Testing:
        Test automation may be able to reduce or eliminate the cost of actual testing. A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labour that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labour than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time consuming task of interpreting the results.
        Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice.
        Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabelled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly.
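For instance, a scripted UI check along the following lines (a sketch assuming Selenium WebDriver and a hypothetical login page; the URL and element ids are invented) will break as soon as an id attribute changes, which is exactly the brittleness described above:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("http://app.example.com/login")                  # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("tester")  # breaks if the id changes
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title, "login did not reach the dashboard"
    driver.quit()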
               
                       PART-3

Software Test Documents - Test Plan, Test Scenario, Test Case, Traceability Matrix in detail

Explain about Software Test Documents (artifacts)?
Testing documentation involves the documentation of artifacts which should be developed before or during the testing of Software.
Documentation for Software testing helps in estimating the testing effort required, test coverage, requirement tracking/tracing etc. This section includes the description of some commonly used documented artifacts related to Software testing such as:
        Test Plan
        Test Scenario
        Test Case
        Traceability Matrix

Define Test Plan
A test plan outlines the strategy that will be used to test an application, the resources that will be used, the test environment in which testing will be performed, the limitations of the testing and the schedule of testing activities. Typically the Quality Assurance Team Lead will be responsible for writing a Test Plan.
A test plan will include the following: 
1.     Test Plan id
2.     Introduction
3.     Test items
4.     Features to be tested
5.     Features not to be tested
6.     Test techniques
7.     Testing tasks
8.     Suspension criteria
9.     Features pass or fail criteria
10.   Test environment (Entry criteria, Exit criteria)
11.   Test deliverables
12.   Staff and training needs
13.   Responsibilities
14.   Schedule

Define Test Scenario
A one line statement that tells what area of the application will be tested. Test scenarios are used to ensure that all process flows are tested from end to end. A particular area of an application can have as few as one test scenario or as many as a few hundred, depending on the magnitude and complexity of the application.

The terms test scenario and test case are used interchangeably; the main difference is that a test scenario has several steps whereas a test case has a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output of the previous test.


General Test Scenarios
1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in the same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using a CSS style different from the error message style (e.g. using green color)
5. Tool tips text should be meaningful
6. Dropdown fields should have first entry as blank or text like ‘Select’
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than specified max limit should not be accepted or stored in database
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user’s first name should be labelled properly as ‘First Name’
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application behaviour after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate mark up for all web pages (validate HTML and CSS for syntax errors) to make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit page twice by pressing submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with first and last position blank should be handled correctly
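To show how such checklist items turn into executable checks, here is a minimal sketch for items 25 to 27, written as plain pytest-style functions (the parse_amount validator is a hypothetical stand-in for a real input field):

    def parse_amount(text):
        """Hypothetical field validator: accepts decimal numbers, rejects characters."""
        try:
            return float(text)
        except ValueError:
            raise ValueError("Please enter a numeric value")

    def test_character_input_rejected():        # scenario 25
        try:
            parse_amount("abc")
            assert False, "character input should have been rejected"
        except ValueError as error:
            assert "numeric" in str(error)      # a proper validation message appears

    def test_negative_number_accepted():        # scenario 26 (if negatives are allowed)
        assert parse_amount("-10") == -10.0

    def test_decimal_amount_accepted():         # scenario 27
        assert parse_amount("99.95") == 99.95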

Define Test Case
Test cases involve the set of steps, conditions and inputs which can be used while performing the testing tasks. The main intent of this activity is to ensure whether the Software Passes or Fails in terms of its functionality and other aspects. There are many types of test cases like: functional, negative, error, logical test cases, physical test cases, UI test cases etc.

Furthermore, test cases are written to keep track of the testing coverage of the software. Generally, there is no formal template used during test case writing. However, the following are the main components that are always included in every test case:
Test case ID
Product Module
Product version
Revision history
Purpose
Assumptions
Pre-Conditions
Steps
Expected Outcome
Actual Outcome
Post Conditions
Many test cases can be derived from a single test scenario. In addition, multiple test cases are sometimes written for a single piece of software; collectively these are known as test suites.
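In code form, those components can be captured as a simple record, as in the following sketch (illustrative only; in practice teams usually keep test cases in a test-management tool or spreadsheet rather than in code):

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        test_case_id: str
        product_module: str
        product_version: str
        purpose: str
        pre_conditions: list
        steps: list
        expected_outcome: str
        actual_outcome: str = ""     # filled in during execution
        status: str = "Not Run"      # becomes "Pass" or "Fail" after execution

    login_case = TestCase(
        test_case_id="TC 001",
        product_module="Login",
        product_version="V1.0",
        purpose="Verify login with valid credentials",
        pre_conditions=["A valid user account exists"],
        steps=["Open the login page", "Enter valid credentials", "Click Login"],
        expected_outcome="User lands on the home page",
    )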



Define Traceability Matrix
Traceability Matrix (also known as Requirement Traceability Matrix - RTM) is a table which is used to trace the requirements during the Software development life Cycle. It can be used for forward tracing (i.e. from Requirements to Design or Coding) or backward (i.e. from Coding to Requirements). There are many user defined templates for RTM.

Each requirement in the RTM document is linked with its associated test case, so that testing can be done as per the mentioned requirements. Furthermore, the Bug ID is also included and linked with its associated requirements and test case.

The main goals for this matrix are:

To make sure the software is developed as per the mentioned requirements.

To help in finding the root cause of any bug, and to trace the developed documents through the different phases of the SDLC.

Traceability matrix in Software testing with example template
What is Traceability Matrix:
A traceability matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship, to determine the completeness of the relationship. It is often used with high-level requirements (these often consist of marketing requirements) and detailed requirements of the product, matching them to the corresponding parts of high-level design, detailed design, test plan, and test cases.
A requirements traceability matrix may be used to check to see if the current project requirements are being met, and to help in the creation of a request for proposal, software requirements specification, various deliverable documents, and project plan tasks.
What is the need for Requirements Traceability Matrix in Software Testing?
An organization's need for automation leads it to commission custom-built software. The client who ordered the product specifies his requirements to the development team, and the process of software development gets started.
In addition to the requirements specified by the client, the development team may also propose various value-added suggestions that could be added to the software. But keeping track of all the requirements specified in the requirement document, and checking that all of them have been met by the end product, is a cumbersome and laborious process.
The remedy for this problem is the Requirements Traceability Matrix.
What is Traceability Matrix from Software Testing perspective?
        A requirements traceability matrix is a document that traces and maps user requirements [requirement IDs from the requirement specification document] to the test case IDs. Its purpose is to make sure that all the requirements are covered in test cases, so that no functionality is missed while testing.
        This document is prepared to satisfy the client that the coverage is complete end to end. It consists of the Requirement/Baseline doc Ref. No., the Test case/Condition, and the Defect/Bug ID. Using this document a person can track the requirement based on the defect ID.
Types of Traceability Matrix:
        Forward Traceability – Mapping of Requirements to Test cases
        Backward Traceability – Mapping of Test Cases to Requirements
        Bi-Directional Traceability – A good traceability matrix has references from test cases to the basis documentation and vice versa.
Why Bi-Directional Traceability is required?
        Bi-directional traceability contains both forward and backward traceability. Through the backward traceability matrix, we can see which requirements each test case is mapped to.
        This helps us identify test cases that do not trace to any coverage item, in which case the test case is not required and should be removed (or perhaps a specification, such as a requirement or two, should be added!). This "backward" traceability is also very helpful if you want to identify how many requirements a particular test case covers.
        Through forward traceability, we can check which test cases cover each requirement, i.e., whether the requirements are covered by the test cases or not.
        The Forward Traceability Matrix ensures that we are building the Right Product. The Backward Traceability Matrix ensures that we are Building the Product Right.
Disadvantages of not using Traceability Matrix [some possible (seen) impact]:
No traceability or incomplete traceability results in:
        Poor or unknown test coverage, and more defects found in production.
        Missing some bugs in earlier test cycles, which may then surface in later test cycles, followed by a lot of discussions and arguments with other teams and managers before release.
        Difficult project planning and tracking, misunderstandings between different teams over project dependencies, delays, etc.
Benefits of using Traceability Matrix:
        Makes it obvious to the client that the software is being developed as per the requirements.
        Makes sure that all requirements are included in the test cases.
        Makes sure that developers are not creating features that no one has requested.
        Makes it easy to identify missing functionalities.
        If there is a change request for a requirement, we can easily find out which test cases need to be updated.
        Helps avoid the completed system having "extra" functionality that was never specified in the design specification, which would result in wasted manpower, time and effort.
        Ensures that every requirement has at least one test case.
        If a requirement suddenly changes, we will know exactly which test case or automation script has to be modified.
        We will come to know which test cases should be executed manually and which ones can be automated.
Steps to create Traceability Matrix:
        Make use of excel to create Traceability Matrix:
        Define following columns:
        Base Specification/Requirement ID (If any)
        Requirement ID
        Requirement description
        TC 001
        TC 002
        TC 003, and so on.
        Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are as follows: 
        Use cases (all the flows are captured) 
        Error Messages 
        Business rules 
        Functional rules
        SRS 
        FRS and So on…
        Identify all the test scenarios and test flows.
        Map Requirement IDs to the test cases. Assume (as per below table), Test case “TC 001” is your one flow/scenario. Now in this scenario, Requirements SR-1.1 and SR-1.2 are covered. So mark “x” for these requirements.
        Now from the table below you can conclude –
        Requirement SR-1.1 is covered in TC 001
        Requirement SR-1.2 is covered in TC 001
        Requirement SR-1.5 is covered in TC 001 and TC 003 [now it is easy to identify which test cases need to be updated if there is any change request]
        TC 001 covers SR-1.1 and SR-1.2 [we can easily identify which requirements each test case covers]
        TC 002 covers SR-1.3, and so on.

Requirement ID | Requirement description                           | TC 001 | TC 002 | TC 003
SR-1.1         | User should be able to do this                    |   x    |        |
SR-1.2         | User should be able to do that                    |   x    |        |
SR-1.3         | On clicking this, following message should appear |        |   x    |
SR-1.4         |                                                   |        |   x    |
SR-1.5         |                                                   |   x    |        |   x
SR-1.6         |                                                   |        |        |   x
SR-1.7         |                                                   |        |   x    |

This is a very basic traceability matrix format. You can add more columns, such as the following, and make it more effective:

ID, Assoc ID, Technical Assumption(s) and/or Customer Need(s), Functional Requirement, Status, Architectural/Design Document, Technical Specification, System Component(s), Software Module(s), Test Case Number, Tested In, Implemented In, Verification, Additional Comments.
The RTM template shows the mapping between the actual requirement and the user requirement/system requirement. 
Any change that happens after the system has been built can be traced through the RTM for its impact on the application. The RTM is also the mapping between the actual requirement and the design specification; this helps us trace the changes that may happen to the design document during the development process of the application. Here we give each document a unique ID, associated with that particular requirement, to trace that particular document easily.
In any case, if you want to change a requirement in the future, you can use the RTM to make the respective changes and easily judge how many associated test scripts will be changing.
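The forward and backward lookups described above can also be sketched programmatically (a minimal sketch; the requirement and test case IDs follow the example table above):

    # Forward traceability: requirement -> test cases (IDs from the table above)
    rtm = {
        "SR-1.1": {"TC 001"},
        "SR-1.2": {"TC 001"},
        "SR-1.3": {"TC 002"},
        "SR-1.4": {"TC 002"},
        "SR-1.5": {"TC 001", "TC 003"},
        "SR-1.6": {"TC 003"},
        "SR-1.7": {"TC 002"},
    }

    def backward(rtm):
        """Backward traceability: test case -> requirements it covers."""
        coverage = {}
        for requirement, test_cases in rtm.items():
            for test_case in test_cases:
                coverage.setdefault(test_case, set()).add(requirement)
        return coverage

    def impacted_tests(rtm, changed_requirement):
        """Which test cases must be revisited when a requirement changes."""
        return rtm.get(changed_requirement, set())

    print(impacted_tests(rtm, "SR-1.5"))   # {'TC 001', 'TC 003'}
    print(backward(rtm)["TC 001"])         # {'SR-1.1', 'SR-1.2', 'SR-1.5'}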
Difference between Test Scenario and Test Case
Test Scenario: 'What is to be tested'.
Test Case: 'How it is to be tested'.

Test Scenario: A test scenario is nothing but a test procedure.
Test Case: A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, developed to cover a certain test condition.

Test Scenario: Scenarios are derived from use cases.
Test Case: Test cases are derived (or written) from test scenarios.

Test Scenario: Represents a series of actions that are associated together.
Test Case: Represents a single (low level) action by the user.

Test Scenario: A scenario is a thread of operations.
Test Case: Test cases are sets of inputs and outputs given to the system.


For example:
        Checking the functionality of Login button is Test scenario
        Test Cases for this Test Scenario are:
        Click the button without entering user name and password.
        Click the button only entering User name.
        Click the button after entering a wrong user name and a wrong password, etc.
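Scripted, these test cases might look as follows (a sketch; attempt_login is a hypothetical stand-in for the real login form):

    def attempt_login(username, password):
        """Hypothetical stand-in for the real login form."""
        if not username or not password:
            return "Please enter user name and password"
        if (username, password) != ("valid_user", "valid_pass"):
            return "Invalid user name or password"
        return "Welcome"

    def test_click_with_no_input():
        assert attempt_login("", "") == "Please enter user name and password"

    def test_click_with_username_only():
        assert attempt_login("valid_user", "") == "Please enter user name and password"

    def test_click_with_wrong_credentials():
        assert attempt_login("wrong_user", "wrong_pass") == "Invalid user name or password"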

Verification and Validation in Software Testing

        Validation checks that the product design satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements. This is done through dynamic testing and other forms of review.
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference between them:
        Verification: Are we building the product right?
        Validation: Are we building the right product?
According to the Capability Maturity Model (CMMI-SW v1.1),
        Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610].
        Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]
In other words,
        Validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification is ensuring that the product has been built according to the requirements and design specifications.
        Validation ensures that "you built the right thing". Verification ensures that "you built it right".
        Validation confirms that the product, as provided, will fulfill its intended use.
From testing perspective:
        Fault – wrong or missing function in the code.
        Failure – the manifestation of a fault during execution.
        Malfunction – the system does not meet the functionality given in its specification.
Within the modelling and simulation community, the definitions of validation, verification and accreditation are similar:
        Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data are accurate representations of the real world from the perspective of the intended use(s). Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
        Verification is the process of determining that a computer model, simulation, or federation of models and simulations implementations and their associated data accurately represents the developer's conceptual description and specifications.
                  
PART-4

Build Workflow
Testing Process
The types of test are:
·        Functional Testing
·        Integration Testing
·        System Testing
·        Compatibility Testing – test on different operating systems, different browsers, different versions
·        Usability Testing – check whether it is user-friendly
·        Accessibility Testing
·        Ad-hoc Testing
·        Smoke Testing
·        Regression Testing
·        Security Testing
·        Performance Testing
·        Globalization Testing – only if it is developed for multiple languages


What is black box testing and white box testing?
Black Box Testing:
Black-box testing is a method of software testing that examines the functionality of an application (e.g. what the software does) without peering into its internal structures or workings.
Definition by ISTQB
        Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

White Box Testing:
White box testing is also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing!
Definition by ISTQB
        White-box testing: Testing based on an analysis of the internal structure of the component or system.

Black box testing and its advantages and disadvantages
What is black box testing?
Black Box Testing, also known as Behavioural Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.
Definition by ISTQB: 

Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system. Black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
This method is named so because the software program, in the eyes of the tester, is like a black box, inside which one cannot see.



This method attempts to find errors in the following categories:
        Incorrect or missing functions
        Interface errors
        Errors in data structures or external database access
        Behaviour or performance errors
        Initialization and termination errors.
Example:
A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.
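A minimal sketch of such a black-box check using Selenium (the URL and element IDs below are hypothetical):

```python
# Black-box check of a login page: provide inputs, verify outputs,
# with no knowledge of the page's internal implementation.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")                       # hypothetical URL
driver.find_element(By.ID, "username").send_keys("testuser")  # hypothetical IDs
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login").click()
assert "Welcome" in driver.page_source                        # expected outcome only
driver.quit()
```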
Advantages of Black Box Testing:
Tests are done from a user’s point of view and will help in exposing discrepancies in the specifications

Tester need not know programming languages or how the software has been implemented
Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer-bias
Test cases can be designed as soon as the specifications are complete
What are the advantages of black box testing?
            The advantages of this type of testing include:
1.     The test is unbiased because the designer and the tester are independent of each other.
2.     The tester does not need knowledge of any specific programming languages.
3.     The test is done from the point-of-view of the user, not the designer.
4.     Test cases can be designed as soon as the specifications are complete.
Entry and exit criteria - when can a project be accepted for testing (e.g. only when smoke testing passes) and when can a project be termed as testing complete (e.g. when all test cases are executed and all high-severity bugs are fixed)
Disadvantages of Black Box Testing:
        Only a small number of possible inputs can be tested and many program paths will be left untested

Without clear specifications, which is the situation in many projects, test cases will be difficult to design.

Tests can be redundant if the software designer/ developer has already run a test case.
Ever wondered why a soothsayer closes his eyes when foretelling events? It is much the same in black box testing.
Black Box Testing Techniques:
Following are some techniques that can be used for designing black box tests.
        Equivalence partitioning
        Boundary Value Analysis
        Cause Effect Graphing
Equivalence partitioning
Equivalence Partitioning is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
Boundary Value Analysis
Boundary Value Analysis is a software test design technique that involves determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
Cause Effect Graphing
Cause Effect Graphing is a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
Define Equivalence Partitioning with Examples
What is Equivalence Partitioning?
The technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence ‘equivalence partitioning’. Equivalence partitions are also known as equivalence classes – the two terms mean exactly the same thing.
Example 1 for Equivalence partitioning:
Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select any other value between 1 and 1000 the result is going to be the same, so one test case for valid input data should be sufficient.
2) Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input test case.
3) Input data with any value greater than 1000 to represent the third, invalid input class.
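A minimal sketch of these three partitions (accepts_number is a hypothetical validator for the input box):

```python
# Equivalence partitioning: one representative value per partition is enough.
def accepts_number(value):
    # Hypothetical validator for an input box accepting numbers 1 to 1000.
    return 1 <= value <= 1000

assert accepts_number(500) is True    # valid partition: 1 to 1000
assert accepts_number(0) is False     # invalid partition: below 1
assert accepts_number(1001) is False  # invalid partition: above 1000
```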
Example 2 for Equivalence partitioning:
For example, in a savings bank account,
3% rate of interest is given if the balance in the account is in the range of $0 to $100,
5% rate of interest is given if the balance in the account is in the range of $100 to $1000,
And 7% rate of interest is given if the balance in the account is $1000 and above.
We would initially identify three valid equivalence partitions (one for each interest band) and one invalid partition (a negative balance).
Example 3 for Equivalence partitioning:
A store in the city offers different discounts depending on the purchases made by an individual. In order to test the software that calculates the discounts, we can identify the ranges of purchase values that earn the different discounts. For example, a purchase in the range of $1 up to $50 earns no discount, a purchase over $50 and up to $200 earns a 5% discount, purchases of $201 up to $500 earn a 10% discount, and purchases of $501 and above earn a 15% discount.
Now we can identify 4 valid equivalence partitions and 1 invalid partition as shown below:
Invalid Partition    | Valid (No Discount) | Valid (5%) | Valid (10%) | Valid (15%)
$0.01 (below $1)     | $1-$50              | $51-$200   | $201-$500   | $501 and above
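A table-driven sketch of these partitions (calculate_discount is a hypothetical implementation of the store's discount rules):

```python
# Hypothetical discount calculator and one representative purchase per partition.
def calculate_discount(purchase):
    if purchase >= 501:
        return 0.15
    if purchase >= 201:
        return 0.10
    if purchase > 50:
        return 0.05
    if purchase >= 1:
        return 0.0
    raise ValueError("purchase below the valid range")

assert calculate_discount(25) == 0.0     # $1-$50: no discount
assert calculate_discount(100) == 0.05   # $51-$200: 5%
assert calculate_discount(350) == 0.10   # $201-$500: 10%
assert calculate_discount(800) == 0.15   # $501 and above: 15%
try:
    calculate_discount(0.01)             # invalid partition
except ValueError:
    pass                                 # rejected as expected
```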

Define Boundary Value Analysis with Examples:
What is Boundary Value Analysis?
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.
Example 1 for Boundary Value Analysis: 
Password field accepts minimum 6 characters and maximum 12 characters. [Range is 6-12]
Write Test Cases considering values from Valid region and each Invalid Region and Values which define exact boundary.
We need to execute 5 Test Cases for our Example 1.
1. Consider password length less than 6
2. Consider password of length exactly 6
3. Consider password of length between 7 and 11
4. Consider password of length exactly 12
5. Consider password of length more than 12 
Note: 1st and 5th Test Cases are considered for Negative Testing
Example 2 for Boundary Value Analysis:
Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.
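A minimal sketch of these six boundary checks, reusing the hypothetical accepts_number validator from the equivalence-partitioning example:

```python
# Boundary value analysis for an input box accepting numbers 1 to 1000.
def accepts_number(value):
    return 1 <= value <= 1000  # hypothetical validator

for value, expected in [
    (1, True), (1000, True),     # exactly on the boundaries
    (0, False), (999, True),     # just inside/below the boundaries
    (2, True), (1001, False),    # just inside/above the boundaries
]:
    assert accepts_number(value) is expected
```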
Example 3 for Boundary Value Analysis:
Name text box allows 1-30 characters. Writing test cases for every possible character count would be very difficult, so we choose boundary value analysis.
In this case at most 5 test cases result:
Test case 1: minimum - 1 characters: validating by entering nothing in the text box
Test case 2: exactly minimum characters: validating with only one character
Test case 3: maximum - 1 characters: validating with 29 characters
Test case 4: maximum + 1 characters: validating with 31 characters
Test case 5: any one middle value: validating with 15 characters

Integration Testing and Types of Integration Testing
Integration Testing:
Combining the modules and testing the flow of data between them. Integration Testing is divided into 2 types.

Incremental Integration Testing:
Adding the modules incrementally and checking the data flow between them. Modules are added in a sequential fashion.
This can be done in two ways 
        Top-Down Approach
        Bottom-Up approach.
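A minimal top-down sketch under assumed module names (convert and its rate provider are hypothetical): the high-level module is integrated and tested first, with the lower-level module it depends on replaced by a stub until the real one is added.

```python
# Top-down incremental integration: test the top module against a stub,
# then swap in the real lower-level module and re-run the same checks.
def get_exchange_rate_stub(currency):
    return 1.10  # fixed, predictable value standing in for the real module

def convert(amount, currency, rate_provider=get_exchange_rate_stub):
    # High-level module under test; its dependency is injected.
    return round(amount * rate_provider(currency), 2)

# Step 1: top module + stub.
assert convert(100, "EUR") == 110.0
# Step 2: integrate the real rate module and pass it in as rate_provider.
```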




Integration testing has a number of sub-types of tests that may or may not be used, depending on the application being tested or expected usage patterns.   
       Compatibility Testing – Compatibility tests ensure that the application works with differently configured systems based on what the users have or may have. When testing a web interface, this means testing for compatibility with different browsers and connection speeds.
       Performance Testing – Performance tests are used to evaluate and understand the application’s scalability when, for example, more users are added or the volume of data increases. This is particularly important for identifying bottlenecks in high usage applications. The basic approach is to collect timings of the critical business processes while the test system is under a very low load (a ‘quiet box’ condition) and then collect the same timings with progressively higher loads until the maximum required load is reached. For a data retrieval application, reviewing the performance pattern may show that a change needs to be made in a stored SQL procedure or that an index should be added to the database design.
       Stress Testing – Stress Testing is performance testing at higher than normal simulated loads.  Stressing runs the system or application beyond the limits of its specified requirements to determine the load under which it fails and how it fails. A gradual performance slow-down leading to a non-catastrophic system halt is the desired result, but if the system will suddenly crash and burn it’s important to know the point where that will happen.  Catastrophic failure in production means beepers going off, people coming in after hours, system restarts, frayed tempers, and possible financial losses.  This test is arguably the most important test for mission-critical systems. 
      Load Testing – Load tests are the opposite of stress tests.  They test the capability of the application to function properly under expected normal production conditions and measure the response times for critical transactions or processes to determine if they are within limits specified in the business requirements and design documents or that they meet Service Level Agreements.  For database applications, load testing must be executed on a current production-size database.  If some database tables are forecast to grow much larger in the foreseeable future then serious consideration should be given to testing against a database of the projected size.
Performance, stress, and load testing are all major undertakings and will require substantial input from the business sponsors and IT staff in setting up a test environment and designing test cases that can be accurately executed.  Because of this, these tests are sometimes delayed and made part of the User Acceptance Testing phase.  Load tests especially must be documented in detail so that the tests are repeatable in case they need to be executed several times to ensure that new releases or changes in database size do not push response times beyond prescribed requirements and Service Level Agreements.
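As a rough sketch of that basic approach (the URL is hypothetical, and a real load test would use a dedicated tool and documented test cases), timings of a critical transaction can be collected under progressively higher simulated loads:

```python
# Collect response timings under increasing simulated user loads.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/account-summary"  # hypothetical critical transaction

def timed_request(_):
    start = time.perf_counter()
    urlopen(URL).read()
    return time.perf_counter() - start

for users in (1, 10, 50, 100):  # 'quiet box' first, then heavier loads
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(timed_request, range(users)))
    print(f"{users} users: avg {sum(timings)/len(timings):.3f}s, "
          f"max {max(timings):.3f}s")
```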

What Is System Testing?
Testing the behaviour of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled.
System testing is done after integration testing is complete. 
In system testing, there are two types of testing:
Functional testing checks whether the application functions as per the requirements.
Non-functional testing is of several types:
        Load,
        Stress,
        Performance,
        Reliability,
        Security,
        Usability
        Configuration,
        Compatibility (forward & backward),
        Scalability, etc.
There are essentially three main kinds of system testing:
·        Alpha testing           
·        Acceptance testing
·        Beta testing

Define Alpha Testing:
Alpha Testing
This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). Unit testing, integration testing and system testing when combined are known as alpha testing. During this phase, the following will be tested in the application:
        Spelling Mistakes
        Broken Links
        Unclear directions
The Application will be tested on machines with the lowest specification to test loading times and any latency problems.

What is Acceptance Testing - Alpha, Beta testing
What is Acceptance Testing?
It is testing the S/W or Application with the intent of confirming readiness of the product for customer acceptance.
This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
More ideas will be shared about the application and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.
By performing acceptance tests on an application the testing team will deduce how the application will perform in production. There are also legal and contractual requirements for acceptance of the system.
When is it performed?
Acceptance Testing is performed after System Testing and before making the system available for actual use.
Who performs it?
        Internal Acceptance Testing (Also known as Alpha Testing) is performed by members of the organization that developed the software but who are not directly involved in the project (Development or Testing). Usually, it is the members of Product Management, Sales and/or Customer Support.
        External Acceptance Testing is performed by people who are not employees of the organization that developed the software.
        Customer Acceptance Testing is performed by the customers of the organization that developed the software. They are the ones who asked the organization to develop the software for them. [This is in the case of the software not being owned by the organization that developed it.]
        User Acceptance Testing (Also known as Beta Testing) is performed by the end users of the software. They can be the customers themselves or the customers’ customers.
Definition by ISTQB for Acceptance Testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Define Beta Testing:
Beta Testing
This test is performed after Alpha testing has been successfully performed. In beta testing a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. Beta test versions of software are ideally distributed to a wide audience on the Web, partly to give the program a "real-world" test and partly to provide a preview of the next release. In this phase the audience will be testing the following:
        Users will install, run the application and send their feedback to the project team.
        Typographical errors, confusing application flow, and even crashes.
        With this feedback, the project team can fix the problems before releasing the software to the actual users.
        The more issues you fix that solve real user problems, the higher the quality of your application will be.
        Having a higher-quality application when you release to the general public will increase customer satisfaction.




Compatibility Testing - Definition, Types, Tools Used
What is Compatibility testing?
        Compatibility testing is to check whether your software is capable of running on different hardware, operating systems, applications, network environments or mobile devices.
        Compatibility Testing is a type of the Non-functional testing
        Initial phase of compatibility testing is to define the set of environments or platforms the application is expected to work on.
        Tester should have enough knowledge on the platforms / software / hardware to understand the expected application behaviour under different configurations.
        Environment needs to be set-up for testing with different platforms, devices, networks to check whether your application runs well under different configurations.
        Report bugs, fix the defects, and re-test to confirm the fixes.

Types of Compatibility testing:
        Hardware
        Operating Systems
        Software
        Network
        Browser
        Devices
        Mobile
        Versions of the software
Let’s look into compatibility testing types briefly.
Hardware: It checks software to be compatible with different hardware configurations.
Operating Systems: It checks your software to be compatible with different Operating Systems like Windows, Unix, Mac OS etc.
Software: It checks that your developed software is compatible with other software. For example: the MS Word application should be compatible with other software like MS Outlook, MS Excel, VBA, etc.
Network: It evaluates the performance of the system in a network with varying parameters such as bandwidth, operating speed, and capacity. It also checks the application in different networks with all the parameters mentioned earlier.
Browser: It checks the compatibility of your website with different browsers like Firefox, Google Chrome, Internet Explorer, etc. (a short sketch follows after this list).
Devices: It checks the compatibility of your software with different devices like USB port devices, printers and scanners, other media devices, and Bluetooth.
Mobile: It checks that your software is compatible with mobile platforms like Android, iOS, etc.
Versions of the software: It verifies that your software application is compatible with different versions of the software. For instance, checking that Microsoft Word is compatible with Windows 7, Windows 7 SP1, Windows 7 SP2, and Windows 7 SP3.
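A minimal cross-browser sketch using Selenium (the URL and expected title are hypothetical): the same check is run unchanged in each browser.

```python
# Run one identical check in multiple browsers to spot compatibility gaps.
from selenium import webdriver

for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    driver.get("https://example.com")   # hypothetical page under test
    assert "Example" in driver.title    # same expected outcome everywhere
    driver.quit()
```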
There are two types of version checking.
       Types of Version Checking
        Backward compatibility Testing
        Forward compatibility Testing
Backward compatibility Testing: is to verify the behaviour of the developed hardware/software with the older versions of the hardware/software.
Forward compatibility Testing: is to verify the behaviour of the developed hardware/software with the newer versions of the hardware/software.
Tools for compatibility testing
        Adobe Browser Lab – Browser Compatibility Testing - This tool helps check your application in different browsers.
        Secure Platform – Hardware Compatibility tool - This tool includes the necessary drivers for a specific hardware platform and provides information for checking the CD burning process with CD burning tools.
        Virtual Desktops - Operating System Compatibility - This is used to run the applications in multiple operating systems as virtual machines. Any number of systems can be connected and their results compared.

Adhoc Testing - Definition, Types, Advantages, Disadvantages
Adhoc Testing
Definition:
        Adhoc testing is an informal testing type with an aim to break the system.
        This testing is usually an unplanned activity.
        It does not follow any test design techniques to create test cases. In fact, it does not create test cases at all!
        It is primarily performed when the testers' knowledge of the system under test is very high.
        Testers randomly test the application without any test cases or any business requirement document.
        Adhoc testing can be achieved with the testing technique called Error Guessing.
        Error guessing can be done by the people having enough experience on the system to “guess” the most likely source of errors.

Types of adhoc testing
        Buddy Testing
        Pair testing
        Monkey Testing
Buddy Testing
Two buddies mutually work on identifying defects in the same module. Usually one buddy is from the development team and the other from the testing team. Buddy testing helps the testers develop better test cases and lets the development team make design changes early. This testing usually happens after unit testing is complete.
Pair testing
Two testers are assigned the same modules; they share ideas and work on the same machines to find defects. One person executes the tests while the other takes notes on the findings; during testing, one acts as tester and the other as scribe.
Buddy testing combines unit and system testing with developers and testers together, whereas pair testing is done only by testers, usually of different knowledge levels (experienced and non-experienced), so they can share their ideas and views.
Monkey Testing
Randomly test the product or application without test cases, with the goal of breaking the system.
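A tiny monkey-testing sketch (parse_age is a hypothetical unit under test): random junk is thrown at the code with the sole aim of breaking it.

```python
# Monkey testing: random inputs, no test cases, goal is to break the system.
import random
import string

def parse_age(text):
    return int(text)  # hypothetical unit under test

for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_age(junk)
    except ValueError:
        pass  # a handled, expected rejection
    # Any other exception, crash, or hang would be a defect worth reporting.
```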
Advantages of Adhoc Testing:
        Adhoc testing saves a lot of time as it doesn't require elaborate test planning, documentation, or test case design.
        It checks the completeness of testing and can find more defects than planned testing.
Disadvantages of Adhoc Testing: 
        This testing requires no documentation, planning, or process to be followed. Since it aims at finding defects through a random approach, without any documentation, defects will not be mapped to test cases. Hence it can be very difficult to reproduce defects, as there are no test steps or requirements mapped to them.


How to test a pen?
Let us see how to do Manual Testing,
·        Functional Testing – each part as per requirement – refill, pen body, pen cap, pen size
·        Integration Testing – Combine pen and cap, and integrate other different parts and see whether they work fine
·        Smoke Testing – basic functionality – writes or not
·        Ad-hoc Testing – throw the pen down and start writing, keep it vertically up and write, write on the wall
·        Usability Testing – whether it is user friendly or not – whether we can write with it comfortably for long periods of time
·        Compatibility Testing – different environments, different surfaces, weather conditions – keep it in an oven and then write, keep it in a freezer and write, try writing on water
·        Performance Testing – writing speed
·        Recovery Testing – throw it down and write
·        Globalization Testing – I18N testing – whether the print on the pen is as per country language and culture – L10N testing – price standard, expiry date format
·        Reliability Testing – drop it down and write, continuously write and see whether it leaks or not
·        Accessibility Testing – whether it is usable by people with disabilities
