Frequently asked manual and software testing interview questions and answers for freshers and 2-5 year experienced testers on Testing, Quality Assurance, defects, Whitebox and blackbox testing, Software Test Life Cycle etc.
1. What is Software Testing?
Software testing is a process of evaluating a system by manual and/or automated means to verify that it meets the specified requirements. It is a structured process that uncovers defects in the product. It involves operating a system or application under controlled conditions (including both normal and abnormal conditions) and evaluating the results.
2. What is the purpose of Software Testing?
- To find the difference between specifications and developed system
- To get adequate trust and confidence on the product
- To show that the product works with negligible risks
- To offer advice on product quality and risks
- Software that can perform 100,000 correct operations per second can equally perform 100,000 wrong operations per second if it is not tested properly
3. Define Quality Assurance?
Quality Assurance is the measure of the quality of the processes that are used to create a quality product.
4. Explain the difference between Testing and QA?
Testing measures the quality of a product, whereas Quality Assurance measures the quality of the processes used to create a quality product.
5. Explain the meaning of Quality Control?
Quality control is a set of procedures intended to ensure that a product adheres to a predefined set of quality checks and meets the requirements set by the client or customer.
6. Explain the approach to software testing?
The software testing approach is to detect defects of varying severity and priority at different stages of the software development cycle, using different categories and types of testing.
7. Explain the difference between debugging and testing?
Testing is a process of finding defects, whereas debugging is a process of finding the cause of those defects and fixing them. Debugging is not testing; debugging always occurs as a consequence of testing activity.
8. Define the software quality assurance activities
- Applying Technical Methodologies (following proper methods for developing software)
- Conducting Formal Technical Reviews(FTR)
- Testing of the software
- Enforcing the standards(customer/management imposed standards)
- Controlling of change (assessing the need for a change and documenting the change made)
- Measurement(capture of software metrics for measuring the quality)
- Keeping the records for proofs(Documentation, Review notes etc)
9. Define a defect
The variation between the actual result and the expected result is termed a Defect.
10. What are the broad categories of Software Testing?
The major categories of software testing are,
- Static and Dynamic testing
- Whitebox and blackbox testing
- Manual and Automated testing
- Verification and Validation testing
11. Define Static testing?
Testing conducted without executing a program or software is called static testing. It is a kind of verification that we do on the software products before the process of compilation and creation of an executable. It mainly consists of requirement reviews, design reviews, code reviews, walkthroughs and audits.
12. Define Dynamic testing?
Testing conducted by executing the program or software is called dynamic testing. It validates the actual behavior of the running application against the expected results.
13. Define Whitebox Testing?
Whitebox Testing is used to test the internal structure of a software program and the working of an application. It is also termed Clearbox Testing, Structural Testing and Glassbox Testing.
14. Define Blackbox Testing?
Blackbox testing is a method of software testing wherein the application's GUI and functionality are tested without knowledge of the internal structure of the application code.
15. Define Manual Testing?
Tests conducted manually by testers are called Manual Testing, wherein they verify that the actual functionality matches the expected functionality as defined in the test cases.
16. Define Automation Testing?
Tests conducted using a software tool are called Automation Testing, wherein the expected results are fed into the tool and compared with the actual output of the software being tested.
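As a minimal sketch (the function under test and the data set here are hypothetical), an automated check stores expected results alongside the test inputs and lets the tool compare them against the actual output:

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Expected results are stored with the inputs, as they would be fed to a tool.
test_data = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def run_tests():
    """Run every case and record whether actual output matched expected."""
    results = []
    for (a, b), expected in test_data:
        actual = add(a, b)
        results.append((a, b, expected, actual, actual == expected))
    return results

for a, b, expected, actual, passed in run_tests():
    print(f"add({a}, {b}) -> {actual} (expected {expected}): {'PASS' if passed else 'FAIL'}")
```

In a real project this comparison loop is what the automation tool performs for you; the tester's job reduces to maintaining the input/expected-result pairs.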
17. Explain SDLC?
Various Software Development Life Cycle phases are
- Project planning,
- Feasibility Study,
- System Analysis,
- Requirement Definition,
- System Design,
- Implementation and Coding,
- Integration and Testing,
- Acceptance, Installation and Deployment,
- Maintenance
18. Explain STLC?
Various Software Test Life Cycle phases are
- Test Planning : Preparing test strategy and planning
- Test Development : Creating test cases and the test environment
- Review : Reviewing the test cases and updating them if required
- Test Execution : Executing the test cases
- Result Analysis
- Bug tracking
- Reporting / Test cycle closure
19. Explain Verification Testing?
It's a static method of verifying the code and design of an application. It answers the question, "Are we building the product right?"
20. Define Validation Testing?
It's a dynamic or actual testing of the application. It answers the question, "Are we building the right product?"
21. Define Ad Hoc Testing?
A random functional test intended to break the application without referring to any test document; most scenarios taken up are negative cases.
22. Define Smoke Testing?
It verifies whether the critical functionality of the system is working fine and the build is ready for complete testing; it is non-exhaustive testing.
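The idea can be sketched as a short suite of critical-path checks (the checks below are hypothetical placeholders); if any fails, the build is rejected before full testing begins:

```python
def check_login():
    """Hypothetical critical-path check: can a user log in at all?"""
    return True

def check_homepage_loads():
    """Hypothetical critical-path check: does the home page render?"""
    return True

# A smoke suite is deliberately short and non-exhaustive.
SMOKE_CHECKS = [check_login, check_homepage_loads]

def smoke_test():
    """Run only the critical checks; report overall verdict and failures."""
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    return ("PASS" if not failures else "FAIL", failures)

print(smoke_test())
```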
23. Define Sanity Testing?
It's a narrow regression test that focuses on one or a few areas of functionality; sanity testing is usually narrow and deep.
24. Define GUI Testing?
Graphical User Interface testing is the process of testing a product's graphical interface to ensure the application interface meets the customer's written specifications.
26. Define System Testing?
System testing is software testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
27. Define Regression Testing?
The intent of regression testing is to ensure that changes made to fix a bug, or new functionality added in the interim, have not induced faults in related functionalities.
28. Define Re-testing?
Testing the functionality related to a bug fix to check whether the bug is actually fixed is re-testing.
29. Define Exploratory Testing?
It's a testing approach wherein the tester gets to simultaneously learn the application functionality which helps in test design and test execution.
30. Explain User Acceptance Testing?
Testing carried out by the client to determine whether the requirements of a specification or contract are met.
31. Define Usability Testing?
It evaluates an application/product by hiring users to perform tasks and provide observations/views based on their usage of the product.
32. Define Integration Testing?
Testing two or more modules/functionalities together with the intent of finding interface defects between the modules/functionalities.
33. Different types of software development ?
Some of the commonly used software development types are :
- Waterfall model
- V model
- Agile model
- Spiral model, etc.
34. Explain Waterfall model?
It's a sequential process wherein the process flow is as mentioned below :
- Project planning, Requirement gathering and Analysis,
- Software design and Implementation
- Testing and Deployment
35. Explain V model?
It's a model wherein development and testing proceed in parallel, with verification activities during the development phases and validation activities during the testing phases.
36. Explain Agile model?
The Agile model is typically an incremental model, wherein modules are developed and released in rapid, short cycles for better coverage of the requirements.
38. Explain Scrum?
It's an agile methodology wherein the emphasis is on feedback, self-management and short iterations of development and product testing.
39. List the Scrum roles?
Product Owner, Team and Scrum Master.
40. Types of Integration Testing?
Bottom up and Top down approach
41. Explain Bottom Up approach
An approach where testing proceeds from the sub modules to the main module; if the main module is not yet available, a DRIVER is used to simulate the main module.
42. Explain Top Down approach
An approach where testing proceeds from the main module to the sub modules; if a sub module is not yet available, a STUB is used to simulate the sub module.
43. Explain DRIVER in testing
A Driver is a dummy function or program that simulates the missing main module by calling the sub modules and supplying them the required inputs.
44. Explain STUB in testing
A Stub is a dummy program that simulates a missing sub module, returning fixed output to the main module under test.
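The two helpers can be sketched in Python; the module names and values below are hypothetical illustrations, not a real API:

```python
# Bottom-up: the main module is missing, so a DRIVER exercises the sub module
# by supplying inputs and checking its output.
def discount_submodule(price, percent):
    """Real sub module under test."""
    return price * (1 - percent / 100)

def driver_for_discount():
    """Driver: stands in for the missing main module and feeds inputs."""
    assert discount_submodule(100, 10) == 90.0
    return "sub module OK"

# Top-down: a sub module is missing, so a STUB returns a canned value to the
# main module under test.
def tax_submodule_stub(price):
    """Stub: simulates the missing tax sub module with a fixed output."""
    return 5.0

def main_module(price, tax_fn=tax_submodule_stub):
    """Real main module under test, wired to the stub for now."""
    return price + tax_fn(price)

print(driver_for_discount())   # exercise the sub module via the driver
print(main_module(100))        # exercise the main module via the stub
```

When the missing module becomes available, the driver is discarded and the stub is replaced by the real sub module.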
45. List Test case design techniques
- Equivalence Class Partitioning
- Boundary Value Analysis
- Error guessing
46. Explain Equivalence Class Partitioning
It's a test design technique in black box testing wherein the input domain is divided into partitions of equivalent data, and test cases are designed to cover each partition at least once.
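For example, under a hypothetical rule that valid ages are 18 to 60 inclusive, the input domain splits into three partitions, and one representative value per partition suffices:

```python
def is_valid_age(age):
    """Hypothetical rule under test: valid age is 18..60 inclusive."""
    return 18 <= age <= 60

# One representative test value per equivalence partition.
partitions = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (35, True),
    "above range (invalid)": (70, False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name
    print(f"{name}: is_valid_age({value}) == {expected}")
```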
47. Explain Boundary Value Analysis
It's a test design technique in black box testing wherein test cases are designed to cover the edges (boundaries) of each partition at least once.
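Continuing the hypothetical valid-age-18-to-60 rule, boundary value analysis picks values at and immediately around each partition edge:

```python
def is_valid_age(age):
    """Hypothetical rule under test: valid age is 18..60 inclusive."""
    return 18 <= age <= 60

# Test at and just around each boundary of the valid partition.
boundary_cases = [
    (17, False),  # just below lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above lower boundary
    (59, True),   # just below upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, value
print("all boundary cases pass")
```

Off-by-one mistakes (e.g. writing `<` instead of `<=`) cluster at exactly these edges, which is why this technique is so effective.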
48. Explain Error Guessing
It's a test design technique in black box testing wherein test cases are designed based on the tester's experience to guess defect-prone areas of the application.
49. Explain Severity
Severity defines the impact of a defect on the application with respect to functionality.
50. Explain Priority
Priority defines the importance of a bug and the urgency of fixing it.
51. Explain Exit Criteria
It's a set of specific conditions, defined in the Test Plan, that are considered in order to end the testing process.
52. List Static Testing Techniques
Inspections, data flow analysis, walkthroughs, reviews, etc.
53. List Dynamic Testing Techniques
Unit testing, integration testing, system testing, etc.
54. Explain Decision Table
A Decision Table is also referred to as a "Cause-Effect" table. It provides a systematic way of stating complex business rules, allowing testers to explore the effects of combinations of different inputs and other software states that must correctly implement those rules.
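A decision table translates directly into code; below is a sketch for a hypothetical login rule combining two causes (valid user, valid password) with their expected effects:

```python
# Hypothetical business rule: access is granted only when both the username
# and the password are valid. The table enumerates every cause combination.
decision_table = {
    # (valid_user, valid_password): expected effect
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

def login(valid_user, valid_password):
    """Hypothetical implementation under test."""
    if not valid_user:
        return "show user error"
    if not valid_password:
        return "show password error"
    return "grant access"

# Each table row becomes one test case.
for conditions, expected in decision_table.items():
    assert login(*conditions) == expected, conditions
print("all decision-table rows pass")
```

With n boolean causes the full table has 2^n rows, which is why decision tables are valued for exposing forgotten combinations.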
55. Explain Bug Cycle
- New -> Assigned -> Open -> Fixed -> Retest -> Verified -> Closed
- At various stages a bug may instead be marked Rejected or Deferred, or Reopened if the fix fails re-testing
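The life cycle can be sketched as a state-transition map; note that the exact status names vary between defect trackers, so this particular set is an assumption:

```python
# A typical bug life cycle as allowed state transitions (status names are
# an assumption; real trackers differ in naming and detail).
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
}

def is_valid_transition(src, dst):
    """Return True if a bug may move directly from status src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(is_valid_transition("New", "Assigned"))   # a legal first step
print(is_valid_transition("New", "Closed"))     # skipping states is illegal
```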
56. Explain Traceability Matrix
A Traceability Matrix is used to check test case coverage with respect to the specification, by mapping each requirement to the test cases that verify it.
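In its simplest form the matrix is a mapping from requirement IDs to test case IDs (the IDs below are hypothetical), which makes coverage gaps trivial to find:

```python
# Requirements traceability matrix: requirement -> covering test cases.
# All IDs are hypothetical examples.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],               # no coverage yet: a gap to close
}

def uncovered_requirements(matrix):
    """Return the requirements that no test case covers."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered_requirements(traceability))
```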
57. Explain Data Driven Testing
A testing process in which large and complex data sets are documented and used as input to the tests, so that the same test logic can be executed against many data combinations.
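A minimal sketch of the idea, assuming a hypothetical login function and an inline CSV data set standing in for a documented external data file:

```python
import csv
import io

# Documented test data: each row is one test case with its expected result.
# In practice this would live in an external CSV/Excel file.
test_data_csv = """username,password,expected
alice,correct-pw,login ok
alice,wrong-pw,login failed
,correct-pw,login failed
"""

def try_login(username, password):
    """Hypothetical system under test."""
    if username == "alice" and password == "correct-pw":
        return "login ok"
    return "login failed"

def run_data_driven():
    """Run the same test logic once per data row; collect mismatches."""
    failures = []
    for row in csv.DictReader(io.StringIO(test_data_csv)):
        actual = try_login(row["username"], row["password"])
        if actual != row["expected"]:
            failures.append(row)
    return failures

print(run_data_driven())   # empty list means every data row passed
```

Adding a new test case is then a matter of adding a data row, with no change to the test code itself.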