1 Abstract
Testing is one of the most common tasks in software development, and teams that skip automated testing quickly fall behind. Applications accumulate many errors while they are being developed, and a test-after-development approach simply cannot point out all of them. The purpose of testing is not to find every defect, but to keep checking for the most common kinds of bugs that can be introduced into an application while changing other features.
2 Introduction
What do most companies do? They have one or two main products, most commonly analytics applications, forums, portals, and so on. These applications offer a set of features and functions. There are many modules, and each click of a button on each page is expected to be linked to some feature. Every month or so, the company has to upgrade the application to meet client requests. Clients pay a lot of money for these applications, and they demand a lot in return: refinements, logic changes, design changes, additional features integrated into the application, and so on. While adding these features, the existing modules are often changed or affected in some way.
3 Types of Testing
3.1 Smoke Test
Smoke testing, or sanity testing, is a very basic form of testing in which a software application is checked against its most basic features. The approach is:
Go to a particular page
Set some values in some inputs
Click some buttons/links
Verify whether the expected features occur or not.
These steps are written down in a test plan.
sn | Location (Page)  | Expected features                                                                               | Test/Result        | Remarks
1  |                  | There should be a text box. Starting to type there should result in suggestions dropping down. | Pass with Comments | The expected conditions were met but these things were noticed
2  | drive.google.com |                                                                                                 |                    |
Now the person doing the manual testing has to go to the location and perform the steps specified in the test plan; there may be a separate column for those steps as well. There is a test-result column with conditional formatting: most commonly green for Pass, red for Fail, and yellow for Pass with Comments. Sometimes the tester needs a good understanding of the product (domain knowledge) to carry out the instructions. Anything worth mentioning that is noticed during the test goes in the Remarks column.
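The test-plan rows and their conditional formatting can be sketched in code. This is a minimal illustration, not a real tool: the page names, features, and remarks below are hypothetical, and the colour mapping simply mirrors the green/red/yellow convention described above.

```python
# Map each possible result to the colour used in the test-plan spreadsheet.
RESULT_COLORS = {
    "Pass": "green",
    "Fail": "red",
    "Pass with Comments": "yellow",
}

# Hypothetical smoke-test plan rows, matching the table's columns.
test_plan = [
    {"sn": 1, "location": "search page", "expected": "typing shows suggestions",
     "result": "Pass with Comments", "remarks": "suggestions were slow"},
    {"sn": 2, "location": "login page", "expected": "login button works",
     "result": "Pass", "remarks": ""},
]

def format_report(plan):
    """Render each row as (sn, result, colour) for the conditional formatting."""
    return [(row["sn"], row["result"], RESULT_COLORS[row["result"]]) for row in plan]

print(format_report(test_plan))
```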
3.2 Regression Test
Regression testing is testing in which you navigate to every page in the application and check certain values or features that need to be verified. It is the QA's responsibility to prepare a checklist of the pages to be tested and the features on each page to be verified: one round before taking the product to the production environment, and at least once after the release for quick bug discovery and hot fixes.
sn | Page   | Feature   | Result | Remark
1  | Page 1 | feature 1 | Pass   |
2  | Page 1 | feature 2 | Fail   |
3  | Page 1 | feature 3 |        |
4  | Page 1 |           |        |
5  | Page 2 |           |        |
6  | Page 2 |           |        |
The template looks the same as for the smoke test, but the main difference between a regression test and a smoke test is that the regression test is much more detailed: it covers every page and every feature that is to be tested. It becomes a very long checklist of items.
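Since the regression checklist is just every page crossed with its features, it can be generated rather than typed out by hand. This is a minimal sketch with hypothetical page and feature names:

```python
# Hypothetical page -> features mapping for the application under test.
pages = {
    "Page 1": ["feature 1", "feature 2", "feature 3"],
    "Page 2": ["feature 1", "feature 2"],
}

def build_checklist(pages):
    """Expand every page/feature pair into a numbered checklist row."""
    rows = []
    sn = 1
    for page, features in pages.items():
        for feature in features:
            rows.append({"sn": sn, "page": page, "feature": feature, "result": ""})
            sn += 1
    return rows

checklist = build_checklist(pages)
print(len(checklist))  # one row per page/feature pair
```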
3.3 Data Validation Test
There may or may not be a data validation test, depending on the scenario, the scope of the change, and the type of change. Generally, basic data validation is covered by the smoke and regression tests themselves: while testing the general features, you also happen to be checking the data populated in tables, paragraphs, forms, drop-downs, and so on. The data in the backend may directly affect the items and options you see from your account in the application. A separate data validation test is needed for major changes such as server migration, transferring data from one place to another, or pointing the application to a different backend instance. Separate SQL scripts may also be written to compare the backend tables.
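The table-comparison idea can be sketched with in-memory SQLite databases standing in for the two backend instances. The table name and rows here are hypothetical; a real migration check would run similar SQL against the actual old and new databases.

```python
import sqlite3

def load(rows):
    """Create an in-memory stand-in for one backend instance."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO users VALUES (?, ?)", rows)
    return db

old = load([(1, "alice"), (2, "bob")])
new = load([(1, "alice"), (2, "bobby")])  # one row drifted during migration

def diff_table(a, b, table):
    """Return rows present in one instance but not the other."""
    rows_a = set(a.execute(f"SELECT * FROM {table}"))
    rows_b = set(b.execute(f"SELECT * FROM {table}"))
    return rows_a - rows_b, rows_b - rows_a

only_old, only_new = diff_table(old, new, "users")
print(only_old, only_new)
```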
3.4 Before After Test
This is the test most often done when adding new features to an existing application, or when migrating data or the application from one place to another. While one module is being developed, the changes might affect others as well. In the development environment, the changes and the new modules are integrated and two instances of the application are run: one from the previous release, which is assumed to be (mostly) correct, and one from the latest build. Both applications should point to identical datasets, which may or may not be the same instance, and if the application has a security framework or dashboard feature-control system, the two accounts used for testing should have the same settings. Unlike the regression and smoke tests, where the main focus is on features (you click buttons and links and expect something to happen), here you focus on the data. You can simply use text comparison tools: feed them two strings of text and check whether they are exactly the same. Most differences are found in tables, forms, and the like. Custom-developed automated testing applications can find which table and which column on which page does not match between the two applications.
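The text-comparison step can be sketched with the standard library's difflib. The table contents below are hypothetical; in practice each string would be the same table rendered as text from the old and new releases.

```python
import difflib

# The same table, rendered as text, from the two application instances.
before = "id,name\n1,alice\n2,bob\n"
after = "id,name\n1,alice\n2,bobby\n"

diff = list(difflib.unified_diff(before.splitlines(), after.splitlines(),
                                 fromfile="old release", tofile="new release",
                                 lineterm=""))
for line in diff:
    print(line)  # '-' lines exist only in the old release, '+' only in the new
```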
Application | Application a on xyz1 module x, Application b on xyz2 module b
Tested by   | Rajan Prasad Upadhyay
Date        | 20 July to 30 July 2013

sn | Pages compared              | Feature   | Result | Remark
1  | Page 1 of a and page 1 of b | feature 1 | Pass   |
2  |                             | feature 2 | Fail   |
3  |                             | feature 3 |        |
4  |                             |           |        |
5  |                             |           |        |
6  |                             |           |        |
I am just reusing the template for simplicity.
3.5 Load Testing
This is not a very common test. Load testing is generally done for applications where performance is a concern: for example, a web application that takes too long to load, a feature that takes a lot of time, or a module that takes too long to process. Load testing, in one sentence, is the test of response time versus the number of users. As the number of users simultaneously using an application increases, throughput decreases and response time increases. Based on how many users might be using the system simultaneously, proper care should be taken to keep the response time within an acceptable limit.
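The response-time-versus-users relationship can be simulated in a few lines. This sketch is purely illustrative: the "request" just contends for a lock-protected shared resource, a stand-in for whatever serialized work a real server does, so the average response time grows as more simulated users pile up.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

shared = threading.Lock()

def request():
    """One simulated request: wait for the shared resource, then 'work'."""
    start = time.perf_counter()
    with shared:
        time.sleep(0.005)  # simulated server work on a serialized resource
    return time.perf_counter() - start

def avg_response_time(users):
    """Fire one request per simulated concurrent user and average the times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: request(), range(users)))
    return sum(times) / len(times)

for users in (1, 4, 8):
    print(users, round(avg_response_time(users), 4))
```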
There are other kinds of testing as well, but these are the most common types that most products go through, and companies invest up to millions of dollars yearly in getting them automated.
4 Tools
There are lots of tools used for testing software applications. I don't personally know a whole lot of them, but I am familiar with a few. The most commonly used tool for automating web applications is Selenium WebDriver. It is an API, available in most common languages, that acts as a user: it can open a URL, make clicks, find elements and grab their text, execute JavaScript, and so on, using a browser just as a human would. JMeter is a free Java tool for performance testing: you create a test plan, add various ready-made components, configure them, and run them, and you can visualize the results in different ways.
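Selenium itself drives a live browser, so it is not shown here. As a self-contained illustration of just the "find elements and grab their texts" part of what a WebDriver does, here is a stdlib sketch that parses a static, hypothetical page instead of a real one:

```python
from html.parser import HTMLParser

# A hypothetical page; a WebDriver would load this from a live browser.
PAGE = '<html><body><a href="/home">Home</a><a href="/about">About</a></body></html>'

class LinkTextGrabber(HTMLParser):
    """Collect the text of every <a> element, like finding elements by tag."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.texts.append(data)

grabber = LinkTextGrabber()
grabber.feed(PAGE)
print(grabber.texts)
```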
Summary
So this was my understanding of how testing generally works in practice. The article's main purpose is to give a general idea of the processes followed while testing, the way testing is done, the tools used for it, and so on.