Black Box Testing

Black box testing focuses on determining whether or not a program does what it is supposed to do based on its functional requirements. It is sometimes also called functional or behavioral testing.

Basically, black box testing is a testing technique where the tester is unaware of the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Aim: To determine whether the functionality appears to work according to the specifications.

Black box testing attempts to find errors in the external behavior of the code, mostly in the following categories:

  • incorrect or missing functionality
  • interface errors
  • errors in data structures used by interfaces
  • behavior or performance errors
  • initialization and termination errors

It is advisable that the person doing black box testing is not the programmer of the software under test and knows nothing about the structure of the code. The programmers of the code are innately biased: they are likely to test that the program does what they programmed it to do, whereas the program should be tested to make sure it does what the customer wants it to do.

ANATOMY OF A TEST CASE

The format of the test case design is very important in black box testing. Here is a simple example of a test case planning template:

Test ID | Description | Expected Results | Actual Results
------- | ----------- | ---------------- | --------------
1 | Player 1 rolls dice and moves. | Player 1 moves on board. |
2 | Player 2 rolls dice and moves. | Player 2 moves on board. |
3 | Precondition: Game is in test mode, SimpleGameBoard is loaded, and game begins. Number of players: 2. Money for player 1: Rs 1200. Money for player 2: Rs 1200. Player 1 dice roll: 3 | Player 1 is located at Blue 3. |

Clear descriptions

A bit of advice: the test case description should be very clear and specific so that the test case execution is repeatable. Even if you will always be the person executing the test cases, pretend you are passing the test planning document to someone else to perform the tests 🙂

Strategies of black box testing

Since writing and executing tests is a fairly expensive process, we need to make sure we write tests for the kinds of things the customer will do most often, or even fairly often, with the motto of finding as many defects as possible with few test cases.

1. Testing of customer requirements

Black box test cases are based on customer requirements. We basically want to make sure that every customer requirement is tested at least once, so that defects are found as early as possible, before the end product is delivered to the customer.

2. Equivalence partitioning

Equivalence partitioning is basically a strategy that can be used to reduce the number of test cases that need to be developed. It divides the input domain of a program into classes; the data in each equivalence class should be treated the same by the module under test and should produce the same answer. Test cases are then designed so the inputs lie within these equivalence classes.

I am running out of time now. I will post more on examples of equivalence partitioning and other methods such as boundary value analysis, decision tables etc. So stay tuned and enjoy testing! Cheers 🙂 🙂

Welcome back!

Let’s take an example of an application X. The tester doesn’t have any internal knowledge of the system; all he knows about is the input data and the output it is going to produce. Say this application accepts all the values between 1 and 10 (including 1 and 10). Now the work of the tester is to test this application X using only the above information. Since the input domain of the application contains all the numbers from -∞ to +∞, which is an infinite set, he can’t test all the cases, so in order to reduce the amount of test data he will use the equivalence partitioning technique.

…… -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ……

——P1———–|  |————P2——————|   |—–P3——

The whole set is thereby divided into three equivalence partitions: P1 (-∞, 0], P2 [1, 10] and P3 [11, ∞), where P1 and P3 are invalid partitions and P2 is the valid partition.

P1 and P3 => Invalid partitions

P2        => Valid partition

In the test cases, the tester will use random data from each of the equivalence classes; data from the same equivalence class should show the same behavior for application X.
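To make this concrete, here is a minimal sketch in Python; the function accepts_value and the specific representatives chosen are my own illustration of application X, not code from any real system:

```python
def accepts_value(n):
    """Stand-in for application X: accepts integers from 1 to 10 inclusive."""
    return 1 <= n <= 10

def test_equivalence_classes():
    # One representative per partition is enough, since every value in a
    # class should be treated the same by the module under test.
    assert accepts_value(-5) is False   # P1 (-inf, 0]: invalid
    assert accepts_value(5) is True     # P2 [1, 10]:   valid
    assert accepts_value(42) is False   # P3 [11, inf): invalid

test_equivalence_classes()
```

Three representative values stand in for the whole infinite input domain, which is exactly the saving equivalence partitioning buys.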

3. Boundary value analysis

It states that once equivalence classes have been derived from which we are going to pick test data for making test cases, we should pick data on the boundaries, because these are the points where a programmer is most likely to make mistakes.

Bugs lurk in corners and congregate at boundaries – Boris Beizer

For the equivalence classes we got above, our test data should be on the boundaries, i.e. the test data after performing boundary value analysis on the equivalence partitions will be {0, 1, 10, 11}.
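Sketched in Python against the same hypothetical application X (accepts_value is my own stand-in, not real code), the boundary set {0, 1, 10, 11} becomes:

```python
def accepts_value(n):
    """Stand-in for application X: accepts integers from 1 to 10 inclusive."""
    return 1 <= n <= 10

def test_boundaries():
    # The last invalid and first valid value at each edge of the valid
    # partition P2 [1, 10], exactly where off-by-one mistakes hide.
    assert accepts_value(0) is False    # just below the lower boundary
    assert accepts_value(1) is True     # lower boundary
    assert accepts_value(10) is True    # upper boundary
    assert accepts_value(11) is False   # just above the upper boundary

test_boundaries()
```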


Implement a battery of unit tests for SSSD

Introduction

The idea of implementing a battery of unit tests for SSSD (System Security Services Daemon) using the cmocka unit test framework was proposed after a thorough discussion with the SSSD upstream maintainer jhrozek (Jakub Hrozek, #sssd). It is not just about writing better automated test code but a total refinement of the SSSD unit tests using the cmocka unit testing framework, in a way that reduces the complexity of the unit testing code, makes it efficient, and provides a good mocking framework so other developers can test better. Following are the details of the project and the proposed plan of action.

Abstract

Implementing unit tests for SSSD modules using the cmocka unit testing framework, with proper refactoring, minimal boilerplate and better test coverage. The tests would cover new features but would mostly focus on creating tests for the core SSSD features, providing developers with better confidence when writing new code.

Benefits to Fedora community

  • Contributing a set of unit tests to SSSD would greatly improve its long-term stability and would help raise confidence when pushing new SSSD versions into Fedora or other distributions.
  • Making the SSSD tests less complicated and mock-based would certainly result in an improved testing mechanism and better error handling in SSSD.
  • Improvement in test coverage will result in improvement of the code quality of SSSD.
  • Writing unit tests will build deeper confidence in the correct behaviour of the SSSD code and eventually result in easier resolution of many of the issues related to SSSD.

Project Details

The aim of the project is not just quality assurance of SSSD but to provide a proper implementation of a unit testing framework rather than just a proof-of-concept; it has far greater goals. SSSD is an important part of the authentication picture for Fedora and other Linux distributions. Unfortunately, the current version of SSSD lacks a proper unit testing framework for exercising the code paths that are only reachable when SSSD is connected to the network. This project is about writing new tests based on the cmocka framework and completely refining the old SSSD tests written using the check framework. The idea here is to dig deeper into testing to provide and maintain long-term robustness and quality of SSSD. It is also important that the new cmocka-based tests be less complex and more efficient, with more automated behavior and minimal or no boilerplate code. They should also follow the coding style set by the SSSD coding guidelines.

The other important feature of the framework should be that it should be sustainable long-term in order to support further SSSD improvements. In other words, the tests must be easy to modify when the core SSSD code changes to minimize the time needed to fix the unit tests after architectural changes are performed to the SSSD. This feature would allow the SSSD developers to be more confident of refactoring changes in the daemon itself.
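As a rough illustration of what mock-based testing buys, sketched here in Python with unittest.mock for brevity (cmocka plays the analogous role for SSSD's C code; the UserCache class and its backend are hypothetical, not SSSD code), a mocked backend lets a test exercise an offline code path without any network at all:

```python
from unittest import mock

class UserCache:
    """Hypothetical lookup service: asks a network backend, falls back to a cache."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def get_user(self, name):
        try:
            info = self.backend.lookup(name)   # normally requires the network
            self.cache[name] = info
            return info
        except ConnectionError:
            return self.cache.get(name)        # offline: serve the cached entry

def test_offline_lookup_served_from_cache():
    backend = mock.Mock()
    # First call succeeds; the second simulates the network going away.
    backend.lookup.side_effect = [{"uid": 1000}, ConnectionError()]
    uc = UserCache(backend)
    assert uc.get_user("alice") == {"uid": 1000}   # online lookup
    assert uc.get_user("alice") == {"uid": 1000}   # offline, served from cache

test_offline_lookup_served_from_cache()
```

The test never touches a real server, which is precisely the property the proposal wants for the network-only paths in SSSD.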

Tools Required During Development

  • the Talloc and Tevent libraries
  • Cmocka library
  • Coverage tool : lcov
  • Vim (IDE)

 

The outline of my work plans

The initial stage of my work deals with becoming familiar with SSSD and learning the concepts of the cmocka unit-testing framework, as mentioned in the plan.

The general idea for the unit tests is to cover the two most important parts:

  • retrieving user information
  • authenticating users.

The following diagram (Sssdsoc.png) gives a pictorial representation of the core components of SSSD and how they interact.
Basically the whole project is divided into two phases, which mimic how the SSSD itself is structured:

  • Phase I : building provider tests
  • Phase II: building responder tests

Because of the large size of the SSSD project, the unit testing framework would focus on the core SSSD features that are enabled in most, if not all, SSSD deployments. In particular, the unit tests would cover only the NSS and PAM responders, while the back end (provider) tests would cover the LDAP users and group code.

Time-line for Milestones

The project is planned to be split across the following weekly phases:

[Project Week 1]

Learning the tevent library and the asynchronous model

[Project Week 2]

Learning the tevent library and the async model. Might include some experimenting and reading the code.

[Project Week 3]

Reading the current NSS responder test and augmenting the “get user by name/ID” tests

[Project Week 4]

Adding a similar test for retrieving groups as added for users in the previous step.

[Project Week 5]

Adding another test for the initgroups operation.

[Project Week 6]

Studying the PAM responder code.

[Project Week 7]

Adding a unit test that would cover the PAM responder. Only login (PAM authentication phase) can be covered.

[Project Week 8]

Learning the backend and provider internals. The current DNS update tests might be a good start.

[Project Week 9]

Creating unit tests for retrieving LDAP users. These tests would not be big by themselves, but would include code to be reused by other LDAP tests later.

[Project Week 10]

Creating unit tests for storing LDAP groups without nesting (RFC2307)

[Project Week 11]

Creating unit tests for storing LDAP groups with nesting (RFC2307bis)

[Project Week 12]

An extra week to polish the work before submission

Deliverables

Better and improved test code for SSSD, with the following features:

  • Tests covering the NSS and PAM responders
  • Contributions to the overall code quality through issues uncovered by the unit tests
  • A less complex test infrastructure
  • A more efficient testing mechanism

Pytest: An Automation Testing Tool

Py.test is an alternative to the unittest module which provides a more Pythonic way of writing our tests. The overhead for creating unit tests is reduced almost to zero!

Jobs of automated testing tools:

  • It verifies that code changes work.
  • It provides help when a test fails, giving the necessary details of where and why the test failed.
  • It makes writing tests easy and fun.

Some fundamental features of Pytest are:

  • It’s a cross-project testing tool.
  • It provides useful information when tests fail.
  • No boilerplate (repetitive) test code.
  • Deep extensibility.
  • It can distribute tests to multiple hosts.

Installing Pytest

$ pip install -U pytest       # or
$ easy_install -U pytest

Writing tests using pytest

In the previous post I showed how unittest is used for testing, taking a simple example of a calculator application. I am using the same example here to guide you through the basics of using pytest. Following is the code for the calculator application.

# calculator.py

class Calculator:
    def add(self, x, y):
        return x + y

    def sub(self, x, y):
        return x - y

    def mul(self, x, y):
        return x * y

    def div(self, x, y):
        assert y != 0
        return x / y

The pytest code for this application is :


  # test_pytest_calculator.py

  class TestCalculator:
      def test_add(self, setup):
          cal = setup
          res = cal.add(10, 2)
          assert 12 == res

      def test_sub(self, setup):
          cal = setup
          res = cal.sub(7, 4)
          assert 3 == res

      def test_mul(self, setup):
          cal = setup
          res = cal.mul(5, 25)
          assert 125 == res

      def test_div(self, setup):
          cal = setup
          res = cal.div(20, 4)
          assert 5 == res

# conftest.py

import pytest
from calculator import Calculator

@pytest.fixture
def setup():
    return Calculator()

Now go to the command prompt in the terminal and type the following command to run the test_pytest_calculator module:

$ py.test test_pytest_calculator.py

For more verbose output, use the -v option:

$ py.test -v test_pytest_calculator.py

I am stopping here as I am running a bit short of time now. I will be updating this post soon with more topics showing skipping, expected-to-fail tests, marking a test, and using some advanced plugins like Pep8, Pyflakes and codecheckers.
Happy testing! Cheers 🙂

Welcome back!

Here is how py.test’s assert introspection maps onto unittest’s assertion methods:


def test_assert_introspection():
    '''plain asserts with their unittest equivalents'''
    assert x        # assertTrue(x)
    assert x == 1   # assertEqual(x, 1)
    assert x != 2   # assertNotEqual(x, 2)
    assert not x    # assertFalse(x)

Marking Test functions/methods

py.test.mark.skipif(expression)    # for skipping tests
py.test.mark.xfail(expression)     # for expected-to-fail tests
py.test.mark.Name                  # use your own custom marker

Here is an example showing skipping test methods and xfail test methods:


# test_pytest_calculator.py
import py

class TestCalculator:
    def test_add(self, setup):
        cal = setup
        res = cal.add(10, 2)
        assert 12 == res

    @py.test.mark.skipif('True')
    def test_sub(self, setup):
        cal = setup
        res = cal.sub(7, 4)
        assert 3 == res

    @py.test.mark.xfail
    def test_mul(self, setup):
        cal = setup
        res = cal.mul(5, 25)
        assert 150 == res

    def test_div(self, setup):
        cal = setup
        res = cal.div(20, 4)
        assert 5 == res

Testing a small project

Let’s do the testing of a small scanner application which reads a file, looks for the URL(s) present in the file, and stores them in a separate list.
Following is the code for the scanner application.


import urllib

class Scanner:
    def __init__(self, config):
        self.config = config

    def extract_urls(self, path):
        urls = []
        for line in path.readlines():
            line = line.strip()
            for urlprefix in self.config.urlprefixes:
                if line.startswith(urlprefix):
                    urls.append(line)
        return urls

and the py.test module for this application is :


import py
from myscan.scanner import Scanner

class config:
    pass

def test_extract_url(tmpdir):
    path = tmpdir.join('foo.ini')
    path.write("Testing Scanner\nhttp://pytest.org\nhttps://google.com\n")
    print path.read()
    con = config()
    con.urlprefixes = ['http://', 'https://']
    Scan = Scanner(con)
    urls = Scan.extract_urls(path)
    assert len(urls) == 2
    assert urls == ['http://pytest.org', 'https://google.com']

Some Advanced Plugins

py.test provides some important plugins; some of the most widely used are:

  • figleaf: checks the test code coverage.
  • codecheckers (pyflakes, pep8): checks standardization of code, indentation, spacing etc., and reports which modules were imported but not used, to make the code better.

Installing figleaf plugin

$ easy_install pytest-figleaf                       # or

$ pip install pytest-figleaf

Using figleaf plugin

$ py.test --figleaf test_module.py

Installing codecheckers plugin

$ easy_install pytest-codecheckers         # or

$ pip install pytest-codecheckers

Now let’s wind up this basic tutorial. Hope you all have enjoyed it and will find testing interesting and fun. Happy testing! Cheers 🙂

Unittest – Unit Testing Framework (Python)

Unit testing: It refers to the kind of testing where the tester tests one small software module at a time, in an isolated fashion.

Unittest supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. The module provides classes that make it easy to support these qualities for a set of tests.

Test fixture: A test fixture sets up a well-known and fixed environment in which tests are run, so that they produce a particular, expected outcome.

A test is generally done in four phases:

  • Set up – set up the test fixture.
  • Exercise – interact with the system under test.
  • Verify – determine whether the expected outcome has been obtained.
  • Tear down – clean up the test fixture so that it returns to its original state.
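In unittest terms the four phases map onto a test case like this (a minimal sketch; the Counter class is just a toy system under test of my own invention):

```python
import unittest

class Counter:
    """Toy system under test."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class TestCounter(unittest.TestCase):
    def setUp(self):
        # 1. Set up: create the test fixture.
        self.counter = Counter()

    def test_increment(self):
        # 2. Exercise: interact with the system under test.
        self.counter.increment()
        # 3. Verify: check the expected outcome.
        self.assertEqual(1, self.counter.value)

    def tearDown(self):
        # 4. Tear down: release the fixture.
        self.counter = None
```

Saved as, say, test_counter.py (a hypothetical filename), it can be run with python -m unittest test_counter.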

Note: Unit testing is a kind of white box testing.

Here I am giving a brief introduction to unit testing in Python, so that those who want to get started with software testing will find it interesting and easier.

I have taken a simple example of a calculator application and will try to test it through the unit testing framework provided by the unittest module in Python.

# calculator.py

class Calculator:
    def add(self, x, y):
        return x + y

    def sub(self, x, y):
        return x - y

    def mul(self, x, y):
        return x * y

    def div(self, x, y):
        assert y != 0
        return x / y

The following is the unittest code for testing this calculator application.


# test_calculator.py

import unittest
from calculator import Calculator

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.cal = Calculator()

    def test_add(self):
        res = self.cal.add(10, 2)
        self.assertEqual(12, res)

    def test_sub(self):
        res = self.cal.sub(7, 4)
        self.assertEqual(3, res)

    def test_mul(self):
        res = self.cal.mul(5, 25)
        self.assertEqual(125, res)

    def test_div(self):
        res = self.cal.div(20, 4)
        self.assertEqual(5, res)

if __name__ == '__main__':
    unittest.main()

In the unittest framework we have to create a subclass of unittest.TestCase, as I have created in the above code; here TestCalculator is the subclass of the unittest.TestCase class. The important thing to note is that the name of the test case class starts with the word Test (i.e. it follows the pattern ‘Test*’) and the test method names follow the pattern ‘test*’. The methods test_add, test_sub, test_mul and test_div are used for testing the add(), sub(), mul() and div() methods of the Calculator class. The setUp() method is used to create the test fixture for the tests.

Skipping tests and Expected failures:

unittest provides decorators for skipping test methods as well as whole test classes, and for marking a test as an expected failure when we know that the test is going to fail. Here is an example showing test skipping and expected failures.

# test_calculator.py

import unittest
from calculator import Calculator

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.cal = Calculator()

    def test_add(self):
        res = self.cal.add(10, 2)
        self.assertEqual(12, res)

    @unittest.skip("Demonstrating method skipping")
    def test_sub(self):
        res = self.cal.sub(7, 4)
        self.assertEqual(3, res)

    @unittest.skipIf(2 > 0, "Demonstrating method skipping using skipIf")
    def test_mul(self):
        res = self.cal.mul(5, 25)
        self.assertEqual(125, res)

    @unittest.expectedFailure
    def test_div(self):
        res = self.cal.div(20, 4)
        self.assertEqual(5, res)

if __name__ == '__main__':
    unittest.main()