Learn IT Girl

It's quite exciting that I have been selected as a mentor in the Learn IT Girl program. I will be mentoring a student from Romania in learning the Python language and eventually guiding her in creating an open source application.

“Learn IT, Girl!” is an international mentorship program, which means that every woman accepted as a scholar will be guided by an international mentor.

Aim: To help women learn a programming language while doing an awesome project!

Any woman, irrespective of her age, can apply to become a scholar, as long as she wants to learn a programming language. The mentors will help the scholars learn by trying things out; they will give them resources, answer their questions, and help them understand how coding works. To make it a bit more fun, the mentors and the scholars will be from different countries.

It's a three-month program (17th Nov, 2014 – 8th Feb, 2015) during which the scholars work on their projects in collaboration with their mentors. The projects will be hosted on GitHub, so the scholars will also learn how to handle open source projects. In time, if their programs become popular, more people will offer to help with their projects, so the scholars will have a chance to become team leaders as well.

Installing KDE Plasma 5.1 on Ubuntu 14.10

I just installed KDE Plasma 5.1 on my Ubuntu 14.10 system and find it pretty cool and impressive. KDE Plasma 5.1 is available for Ubuntu 14.04 Trusty Tahr and Linux Mint 17 KDE via the Neon PPA, and for Kubuntu 14.10 via the Kubuntu Next PPA.

Follow these steps to install KDE Plasma 5.1 on Ubuntu 14.10:

sudo apt-add-repository ppa:kubuntu-ppa/next
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install kubuntu-plasma5-desktop plasma-workspace-wallpapers

For uninstalling:

sudo apt-get install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/next


V-ERAS Call for Volunteers is open

Being part of the Italian Mars Society as a summer intern for a GSoC 2014 project has been an awesome experience for me. We Mars fellas had an awesome time developing some cool virtual reality applications that will help mankind in its Mars exploration mission. Now the time has come for the first demonstration: the initial V-ERAS rotation.

We are pleased to announce that volunteer positions are now open for participating as crew members at the first V-ERAS rotation scheduled to run from December 7 till December 14, 2014. Volunteers are requested to send their applications to: v-eras@marssociety.it by September 30, 2014 in order to be considered.

The Italian Mars Society is seeking four volunteers to participate as members of the crew of the Virtual European MaRs Analogue Station (V-ERAS) during a simulation of human Mars exploration operations. The mission will take place in Madonna di Campiglio, Trento, Italy from December 7, 2014 through December 14, 2014. As currently planned, the crew will consist of four individuals chosen primarily for their skills as scientists and technologists in areas including psychology, physiology, medicine, mission operations, human factors and habitability. Considering that this is the startup phase of the project and the need to interact closely with the IMS engineering team, proficiency in areas such as computer science, software engineering and physics simulation engines will be considered a strong asset. Applicants should have either a four-year college degree or equivalent experience.

Crew members will only be required to pay for their own transportation to/from Trento, Italy. Living expenses will be covered by IMS and by the Dolomites Astronomical Observatory – Carlo Magno Hotel, which is sponsoring the initiative.

Volunteers should send their applications to v-eras@marssociety.it by September 30, 2014 in order to be considered. Both volunteer investigators who bring a proposed program of research of their own compatible with the objectives of the Mission Science Agenda (see below), and those simply wishing to participate as crew members supporting the ongoing investigations, will be considered. Both individual applications and group applications of up to an entire crew (4 people) are welcome.

Applications to V-ERAS should include:

• Your full name
• Full contact information (home/work address, telephone numbers, email address(es))
• A copy of your resume
• The crew position (engineer, etc.) that you are seeking
• Experience in leading teams if you wish to be considered for the position of crew commander


• Research project(s) for your rotation. Proposing a research project well in line with the V-ERAS Science Agenda will be a strong advantage.

For 7 days, these four crew members will conduct a sustained program of immersive virtual reality simulation (see Mission Science Agenda). Each accepted crewmember will be required to read and sign the V-ERAS crew application documents, including a Waiver, Release and Indemnity.

For further information about the ERAS project, please visit our website at: http://www.erasproject.org.
For enquiries about this call please contact: v-eras@marssociety.it

Mission Science Agenda
The European MaRs Analogue Station for Advanced Technologies Integration (ERAS) is a program spearheaded by the Italian Mars Society (IMS), whose main goal is to provide an effective test bed for field operation studies in preparation for human missions to Mars. Prior to its construction, IMS has started the development of an immersive Virtual Reality (VR) simulation of the ERAS station (V-ERAS). The major advantage of such virtualization is that it will be possible to undertake training sessions with a crew that can interact with its future environment before the actual station is built. This way, a more effective design of the station and associated missions, and strong cost reductions, can be achieved. The main objective of this activity will be the simulation and validation of the data obtained during the training sessions so that they can be used for the design of the station itself. Many ergonomics and human factors aspects will be considered in the virtual model in order to be verified and validated before the actual ERAS habitat construction.

The initial V-ERAS setup will be based on the following key elements:
• ERAS Station simulation using an appropriate game engine supporting a virtual reality headset
• Full body and hand gesture tracking
• Integration of an omnidirectional treadmill
• Support for crewmembers’ health monitoring
• Multiplayer Support

The following figure is a depiction of the V-ERAS classroom setup with the four V-ERAS stations, as currently conceived.

Examples of science activities for the first V-ERAS Crew rotation include (but are not limited to):
• Habitat Design Review: For this first rotation, the ERAS habitat design will still be at a prototyping stage. We are very interested in the expertise in space habitat design that the selected crew can bring on board for the definition of a more refined design. We are convinced that VR technology is the most appropriate for an effective transfer of such expertise into the design.
• Reduced Gravity Simulation: We intend to explore the possibility of simulating the reduced-gravity Martian environment via extensions of the omnidirectional treadmill currently being developed by IMS.
• Crew Health Monitoring: Considering its critical importance, we are embedding crew health monitoring in V-ERAS from its inception. Continuous monitoring of crewmembers' health is key to anticipating any issues and developing appropriate countermeasures. The availability of crew health monitoring will be instrumental in the V-ERAS crew rotations for establishing the knowledge, methods, and standards for the design of an integrated, autonomous crew health management system. For this first rotation we will be using a full set of biometric devices. In this case too, we are very interested in expertise in the medical field for the best use of those capabilities.

• Physics Engine: We intend to validate and extend the use of the physics engine within the V-ERAS simulations.

• Scientific missions planning / Crew training: In terms of scientific mission planning and crew training, we intend to focus on specific scenarios centered on human-robot collaboration. In this context we intend to profit from the parallel development of advanced robotics ongoing in ERAS. In particular, simulated Extravehicular Activity (EVA) missions will be attempted, with crewmembers and real and simulated robots operating on the Martian surface, supported by advanced Human Machine Interfaces (HMI) such as voice commanding.

• Outreach: V-ERAS will be unique among Mars analogues in that it will be highly accessible to the public, thereby increasing outreach effectiveness. During the first rotation, outreach/educational events will be held, such as conferences dedicated to Mars exploration. Everything will be organized to limit the impact on scientific activities, but crewmembers should be ready to dedicate a small part of their time to outreach activities such as interaction with visitors or interviews. The Waiver, Release and Indemnity document to be signed will also specifically cover those outreach activities.

Main call page: You can find the complete call attached or available at this web site.

Full body and hand gesture tracking

Abstract

Integration of whole body motion and hand gesture tracking of astronauts into the ERAS (European MaRs Analogue Station for Advanced Technologies Integration) virtual station. Skeleton-tracking-based feature extraction methods will be used to track whole body movements and hand gestures, which will have a visible representation in terms of the astronaut's avatar moving in the virtual ERAS station environment.

Benefits to ERAS

“By failing to prepare, you are preparing to fail.” ― Benjamin Franklin

It will help astronauts get familiar with their habitat/station, the procedures to enter and leave it, the communication with other astronauts and rovers, etc. Preparing themselves with beforehand training in the virtual environment will boost their confidence and reduce the chances of failure to a great extent, ultimately increasing the success rate of the mission.

Project Details

INTRODUCTION

The idea of integrating a full body and hand gesture tracking mechanism was proposed after a thorough discussion with the ERAS community. The proposed method uses a 3D skeleton tracking technique with a depth camera, the Kinect sensor (Kinect for Xbox 360 in this case), to capture and reconstruct approximate human poses and display a 3D skeleton in the virtual scene using OpenNI, NITE PrimeSense and the Blender game engine. The proposed technique will perform bone joint movement detection in real time with correct position tracking, and display a 3D skeleton in a virtual environment with the ability to control a 3D character's movements. The idea here is to dig deeper into skeleton tracking features to track whole body movements and capture hand gestures. The software should also maintain long-term robustness and tracking quality. It is also important that the code be simple and efficient, with largely automated behavior and minimal or no boilerplate, and that it follow the standard coding style set by the IMS (Italian Mars Society) coding guidelines.

The other important requirement is that the tracker software be sustainable in the long term, in order to support future improvements. In other words, the code and tests must be easy to modify when the core tracker code changes, to minimize the time needed to fix them after architectural changes are made to the tracker software. This would allow the developers to be more confident about refactoring the software itself. Following are the details of the project and the proposed plan of action.

REQUIREMENTS DURING DEVELOPMENT

Hardware Requirements 
  • Kinect sensor (Kinect for Xbox 360)
  • A modern PC/Laptop
Software Requirements 
  • OpenNI/NITE library
  • Blender game engine
  • Tango server
  • Python 2.7.x
  • Python Unit-testing framework
  • Coverage
  • Pep8
  • Pyflakes
  • Vim (IDE)

THE OUTLINE OF THE WORK PLAN

Skeleton tracking will be done using the Kinect sensor and the OpenNI/NITE framework. The Kinect sensor generates a depth map in real time, where each pixel corresponds to an estimate of the distance between the Kinect sensor and the closest object in the scene at that pixel's location. Based on this map, an application will be developed to accurately track different parts of the human body in three dimensions.

OpenNI allows applications to be used independently of the specific middleware, and therefore allows further development of code that interfaces directly with OpenNI while using the functionality of the NITE PrimeSense middleware. The main purpose of the NITE PrimeSense middleware is image processing, which allows for both hand-point tracking and skeleton tracking. The whole skeleton can be tracked using this technique; however, the main focus of the project will be on developing a framework for full body motion and hand gesture tracking which can later be integrated with the ERAS virtual station. The following flow chart gives a pictorial view of the working steps.

[Figure SkeletonTracking1.png: flow chart of the skeleton tracking working steps]

Basically, the whole work is divided into three phases:

  • Phase I  : Skeleton Tracking
  • Phase II  : Integrating tracker with Tango server and prototype development of a glue object
  • Phase III : Displaying 3D Skeleton in 3D virtual scene

Phase I : Skeleton Tracking
This phase covers the tracking of full body movements and hand gesture capture. RGB and depth stream data are taken from the Kinect sensor and passed to the PSDK (Prime Sensor Development Kit) for skeleton calibration.

Skeleton Calibration: calibration is done so that NITE can adapt the skeleton model to the user's body before tracking begins.

Skeleton calibration can be done:

  • Manually, or
  • Automatically

Manual Calibration:

For manual calibration, the user is required to stand in front of the Kinect with the whole body visible and hold both hands in the air (the 'psi' pose) for a few seconds. This process might take 10 seconds or more depending upon the position of the Kinect sensor.

Automatic Calibration:

It enables NITE to start tracking a user without requiring a calibration pose, and helps to create the skeleton shortly after the user enters the scene. Although the skeleton appears immediately, auto-calibration takes several seconds to settle on accurate measurements. Initially the skeleton might be noisy and less accurate, but once auto-calibration determines stable measurements, the skeleton output becomes smooth and accurate.

After analyzing the cons and limitations of both methods, the proposed application will give the user the option to choose between the two calibration methods. Considering that a user can go out of view only if the training session is interrupted, we will ask the user (who will always occupy the same VR station) to do a manual calibration at the beginning of the week; an automatic recalibration can then happen every time a simulation restarts in the same training rotation.

Skeleton Tracking: Once calibration is done, OpenNI/NITE starts the algorithm for tracking the user's skeleton. If the person goes out of the frame but comes back quickly, the tracking continues. However, if the person stays out of the frame for too long, the Kinect recognizes that person as a new user once she/he comes back, and the calibration needs to be done again. One advantage we get here is that the Kinect doesn't need to see the whole body if the tracking is configured as upper-body only.
Output: the NITE APIs will return the positions and orientations of the skeleton joints.
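
As a rough sketch of this phase, here is the auto-calibration path in Python, assuming the community PyOpenNI bindings (names such as UserGenerator, skeleton_cap, request_calibration and SKEL_HEAD follow those bindings and should be checked against the installed version):

from openni import *

ctx = Context()
ctx.init()

user = UserGenerator()
user.create(ctx)
skel_cap = user.skeleton_cap
skel_cap.set_profile(SKEL_PROFILE_ALL)      # track the full skeleton

def new_user(src, uid):
    # Auto-calibration: request calibration as soon as a user appears.
    skel_cap.request_calibration(uid, True)

def lost_user(src, uid):
    pass                                    # NITE re-detects the user on return

def calibration_complete(src, uid, status):
    if status == CALIBRATION_STATUS_OK:
        skel_cap.start_tracking(uid)

user.register_user_cb(new_user, lost_user)
skel_cap.register_c_complete_cb(calibration_complete)

ctx.start_generating_all()
while True:
    ctx.wait_and_update_all()               # block until a new frame arrives
    for uid in user.users:
        if skel_cap.is_tracking(uid):
            head = skel_cap.get_joint_position(uid, SKEL_HEAD)
            print uid, head.point, head.confidence   # position in mm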

Phase II : Integrating tracker with Tango server and prototype development of a glue object
Since everything we are doing here must be ready to support multiplayer operation (a crew of 4-6 astronauts), there will be 4-6 Kinect sensors and 4-6 computers, each supporting a virtual station. The application must be able to populate each astronaut's environment with the avatars of all crew members. The idea is that Tango will pass around the skeleton data of all crew members for cross-visualization: skeleton data obtained from each instance of the tracker will be published to the Tango server as Tango attributes. A prototype will be developed for changing the reference frame of the tracked data from the NITE framework to the Blender reference frame and sending it to the Blender framework for further processing. It is called a glue object since it acts as an interface between the NITE and Blender frameworks. Since Blender has Python bindings, this glue object will be written in Python.
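
A minimal sketch of such a glue object, assuming NITE reports joint positions in millimetres with y up and z pointing away from the sensor, while Blender uses metres with z up; the Tango device name and attribute below are hypothetical placeholders for the real ERAS setup:

import PyTango

def nite_to_blender(point):
    # Assumed NITE frame: x right, y up, z away from the sensor, in mm.
    # Blender frame: x right, y into the scene, z up, in metres.
    x, y, z = point
    return (x / 1000.0, z / 1000.0, y / 1000.0)

# Example joint positions as the tracker might report them (head at ~2 m).
joints_nite = [(120.0, 450.0, 2000.0), (118.0, 250.0, 2010.0)]
joints_blender = [nite_to_blender(p) for p in joints_nite]

# Publish the converted joints so every V-ERAS station can render all avatars.
tracker = PyTango.DeviceProxy("c3/mac/eras-1")            # hypothetical device
flat = [coord for joint in joints_blender for coord in joint]
tracker.write_attribute("skeleton_joints", flat)          # hypothetical attribute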

[Figure GlueObject1.png: the glue object as the interface between the NITE tracker, Tango and Blender]

Phase III : Displaying 3D Skeleton in 3D virtual scene
In this step, all skeleton data is obtained from the glue object and transferred to the Blender framework, where the 3D skeleton is displayed in the 3D virtual scene driven by the Blender game engine. Basically, it provides a simulation of the user in the virtual environment. The idea here is that the 3D skeleton inside the virtual environment will mimic the same gestures/behavior performed by the user in the real world.
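
A minimal sketch of the Blender side, assuming the avatar is an armature object in the running game engine scene and that the glue object has already placed per-bone rotations into logic.globalDict (object and bone names are placeholders):

from bge import logic

def update_avatar():
    # Called every frame, e.g. from an Always sensor wired to a Python
    # controller that references this module function.
    scene = logic.getCurrentScene()
    armature = scene.objects["AstronautArmature"]   # placeholder object name

    # Per-bone rotations filled in by the glue object,
    # e.g. {"neck": (rx, ry, rz)} as Euler angles.
    joint_rotations = logic.globalDict.get("joint_rotations", {})

    for channel in armature.channels:               # one channel per bone
        if channel.name in joint_rotations:
            channel.joint_rotation = joint_rotations[channel.name]

    armature.update()                               # re-pose the armature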

Deliverables

An application that tracks full body movements and hand gestures for effective control of the astronaut avatar's movement, with the following features:

  • The application will detect movement and display the user's skeleton in the 3D virtual environment in real time, with the positions of the joints presented accurately
  • It can detect many users' movements simultaneously
  • Bones and joints can be displayed in the 3D model in different colors, with the name of the user on top of the head joint
  • It can display the RGB and depth video during the user's movement
  • Users can interact with the 3D virtual scene through rotation and zoom functions, and can also see the avatar from a variety of perspectives
  • It can display the 3D virtual environment in a variety of formats (3DS and OBJ); the virtual environment can also be adjusted without interrupting the motion tracking
  • Proper automated test support for the application, with automated unit tests for each module
  • Proper documentation of the work for developers and users

To view the more detailed proposal, check out this link – https://wiki.mozilla.org/Abhishek/IMS_Gsoc2014Proposal

Selected for GSoC (Google Summer of Code) 2014

I guess I am quite late to post this good news, as I was busy with my academics and final year project. I have been selected for the GSoC (Google Summer of Code) 2014 program as a student intern. Google Summer of Code is a global program that offers students stipends to write code for open source projects. Google has worked with the open source community to identify and fund exciting projects for the summer.

This summer I will be working with IMS (Italian Mars Society) on their V-ERAS (Virtual European MaRs Analogue Station for Advanced Technologies Integration) project. My work involves the integration of whole body motion and hand gesture tracking of astronauts into the ERAS (European MaRs Analogue Station for Advanced Technologies Integration) virtual station. Skeleton-tracking-based feature extraction methods will be used to track whole body movements and hand gestures, which will have a visible representation in terms of the astronaut's avatar moving in the virtual ERAS station environment.

To know more about Google Summer of Code, check this link: http://www.google-melange.com/gsoc/homepage/google/gsoc2014

References:

[1] https://www.google-melange.com/gsoc/project/details/google/gsoc2014/abhisheksingh/5733935958982656

[2] https://wiki.mozilla.org/Abhishek/IMS_Gsoc2014Proposal

Detailed study of changes in the APIs of the Microsoft Kinect SDK from Beta 2 to v1.0 (C#/VB)

Since my final year project involves the use of the Kinect sensor and the Microsoft Kinect SDK for development, I went through a lot of tutorials on using the Kinect sensor with the Kinect SDK. But most of the sources on the net give details about the research (beta) version of the Kinect SDK DLLs, which is outdated now, as the SDK has gone through a lot of changes. So, I thought of writing this post to shed some light on the changes from the beta version to the new version of the Kinect SDK.

In the Beta 1 (June 2011) release of the Kinect for Windows SDK, the C#/VB-accessible APIs were provided by Microsoft.Research.Kinect.dll. The public APIs were all organized into two namespaces: Microsoft.Research.Kinect.Nui and Microsoft.Research.Kinect.Audio.

The Microsoft team decided back in early 2011 to mark the beta releases of the APIs as "Research" to indicate that these were early versions of the APIs, created in concert with Microsoft Research.

On February 1, 2012, they released the final v1.0 SDK. In that release, the DLL is called Microsoft.Kinect.dll, and all of the public APIs are in the Microsoft.Kinect namespace.

Microsoft.Research.Kinect.dll, which had the two namespaces Microsoft.Research.Kinect.Nui and Microsoft.Research.Kinect.Audio, was replaced by Microsoft.Kinect.dll with only one namespace, Microsoft.Kinect. So now Microsoft.Kinect alone provides the functionality that was previously split between Microsoft.Research.Kinect.Nui and Microsoft.Research.Kinect.Audio.

In the v1.0 SDK and later versions, the Camera class is removed, and the Runtime object is replaced by the KinectSensor object.

The following list gives more info on the changes to the KinectSensor and related APIs; "(new)" marks members added in v1.0, and "(removed)" marks members with no v1.0 counterpart:

Runtime is now KinectSensor:
  • Kinects → KinectSensors
  • Initialize → Start
  • Uninitialize → Stop
  • (new) Dispose
  • VideoStream → ColorStream
  • VideoFrameReady → ColorFrameReady
  • SkeletonEngine → SkeletonStream
  • (new) AllFramesReady
  • InstanceIndex → (removed)
  • InstanceName → DeviceConnectionId
  • (new) UniqueKinectId
  • NuiCamera → (removed; see Camera below)
  • ElevationMaximum → MaxElevationAngle
  • ElevationMinimum → MinElevationAngle
  • (new) MapDepthFrameToColorFrame
  • (new) MapDepthToColorImagePoint
  • (new) MapDepthToSkeletonPoint
  • (new) MapSkeletonPointToDepth
  • (new) MapSkeletonPointToColor
  • RuntimeOptions → (removed)

KinectDeviceCollection is now KinectSensorCollection:
  • (new) Dispose

StatusChangedEventArgs:
  • KinectRuntime → Sensor

Camera (removed; its members moved to KinectSensor):
  • GetColorPixelCoordinatesFromDepthPixel → MapToColorPixel
  • ElevationMaximum → MaxElevationAngle
  • ElevationMinimum → MinElevationAngle
  • ElevationAngle → moved to KinectSensor

KinectStatus (new values):
  • Undefined
  • Initializing
  • DeviceNotGenuine
  • DeviceNotSupported
  • InsufficientBandwidth

Installing/Recovering a lost GRUB

I found a lot of people having trouble installing GRUB or recovering a lost GRUB. Generally, when we install Windows on a system that already has a Linux operating system, the Windows boot loader overwrites GRUB, and thus when users boot the system they don't get the option to select their OS at boot time. This problem can be overcome by reinstalling GRUB. GRUB can be installed either by using a Windows recovery disk or by using a Linux live disk. A lot of material is available on the internet about recovering GRUB through a Windows recovery disk. In this post I will show you how to install GRUB from a Linux live disk.

Step 1: Boot your Linux live disk and choose the "try" option. You don't need to install it.

Step 2: Use the following command to see all your disk partitions:

$ sudo fdisk -l

Search for the Linux root partition (say /dev/sdaX) in the output of the above command.

For Debian users:

$ sudo mount /dev/sdaX /mnt

$ sudo grub-install --boot-directory=/mnt/boot /dev/sda

$ sudo update-grub

For Fedora users:

Open a terminal and mount the root partition of the Linux system (already present on your disk) to some location (say /mnt). Assuming /dev/sda6 is that partition:

$ sudo mount /dev/sda6 /mnt

The grub2 packages contain commands for installing a bootloader and for creating a bootloader configuration file. grub2-install will install the bootloader – usually in the MBR, in free unpartitioned space, and as files in /boot.

$ sudo grub2-install --boot-directory=/mnt/boot /dev/sda

grub2-mkconfig will create a new configuration based on the currently running system, what is found in /boot, what is set in /etc/default/grub, and the customizable scripts in /etc/grub.d/. A new configuration file is created with:

$ sudo grub2-mkconfig -o /mnt/boot/grub2/grub.cfg

And voila! You are all done. Now restart your computer to see the magic. 🙂 🙂

A Campaign against online shopping frauds

Rightly said: "Justice delayed is justice denied". I am filing this petition to seek justice for the customers who have been cheated by the iluvshopping.in (www.iluvshopping.in) online shopping website. The advancement of internet technology has also given space to online shopping frauds in the current technological scenario.

iluvshopping.in (www.iluvshopping.in) claims to be an e-commerce website – an online shopping site from which customers can buy products. In the last 3-4 months there has been a rise in the number of customers cheated by this site: their products were neither delivered nor their money refunded. The site is not only cheating customers out of their money but playing with their faith. When contacted through email, they generally ignore the mails, and when they do reply they give lame excuses and blatant lies. Their IVRS contact number +91 9323 001 002 is fake and never works. Customers even tried calling Amrit Talreja, founder of iluvshopping.in and owner of The Mobile Station, on his contact numbers +(91)-9819020690 and 7666911911, but failed; they have put their customers' numbers on a reject list.

I too had the same bitter experience with this site. When I checked their Facebook page at https://www.facebook.com/iluvshopping.in?filter=2, I found similar cases from the last four months. Mr. Amrit Talreja and his team have shown very nasty behavior towards their customers by blocking their numbers and telling them lies. Their acts are unforgivable. It is my sincerest request that you please help us spread awareness among customers, draw the attention of the competent authorities to this online shopping website, and help the victim customers get a refund of their money along with compensation.

Many of the complaints have been quietly removed from the FB page of this site. However, I was able to pick out some.

Some comments from the victim customers:

Abhishek Singh I totally, support Mr Satyen Chimulkar. Whatever he says and experienced is true. I too had the same bitter experience with this site. I placed my order on October 19, 2013 1:02:36 PM IST) – a Samsung Galaxy Tab 2 16 GB, 3G, Wifi (Titanium Silver) through iluvshopping.in . It has been more than 100 days and till now they haven’t delivered the product. After some 20 days from my order date,  I asked for refund of my money but they always gave some excuse and said tha they are facing technical problem. It will be refunded soon, but I haven’t received it till today. It’s more than 100 days. Their contact number is fake number which never works and they are not replying to my mails properly. Once or twice they reply and they will make some excuses like we have suffering from some technical issue or your refund process has been started and you will receive it in 5-7 days. But till now I haven’t received my money.

Satyen Chimulkar Its a fake website.
I posted yesterday about how manty true buyers have actually used the site.
But my post has been deleted unanimously and i had been replied last night at 12:59 am that item out of stock wherein i had a mail it was shipped out from inventory through dtdc courier.
How many people have faced such things???…

Joe Prakash ILuvShopping didnt repond my mail and phone calls.The product deliver day is only 3-5 days.but still they didnt ship the product.Mr.Satyen please help me?…

Joe Prakash they are cheeted the customers.fake website

Reghu Siva BOGUS COMPANY THEY WILL NOT DELIVER OUR PRODUCT PURCHASED THEY ONLY HAVE TO PAYMENT BY NET BANKING/CREDIT/DEBIT CARD&THEIR MOBILE NUMBER IS FAKE NUMBER .PEOPLE BEWARE OF SUCH ILLEGAL BUSINESS ONLINE TRADERS

I am still wondering how these frauds are running their cheap business. Are the cyber crime cell and the other competent authorities in India so lame? I seek an answer.

This world is bad because good people like you are not acting against the bad.

It's time for a change now. A change that can be brought by us. Let's make this world a peaceful place: a world free of frauds, a world of faith, a world of belief. That's all I want, and I guess it's the same with you.

Please sign this petition to bring justice to the victims of iluvshopping.in and to draw the attention of the competent authorities in India to make stricter laws against such crimes and keep a check on such fraudulent shopping sites.

Petition link : http://www.change.org/en-IN/petitions/consumer-court-and-cyber-crime-department-take-strict-action-against-iluvshopping-in

InCTF 2014

Amrita University and Amrita Centre for Cyber Security

proudly present

InCTF ’14

National Level “Capture the Flag” style ethical hacking contest

Not a day passes without several machines being compromised, and infections spread rampantly in the world today. The cyber world has witnessed several dangerous attacks, including the Stuxnet virus and its successor Duqu. Other recent attacks include the Flame malware, which managed to disguise itself as legitimate Windows software: it exploited a bug in Windows to obtain a certificate which allowed it to authenticate itself as genuine Windows software. Other notable examples include the rise of botnets such as the highly resilient Zeus banking trojan and the Conficker worm. There have also been instances of espionage by government agencies on one another, such as the recent incident in which Georgia CERT discovered a Russian hacker spying on them.

Indian websites offer little or no resistance to such attacks. The Computer Emergency Response Team, India has been tracking defacements of Indian websites among other security incidents; their monthly and annual bulletins detail the various vulnerabilities and malware infections in Indian websites. It's really sad that, with so much talent and skill, Indian websites are compromised frequently and nothing is done to stem this wave of attacks. InCTF is a Capture the Flag style ethical hacking contest, a strategic war-game designed to mimic real-world security challenges. Software developers in India have little exposure to secure coding practices and the effects of not adopting them, which is one of the main reasons why systems are compromised so easily these days. Following such simple practices can help prevent such incidents. InCTF '14 runs from December 2013 to March 2014 and is focused exclusively on the student community. You can participate from your own university, and no travel is required. No prior exposure or experience in cyber security is needed to participate.

What do you need to do?

  1. Form a team (minimum three and maximum five members from your college).
  2. Approach a faculty/mentor and request him/her to mentor your team.
  3. Register online at InCTF portal.

Great Rewards

See Prizes.

  1. Teams are awarded prizes based on their performance.
  2. Deserving teams are well rewarded. Exciting prizes are to be won.

So, what are you waiting for? It’s simple: Register, Learn, Hack!

Keep up with us

Timeline

Form a team (min. of 3 and max. of 5 students per team).
Complete the online registration before 10:00 AM on 14th February, 2014. Register (as an individual first), then create a team using the InCTF Contest Portal. Each team must have a mentor: a faculty member of your college who can help you. Students in a team must be from the same college.
Once you complete the team registration, the team members can log in to the portal and access the first round questions. The first round is a learning round: you can refer to resources and find solutions to the questions you receive, and we will provide you with hints if needed. The last date for submission of solutions to the questions is 14th February 2014, up to 7:00 PM. Solutions must be submitted in PDF format. If there are any attachments, kindly send them as an archive along with the PDF.
The skills you gained will be tested in the second round, where teams will be tested on binary exploitation, forensics, reverse engineering and web-based exploits. All teams can take part in the second round. In case of a tie, the first round score will be used to break it. The second round will be held from 15th February, 2014 to 23rd February, 2014.
The exciting final attack-defence style CTF round will be held online using a VPN. More details will be published later.

Black Box Testing

Black box testing focuses on determining whether or not a program does what it is supposed to do based on its functional requirements. It is sometimes also called functional or behavioral testing.

Basically, black box testing is a testing technique where the tester is unaware of the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Aim: To determine whether the function appears to work according to specifications

Black box testing attempts to find errors in the external behavior of the code, mostly in the following categories:

  • incorrect or missing functionality
  • interface errors
  • errors in data structures used by interfaces
  • behavior or performance errors
  • initialization and termination errors

It is advisable that the person doing black box testing not be the programmer of the software under test, and that they know nothing about the structure of the code. The programmers of the code are innately biased and are more likely to test that the program does what they programmed it to do, whereas the program should really be tested to make sure it does what the customer wants it to do.

ANATOMY OF A TEST CASE

The format of the test case design is very important in black box testing. Let me give you a simple example of a test case planning template:

Test ID | Description | Expected Results | Actual Results
1 | Player 1 rolls dice and moves. | Player 1 moves on board. |
2 | Player 2 rolls dice and moves. | Player 2 moves on board. |
3 | Precondition: Game is in test mode, SimpleGameBoard is loaded, and game begins. Number of players: 2; Money for player 1: Rs 1200; Money for player 2: Rs 1200; Player 1 dice roll: 3 | Player 1 is located at Blue 3. |

(The Actual Results column is filled in when the tests are executed.)
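
Rows like these translate directly into automated tests. Here is a sketch using Python's unittest module, written against a tiny invented Game class that stands in for the real system under test:

import unittest

class Game(object):
    """Minimal stand-in for the system under test (illustration only)."""
    def __init__(self, board, test_mode):
        self.board = board
        self.test_mode = test_mode
        self.positions = {}
        self.money = {}

    def add_player(self, name, money):
        self.money[name] = money
        self.positions[name] = "Start"

    def roll_dice(self, name, forced_value):
        # Test mode lets us force the dice value, as in test case 3.
        self.positions[name] = "Blue %d" % forced_value

    def position(self, name):
        return self.positions[name]

class TestGameBoard(unittest.TestCase):
    def setUp(self):
        # Precondition of test case 3: test mode, SimpleGameBoard,
        # two players with Rs 1200 each.
        self.game = Game(board="SimpleGameBoard", test_mode=True)
        self.game.add_player("Player 1", money=1200)
        self.game.add_player("Player 2", money=1200)

    def test_player1_lands_on_blue_3(self):
        # Test case 3: Player 1 rolls a 3 and should be located at Blue 3.
        self.game.roll_dice("Player 1", forced_value=3)
        self.assertEqual(self.game.position("Player 1"), "Blue 3")

if __name__ == "__main__":
    unittest.main()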

Clear descriptions

A bit of advice: the test case description should be very clear and specific, so that the test case execution is repeatable. Even if you will always be the person executing the test cases, pretend you are passing the test planning document to someone else to perform the tests 🙂

Strategies of black box testing

Since writing and executing tests is a fairly expensive process, we need to make sure we write tests for the kinds of things the customer will do most often, or at least fairly often, with the motto of finding as many defects as possible with as few test cases as possible.

1. Test of customer requirements

Black box test cases are based on customer requirements. We basically want to make sure that every customer requirement is tested at least once, so that defects are found as early as possible, before delivering the end product to the customer.

2. Equivalence partitioning

It is basically a strategy which can be used to reduce the number of test cases that need to be developed. It divides the input domain of a program into classes; for each of these equivalence classes, the set of data should be treated the same by the module under test and should produce the same answer. Test cases should be designed so the inputs lie within these equivalence classes.

I am running out of time now. I will post more examples of equivalence partitioning and other methods such as boundary value analysis, decision tables, etc. So stay tuned and enjoy testing! Cheers 🙂 🙂

Welcome back!

Let's take an example of an application X. The tester doesn't have any internal knowledge of the system; all he knows is the input data and the output it is going to produce. Say this application accepts all the values between 1 and 10 (including 1 and 10). Now, the job of the tester is to test this application X using only the above information. Since the input domain of the application contains all the numbers from -∞ to +∞, which is an infinite set, he can't test all the cases; so, in order to reduce the amount of test data, he will use the technique of equivalence partitioning.

…… -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ……

——P1———–|  |————P2——————|   |—–P3——

The whole input domain is thus divided into three equivalence partitions: P1 (-∞, 0], P2 [1, 10] and P3 [11, ∞), where P1 and P3 are invalid partitions and P2 is the valid partition.

P1  and P3  => Invalid partition

P2                => Valid partition

In the test cases, the tester will use random data from each of the equivalence classes; data from the same equivalence class should show the same behavior for application X.
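
As a sketch, suppose application X boils down to a simple validation function (invented here for illustration); one representative value from each partition is then enough:

def accepts(value):
    # Application X: accepts the integers from 1 to 10, inclusive.
    return 1 <= value <= 10

# One representative per equivalence class:
assert accepts(-5) == False   # P1 (-inf, 0]  -> invalid partition
assert accepts(6) == True     # P2 [1, 10]    -> valid partition
assert accepts(25) == False   # P3 [11, +inf) -> invalid partition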

3. Boundary value analysis

It states that once we have derived the equivalence classes from which we are going to pick test data, we should pick data on the boundaries of those classes, because these are the points where a programmer is most likely to have made a mistake.

Bugs lurk in corners and congregate at boundaries – Boris Beizer

For the equivalence classes we got above, our test data should be on the boundaries, i.e. the test data after performing boundary value analysis on the equivalence partitions will be {0, 1, 10, 11}.
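
Continuing the sketch above, the boundary values translate directly into four more checks:

# Boundaries of the valid partition P2 [1, 10], plus the values just outside:
assert accepts(0) == False    # lower boundary - 1
assert accepts(1) == True     # lower boundary
assert accepts(10) == True    # upper boundary
assert accepts(11) == False   # upper boundary + 1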