12 Testing Big Data

What is Big Data Testing?

Big Data testing is defined as the testing of Big Data applications. Big Data is a collection of large datasets that cannot be processed using traditional computing techniques, so testing these datasets requires various dedicated tools, techniques, and frameworks. Big Data relates to data creation, storage, retrieval, and analysis that is remarkable in terms of volume, variety, and velocity.


Big Data Testing Strategy

Testing a Big Data application is more about verifying its data processing than testing the individual features of the software product. When it comes to Big Data testing, performance and functional testing are key.

In Big Data testing, QA engineers verify the successful processing of terabytes of data using a commodity cluster and other supportive components. This demands a high level of testing skill because the processing is very fast. Processing may be of three types: batch, real-time, and interactive.

Along with this, data quality is also an important factor in Hadoop testing. Before testing the application, it is necessary to check the quality of the data, and this check should be considered part of database testing. It involves verifying various characteristics such as conformity, accuracy, duplication, consistency, validity, and data completeness.
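
As an illustration of these data quality checks, the sketch below profiles a hypothetical source extract for duplication, completeness, and validity before it enters the Hadoop pipeline. The file name, column names, and the validity rule are assumptions made for the example, not part of the original tutorial.

```python
# A minimal data quality sketch (assumed file and column names) that checks
# duplication, completeness, and validity before the data enters Hadoop.
import pandas as pd

df = pd.read_csv("customer_extract.csv")           # hypothetical source extract

duplicate_rows = df.duplicated().sum()             # duplication check
null_counts = df.isnull().sum()                    # completeness check per column
invalid_ages = (~df["age"].between(0, 120)).sum()  # validity check on an assumed "age" column

print(f"Duplicate rows    : {duplicate_rows}")
print(f"Nulls per column  :\n{null_counts}")
print(f"Out-of-range ages : {invalid_ages}")

# Fail fast so defects are caught before expensive cluster processing.
assert duplicate_rows == 0, "Duplicate records found in source extract"
assert invalid_ages == 0, "Invalid ages found in source extract"
```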

How to test Hadoop Applications

The following figure gives a high-level overview of phases in Testing Big Data Applications

[Figure: Phases in testing Big Data applications]

Big Data Testing can be broadly divided into three steps

Step 1: Data Staging Validation

The first step of Big Data testing, also referred to as the pre-Hadoop stage, involves process validation: data from various sources (RDBMS, weblogs, social media, etc.) is validated to ensure the correct data is pulled into the system, the source data is compared with the data pushed into the Hadoop system to confirm they match, and the right data is verified to have been extracted and loaded into the correct HDFS location.

Tools like Talend and Datameer can be used for data staging validation.
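
Where a commercial tool is not available, a tester can script the same staging checks. The sketch below compares record counts and an order-insensitive checksum between a local source extract and the copy that landed in HDFS; the file paths are assumptions, and the `hdfs dfs -cat` call presumes a configured Hadoop client on the test machine.

```python
# Minimal staging-validation sketch: compare the source extract with the
# data that landed in HDFS (paths are assumptions for the example).
import hashlib
import subprocess

SOURCE_FILE = "/data/source/orders.csv"    # hypothetical source extract
HDFS_FILE = "/staging/orders/orders.csv"   # hypothetical HDFS location

def fingerprint(lines):
    """Record count plus an order-insensitive checksum of the records."""
    digest = 0
    count = 0
    for line in lines:
        digest ^= int(hashlib.md5(line.encode("utf-8")).hexdigest(), 16)
        count += 1
    return count, digest

with open(SOURCE_FILE, encoding="utf-8") as f:
    source_count, source_digest = fingerprint(line.rstrip("\n") for line in f)

# Pull the staged copy back out of HDFS; assumes the 'hdfs' CLI is on PATH.
hdfs_cat = subprocess.run(
    ["hdfs", "dfs", "-cat", HDFS_FILE],
    capture_output=True, text=True, check=True,
)
hdfs_count, hdfs_digest = fingerprint(hdfs_cat.stdout.splitlines())

assert source_count == hdfs_count, "Record counts differ between source and HDFS"
assert source_digest == hdfs_digest, "Record contents differ between source and HDFS"
print(f"Staging validation passed: {source_count} records match")
```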

Step 2: “MapReduce” Validation

The second step is validation of “MapReduce”. In this stage, the tester verifies the business logic on every single node and then validates it after running against multiple nodes, ensuring that the MapReduce process works correctly, that data aggregation or segregation rules are applied to the data, and that the expected key-value pairs are generated.
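
One way to verify the business logic before it runs at scale is to exercise the map and reduce functions locally against a small, hand-checked fixture. The sketch below does this for a hypothetical sales-by-region aggregation; the record format, functions, and expected key-value pairs are assumptions chosen for illustration.

```python
# Single-node validation sketch for hypothetical MapReduce business logic:
# aggregate sales totals per region and compare against hand-computed output.
from itertools import groupby
from operator import itemgetter

def map_record(record):
    """Map phase: emit (region, amount) key-value pairs."""
    region, amount = record.split(",")
    yield region, float(amount)

def reduce_region(region, amounts):
    """Reduce phase: sum all amounts for one region."""
    return region, sum(amounts)

def run_local(records):
    """Simulate map -> shuffle/sort -> reduce on a single node."""
    pairs = [kv for record in records for kv in map_record(record)]
    pairs.sort(key=itemgetter(0))                     # shuffle/sort by key
    return dict(
        reduce_region(key, [v for _, v in group])
        for key, group in groupby(pairs, key=itemgetter(0))
    )

fixture = ["EU,10.0", "US,5.5", "EU,2.5", "APAC,7.0"]
expected = {"EU": 12.5, "US": 5.5, "APAC": 7.0}

assert run_local(fixture) == expected, "Business logic failed on single-node fixture"
print("MapReduce business logic validated against the fixture")
```

The same fixture and expected output can then be reused to validate the job after it runs against multiple nodes on the cluster.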

Step 3: Output Validation Phase

The final or third stage of Big Data testing is the output validation process. The output data files are generated and ready to be moved to an EDW (Enterprise Data Warehouse) or any other system based on the requirement.

Activities in the third stage include checking that the transformation rules are correctly applied, verifying data integrity and a successful data load into the target system, and checking for data corruption by comparing the target data with the HDFS data.
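
A minimal sketch of such an output check is shown below: it compares aggregates from an exported HDFS output file with the rows loaded into the warehouse. Here sqlite3 merely stands in for the EDW, and the file, table, and column names are assumptions for the example.

```python
# Output-validation sketch: compare the aggregated HDFS output file with the
# rows loaded into the warehouse. sqlite3 stands in for the EDW here, and the
# file, table, and column names are assumptions for the example.
import csv
import sqlite3

HDFS_OUTPUT = "part-r-00000.csv"   # hypothetical exported reducer output: region,total

with open(HDFS_OUTPUT, newline="", encoding="utf-8") as f:
    hdfs_totals = {region: float(total) for region, total in csv.reader(f)}

conn = sqlite3.connect("edw_stand_in.db")  # stand-in for the target warehouse
warehouse_totals = {
    region: total
    for region, total in conn.execute("SELECT region, total FROM sales_summary")
}

# Integrity checks: same keys, same aggregates, no corruption during the load.
assert set(hdfs_totals) == set(warehouse_totals), "Region sets differ after load"
for region, total in hdfs_totals.items():
    assert abs(total - warehouse_totals[region]) < 1e-6, f"Total mismatch for {region}"
print(f"Output validation passed for {len(hdfs_totals)} regions")
```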

Architecture Testing

Hadoop processes very large volumes of data and is highly resource intensive. Hence, architectural testing is crucial to the success of your Big Data project. A poorly or improperly designed system may lead to performance degradation, and the system could fail to meet the requirements. At a minimum, Performance and Failover test services should be performed in a Hadoop environment.

Performance testing covers job completion time, memory utilization, data throughput, and similar system metrics, while the goal of the Failover test service is to verify that data processing continues seamlessly when data nodes fail.

Performance Testing

Performance testing for Big Data includes two main actions: verifying data ingestion and throughput (how fast the system can consume data from various sources) and verifying data processing (how quickly MapReduce jobs or queries are executed).

Performance Testing Approach

Performance testing for a Big Data application involves testing huge volumes of structured and unstructured data, and it requires a specific testing approach to handle such massive data.


Performance testing is executed in the following order (a small harness for this loop is sketched after the list):

  1. The process begins with setting up the Big Data cluster that is to be tested for performance
  2. Identify and design the corresponding workloads
  3. Prepare individual clients (custom scripts are created)
  4. Execute the test and analyze the results (if the objectives are not met, tune the component and re-execute)
  5. Record the optimum configuration
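
The execute/analyze/tune loop from the steps above can be driven by a small harness that runs the workload, records completion time and throughput, and repeats until the objectives are met. In the sketch below, the job command, input volume, retry count, and targets are all assumptions made for illustration.

```python
# Sketch of the execute/analyze/tune loop from the steps above. The job
# command, data volume, and targets are assumptions for illustration.
import subprocess
import time

JOB_CMD = ["hadoop", "jar", "workload.jar", "com.example.Workload"]  # hypothetical workload
DATA_SIZE_GB = 500        # assumed input volume
TARGET_SECONDS = 1800     # assumed job-completion objective
TARGET_GB_PER_MIN = 20    # assumed throughput objective

for attempt in range(1, 4):                      # bounded tune-and-retry loop
    start = time.monotonic()
    subprocess.run(JOB_CMD, check=True)          # execute the prepared workload
    elapsed = time.monotonic() - start
    throughput = DATA_SIZE_GB / (elapsed / 60)   # GB processed per minute

    print(f"Run {attempt}: {elapsed:.0f}s, {throughput:.1f} GB/min")
    if elapsed <= TARGET_SECONDS and throughput >= TARGET_GB_PER_MIN:
        print("Objectives met; record this as the optimum configuration")
        break
    print("Objectives missed; tune the component under test and re-execute")
```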

Parameters for Performance Testing

Various parameters to be verified for performance testing include data storage (how data is stored across the nodes), commit log size, concurrency (how many threads can perform read and write operations), caching settings, timeouts (connection timeout, query timeout, etc.), JVM parameters (heap size, garbage collection), MapReduce performance (sort, merge, etc.), and message queue rate and size.

Test Environment Needs

Test environment needs depend on the type of application you are testing. For Big Data testing, the test environment should encompass enough space to store and process a large amount of data, a cluster with distributed nodes and data, and minimal CPU and memory utilization so that performance stays high.

Big Data Testing vs. Traditional Database Testing

Properties | Traditional database testing | Big Data testing
Data | Tester works with structured data | Tester works with both structured and unstructured data
Data | Testing approach is well defined and time-tested | Testing approach requires focused R&D efforts
Data | Tester can apply a "Sampling" strategy manually or an "Exhaustive Verification" strategy with an automation tool | A "Sampling" strategy in Big Data is a challenge
Infrastructure | Does not require a special test environment, as the file size is limited | Requires a special test environment due to the large data size and files (HDFS)
Validation Tools | Tester uses either Excel-based macros or UI-based automation tools | No defined tools; the range is vast, from programming tools like MapReduce to HiveQL
Testing Tools | Tools can be used with basic operating knowledge and little training | Requires a specific set of skills and training to operate the testing tools, which are still in their nascent stage and may gain new features over time

Tools used in Big Data Scenarios

Big Data Cluster | Big Data Tools
NoSQL databases | CouchDB, MongoDB, Cassandra, Redis, ZooKeeper, HBase
MapReduce | Hadoop, Hive, Pig, Cascading, Oozie, Kafka, S4, MapR, Flume
Storage | S3, HDFS (Hadoop Distributed File System)
Servers | Elastic, Heroku, Google App Engine, EC2
Processing | R, Yahoo! Pipes, Mechanical Turk, BigSheets, Datameer

Challenges in Big Data Testing

Automation: automation testing for Big Data requires someone with technical expertise. Also, automated tools are not equipped to handle the unexpected problems that arise during testing.

Virtualization: virtualization is an integral part of testing, but virtual machine latency creates timing problems in real-time Big Data testing, and managing images in Big Data is a hassle.

Performance testing challenges

Performance testing for Big Data is complicated by the diverse set of technologies involved, the lack of purpose-built tools, the need for custom test scripts, and the difficulty of setting up and monitoring a representative test environment.

Summary

Big Data testing focuses on verifying data processing rather than individual product features, with data quality, functional, and performance testing as the main concerns. Testing proceeds in three stages: data staging validation, "MapReduce" validation, and output validation. Architecture testing, including performance and failover testing, is crucial, because a poorly designed system can degrade performance and fail to meet requirements. Compared with traditional database testing, Big Data testing deals with unstructured data as well, needs a special test environment, and relies on tools that are still maturing.

13 FAQ
