Wednesday, March 19, 2014

LoadRunner 12.0 Released.............





March 17, 2014: HP released HP LoadRunner 12, its much-anticipated, revamped performance testing tool, packed with new features.

Key observations / new features are:


Cloud-based load generators.

HP describes this feature as "cloud bursting". Users can now provision load generators on AWS (Amazon Web Services) cloud servers from within LoadRunner or Performance Center.
Licensing - 50 vUsers free:
Providing a fully functional application that allows small-scale testing enables prolonged evaluations and proof-of-concept exercises.

VuGen improvements:

There are a variety of improvements as you would expect. Key ones are:

The ability to review replay statistics for tests after each run, including details on total connections, disconnections and bytes downloaded.
The ability to edit common file types in the editor.
Support for recording in the Internet Explorer 11, Chrome v30 and Firefox v23 browsers.
The ability to create scripts from Wireshark or Fiddler files.
The ability to record HTML5 or SPDY protocols.

TruClient improvements:

TruClient script converter. This replays your TruClient scripts and records the HTTP/HTML traffic, allowing you to create HTTP/HTML scripts from TruClient recordings. This is similar to recording GUI scripts and then converting them to other script types.

The addition of support for Rendezvous points, IP spoofing, VTS2 and Shunra network virtualisation in TruClient scripts.

Linux Load Generator improvements:

Building on the increased support for Linux Load Generators in 11.5x, LDAP, DNS, FTP, IMAP, ODBC, POP3, SMTP and Windows Sockets scripts can now be replayed through UNIX load generators.

CI/CD support:

Better integration with continuous integration tools such as Jenkins.

Platform support:

Support for installation on Windows Server 2012 (LoadRunner 11.x and Performance Center 11.x only supported up to Windows Server 2008, which was a barrier to enterprise adoption).
LoadRunner components can now run in a "non-admin" user account with UAC and DEP enabled.


Get your own copy (Trial Version): HP LoadRunner 12

HP LoadRunner Data Sheet:


---Source from: http://blog.trustiv.co.uk/2014/03/first-look-loadrunner-12

Monday, January 27, 2014

Big Data Testing VS ETL Testing




Whether it is a Data Warehouse (DWH) or a BIG Data storage system, the basic component that is of interest to us, the testers, is the 'Data'. At the fundamental level, data validation in both these storage systems involves validating the data against the source systems for the defined business rules. It is easy to think that if we know how to test a DWH, we know how to test the BIG Data storage system.
But, unfortunately, that is not the case! In this blog, we focus on some of the differences between these storage systems and suggest an approach to BIG Data testing.
Let us look at these differences from the following 3 perspectives:

-Data

Four fundamental characteristics by which the data in DWH and BIG Data storage systems differ are Data Volume, Data Variety, Data Velocity and Data Value.



DWH (Data Warehouse) vs. Big Data

Data Volume: Typical data volumes that current DWH systems are capable of storing are in terms of gigabytes; BIG Data storage systems can store and process data sizes of more than petabytes.

Data Variety: DWHs can store and process only 'structured' data. There are no constraints on the type of data that can be stored and processed within a BIG Data storage system; whether 'structured' or 'unstructured', it can be stored and efficiently processed within a tolerable elapsed time.

Data Velocity: Data is loaded into a DWH through 'batch processing'; BIG Data implementations support 'streaming' data too.

Storage model: DWH systems are based on an RDBMS; BIG Data storage systems are based on a file system.

Scalability: DWH systems have limitations on linear data growth; BIG Data implementations such as those based on Apache Hadoop have no such limitations, as they are capable of storing the data across multiple clusters.

Validation tools: Validation tools for DWH system testing are based on SQL (Structured Query Language). For BIG Data, tools in the Hadoop ecosystem range from pure programming tools like MapReduce (which supports coding in Java, Perl, Ruby, Python, etc.) to wrappers built on top of MapReduce like HiveQL or Pig Latin. (A minimal count-reconciliation sketch follows this comparison.)
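To make the tooling difference concrete, here is a minimal sketch of a row-count reconciliation between a DWH table and its Hive counterpart. The table name, connection details and the choice of the pyodbc and PyHive client libraries are assumptions for illustration, not part of the original article.

```python
# A minimal row-count reconciliation sketch (hypothetical table names and
# connection details; assumes the pyodbc and PyHive client libraries).
import pyodbc                  # DWH side: any ODBC-accessible RDBMS
from pyhive import hive        # BIG Data side: HiveServer2

# Count rows in the DWH table through plain SQL.
dwh_conn = pyodbc.connect("DSN=MyWarehouse;UID=tester;PWD=secret")   # hypothetical DSN
dwh_cur = dwh_conn.cursor()
dwh_cur.execute("SELECT COUNT(*) FROM sales_fact")                   # hypothetical table
dwh_count = dwh_cur.fetchone()[0]

# Count rows in the corresponding Hive table; the HiveQL syntax is SQL-like,
# but the query executes as MapReduce jobs over data stored in HDFS.
hive_conn = hive.connect(host="hadoop-edge-node", port=10000)        # hypothetical host
hive_cur = hive_conn.cursor()
hive_cur.execute("SELECT COUNT(*) FROM sales_fact")
hive_count = hive_cur.fetchone()[0]

print(f"DWH rows: {dwh_count}, Hive rows: {hive_count}")
if dwh_count != hive_count:
    print("Row counts differ - investigate the extraction/load step.")
```

The point of the sketch is that the Hive query looks like SQL, but it runs as MapReduce jobs over HDFS, so run times and environment setup differ substantially from the DWH side.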

What does this mean to the tester?


DWH Tester vs. Big Data Tester

Schema: The DWH tester has the advantage of working with 'structured' data (data with a static schema). The BIG Data tester may have to work with 'unstructured' or 'semi-structured' data (data with a dynamic schema) most of the time, and needs to seek additional input from the business/development teams on how to derive the structure dynamically from the given data sources.

Validation approach: For the actual validation of data in a DWH, the testing approach is well defined and time tested; the tester has the option of using a 'sampling' strategy manually or an 'exhaustive verification' strategy from within automation tools like Infosys Perfaware (a proprietary DWH testing solution). Considering the huge data sets involved, even a 'sampling' strategy is a challenge in the context of BIG Data validation.

Test environment: RDBMS-based databases (Oracle, SQL Server, etc.) are installed on an ordinary file system, so testing of DWH systems does not require any special test environment; it can be done from within the file system on which the DWH is installed. Testing BIG Data in HDFS requires a test environment based on HDFS itself, and testers need to learn how to work with HDFS, as it is different from working with an ordinary file system.

Tooling: DWH testers use either Excel-based macros or full-fledged UI-based automation tools, and validation tools for DWH system testing are based on SQL. For BIG Data there are no defined tools; those presently available in the Hadoop ecosystem range from pure programming tools like MapReduce (which supports coding in Java, Perl, Ruby, Python, etc.) to wrappers built on top of MapReduce like HiveQL and Pig Latin. (A sampling-based validation sketch follows this comparison.)
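As an illustration of how a 'sampling' strategy might be exercised against HDFS, here is a minimal sketch that draws a random sample of keys from a source extract and checks them against the transformed output read back through the standard hdfs dfs command-line client. The file paths, record layout and sample size are hypothetical.

```python
# A minimal 'sampling' validation sketch for HDFS output (hypothetical paths and
# record layout; assumes the standard 'hdfs dfs' CLI is available on the PATH).
import csv
import random
import subprocess

SAMPLE_SIZE = 100

# 1. Draw a random sample of keys from the source extract (a local CSV here).
with open("source_extract.csv", newline="") as f:                     # hypothetical source file
    source_rows = {row[0]: row for row in csv.reader(f) if row}       # key = first column
sample_keys = random.sample(list(source_rows), min(SAMPLE_SIZE, len(source_rows)))

# 2. Read the transformed output back from HDFS via the command-line client.
hdfs_cat = subprocess.run(
    ["hdfs", "dfs", "-cat", "/user/etl/output/part-*"],               # hypothetical HDFS path
    capture_output=True, text=True, check=True,
)
target_rows = {line.split(",")[0]: line.split(",")
               for line in hdfs_cat.stdout.splitlines() if line}

# 3. Compare the sampled source records against the corresponding HDFS records.
mismatches = [k for k in sample_keys if target_rows.get(k) != source_rows[k]]
print(f"Checked {len(sample_keys)} sampled records, found {len(mismatches)} mismatches.")
```

Reading via the command-line client keeps the sketch dependency-free; in practice a tester would more likely query a HiveQL view over the output or use a Hadoop client library.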

-Conclusion

At best, experience in DWH can only shorten the BIG Data tester's learning curve in understanding, at a conceptual level, the extraction, transformation and loading of data from source systems to HDFS. It does not provide any other advantage.
BIG Data testers have to learn the components of the BIG Data ecosystem from scratch. Until the market evolves and fully automated testing tools become available for BIG Data validation, the tester has no option but to acquire the same skill set as the BIG Data developer for leveraging BIG Data technologies like Hadoop. This requires a tremendous mindset shift for both the testers and the testing units within the organization.

--Thanks.


Thursday, January 16, 2014

Apache JMeter - Version 2.11 Released.............


  
Download - Version 2.11




New Improvements

HTTP(S) Test Script Recorder improvements.

JMS Publisher/Point to Point : Add ability to set typed values in JMS header properties.

View Results Tree : Add an XPath Tester.

Ability to choose the client alias for the cert key in JsseSslManager such that Mutual SSL auth testing can be made more flexible.

Add a "Save as Test Fragment" option.

Summariser is now enabled by default in non-GUI mode (see the sketch after this list).

Transaction Controller: Change default property "Include duration of timer..." for newly created elements.
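Since the summariser output only shows up when JMeter runs in non-GUI mode, here is a minimal sketch of driving such a run from Python. The test-plan and results file names are hypothetical; -n (non-GUI), -t (test plan) and -l (results file) are standard JMeter command-line options, and jmeter is assumed to be on the PATH.

```python
# Run a JMeter test plan in non-GUI mode; the summariser prints periodic
# "summary +" and final "summary =" lines to stdout in this mode.
import subprocess

result = subprocess.run(
    ["jmeter", "-n", "-t", "my_test_plan.jmx", "-l", "results.jtl"],  # hypothetical files
    capture_output=True, text=True,
)
print(result.stdout)
```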

Go to Changes for more info. Follow on Twitter.