Tuesday, December 8, 2015

File Move

// Copies every regular file from D:\Root\Test to D:\Root\Test2
// (the source files are not deleted).
import java.io.*;

class Yes {

    public static void main(String[] args) throws IOException {

        // File object for the source directory
        File f = new File("D:\\Root\\Test\\");

        // List all the files in the directory, get their File objects
        File[] files = f.listFiles();

        // Loop over all files
        for (int i = 0; i < files.length; i++) {

            // Get the name of each file
            String name = files[i].getName();

            // File should not be a directory, and should have an extension
            // [this check also filters out folders whose names contain a dot]
            if (files[i].isDirectory() || !name.contains(".")) {
                continue;
            }

            // Copy the file character by character
            FileReader in = new FileReader(f + "\\" + name);
            FileWriter out = new FileWriter("D:\\Root\\Test2\\" + name);
            int c;
            while ((c = in.read()) != -1) {
                out.write(c);
            }
            in.close();
            out.close();
        }
        System.out.println("Moving Done");
    }
}
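The snippet above copies each file but never deletes the source, so nothing is really "moved". Since Java 7, java.nio.file can move a file in one call; a minimal sketch (class name and paths are illustrative):

```java
import java.io.IOException;
import java.nio.file.*;

public class NioMove {
    // Moves a single file from source to target, replacing any existing target.
    static void moveFile(Path source, Path target) throws IOException {
        Files.createDirectories(target.getParent());       // ensure target dir exists
        Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("movedemo");
        Path src = Files.write(dir.resolve("a.txt"), "hello".getBytes());
        moveFile(src, dir.resolve("out").resolve("a.txt"));
        System.out.println(Files.exists(src));             // prints false: the source is gone
    }
}
```

Unlike the character-by-character copy, Files.move is atomic within the same file system and preserves the file's bytes exactly.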


Tuesday, August 25, 2015

// Include (copy) the driver file into the project folder.
// FileUtils comes from Apache Commons IO.

File sourceFile = new File("velocity-1.5.jar");
String name = sourceFile.getName();

File targetFile = new File(path2 + "\\" + name);
System.out.println("Copying file : " + name + " from Java program");

// Copy the file from one location to the other
try {
    FileUtils.copyFile(sourceFile, targetFile);
    System.out.println("Included the Driver to the Project");
} catch (IOException e1) {
    e1.printStackTrace();
}
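FileUtils.copyFile requires the Apache Commons IO dependency; if you'd rather stay in the JDK, java.nio.file.Files can do the same copy. A minimal sketch (class name and directory names are illustrative):

```java
import java.io.IOException;
import java.nio.file.*;

public class DriverCopy {
    // Copies sourceJar into targetDir, keeping the original file name.
    static Path copyJar(Path sourceJar, Path targetDir) throws IOException {
        Files.createDirectories(targetDir);                 // create target dir if missing
        Path target = targetDir.resolve(sourceJar.getFileName());
        return Files.copy(sourceJar, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("copydemo");
        Path jar = Files.write(dir.resolve("velocity-1.5.jar"), new byte[] { 1, 2, 3 });
        Path copied = copyJar(jar, dir.resolve("project"));
        System.out.println("Copied to " + copied);
    }
}
```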

Friday, August 14, 2015

SP Inputs

// Register an INOUT parameter by name, set it by index,
// execute, then read the value back
cStmt.registerOutParameter("inOutParam", Types.INTEGER);
cStmt.setInt(2, 1);
cStmt.execute();
int outputValue = cStmt.getInt(2);
 o  getBigDecimal(int, int)
Get the value of a NUMERIC parameter as a java.math.BigDecimal object.
 o  getBoolean(int)
Get the value of a BIT parameter as a Java boolean.
 o  getByte(int)
Get the value of a TINYINT parameter as a Java byte.
 o  getBytes(int)
Get the value of a SQL BINARY or VARBINARY parameter as a Java byte[].
 o  getDate(int)
Get the value of a SQL DATE parameter as a java.sql.Date object.
 o  getDouble(int)
Get the value of a DOUBLE parameter as a Java double.
 o  getFloat(int)
Get the value of a FLOAT parameter as a Java float.
 o  getInt(int)
Get the value of an INTEGER parameter as a Java int.
 o  getLong(int)
Get the value of a BIGINT parameter as a Java long.
 o  getObject(int)
Get the value of a parameter as a Java object.
 o  getShort(int)
Get the value of a SMALLINT parameter as a Java short.
 o  getString(int)
Get the value of a CHAR, VARCHAR, or LONGVARCHAR parameter as a Java String.
 o  getTime(int)
Get the value of a SQL TIME parameter as a java.sql.Time object.
 o  getTimestamp(int)
Get the value of a SQL TIMESTAMP parameter as a java.sql.Timestamp object.
 o  registerOutParameter(int, int)
Before executing a stored procedure call, you must explicitly call registerOutParameter to register the java.sql.Types constant of each OUT parameter.
 o  registerOutParameter(int, int, int)
Use this version of registerOutParameter for registering NUMERIC or DECIMAL out parameters.
 o  wasNull()
Indicates whether the last OUT parameter read had the SQL value NULL.
callableStatement.setInt(1, 10);
callableStatement.registerOutParameter(2, java.sql.Types.VARCHAR);
callableStatement.registerOutParameter(3, java.sql.Types.VARCHAR);
callableStatement.registerOutParameter(4, java.sql.Types.DATE);
callableStatement.execute();
String userName = callableStatement.getString(2);
String createdBy = callableStatement.getString(3);
Date createdDate = callableStatement.getDate(4);

call.registerOutParameter(1, Types.INTEGER);

SQL type         Java array type          Java scalar type
BIGINT           long[]                   long
BINARY           byte[][]                 byte[]
BIT              boolean[]                boolean
DATE             java.sql.Date[]          java.sql.Date
DOUBLE           double[]                 double
FLOAT            double[]                 double
INTEGER          int[]                    int
LONGVARBINARY    byte[][]                 byte[]
REAL             float[]                  float
SMALLINT         short[]                  short
TIME             java.sql.Time[]          java.sql.Time
TIMESTAMP        java.sql.Timestamp[]     java.sql.Timestamp

  • getBoolean()
  • getByte()
  • getBytes()
  • getDate()
  • getDouble()
  • getFloat()
  • getInt()
  • getLong()
  • getObject()
  • getShort()
  • getString()
  • getTime()
  • getTimestamp()
  • registerOutParameter()
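Putting the methods above together: the sketch below assumes a hypothetical stored procedure getUserInfo with one IN parameter and three OUT parameters, echoing the snippet earlier in this post (the procedure name and parameter layout are illustrative):

```java
import java.sql.*;

public class SpOutParams {
    // Builds the JDBC escape syntax for a procedure call,
    // e.g. callSyntax("getUserInfo", 4) -> "{call getUserInfo(?, ?, ?, ?)}"
    static String callSyntax(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    // Hypothetical procedure: parameter 1 is IN, parameters 2-4 are OUT.
    static void fetchUser(Connection conn, int userId) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(callSyntax("getUserInfo", 4))) {
            cs.setInt(1, userId);                       // IN: user id
            cs.registerOutParameter(2, Types.VARCHAR);  // OUT: user name
            cs.registerOutParameter(3, Types.VARCHAR);  // OUT: created by
            cs.registerOutParameter(4, Types.DATE);     // OUT: created date
            cs.execute();
            System.out.println(cs.getString(2) + ", " + cs.getString(3) + ", " + cs.getDate(4));
        }
    }
}
```

Every OUT parameter must be registered before execute(), and values are read back with the typed getters listed above.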

Sunday, August 2, 2015


package gui_Design;

import java.sql.*;

public class DB_Connection_Test extends GUI_Design {

    String DB_URL = Text1;
    String UserName = Text2;
    String Password = Text3;
    String[] Driver_Class = { /* driver class names (elided in the original) */ };

    public void DB_Connection() {
        try {
            for (String driver : Driver_Class) {
                Class.forName(driver);
            }
            System.out.println("Driver is loaded successfully");
        } catch (ClassNotFoundException e) {
            System.out.println("Please include Classpath where DB Driver is located");
            return;
        }

        Connection conn = null;
        try {
            conn = DriverManager.getConnection(DB_URL, UserName, Password);
            if (conn != null) {
                System.out.println("Database Connected");
            } else {
                System.out.println("Database Connection Failed");
            }
        } catch (SQLException e) {
            System.out.println("Database connection Failed");
        }
    }

    public void SQL_Query_Test() {
        // TODO: run a test query
    }

    public void SP_Test() {
        // TODO: call a test stored procedure
    }
}

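Since JDBC 4, DriverManager discovers drivers on the classpath automatically, so the Class.forName step is optional. A sketch of the connection check using try-with-resources, so the connection is always closed (URL and credentials are placeholders):

```java
import java.sql.*;

public class ConnectionCheck {
    // Returns true if a connection can be opened with the given settings.
    static boolean canConnect(String url, String user, String password) {
        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            System.out.println("Database Connected");
            return true;
        } catch (SQLException e) {
            // Thrown both when no driver matches the URL and when login fails
            System.out.println("Database connection failed: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        canConnect("jdbc:example://localhost:3306/testdb", "user", "password");
    }
}
```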

Buffer Test: 

import java.awt.Color;
import java.awt.Window;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

import javax.swing.DefaultCellEditor;
import javax.swing.DefaultComboBoxModel;
import javax.swing.JButton;
import javax.swing.JComboBox;
import javax.swing.JDialog;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.ScrollPaneConstants;
import javax.swing.SwingUtilities;
import javax.swing.table.DefaultTableModel;

public class TablesDesign {

    static String p, q;
    static Boolean caught;
    static String sb3;
    static StringBuffer SB3 = null;

    static void Frame() {
        final JFrame Frame = new JFrame("SP- Inputs & Outputs");
        Frame.setBounds(300, 200, 500, 270);
        Frame.setLayout(null);

        String columnNames[] = { "INPUT DATA TYPE", "VALUE" };
        String columnNames2[] = { "OUTPUT RETURN TYPE" };
        Object[][] data = new Object[2][2];
        Object[][] data2 = new Object[4][1];

        final JTable table = new JTable(data, columnNames);
        final JTable table1 = new JTable(data2, columnNames2);

        JScrollPane scroll1 = new JScrollPane(table);
        scroll1.setBounds(0, 0, 290, 200);
        JScrollPane scroll2 = new JScrollPane(table1);
        scroll2.setBounds(300, 0, 180, 200);

        // Restrict the type columns to a fixed set of SQL type names
        // (the original list was truncated; these values are illustrative)
        String[] types = { "INTEGER", "VARCHAR", "DATE", "DOUBLE" };
        table.getColumnModel().getColumn(0).setCellEditor(
                new DefaultCellEditor(new JComboBox(new DefaultComboBoxModel(types))));
        table1.getColumnModel().getColumn(0).setCellEditor(
                new DefaultCellEditor(new JComboBox(new DefaultComboBoxModel(types))));

        final JButton Button1 = new JButton("OK");
        Button1.setBounds(200, 210, 60, 25);
        Button1.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                caught = false;
                StringBuilder buf = new StringBuilder();
                try {
                    // Collect the input types and values
                    for (int t = 0; t <= 1; t++) {
                        p = (String) table.getModel().getValueAt(t, 0);
                        q = (String) table.getModel().getValueAt(t, 1);
                        buf.append(p).append("=").append(q).append("\n");
                    }
                    // Collect the output return types
                    for (int t1 = 0; t1 <= 3; t1++) {
                        p = (String) table1.getModel().getValueAt(t1, 0);
                        buf.append(p).append("\n");
                    }
                    caught = true;
                    String result = buf.toString();
                    System.out.println("Output Return Values: " + result);
                } catch (Exception Test) {
                    JOptionPane.showMessageDialog(null,
                            "Input type/Output type should not be empty" + Test);
                    caught = false;
                } finally {
                    if (caught == false) {
                        System.out.println("Not Done");
                    }
                }
            }
        });

        JButton Button2 = new JButton("Cancel");
        Button2.setBounds(270, 210, 80, 25);
        Button2.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                if (caught == false) {
                    System.out.println("Variables are not created successfully");
                } else {
                    System.out.println("Variables are created successfully....");
                }
            }
        });

        Frame.add(scroll1);
        Frame.add(scroll2);
        Frame.add(Button1);
        Frame.add(Button2);
        Frame.setVisible(true);
    }

    public static void main(String[] args) {
        Frame();
    }
}
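The OK button reads everything through the table's model, and that logic can be exercised without showing any window. A minimal headless sketch (column names come from the snippet above; the cell values and "TYPE=VALUE" formatting are illustrative):

```java
import javax.swing.table.DefaultTableModel;

public class SpInputModel {
    // Builds the 2x2 input model used by the SP-inputs dialog.
    static DefaultTableModel buildInputModel() {
        String[] cols = { "INPUT DATA TYPE", "VALUE" };
        DefaultTableModel model = new DefaultTableModel(new Object[2][2], cols);
        model.setValueAt("INTEGER", 0, 0);
        model.setValueAt("10", 0, 1);
        return model;
    }

    // Joins each row into "TYPE=VALUE" lines, like the OK handler's loop.
    static String collect(DefaultTableModel model) {
        StringBuilder buf = new StringBuilder();
        for (int t = 0; t < model.getRowCount(); t++) {
            buf.append(model.getValueAt(t, 0)).append("=")
               .append(model.getValueAt(t, 1)).append("\n");
        }
        return buf.toString();
    }
}
```

Keeping the collection logic in a plain method like collect makes it testable without a display, while the JTable and combo-box editors stay purely presentational.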

Thursday, July 30, 2015

package gui_Design;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.StringWriter;

import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.runtime.RuntimeConstants;
import org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader;

public class FinalTest {

    static StringBuffer stringBuffer = new StringBuffer();

    public static void main(String[] args) throws Throwable {
        Uma();
    }

    static void Uma() throws Throwable {
        try {
            File file = new File("C:/Users/Mahesh/Desktop/Ma.txt");
            FileReader fileReader = new FileReader(file);
            BufferedReader bufferedReader = new BufferedReader(fileReader);

            String line;
            while ((line = bufferedReader.readLine()) != null) {
                stringBuffer.append(line);
                stringBuffer.append("\n");
            }
            bufferedReader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

        // Merge the collected file contents into the Velocity template
        VelocityEngine ve = new VelocityEngine();
        ve.setProperty(RuntimeConstants.RESOURCE_LOADER, "classpath");
        ve.setProperty("classpath.resource.loader.class", ClasspathResourceLoader.class.getName());
        ve.init();

        org.apache.velocity.Template t = ve.getTemplate("gui_Design/SP_Template.vm");
        VelocityContext context = new VelocityContext();
        context.put("output", stringBuffer.toString());

        StringWriter writer = new StringWriter();
        t.merge(context, writer);
        System.out.println(writer.toString());
    }
}



Wednesday, March 19, 2014

LoadRunner 12.0 Released.............

March 17, 2014: HP released its much-anticipated, revamped performance-testing suite, HP LoadRunner 12, packed with new features.

Key observations / new features are:

Cloud-based load generators:

HP describes this feature as "cloud bursting". Users now have the ability to provision load generators on AWS (Amazon Web Services) cloud servers from within LoadRunner or Performance Center.

Licensing - 50 vUsers free:

Fully functional licensing that allows small-scale testing enables prolonged evaluations and proof-of-concept exercises.

VUGEN improvements:

There are a variety of improvements as you would expect. Key ones are:

The ability to review replay statistics for tests after each run, including details on total connections, disconnections, and bytes downloaded.
The ability to edit common file types in the editor.
Support for recording in the Internet Explorer 11, Chrome v30 and Firefox v23 browsers.
The ability to create scripts from Wireshark or Fiddler files.
The ability to record HTML5 or SPDY protocols.

TruClient improvements:

TruClient script converter. This replays your TruClient scripts and records the HTTP/HTML traffic, allowing you to create other script types from TruClient recordings. This is similar to recording GUI scripts and then converting them to other script types.

The addition of support for Rendezvous points, IP spoofing, VTS2 and Shunra network virtualisation in TruClient scripts.

Linux Load Generator improvements:

Building on the increased support for Linux Load Generators in 11.5x, LDAP, DNS, FTP, IMAP, ODBC, POP3, SMTP and Windows Sockets scripts can now be replayed through UNIX load generators.

CI/CD support:

Better integration with Jenkins etc.

Platform support:

Support for installation on Windows Server 2012:
(LoadRunner 11.x and PC 11.x only supported up to W2K8 which was a barrier to enterprise adoption).
LoadRunner components can now run in a "non-admin" user account with UAC and DEP enabled.

Get your own copy (Trial Version): HP LoadRunner 12

HP LoadRunner Data Sheet:

---Source from: http://blog.trustiv.co.uk/2014/03/first-look-loadrunner-12

Monday, January 27, 2014

Big Data Testing VS ETL Testing


Whether it is a Data Warehouse (DWH) or a BIG Data storage system, the basic component of interest to us, the testers, is the data. At the fundamental level, data validation in both these storage systems involves validating the data against the source systems, for the defined business rules. It's easy to think that if we know how to test a DWH, we know how to test the BIG Data storage system.
But, unfortunately, that is not the case! In this blog, I focus on some of the differences between these storage systems and suggest an approach to BIG Data testing.
Let us look at these differences from the following perspectives:


The four fundamental characteristics by which the data in DWH and BIG Data storage systems differ are Data Volume, Data Variety, Data Velocity, and Data Value.

DWH (Data Warehouse) vs. Big Data

Data Volume: Typical data volumes that current DWH systems can store are measured in gigabytes, while BIG Data storage systems can store and process data sizes beyond petabytes.

Data Variety: DWHs can store and process only 'structured' data. A BIG Data storage system places no constraints on the type of data: whether 'structured' or 'unstructured', it can be stored and efficiently processed within a tolerable elapsed time.

Data Velocity: Data reaches a DWH through 'batch processing', whereas BIG Data implementations support 'streaming' data too.

Storage architecture: DWH systems are based on an RDBMS and have limitations on linear data growth. BIG Data storage systems are based on a file system; implementations such as those built on Apache Hadoop have no such limitations, as they can store data across multiple clusters.

Validation tools: Tools for testing DWH systems are based on SQL (Structured Query Language). For BIG Data, tools in the Hadoop ecosystem range from pure programming frameworks like MapReduce (which supports coding in Java, Perl, Ruby, Python, etc.) to wrappers built on top of MapReduce, such as HiveQL or Pig Latin.

What does this mean to the tester?

DWH Tester vs. Big Data Tester

Data structure: The DWH tester has the advantage of working with 'structured' data (data with a static schema). The BIG Data tester may have to work with 'unstructured or semi-structured' data (data with a dynamic schema) most of the time, and needs to seek additional input from the business/development teams on how to derive the structure dynamically from the given data sources.

Validation approach: For DWH data validation, the testing approach is well-defined and time-tested: the tester can apply a 'sampling' strategy manually or an 'exhaustive verification' strategy from within automation tools like Infosys Perfaware (a proprietary DWH testing solution). Given the huge data sets involved, even a 'sampling' strategy is a challenge in the context of BIG Data validation.

Test environment: RDBMS-based databases (Oracle, SQL Server, etc.) are installed on an ordinary file system, so testing DWH systems requires no special test environment. Testing BIG Data in HDFS requires a test environment based on HDFS itself, and testers need to learn how to work with HDFS, which is different from working with an ordinary file system.

Tooling: DWH testers use either Excel-based macros or full-fledged UI-based automation tools, all based on SQL. For BIG Data there are no established testing tools yet; those available in the Hadoop ecosystem range from pure programming frameworks like MapReduce (which supports coding in Java, Perl, Ruby, Python, etc.) to wrappers built on top of MapReduce, such as HiveQL or Pig Latin.


Experience in DWH can, at best, shorten the BIG Data tester's learning curve in understanding the extraction, loading, and transformation of data from source systems to HDFS at the conceptual level. It provides no other advantage.
BIG Data testers have to learn the components of the BIG Data ecosystem from scratch. Until the market evolves and fully automated testing tools become available for BIG Data validation, the tester has no option but to acquire the same skill set as the BIG Data developer when leveraging BIG Data technologies like Hadoop. This requires a tremendous mindset shift for both the testers and the testing units within the organization.