Thursday, April 18, 2013

Handling Dynamic URLs in JMeter

When we replay a recorded script that contains dynamic URL values, the listener may show Forbidden or other errors. This can be solved using a Regular Expression Extractor. The following steps explain how to handle a dynamic URL with the Regular Expression Extractor.

Step 1: Record the site.
Step 2: Add a Regular Expression Extractor to the request whose response contains the dynamic value:
  • Right click on the HTTP Request sampler -> Post Processors -> Regular Expression Extractor.
  • In the Regular Expression Extractor, enter:
                Reference Name: dynamicValue (any name)
                Regular Expression: p_auth=([^=&]+)
                Template: $0$
                Match No.: 1
Step 3: In the URL path of the affected requests, replace the recorded p_auth=<value> portion with the extractor's reference name, i.e. ${dynamicValue}.
Step 4: Save and run the script. The requests should now succeed.
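
To check the pattern outside JMeter, here is a minimal Java sketch of the same regular expression applied to a made-up response fragment. The response text and token below are invented for illustration; inside JMeter the extraction itself is done by the Regular Expression Extractor, so this is only a way to verify the expression.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DynamicValueExtractor {
    public static void main(String[] args) {
        // Hypothetical fragment of a recorded response containing a dynamic p_auth token.
        String responseBody = "<a href=\"/portal/home?p_auth=AbC123xy&p_p_id=58\">Sign out</a>";

        // Same expression as in the Regular Expression Extractor: p_auth=([^=&]+)
        Matcher matcher = Pattern.compile("p_auth=([^=&]+)").matcher(responseBody);

        if (matcher.find()) {
            // Template $0$ corresponds to the whole match, $1$ to the captured token only.
            System.out.println("Full match ($0$): " + matcher.group(0)); // p_auth=AbC123xy
            System.out.println("Token only ($1$): " + matcher.group(1)); // AbC123xy
        } else {
            System.out.println("p_auth token not found in the response");
        }
    }
}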

Tuesday, April 16, 2013

JMeter Sample Examples for Beginners


Many beginners struggle to learn JMeter: how to use it and what each element of a test plan is for. Here are some sample examples covering JMeter elements such as samplers, post processors, config elements, loop controllers and so on.

JMeter Examples

Enjoy the sample examples!   :)

Saturday, April 6, 2013

How to Monitor OS-Level Resources?

One method of monitoring OS resource utilization is to examine the performance counter logs, which contain several critical counters used when evaluating performance issues. There are a number of primary counters we'll look at, covering CPU, memory, and disk performance.

CPU Metrics

  • System\Processor Queue Length\(N/A): Logs the number of items waiting to be processed by the CPU. Values higher than 2 indicate the need to add more or faster processors.
  • Processor\% Processor Time\_Total: Records the current CPU utilization. This log helps determine the need for additional processor capacity.
  • Processor\Interrupts/sec: Records the number of times processing is stopped to handle a hardware request for disk or memory I/O. Values higher than 1000 may indicate a hardware issue.

Memory Metrics

  • Memory\Pages/sec\(N/A): Monitors the data written to or read from memory. Values higher than 200 indicate the need to increase RAM.
  • Memory\Page Faults/sec\(N/A): Records the number of times that data was not found in memory.
  • Memory\Available MBytes\(N/A): Monitors the amount of memory available to the system. Values below 10% of total physical memory indicate the need for more RAM.
  • Memory\Pool Nonpaged Bytes\(N/A): Records the amount of data that cannot be paged to disk.

Disk Metrics

  • PhysicalDisk\% Disk Time\DriveLetter: Logs the amount of time the disk was active during the last monitoring period. Values higher than 80% indicate that there may be a problem with the hard drive controller or insufficient memory.
  • PhysicalDisk\Current Disk Queue Length\DriveLetter: Logs the number of items waiting to be written to or read from the disk. Values higher than 2 indicate a problem with the disk subsystem. RAID 5 or RAID 10 should be implemented to improve performance.
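
To show how these thresholds might be applied in practice, here is a small Java sketch that compares a handful of hypothetical sampled counter values against the limits quoted above. The sample values and the threshold map are made up for illustration; real values would come from perfmon or typeperf logs.

import java.util.LinkedHashMap;
import java.util.Map;

public class CounterThresholdCheck {
    public static void main(String[] args) {
        // Thresholds taken from the tables above ("values higher than X indicate a problem").
        Map<String, Double> thresholds = new LinkedHashMap<>();
        thresholds.put("System\\Processor Queue Length", 2.0);
        thresholds.put("Processor\\Interrupts/sec", 1000.0);
        thresholds.put("Memory\\Pages/sec", 200.0);
        thresholds.put("PhysicalDisk\\% Disk Time", 80.0);
        thresholds.put("PhysicalDisk\\Current Disk Queue Length", 2.0);

        // Hypothetical sampled values, e.g. averaged from a counter log.
        Map<String, Double> samples = new LinkedHashMap<>();
        samples.put("System\\Processor Queue Length", 4.0);
        samples.put("Processor\\Interrupts/sec", 350.0);
        samples.put("Memory\\Pages/sec", 260.0);
        samples.put("PhysicalDisk\\% Disk Time", 45.0);
        samples.put("PhysicalDisk\\Current Disk Queue Length", 1.0);

        for (Map.Entry<String, Double> sample : samples.entrySet()) {
            double limit = thresholds.get(sample.getKey());
            String verdict = sample.getValue() > limit ? "INVESTIGATE" : "OK";
            System.out.printf("%-42s value=%8.1f limit=%8.1f -> %s%n",
                    sample.getKey(), sample.getValue(), limit, verdict);
        }
    }
}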

Want to do a LoadRunner certification?

Details for LoadRunner certification:

1) HP AIS - LoadRunner v11 (Recommended for Beginners)
    Exam HP0-M48 - HP LoadRunner 11.x Software
    Exam HP0-M49 - HP Virtual User Generator 11.x Software

2) HP ASE - LoadRunner v11 (Recommended for Experts)
     Exam HP0-M99 - Advanced LoadRunner and Performance Center v11 Software

1). HP AIS--Exam HP0-M48: 

Minimum Qualifications:
To pass this exam, it is recommended that you have at least three months experience with HP LoadRunner 11.x Software. Exams are based on an assumed level of industry-standard knowledge that may be gained from the training, hands-on experience, or other pre-requisite events. 

Exam Details:
The following are details about this exam:
 Number of items: 67
 Item types: multiple choice and drag-and-drop
 Exam time: 105 minutes
 Passing score: 72%
 Reference material: No on-line or hard copy reference material will be allowed at the testing site. 

Exam Content:
The following testing objectives represent the specific areas of content covered in the exam. Use this outline to guide your study and to check your readiness for the exam. The exam measures your understanding of these areas.

2). HP AIS--Exam HP0-M49:

Exam Details:
The following are details about this exam:
 Number of items: 63
 Item types: multiple choice and drag-and-drop
 Exam time: 105 minutes
 Passing score: 74%
 Reference material: No on-line or hard copy reference material will be allowed at the testing site. 

Exam Content:
The following testing objectives represent the specific areas of content covered in the exam. Use this outline to guide your study and to check your readiness for the exam. The exam measures your understanding of these areas.

3). HP ASE--Exam HP0-M99:

Minimum Qualifications:
To pass this exam, you should have at least six months of field experience in scripting with the Virtual User Generator, building load test scenarios with the Controller and Analysis tools, automated software testing, and the software testing lifecycle. Exams are based on an assumed level of industry-standard knowledge that may be gained from training, hands-on experience, or other pre-requisite events. You should also be knowledgeable about:
 Web interfaces, HTML, software testing fundamentals
 C
 Basic SQL

Exam Details:
The following are details about this exam:
 Number of items: 85
 Item types: Multiple choice and performance-based
 Exam time: 3 hours
 Passing score: 71.76%
 Reference material: This is a performance-based test. The candidate will be provided with a LoadRunner environment to perform tasks directed during the exam. No other on-line or hard copy reference material will be allowed.

Exam Content:

The following testing objectives represent the specific areas of content covered in the exam. Use this outline to guide your study and to check your readiness for the exam. The exam measures your understanding of these areas. 

Sections/Objectives:
1. Plan a load test
2. Install LoadRunner
3. Create and enhance Vuser scripts
4. Demonstrate advanced scripting
5. Configure load test scenarios
6. Analyze results
7. Demonstrate core Performance Center software knowledge
8. Performance-based activity (VuGen scripting, scenario setup, analysis)

Exam Registration:
To register for this exam, please go to the exam tab in The Learning Center and click on “Access more information”. Visit http://www.hp.com/go/ExpertONE for access.
For details, see these links: HP0-M48 and HP0-M49.


Friday, April 5, 2013

Performance testing interview questions and answers

Q1. Why Performance Testing is performed?

Performance testing is performed to evaluate application performance under load and stress conditions. It is generally measured in terms of response time for user activity, and it is designed to test the overall performance of the system under high load and stress.
Example: a customer wants to withdraw money from an ATM; the customer inserts a debit or credit card and waits for the response. If the system takes more than 5 minutes, then according to the requirements the system has failed.

Types of performance testing:
  • Load: analogous to volume testing; determines how the application deals with large amounts of data.
  • Stress: examines application behavior under peak bursts of activity.
  • Capacity: measures overall capacity and determines at what point response time becomes unacceptable.

Q2. What are tools of performance testing?

Some popular commercial performance testing tools are:
  • LoadRunner (HP): for web and other applications. It supports a variety of application environments, platforms and databases, and provides a number of server monitors to evaluate the performance of each component and track bottlenecks.
  • QALoad (Compuware): used for load testing of web, database and character-based systems.
  • WebLOAD (RadView): allows comparing a running test against test metrics.
  • Rational Performance Tester (IBM): used to identify the presence and cause of system performance bottlenecks.
  • Silk Performer (Borland): allows predicting the behavior of an e-business environment before it is deployed, regardless of size and complexity.

Q3. Explain the sub-genres of Performance testing.

The sub-genres of performance testing are:
  • Load Testing: conducted to examine the performance of the application under a specific expected load. Load can be increased by increasing the number of users performing a specific task on the application in a given time period.
  • Stress Testing: conducted to evaluate system performance by increasing the number of users beyond the limits of the specified requirements. It is performed to understand at which level the application crashes.
  • Volume Testing: tests an application to determine how much data it can handle efficiently and effectively.
  • Spike Testing: observes what happens to the application when the number of users suddenly increases or decreases by a large amount.
  • Soak Testing: performed to understand the application's stability and response time when load is applied over a long period of time.

Q4.What is performance tuning?

Performance tuning is the mechanism we follow to improve system performance. Two types of tuning are performed:
  • Hardware tuning: optimizing, adding or replacing hardware components of the system, and making changes at the infrastructure level, to improve the system's performance.
  • Software tuning: identifying software-level bottlenecks by profiling the code, database, etc., and then fine-tuning or modifying the software to fix those bottlenecks.

Q5. What are concurrent user hits in load testing?

When multiple users hit the same event of the application under load test without any time difference, it is called a concurrent user hit. A concurrency (rendezvous) point is added so that multiple virtual users can work on a single event of the application. With a concurrency point, virtual users that reach the point early wait for the other virtual users running the script; only when all users have reached the concurrency point do they start hitting the requests together.

Q6. What is the need for Performance testing?

Performance testing is needed to verify the following:
  • Response time of the application for the intended number of users.
  • Maximum load-resisting capacity of the application.
  • Capacity of the application to handle the required number of transactions.
  • Stability of the application under expected and unexpected user load.
  • That users get acceptable response times in production.

Q7. What is the reason behind performing automated load testing?

The following drawbacks of manual load testing lead to automated load testing:
  • It is difficult to measure the performance of the application accurately.
  • It is difficult to synchronize the users.
  • A large number of real users would be required to take part in the test.
  • It is difficult to analyze and identify the results and bottlenecks.
  • It increases the infrastructure cost.

Q8. What are the entry and exit criteria for performance testing?

Performance testing can start as early as the design phase of the application. After each execution of the performance tests, the results are collected and analyzed to improve performance, and performance tuning continues throughout the application development life cycle. Tuning is driven by factors such as the release date of the application and the user requirements for stability, reliability and scalability under load, stress and performance tolerance criteria. In some projects the exit criteria are defined by the client's performance requirements for each section of the application; when the product reaches the expected level, that can be considered the exit criteria for performance testing.

Q9.How do you identify the performance bottlenecks situations?

Performance bottlenecks can be identified by monitoring the application under load and stress conditions. To find bottlenecks we use LoadRunner, because it provides different types of monitors such as the run-time monitor, web resource monitor, network delay monitor, firewall monitor, database server monitor, ERP server resources monitor and Java performance monitor. These monitors help us determine the conditions that cause the application's response time to increase. The application's performance is measured in terms of response time, throughput, hits per second, network delay graphs, and so on.

Q10. What activities are performed during performance testing of any application?

The following activities are performed during performance testing of an application:
1. Create user scenarios
2. User distribution
3. Scripting
4. Dry run of the application
5. Running the load test and analyzing the results

Q11. How can we perform spike testing in JMeter?

Spike testing is performed to understand what happens to the application when the number of users suddenly increases or decreases by a large amount: the load is changed abruptly at a certain point and the application's behavior is monitored. In JMeter, spike testing can be achieved using the Synchronizing Timer: threads are blocked by the timer until a particular number of threads have arrived, and are then released all at once, creating a large instantaneous load (see the sketch below).
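
The following Java sketch illustrates the same barrier idea that the Synchronizing Timer uses: worker threads wait at a barrier until a configured group size is reached, then all fire their requests at the same instant. The group size and the sendRequest() placeholder are invented for illustration; this is not JMeter's own code.

import java.util.concurrent.CyclicBarrier;

public class SpikeBarrierDemo {
    // Hypothetical group size, comparable to the timer's "number of users to group by".
    private static final int GROUP_SIZE = 10;
    private static final CyclicBarrier barrier = new CyclicBarrier(GROUP_SIZE,
            () -> System.out.println(">>> " + GROUP_SIZE + " users released together"));

    public static void main(String[] args) {
        for (int i = 1; i <= GROUP_SIZE; i++) {
            final int user = i;
            new Thread(() -> {
                try {
                    // Each "virtual user" blocks here until GROUP_SIZE threads have arrived.
                    barrier.await();
                    sendRequest(user);
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }, "vuser-" + user).start();
        }
    }

    // Placeholder for the actual HTTP request a sampler would send.
    private static void sendRequest(int user) {
        System.out.println("vuser-" + user + " fired its request at " + System.nanoTime());
    }
}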

Q12. What is distributed load testing?

In distributed load testing we test the application with a large number of users accessing it at the same time. Test cases are executed to determine the application's behavior, and that behavior is monitored, recorded and analyzed while multiple users use the system concurrently. Distributed load testing is the process of using multiple machines to simulate the load of a large number of users; the load is distributed in order to overcome the limitation of a single machine in generating a large number of threads.

Q13. Explain the basic requirements of Performance test plan.

Any software performance test plan should contain at least the following:
  • Performance Test Strategy and scope definitions.
  • Test process and methodologies.
  • Test tool details.
  • Test cases details including scripting and script maintenance mechanisms.
  • Resource allocations and responsibilities for Testers.
  • Risk management definitions.
  • Test Start /Stop criteria along with Pass/Fail criteria definitions.
  • Test environment setup requirements.
  • Virtual Users, Load, Volume Load Definitions for Different Performance Test Phases.
  • Results Analysis and Reporting format definitions

Q14. What is throughput in performance testing?

Throughput in performance testing is the amount of data sent by the server in response to client requests in a given period of time, or the number of units of work that can be handled per unit of time. Throughput can be expressed as requests per second, calls per day, hits per second, reports per year, and so on; in most cases it is reported in bits per second. The higher the throughput, the better the performance of the application. It is measured on the client side.
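
As a small worked example with made-up numbers, the sketch below computes throughput both as requests per second and as bits per second from the totals a test report might show.

public class ThroughputExample {
    public static void main(String[] args) {
        // Hypothetical totals from a 10-minute load test.
        long totalRequests = 90_000;            // requests completed
        long totalBytesReceived = 450_000_000L; // bytes returned by the server
        double testDurationSeconds = 600.0;     // 10 minutes

        double requestsPerSecond = totalRequests / testDurationSeconds;
        double bitsPerSecond = (totalBytesReceived * 8) / testDurationSeconds;

        System.out.printf("Throughput: %.1f requests/sec%n", requestsPerSecond);  // 150.0 requests/sec
        System.out.printf("Throughput: %.0f bits/sec (~%.1f Mbit/s)%n",
                bitsPerSecond, bitsPerSecond / 1_000_000);                        // ~6.0 Mbit/s
    }
}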

Q15. What are the automated Performance testing phases?

The phases involved in automated performance testing are:
  • Planning/Design: this is the primary phase, where the team gathers the performance testing requirements. Requirements can be business, technical, system and team requirements.
  • Build: this phase consists of automating the requirements collected during the design phase.
  • Execution: this is done in multiple phases and consists of various types of testing, such as baseline and benchmark testing.
  • Analyzing and tuning: during the performance test we capture details about the system, such as response times and system resource usage, to identify the major bottlenecks. After the bottlenecks are identified, we tune the system to improve overall performance.

Q16. What is Performance Testing?

Performance testing is performed to determine how the components of a system perform under a particular workload. It is generally measured in terms of response time for user activity and is designed to test the overall performance of the system under high load and stress conditions. It identifies drawbacks in the architectural design, which helps to tune the application. It includes:
  • Increasing the number of users interacting with the system.
  • Determining the response time.
  • Repeating the load consistently.
  • Monitoring the system components under controlled load.
  • Providing robust analysis and reporting engines.

Q17. What is baseline testing?

Baseline testing is testing performed on the application before coming to any conclusion. It can be either a verification or a validation activity, and it gives an idea of what the next stage has to do. It is a very important testing technique: if done properly, 85% of performance problems can be identified and solved when proper baseline tests are run.

Q18. What is the testing lifecycle?

There is no single standard testing life cycle, but it typically consists of the following phases:
  • Test Planning (Test Strategy, Test Plan, Test Bed Creation)
  • Test Development (Test Procedures, Test Scenarios, Test Cases)
  • Test Execution
  • Result Analysis (compare Expected to Actual results)
  • Defect Tracking
  • Reporting

Q19. What is the difference between baseline and benchmark testing?

The differences between baseline and benchmark testing are:
  • Baseline testing is the process of running a set of tests to capture performance information, which can be used as a point of reference when future changes are made to the application, whereas benchmarking is the process of comparing your system's performance against an industry standard published by some other organization.
  • Example: we can run a baseline test of an application, collect and analyze the results, then modify several indexes on a SQL Server database and run the same test again, using the previous results to determine whether the new results are better, worse, or about the same.

Configuring SharePoint Performance Counters

Troubleshooting a SharePoint performance issue is a daunting task. There are various logs that need to be analyzed and correlated to come to a conclusion, and sometimes an observation in one log leads to verification in other logs from around the same time. Performance counters play a vital role during a performance investigation: performance counter logs contain data collected over a period of time and provide a snapshot of resource consumption during that period.

It is very important to collect these logs on a continuous basis. This makes it possible to build a baseline of resource consumption over time. When an issue occurs, the counters can then be analyzed and compared against the baseline data to understand any abnormal behavior in the system.

1. A predefined set of counters needs to be defined and monitored on a continuous basis. The set of counters is different for web, index and database servers; the counter lists for the different server types are given at the end of the article.

2. Create a text file named WFECounterList_MOSS2007.txt (the name will change based on the server where you are configuring the counters) that contains the above set of counters.

3. Now we will use the command prompt to create the counters in perfmon. Note that we will create two sets of counters on every server: one for baseline and one for incidents. The baseline counters will run continuously on all the servers and poll every 1 minute to collect baseline data. The incident counters will be turned on only when there is an issue and will poll every 5 seconds.

Baseline counters:
logman create counter Perf_Baseline -s %COMPUTERNAME% -o C:\PerfLogs\Perf_Baseline_%COMPUTERNAME%.blg -f bin -v mmddhhmm -cf WFECounterList_MOSS2007.txt -si 00:01:00 -cnf 12:00:00 -b 4/29/2009 6:00AM -u "domain\userid" *

The name of the .txt file in the command needs to be updated based on the server where the counters are being created. This command creates a counter set named "Perf_Baseline" containing all of the counters listed in "WFECounterList_MOSS2007.txt". Data is collected at a frequency of 1 minute, a new file is created every 12 hours, and collection starts automatically at the begin time given by -b.


Incident counters:
logman create counter Perf_Incident -s %COMPUTERNAME% -o C:\PerfLogs\Perf_Incident_%COMPUTERNAME%.blg -f bin -v mmddhhmm -cf WFECounterList_MOSS2007.txt -si 00:00:05 -max 500 -u "domain\userid" *

The name of the .txt file in the command needs to be updated based on the server where the counters are being created. This command creates a counter set named "Perf_Incident" containing all of the counters listed in "WFECounterList_MOSS2007.txt". Data is collected at a frequency of 5 seconds and the log file size is capped at 500 MB (-max 500). This only creates the counters; they are not started, and need to be started manually when an issue occurs.

To start the incident-based counters, issue the following command from a command prompt:
logman start Perf_Incident -s %COMPUTERNAME%

The only difference between the two counter sets is the sampling frequency. Since the incident counters collect data every 5 seconds, they may cause overhead on the system if run continuously.
Web/Query server (File name: WFECounterList_MOSS2007.txt)


\ASP.NET(*)\*
\ASP.NET v2.0.50727\*
\ASP.NET Apps v2.0.50727(*)\*
\.NET CLR Networking(*)\*
\.NET CLR Memory(*)\*
\.NET CLR Exception(*)\*
\.NET CLR Loading(*)\*
\.NET Data Provider for SqlServer(*)\*
\Processor(*)\*
\Process(*)\*
\LogicalDisk(*)\*
\Memory\*
\Network Interface(*)\*
\PhysicalDisk(*)\*
\SharePoint Publishing Cache(*)\*
\System\*
\TCPv4\*
\TCPV6\*
\Threads\*
\Web Service(*)\*
\Web Service Cache\*

Index server (File name: IndexCounterList_MOSS2007.txt)

\ASP.NET(*)\*
\ASP.NET v2.0.50727\*
\ASP.NET Apps v2.0.50727(*)\*
\.NET CLR Networking(*)\*
\.NET CLR Memory(*)\*
\.NET CLR Exception(*)\*
\.NET CLR Loading(*)\*
\.NET Data Provider for SqlServer(*)\*
\Processor(*)\*
\Process(*)\*
\LogicalDisk(*)\*
\Memory\*
\Network Interface(*)\*
\PhysicalDisk(*)\*
\SharePoint Publishing Cache(*)\*
\Web Service(*)\*
\Web Service Cache\*
\SharePoint Search Archival Plugin(*)\*
\SharePoint Search Gatherer\*
\SharePoint Search Gatherer Project(*)\*
\SharePoint Search Indexer Catalogs(*)\*
\SharePoint Search Schema Plugin(*)\*
\Office Server Search Archival Plugin(*)\*
\Office Server Search Gatherer\*
\Office Server Search Gatherer Projects(*)\*
\Office Server Search Indexer Catalogs(*)\*
\Office Server Search Schema Plugin(*)\*
\System\*
\TCPv4\*
\TCPv6\*
\Threads\*

Database server (File name: SQLCounterList_MOSS2007.txt)

\.NET Data Provider for SqlServer(*)\*
\Processor(*)\*
\Process(*)\*
\LogicalDisk(*)\*
\Memory\*
\PhysicalDisk(*)\*
\Network Interface(*)\*
\NBT Connection(*)\*
\Server Work Queues(*)\*
\Server\*
\SQLServer:Access Methods\*
\SQLServer:Catalog Metadata(*)\*
\SQLServer:Exec Statistics(*)\*
\SQLServer:Wait Statistics(*)\*
\SQLServer:Broker Activation(*)\*
\SQLServer:Broker/DBM Transport\*
\SQLServer:Broker Statistics\*
\SQLServer:BufferManager\*
\SQLServer:Transactions\*
\SQLAgent:JobSteps(*)\*
\SQLServer:Memory Manager\*
\SQLServer:Cursor Manager By Type(*)\*
\SQLServer:Plan Cache(*)\*
\SQLServer:SQL Statistics\*
\SQLServer:SQL Errors(*)\*
\SQLServer:Databases(*)\*
\SQLServer:Locks(*)\*
\SQLServer:General Statistics\*
\SQLServer:Latches\*
\System\*
\TCPv4\*
\TCPv6\*



Performance counters baseline


SharePoint 2007 performance monitoring using perfmon counters