Archive for the ‘Analysis’ Category.

Web Page Rendering

Below is a step-by-step description of how a page gets displayed in your browser when you simply enter a URL in the address bar:

  1. You type a URL into the address bar of your preferred browser.
  2. The browser parses the URL to find the protocol, host, port, and path.
  3. It forms an HTTP request.
  4. To reach the host, it first needs to translate the human-readable hostname into an IP address, which it does by performing a DNS lookup.
  5. Then a socket is opened from the user's computer to that IP address, on the port specified (most often port 80).
  6. When the connection is open, the HTTP request is sent to the host.
  7. The host forwards the request to the server software (most often Apache) configured to listen on the specified port.
  8. The server inspects the request (most often only the path), and launches the server plugin needed to handle the request (corresponding to the server-side language you use: PHP, Java, .NET, Python, and so on).
  9. The plugin gets access to the full request, and starts to prepare an HTTP response.
  10. To construct the response a database is (most likely) accessed. A database search is made, based on parameters in the path (or data) of the request
  11. Data from the database, together with other information the plugin decides to add, is combined into a long string of text (probably HTML).
  12. The plugin combines that data with some meta data (in the form of HTTP headers), and sends the HTTP response back to the browser.
  13. The browser receives the response, and parses the HTML (which with 95% probability is broken) in the response
  14. A DOM tree is built out of the broken HTML
  15. New requests are made to the server for each new resource that is found in the HTML source (typically images, style sheets, and JavaScript files). Go back to step 3 and repeat for each resource.
  16. Stylesheets are parsed, and the rendering information in each gets attached to the matching node in the DOM tree
  17. Javascript is parsed and executed, and DOM nodes are moved and style information is updated accordingly
  18. The browser renders the page on the screen according to the DOM tree and the style information for each node
  19. You see the page on the screen

And yet we get annoyed that the response time is so high. But now at least I have some documentation to look at while waiting the remaining fraction of a second before the page renders.
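As a concrete illustration of steps 3 through 12, the request the browser writes to the socket and the response it gets back are just lines of text. A minimal exchange for http://www.example.com/index.html might look like the following (the header values are made up for the example):

GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 1270

<html> ... the HTML that gets parsed in steps 13-18 ... </html>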

Splunk Overview

Splunk is powerful and versatile IT search software that takes the pain out of tracking and utilizing the information in your data center. If you have Splunk, you won’t need complicated databases, connectors, custom parsers or controls–all that’s required is a web browser and your imagination. Splunk handles the rest.
Use Splunk to:

  • Continually index all of your IT data in real time.
  • Automatically discover useful information embedded in your data, so you don’t have to identify it yourself.
  • Search your physical and virtual IT infrastructure for literally anything of interest and get results in seconds.
  • Save searches and tag useful information, to make your system smarter.
  • Set up alerts to automate the monitoring of your system for specific recurring events.
  • Generate analytical reports with interactive charts, graphs, and tables and share them with others.
  • Share saved searches and reports with fellow Splunk users, and distribute their results to team members and project stakeholders via email.
  • Proactively review your IT systems to head off server downtimes and security incidents before they arise.
  • Design specialized, information-rich views and dashboards that fit the wide-ranging needs of your enterprise.

Index new data

Splunk offers a variety of flexible data input methods to index everything in your IT infrastructure in real time, including live log files, configurations, traps and alerts, messages, scripts, performance data, and statistics from all of your applications, servers, and network devices. Monitor file systems for script and configuration changes. Enable change monitoring on your file system or Windows registry. Capture archive files and SNMP trap data. Find and tail live application server stack traces and database audit tables. Connect to network ports to receive syslog and other network-based instrumentation.
No matter how you get the data, or what format it’s in, Splunk indexes it the same way–without any specific parsers or adapters to write or maintain. It stores both the raw data and the rich index in an efficient, compressed, filesystem-based datastore–with optional data signing and auditing if you need to prove data integrity.
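As a rough sketch of what a file-monitoring input can look like on a *nix Splunk instance, stanzas like the following would go in $SPLUNK_HOME/etc/system/local/inputs.conf (the paths, sourcetypes, and index shown are only illustrative; equivalent inputs can also be added through Splunk Web):

[monitor:///var/log/messages]
sourcetype = syslog
index = main
disabled = false

[monitor:///opt/myapp/logs/app.log]
sourcetype = myapp_log
index = main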

Search and investigate

Now you’ve got all that data in your system…what do you want to do with it? Start by using Splunk’s powerful search functionality to look for anything, not just a handful of predetermined fields. Combine time and term searches. Find errors across every tier of your IT infrastructure and track down configuration changes in the seconds before a system failure occurs. Splunk identifies fields from your records as you search, providing flexibility unparalleled by solutions that require setup of rigid field mapping rulesets ahead of time. Even if your system contains terabytes of data, Splunk enables you to search across it with precision.

Capture knowledge

Freeform searching on raw data is just the start. Enrich that data and improve the focus of your searches by adding your own knowledge about fields, events, and transactions. Tag high-priority assets, and annotate events according to their business function or audit requirement. Give a set of related server errors a single tag, and then devise searches that use that tag to isolate and report on events involving that set of errors. Save and share frequently-run searches. Splunk surpasses traditional approaches to log management by mapping knowledge to data at search time, rather than normalizing the data up front. It enables you to share searches, reports, and dashboards across the range of Splunk apps being used in your organization.

Automate monitoring

Any search can be run on a schedule, and scheduled searches can be set up to trigger notifications when specific conditions occur. This automated alerting functionality works across the wide range of components and technologies throughout your IT infrastructure–from applications to firewalls to access controls. Have Splunk send notifications via email or SNMP to other management consoles. Arrange for alerting actions to trigger scripts that perform activities such as restarting an application, server, or network device, or opening a trouble ticket. Set up alerts for known bad events and use sophisticated correlation via search to find known risk patterns such as brute force attacks, data leakage, and even application-level fraud.

Analyze and report

Splunk’s ability to quickly analyze massive amounts of data enables you to summarize any set of search results in the form of interactive charts, graphs, and tables. Generate reports on-the-fly that use statistical commands to trend metrics over time, compare top values, and report on the most and least frequent types of conditions. Visualize report results as interactive line, bar, column, pie, scatterplot and heat-map charts.
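For example, assuming the webaccess event type and status field used in the searching examples later in this post (they may be named differently in your deployment), a report that trends HTTP error codes over time could be generated with a search such as:

eventtype=webaccess status=40* OR status=50* | timechart span=1h count by status

The result can then be rendered as any of the chart types mentioned above.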

Searching in Splunk

The first time you use Splunk, you’ll probably start by just searching the raw data to investigate problems — whether it’s an application error, network performance problem, or security alert. Searching in Splunk is free form — you can use familiar Boolean operators, wildcards and quoted strings to construct your searches. Type in keywords, such as a username, an IP address, a particular message… You’re never limited to a few predetermined fields and you don’t need to confront a complicated query builder, learn a query language, or know what field to search on. You can search by time, host and source.

Go to the Search app

After logging into Splunk, you will see either the Welcome view or Splunk Home view.

  • If you’re in the Welcome view, select Launch search app.
  • If you’re in Splunk Home, select Search.
  • If you are in another app, select the Search app from the App menu, which is located in the upper right corner of the window.

This takes you to the Summary dashboard of the Search app.

Start with simple terms

To begin your Splunk search, type in terms you might expect to find in your event data. For example, if you want to find events that might be HTTP 404 errors, type in the keywords:

http 404

Your search results are all events that have both HTTP and 404 in the raw text; this may or may not be exactly what you want to find. For example, your search results will include events that have website URLs, which begin with "http://", and any instance of "404", including a string of characters like "ab/404".
You can narrow the search by adding more keywords:

http 404 "not found"

Enclosing keywords in quotes tells Splunk to search for literal, or exact, matches. If you search for "not" and "found" as separate keywords, Splunk returns events that have both keywords, though not necessarily the phrase "not found".
You can also use Boolean expressions to narrow your search further.

Add Boolean expressions

Splunk supports the Boolean operators: AND, OR, and NOT; the operators have to be capitalized. You can use parentheses to group Boolean expressions. For example, if you wanted all events for HTTP client errors not including 404 or 403, search with:

http client error NOT (403 OR 404)

In a Splunk search, the AND operator is implied; the previous search is the same as:

http AND client AND error NOT (403 OR 404)

This search returns all events that have the terms "HTTP", "client", and "error" and do not have the terms "403" or "404". Once again, the results may or may not be exactly what you want to find. Just as the earlier search for http 404 may include events you don’t want, this search may both include events you don’t want and exclude events you want.
Note: Splunk evaluates Boolean expressions in the following order: first, expressions within parentheses; then, OR clauses; finally, AND or NOT clauses.

Search with wildcards

Splunk supports the asterisk (*) wildcard for searching. Searching for * by itself means "match all" and returns all events up to the maximum limit. Searching for * as part of a word matches based on that word.
The simplest beginning search is the search for *. Because this searches your entire index and returns an unlimited number of events, it’s also not an efficient search. We recommend that you begin with a more specific search on your index.
If you wanted to see only events that matched HTTP client and server errors, you might search for:

http error (40* OR 50*)

This indicates to Splunk that you want events that have "HTTP" and "error" and 4xx and 5xx classes of HTTP status codes. Once again, though, this will result in many events that you may not want. For more specific searches, you can extract information and save it as custom fields.

Search with fields

When you index data, Splunk automatically adds fields to your event data for you. You can use these fields to search, edit the fields to make them more useful, and extract additional knowledge and save it as custom fields.
 Splunk lists fields that it has extracted in the Field Picker to the left of your search results in Splunk Web. Click a field name to see information about that field, add it to your search results, or filter your search to display only results that contain that field. When you filter your search with a field from the Field Picker, Splunk edits the search bar to include the selected field.
Alternately, you can type the field name and value directly into your search bar. A field name and value pair can be expressed in two ways: fieldname="fieldvalue" or fieldname=fieldvalue.
Note: Field names are case sensitive.
Let’s assume that the event type for your Web access logs is eventtype=webaccess and you saved a field called status for the HTTP status codes in your event data. Now, if you wanted to search for HTTP 404 errors, you can restrict your search to the specific field:
status=404

Use wildcards to match multiple field values

If you’re interested in seeing multiple values for the status field, you can use wildcards. For example, to search for Web access events that are HTTP client errors (4xx) or HTTP server errors (5xx), type:

eventtype=webaccess status=40* OR status=50*

Use comparison operators to match field values

You can use comparison operators to match a specific value or a range of field values.

Operator Example Result
= field=foo Field values that exactly match "foo".
!= field!=foo Field values that don't exactly match "foo".
< field<x Numerical field values that are less than x.
> field>x Numerical field values that are greater than x.
<= field<=x Numerical field values that are less than or equal to x.
>= field>=x Numerical field values that are greater than or equal to x.

Note: You can only use <, >, <=, and >= with numerical field values, and you can only use = and != with multi-valued fields.
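For example, using the webaccess events and status field described above, you could match only HTTP server errors with:

eventtype=webaccess status>=500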

P.S. Thanks to Manik

Performance bottleneck symptoms

CPU Bottleneck Symptoms:

Symptoms for CPU bottlenecks include the following,
The Processor(_Total)\% Processor Time counter (which measures the total utilization of your processor by all running processes) will be high. If the server typically runs at around 70% or 80% processor utilization, this is normally a good sign and means your machine is handling its load effectively and is not underutilized. Average processor utilization of around 20% or 30%, on the other hand, suggests that your machine is underutilized and may be a good candidate for server consolidation using Virtual Server or VMware.
To break this % Processor Time down further, monitor the counters Processor(_Total)\% Privileged Time and Processor(_Total)\% User Time, which respectively show processor utilization for kernel-mode and user-mode processes on your machine. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to effectively run other applications. If user-mode utilization is high, your server may be running too many roles, and you should either beef up the hardware by adding another processor or migrate an application or role to another box. A System\Processor Queue Length (an indication of how many threads are waiting for execution) consistently at 2 or more on a single-processor machine is a clear indication of a processor bottleneck. Also look at other counters like ASP\Requests Queued or ASP.NET\Requests Queued.
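One lightweight way to capture these processor counters over time is the built-in typeperf utility; the sample interval, sample count, and output file below are placeholders to adjust for your own run:

[cce lang="c"]
typeperf "\Processor(_Total)\% Processor Time" "\Processor(_Total)\% Privileged Time" "\Processor(_Total)\% User Time" "\System\Processor Queue Length" -si 5 -sc 120 -o C:\PerfLogs\cpu_baseline.csv
[/cce]

This collects 120 samples at 5-second intervals (10 minutes) into a CSV file that you can chart or feed into your analysis tool.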

Tips to find out Application server bottlenecks:

  1. Application server processing time increases sharply when the load is increased.
  2. One or more page components take more time even though the corresponding DB calls have low execution times.
  3. Static files have low response times whereas dynamic content (servlets, JSPs, etc.) takes more time.
  4. Network delay is negligible.
  5. The home page gets displayed in a few seconds even during the stress period (as it is served by the web server).
  6. Hits/sec and throughput remain low.
  7. The CPU, memory, or disk of the application server shows bottleneck symptoms.
  8. The number of established HTTP/HTTPS connections does not increase proportionally with the load.
  9. The number of new connections established is very high and the number of reused connections is very low.

Tips to find out Web server bottlenecks:

  1. The 'server time' portion of the response-time breakup increases.
  2. One or more page components of a transaction take more time even though the DB query has a low execution time.
  3. Static files have higher response times than dynamic content (servlets, JSPs, etc.).
  4. Network delay is negligible.
  5. The home page takes more time to display.
  6. Hits/sec on the web server is very low.
  7. The CPU, memory, or disk of the web server shows bottleneck symptoms.

Hardware Malfunctioning Symptoms:

  1. Watch System\Context Switches/sec (which measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode). If this counter suddenly starts increasing, it may be an indication of a malfunctioning device, especially if you are seeing a similar jump in the Processor(_Total)\Interrupts/sec counter on your machine.
  2. You may also want to check Processor(_Total)\% Privileged Time Counter and see if this counter shows a similar unexplained increase, as this may indicate problems with a device driver that is causing an additional hit on kernel mode processor utilization.
  3. If Processor(_Total)\Interrupts/sec does not correlate well with System\Context Switches/sec however, your sudden jump in context switches may instead mean that your application is hitting its scalability limit on your particular machine and you may need to scale out your application (for example by clustering) or possibly redesign how it handles user mode requests. In any case, it’s a good idea to monitor System\Context Switches/sec over a period of time to establish a baseline for this counter, and once you’ve done this then create a perfmon alert that will trigger when this counter deviates significantly from its observed mean value.

Memory Bottleneck Symptoms:

When it comes to the System memory, there are 3 things to monitor:

  1. Monitor Cache (Hits/Misses),
  2. Monitor Memory (Memory Available/sec, Process/Working Set),
  3. Monitor Paging (Pages Read/sec, Pages Input/sec, Page Faults/sec, % Disk Processing).

Memory\Available Bytes: if this counter is greater than 10% of the actual RAM in your machine, then you probably have more than enough RAM and don't need to worry. The Memory\Pages/sec counter indicates the number of paging operations to disk during the measuring interval, and this is the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. You can monitor Process(instance)\Working Set for each process instance to determine which process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault. A related counter is Memory\Cache Bytes, which measures the working set for the system, i.e. the number of allocated pages kernel threads can address without generating a page fault. Finally, another corroborating indicator of insufficient RAM is Memory\Transition Faults/sec, which measures how often recently trimmed pages on the standby list are re-referenced. If this counter slowly rises over time, it could also indicate that you're reaching a point where you no longer have enough RAM for your server to function well.

Disk Bottleneck Symptoms:

A bottleneck on a disk can significantly impact response time for applications running on your system. Watch the Physical Disk(instance)\Disk Transfers/sec counter for each physical disk; if it goes above 25 disk I/Os per second, you've got poor response time for your disk. Also track Physical Disk(instance)\% Idle Time, which measures the percentage of time that your hard disk is idle during the measurement interval; if this counter falls below 20%, you've likely got read/write requests queuing up for a disk that is unable to service them in a timely fashion. In this case it's time to upgrade your hardware to use faster disks or scale out your application to better handle the load. Look at the Physical Disk(instance)\Avg. Disk Queue Length and Physical Disk(instance)\Current Disk Queue Length counters to get more details on the queued-up requests.

Network Performance/Bottlenecks:

The first step is to monitor the network performance itself, to make sure it is good. There are some simple ways to do so. First, monitor whether you are getting the bandwidth you are supposed to get; the easiest way to find out is to compare the Current Bandwidth counter with your expected bandwidth. Also verify the rate at which the server sends and receives data. Network performance depends on two factors: the network cards configured on the servers and the interfaces (switches/routers) they are connected to.

Here are some of the counters to find network bottlenecks:

Network Interface: Current Bandwidth
This counter shows the current bandwidth of the network interface. Capture this counter value and correlate it with Bytes Received/sec, Bytes Sent/sec, and Bytes Total/sec.
Bytes Total/sec should stay below about half of your total bandwidth; if it does not, you can suspect a network bottleneck.

Network Interface: Bytes Total/sec:
To determine if your network connection is creating a bottleneck, compare the Network Interface: Bytes Total/sec counter to the total bandwidth of your network adapter card. To allow headroom for spikes in traffic, you should usually be using no more than 50 percent of capacity. If this number is very close to the capacity of the connection, and processor and memory use are moderate, then the connection may well be a problem. To determine the network utilization (throughput on a server’s network cards), you can check the following counters:

  1. Network Interface\Bytes Received/sec
  2. Network Interface\Bytes Sent/sec
  3. Network Interface\Bytes Total/sec
  4. Network Interface\Current Bandwidth

If the Bytes Total/sec value is more than 50 percent of the available network bandwidth under an average user/work load, then your server is likely to have problems under peak load conditions. Make sure you compare network counter values with Physical Disk\% Disk Time and Processor\% Processor Time utilization. If the disk time and processor time values are low but the network values are very high, there might be a problem with your network.
There are 2 ways to solve this problem:

  1. By optimizing the network card settings
  2. By adding an additional network card.
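To baseline the memory, disk, and network counters discussed in the sections above, typeperf can again be used; the interval, sample count, and output path are placeholders, and the Network Interface instance names on your server may differ:

[cce lang="c"]
typeperf "\Memory\Available MBytes" "\Memory\Pages/sec" "\PhysicalDisk(_Total)\% Idle Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Network Interface(*)\Bytes Total/sec" "\Network Interface(*)\Current Bandwidth" -si 15 -sc 240 -o C:\PerfLogs\server_baseline.csv
[/cce]

This collects an hour of data at 15-second intervals, which you can then compare against the thresholds described above (Available MBytes, % Idle Time, queue lengths, and bytes versus bandwidth).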

Analyzing IIS logs with LogParser

When users access your server running IIS, IIS logs the information. The logs provide valuable information that you can use to identify any unauthorized attempts to compromise your Web server.
Depending on the amount of traffic to your Web site, the size of your log file (or the number of log files) can consume valuable disk space, memory resources, and CPU cycles. You might need to balance the gathering of detailed data with the need to limit files to a manageable size and number. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information, such as who has visited your site, what the visitor viewed, and when the information was last viewed.

IIS log file format:

IIS log file format is a fixed (meaning that it cannot be customized) ASCII format. This file format records more information than other log file formats, including basic items, such as the IP address of the user, user name, request date and time, service status code, and number of bytes received. In addition, IIS log file format includes detailed items, such as the elapsed time, number of bytes sent, action (for example, a download carried out by a GET command), and target file. The IIS log file is an easier format to read than the other ASCII formats because the information is separated by commas, while most other ASCII log file formats use spaces for separators. Time is recorded as local time.

IIS log file location:

The IIS logs provide a great deal of information about the activity of a Web application. You can find the IIS logs in

systemroot\System32\LogFiles\W3SVCnumber, where number is the site ID for the Web site.

LogParser:

LogParser is a command-line utility. By default it works like a "data processing pipeline": it takes an SQL expression on the command line and outputs the records matching that expression.

Download LogParser from:
http://www.microsoft.com/downloads/en/details.aspx?familyid=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en

W3C Extended Logging Field Definitions


Prefix Meaning
s- Server actions
c- Client actions
cs- Client-to-server actions.
sc- Server-to-client actions.


Field Appears As Description
Date date The date that the activity occurred.
Time time The time that the activity occurred.
Client IP Address c-ip The IP address of the client that accessed your server.
User Name cs-username The name of the authenticated user who accessed your server. This does not include anonymous users, who are represented by a hyphen (-).
Service Name s-sitename The Internet service and instance number that was accessed by a client.
Server Name s-computername The name of the server on which the log entry was generated.
Server IP Address s-ip The IP address of the server on which the log entry was generated.
Server Port s-port The port number the client is connected to.
Method cs-method The action the client was trying to perform (for example, a GET method).
URI Stem cs-uri-stem The resource accessed; for example, Default.htm.
URI Query cs-uri-query The query, if any, the client was trying to perform.
Protocol Status sc-status The status of the action, in HTTP or FTP terms.
Win32® Status sc-win32-status The status of the action, in terms used by Microsoft Windows®.
Bytes Sent sc-bytes The number of bytes sent by the server.
Bytes Received cs-bytes The number of bytes received by the server.
Time Taken time-taken The duration of time, in milliseconds, that the action consumed.
Protocol Version cs-version The protocol (HTTP, FTP) version used by the client. For HTTP this will be either HTTP 1.0 or HTTP 1.1.
Host cs-host Displays the content of the host header.
User Agent cs(User-Agent) The browser used on the client.
Cookie cs(Cookie) The content of the cookie sent or received, if any.
Referrer cs(Referer) The previous site visited by the user. This site provided a link to the current site.



The following is an example of a record in the extended log format that was produced by the Microsoft Internet Information Server (IIS):
——————————————————————————–
#Software: Microsoft Internet Information Server 6.0
#Version: 1.0
#Date: 2011-05-09 22:48:39
#Fields: date time c-ip cs-username s-ip cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-version cs(User-Agent) cs(Cookie) cs(Referer)

2011-05-09 22:48:39 192.168.1.5 - 173.201.216.31 GET /GreenBlue.jpg - 200 540 324 157 HTTP/1.0 Mozilla/4.0+(compatible;+MSIE+4.01;+Windows+95) USERID=CustomerA;+IMPID=01234 http://www.punebids.com

Procedure:

  1. First note the date and timings of the test run.
  2. Collect the IIS log files of that particular timeframe from the Web Servers (specific to your Web Application) and put it in a single folder
  3. Write query for data collection from logs and run it in LogParser

To find URL Hit count:



[cce lang="c"]
logparser "SELECT cs-uri-stem AS Url, count(cs-uri-stem) AS Hits FROM 'C:\Logs\WebServers1-4\*.*' WHERE date=to_timestamp('2011-05-10','yyyy-MM-dd') and time>'01:55:00' and time<'02:35:00' and cs-uri-stem like '%asp%' GROUP BY Url ORDER BY Hits DESC" -i:IISW3C -o:csv >"C:\Logs\CollectedUrlHits.txt"
[/cce]


Output:

Url,Hits

/Framework/website/Default.aspx,15678

/isvs/consulting/userprofile.asp,897

/DNA/Common/Portal/ClientHome.aspx,75

/DNA/Common/Clients/Clientlist.aspx,75

/DNA/Common/portal/DNAuserlist.aspx,75

Statistics:

Elements Processed: 245646

Elements output: 5

Execution time: 5.80 seconds

To find the HTTP Error count:



[cce lang="c"]
logparser "SELECT cs-uri-stem AS Url, count(cs-uri-stem) AS Hits FROM 'C:\Logs\WebServers1-4\*.*' WHERE date=to_timestamp('2011-05-10','yyyy-MM-dd') and time>'01:55:00' and time<'02:35:00' and cs-uri-stem like '%asp%' and sc-status=500 GROUP BY Url ORDER BY Hits DESC" -i:IISW3C -o:csv >"C:\Logs\Collected500Errors.txt"
[/cce]

Output:

Url,Hits

/Framework/website/Default.aspx,1

/isvs/consulting/userprofile.asp,2

/DNA/Common/Portal/ClientHome.aspx,1

Statistics:

Elements Processed: 245646

Elements output: 3

Execution time: 3.80 seconds
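Along the same lines, a query such as the following (a sketch only; adjust the log folder, date, and time window to your own run) reports the average time-taken per URL, which is handy for spotting slow pages rather than just busy ones:

[cce lang="c"]
logparser "SELECT cs-uri-stem AS Url, COUNT(*) AS Hits, AVG(time-taken) AS AvgTimeMs FROM 'C:\Logs\WebServers1-4\*.*' WHERE date=to_timestamp('2011-05-10','yyyy-MM-dd') and time>'01:55:00' and time<'02:35:00' GROUP BY Url ORDER BY AvgTimeMs DESC" -i:IISW3C -o:csv >"C:\Logs\AvgResponseTimes.csv"
[/cce]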





P.S. : Thanks to Rahul D.

Managing Application Pools

When you run IIS 6.0 in worker process isolation mode, you can separate different Web applications and Web sites into groups known as application pools. An application pool is a group of one or more URLs that are served by a worker process or set of worker processes. Any Web directory or virtual directory can be assigned to an application pool.

A worker process is user-mode code whose role is to process requests, such as processing requests to return a static page, invoking an ISAPI extension or filter, or running a Common Gateway Interface (CGI) handler.

Every application within an application pool shares the same worker process. Because each worker process operates as a separate instance of the worker process executable, W3wp.exe, the worker process that services one application pool is separated from the worker process that services another. Each separate worker process provides a process boundary so that when an application is assigned to one application pool, problems in other application pools do not affect the application. This ensures that if a worker process fails, it does not affect the applications running in other application pools.

Use multiple application pools when you want to help ensure that applications and Web sites are confidential and secure. For example, an enterprise organization might place its human resources Web site and its finance Web site on the same server, but in different application pools. Likewise, an ISP that hosts Web sites and applications for competing companies might run each company’s Web services on the same server, but in different application pools. Using different application pools to isolate applications helps prevent one customer from accessing, changing, or using confidential information from another customer’s site.

Application Pool Actions:

To manage application pools, go to IIS Manager and click the application pool that you want to configure for recycling.

Elements and their descriptions:

Element Name Description
Add Application Pool Opens the Add Application Pool dialog box from which you can add an application pool to the Web server.
Set Application Pool Defaults Opens the Application Pool Defaults dialog box from which you can set default values that apply to all application pools that you add to the Web server.
Start Starts the selected application pool.
Stop Stops the selected application pool. This causes the Windows Process Activation Service (WAS) to shut down all running worker processes serving that application pool. An administrator must restart a stopped application pool or else requests made to applications in that application pool will receive HTTP 503-Service Unavailable errors.
Recycle Stops and restarts the selected application pool. Restarting an application pool causes the application pool to be temporarily unavailable until the restart is complete.
Basic Settings Opens the Edit Application Pool dialog box from which you can edit the settings that were specified when the application pool was created. This action is available only when an item is selected from the list on the feature page.
Recycling Opens the Edit Application Pool Recycling Settings wizard from which you can specify conditions under which to recycle an application pool and configure how recycling events are logged.
Advanced Settings Opens the Advanced Settings dialog box from which you can configure advanced settings for the selected application pool.
Rename Enables the Name field of the selected application pool so that you can rename the application pool.
Remove Removes the item that is selected from the list on the feature page.
View Applications Opens the Applications feature page from which you can view the applications that belong to the selected application pool.

Edit Application Pool Recycling Settings:

Use the Recycling Conditions page of the Edit Application Pool Recycling Settings Wizard to configure IIS to periodically restart worker processes in an application pool. This can help you to recover valuable system resources and to better manage faulty worker processes.

Elements and their descriptions:

Element Name Description
Regular time intervals (in minutes) Select this option to specify a time interval, in minutes, at which you want IIS to recycle the worker process. You might choose this option if you have an application that causes problems when it runs for an extended time. Based on what you know about the application, you should set the value to be less than the length of time elapsed before application failure.
Fixed number of requests Select this option to specify the number of requests after which you want IIS to recycle the worker process. You might choose this option if you have an application that causes problems after reaching a certain number of requests. Based on what you know about the application, you should configure the value to be less than the number of requests processed before application failure.
Specific time(s) Select this option to specify a time or times at which you want IIS to recycle the worker process in a 24-hour period. For example, to recycle a worker process at 4:30 A.M. and 4:30 P.M., enter 4:30 AM, 4:30 PM. The time that you specify uses the local time on the Web server. You might choose this option if you have an application that causes problems when it runs for an extended time and you want to recycle the application pool at a specific time, such as a time that is late at night or early in the morning, to avoid a negative impact on users. Based on what you know about the application, you should set the interval to be frequent enough to prevent application failure.
Virtual memory usage (in KB) Select this option to specify the maximum number of kilobytes of your system’s common virtual memory that can be used by a worker process before that process is recycled. You might choose this option when you notice a steady increase in the virtual memory used on your server. This might indicate that an application reserves memory multiple times, which fragments the memory heap. Entering too high a value can severely decrease system performance. At first, you should set the virtual memory threshold to be less than 70 percent of available virtual memory, and then adjust the setting if you have to.
Private memory usage (in KB) Select this option to specify the maximum number of kilobytes of privately allocated system physical memory that can be used by a worker process before the process is recycled. You might choose this option when you have an application that leaks memory. Entering too high a value can severely decrease system performance. At first, you should set this value to be less than 60 percent of the available physical memory on the server, and then adjust this setting if you have to.
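On IIS 7.0 and later, the same recycling conditions can also be set from the command line with Appcmd.exe (the tool mentioned under "On-demand" below). The commands here are a rough sketch assuming the default "DefaultAppPool" name and illustrative threshold values; verify the property names against your IIS version before relying on them:

[cce lang="c"]
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /recycling.periodicRestart.requests:100000
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /recycling.periodicRestart.privateMemory:614400
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /+recycling.periodicRestart.schedule.[value='03:00:00']
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"DefaultAppPool"
[/cce]

The first three commands correspond to the "Fixed number of requests", "Private memory usage (in KB)", and "Specific time(s)" conditions in the table above; the last recycles the pool on demand.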

Recycling Events to Log:

Use the Recycling Events to Log page of the Edit Application Pool Recycling Settings Wizard to configure IIS to log an event when a worker process is recycled.

You can configure IIS to log information for recycling events that you configure, such as at a fixed interval or for recycling events that occur at runtime, such as when an ISAPI declares itself unhealthy.

Elements and their descriptions:

Element Name Description
Regular time intervals Select this option to log an event when a worker process is recycled at a specified time interval. This option is available only when the Regular time intervals (in minutes) option is selected and a time interval is specified on the previous wizard page.
Virtual memory usage Select this option to log an event when a worker process is recycled after using a specified amount of virtual memory. This option is available only when the Virtual memory usage (in KB) option is selected and a number of kilobytes is specified on the previous wizard page.
Number of requests Select this option to log an event when a worker process is recycled after reaching a specified number of requests. This option is available only when the Fixed number of requests option is selected and a number of requests is specified on the previous wizard page.
Scheduled time(s) Select this option to log an event when a worker process is recycled at a specified time. This option is available only when the Specific time(s) option is selected and a time is specified on the previous wizard page.
Private memory usage Select this option to log an event when a worker process is recycled after using a specified amount of physical memory. This option is available only when the Private memory usage (in KB) option is selected and a number of kilobytes is specified on the previous wizard page.
On-demand Select this option to log an event when you recycle a worker process by using IIS Manager or Appcmd.exe to correct a problem.
Configuration changes Select this option to log an event when a change to configuration causes the application pool to recycle.
Unhealthy ISAPI Select this option to log an event when an ISAPI extension reports to the worker process that it is unhealthy.





P.S. : Thanks to Kranti.

Analyzing Processor

Monitor the counters below to determine whether the processor is the cause of low performance:

  • Processor: %Processor Time often exceeds 85%.
  • System: Processor Queue Length is often greater than 2.
  • On multiprocessor systems, System: % Total Processor Time often exceeds 50%.

But these symptoms don’t always indicate a processor problem. And even when the processor is the problem, adding extra processors doesn’t always solve it.

Understanding the Processor Counters

It is important to understand the components of the primary processor activity counters, and to distinguish them from each other.

Counter Description
System: % Total Processor Time For what proportion of the sample interval were all processors busy?
A measure of activity on all processors. In a multiprocessor computer, this is equal to the sum of Processor: % Processor Time on all processors divided by the number of processors. On single-processor computers, it is equal to Processor: % Processor time, although the values may vary due to different sampling time.
System: Processor Queue Length How many threads are ready, but have to wait for a processor?
This is an instantaneous count, not an average, so it’s best viewed in charts, rather than reports. Unlike disk queue counters, it counts only waiting threads, not those being serviced.
The queue length counter is on the System object because there is a single queue even when there are multiple processors on the computer.
Processor: % Processor Time For what proportion of the sample interval was each processor busy?
This counter measures the percentage of time the thread of the Idle process is running, subtracts it from 100%, and displays the difference.
This counter is equivalent to Task Manager’s CPU Usage counter.
Processor: % User Time
Processor: % Privileged Time
How often were all processors executing threads running in user mode and in privileged mode?
Threads running in user mode are probably running in their own application code. Threads running in privileged mode are using operating system services.
The user time and privileged time counters on the System and Processor objects do not always sum to 100%. They are measures of non-Idle time, so they sum to the total of non-idle time.
For example, if the processor was running the Idle thread for 85% of the time, the sum of Processor: % User Time and Processor: % Privileged Time would be 15%.
Process: % Processor Time For what proportion of the sample interval was the processor running the threads of this process?
This counter sums the processor time of each thread of the process over the sample interval.
Process: % Processor Time: _Total For what proportion of the sample interval was the processor processing?
This counter sums the time all threads are running on the processor, including the thread of the Idle process on each processor, which runs to occupy the processor when no other threads are scheduled.
The value of Process: % Processor Time: _Total is 100% except when the processor is interrupted. (100% processor time = Process: % Processor Time: Total + Processor: % Interrupt Time + Processor: % DPC Time) This counter differs significantly from Processor: % Processor Time, which excludes Idle.
Process: % User Time
Process: % Privileged Time
How often are the threads of the process running in its own application code (or the code of another user-mode process)? How often are the threads of the process running in operating system code?
Process: % User Time and Process: % Privileged Time sum to Process: % Processor Time.

Key Performance Indicator


Category Counter Description Optimal Value What if not Optimal Can Cause
Processor System\Context Switches/sec This counter gives the combined rate at which all processors switch from one thread to another. It is indicative of whether hardware devices are functioning properly. Context switches per request (Context Switches/sec divided by Webservice\Total Method Requests/sec) should be low. As a general rule, context-switching rates of less than 5,000 per second per processor are not worth worrying about. If context-switching rates exceed 15,000 per second per processor, then there is a constraint. If this counter suddenly starts to increase, it may be an indication of a malfunctioning device. If CPU utilization is low and the level of context switching is very low, it could mean that threads are getting blocked. Increased % Processor Time
Processor Processor(_Total)\% Processor Time % Processor Time is the percentage of elapsed time that the processor spends executing non-Idle threads. Less than 85 percent. This counter suggests either that there is too much load on the system or that some other factor is causing the CPU to shoot up; use it together with % Privileged Time or Processor Queue Length. System instability
Processor System\Processor Queue Length Processor Queue Length indicates the number of threads that are ready but cannot be executed because the processor is busy with other threads. Less than 2 This counter indicates that there is too much load on the system or that the application threading logic needs improvement. If % Processor Time is constantly 85 percent or higher with a queue length of 2 or more, then the CPU is the bottleneck. If CPU time is low with a queue length of 2 or more, then the threading logic should be investigated. System instability
Disk LogicalDisk\% Idle Time % Idle Time reports the percentage of time during the sample interval that the disk was idle. Greater than 5% A value of less than 5% means the disk is really busy; the value should typically be greater than 20%. Increased % Processor Time
Disk LogicalDisk\Avg. Disk sec/Read Avg. Disk sec/Read is the average time, in seconds, of a read of data from the disk. Less than .015 sec If the value is more than .025 sec, the disk is a bottleneck and response time is affected. Increased response time
Disk LogicalDisk\Avg. Disk sec/Write Avg. Disk sec/Write is the average time, in seconds, of a write of data to the disk. Less than .015 sec If the value is more than .025 sec, the disk is a bottleneck and response time is affected. Increased response time
Memory Memory\Pool Nonpaged Bytes This counter displays the last observed value of an area of system memory that contains objects that cannot be written to disk because they are still being referenced. Steady, or no more than a 10 percent increase since system start-up An increase in this counter indicates a potential memory leak. System instability
Memory Memory\Available MBytes This counter gives the amount of physical memory, in megabytes, available to processes running on the computer. 25% of physical memory A lower value means the application is not getting enough memory to operate and could result in application slowness; a gradual decline in available memory may indicate a memory leak. Memory leak
Memory Memory\Pages/sec Pages/sec is the rate at which pages are read from or written to disk to resolve hard page faults. Multiply this value by Physical Disk\Avg. Disk sec/Transfer; if the product of those two counters exceeds 0.1, RAM should be added, because paging is taking more than 10 percent of disk access time. Memory bottleneck
Network Network Interface\Bytes Total/sec Indicates the rate at which data is sent and received on the network adapter. Less than 80% of the network bandwidth This counter indicates whether the network bandwidth on the adapter is becoming a bottleneck. Adapter bottleneck
Server Server\Bytes Total/sec Indicates the rate at which data is sent and received on the network. Less than 50% of the network bandwidth This counter indicates whether the network bandwidth is becoming a bottleneck. Bandwidth bottleneck

Memory Analysis

Memory analysis is important for improving a web application's performance. To detect a shortage of memory, you can look at the frequency of paging.

Paging is the process of moving pages (blocks of data) between RAM and the hard disk to free memory for other processes. Because of paging you can use more memory than physically exists, but excessive paging causes low performance.

Monitoring Memory Usage

To monitor for a low-memory condition, use the following object counters:

  • Memory: Available Bytes

The Available Bytes counter indicates how many bytes of memory are currently available for use by processes.

Low values (e.g. 10 MB) for the Available Bytes counter can indicate that there is an overall shortage of memory on the server or that an application is not releasing memory.

  • Memory: Pages/sec

The Pages/sec counter indicates the number of pages that either were retrieved from hard disk due to hard page faults or written to hard disk to free space in the working set due to page faults.

A high rate for the Pages/sec counter could indicate excessive paging.

  • Memory: Page Faults/sec

Monitor the Memory: Page Faults/sec counter to make sure that the disk activity is not caused by paging.

Page Faults/sec is the sum of hard and soft page faults. A soft page fault occurs when the requested page is found elsewhere in physical memory. A hard page fault occurs when the requested page must be retrieved from disk.

Monitoring Excessive Paging Activity

Since paging involves disk activity, you can monitor paging by watching hard disk activity. To do that, track disk usage counters such as the following along with the memory counters:

  • Logical Disk\% Disk Time
  • Physical Disk\Avg. Disk Queue Length

If a low rate of page-read operations coincides with high values for % Disk Time and Avg. Disk Queue Length, there could be a disk bottleneck. However, if an increase in queue length is not accompanied by a decrease in the pages-read rate, then a memory shortage exists.

What is 90th Percentile Response Time?

The 90th percentile response time value is the value for which 90% of the data points are smaller and 10% are bigger.

To calculate the 90th percentile value:

1. Sort the transaction instances by their value.

2. Remove the top 10% of instances.

3. The highest value left is the 90th percentile.

Consider the below example:

There are 10 instances of transaction “TRANSACTION 01” with the values

4 sec 2 sec 3 sec 1 sec 5 sec 17 sec 8 sec 7 sec 6 sec 10 sec



1. Sort the values from best to worst:

1 sec 2 sec 3 sec 4 sec 5 sec 6 sec 7 sec 8 sec 10 sec 17 sec



2. Remove the top 10%; in our case, that is the value "17 sec":

1 sec 2 sec 3 sec 4 sec 5 sec 6 sec 7 sec 8 sec 10 sec



3. The highest value left is the 90th percentile, i.e. "10 sec" is the 90th percentile value.
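The same calculation is easy to script when you have raw response times outside of a testing tool. Below is a minimal C sketch of the method described above (sort, drop the top 10%, take the highest remaining value); note that different tools interpolate percentiles slightly differently:

[cce lang="c"]
#include <stdio.h>
#include <stdlib.h>

/* qsort comparison callback: ascending order of doubles. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a;
    double y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    /* The ten TRANSACTION 01 response times from the example above, in seconds. */
    double times[] = {4, 2, 3, 1, 5, 17, 8, 7, 6, 10};
    int n = sizeof(times) / sizeof(times[0]);

    /* Step 1: sort the values from best to worst. */
    qsort(times, n, sizeof(double), cmp_double);

    /* Steps 2 and 3: drop the top 10% and take the highest remaining value.
       With n = 10 this removes one value (17 sec) and leaves 10 sec. */
    int idx = (int)(0.9 * n) - 1;  /* zero-based index of the 90th percentile */
    printf("90th percentile response time: %g sec\n", times[idx]);

    return 0;
}
[/cce]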

Interview Questions

  1. What is load testing? – Load testing verifies that the application works fine under the load that results from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods.
  2. What is Performance testing? – Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? – Yes. Version 9.5.
  4. Explain the Load testing process?
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives. Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions. Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us. Step 4: Running the scenario.
    We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers. Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors. Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use Loadrunner’s graphs and reports to analyze the application’s performance.
  5. When do you do load and performance Testing? – We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? – The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? – The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play back the script in multi user mode? – The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a Vuser script is executed by a number of Vusers in a group.
  9. What is a rendezvous point? – You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
  10. What is a scenario? – A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? – We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? – Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. Better simulate the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? – Correlation is used to obtain data that is unique for each run of the script and that is generated by nested queries. Correlation provides that value at run time, avoiding errors arising out of duplicate values and also optimizing the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific, and values are replaced by data created according to these rules. In manual correlation, we scan for the value we want to correlate and then use create correlation to correlate it.
  14. How do you find out where correlation is required? Give few examples from your projects? – Two ways: first, we can scan for correlations and see the list of values that can be correlated, and from this pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values that need to be correlated. In my project, there was a unique ID generated for each customer, the Insurance Number; it was generated automatically, it was sequential, and its value was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.
  15. Where do you set automatic correlation options? – Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.
  16. What is a function to capture dynamic values in the web Vuser script? – The web_reg_save_param function saves dynamic data information to a parameter (see the sketch after this list).
  17. When do you disable log in Virtual User Generator, When do you choose standard and extended logs? – Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: When you select
    Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled Extended Log Option: Select
    extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the extended log options.
  18. How do you debug a LoadRunner script? – VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user-defined functions in LR? Give me a few functions you wrote in your previous project. – Before we create user-defined functions we need to create the external library (DLL) containing the function and add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime and GetPlatform are some of the user-defined functions used in my earlier project (see the DLL sketch after this list).
  20. What are the changes you can make in run-time settings? – The run-time settings we commonly change are: a) Pacing – contains the iteration count. b) Log – here we can disable logging or choose between the Standard log and the Extended log. c) Think Time – offers options such as Ignore think time and Replay think time. d) General – under the General tab we can set whether Vusers run as a process or as threads, and whether to define each step as a transaction.
  21. Where do you set Iteration for Vuser testing? – We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? – Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? – This option is used to gradually increase the number of Vusers, and hence the load, on the server. An initial value is set and a value to wait between intervals can be specified. To set ramp up, go to the ‘Scenario Scheduling Options’.
  24. What is the advantage of running the Vuser as thread? – VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? – The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status “Stopped”. For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings (see the abort-on-error sketch after this list).
  26. What is the relation between Response Time and Throughput? – The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the configuration of your systems. – The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This client configuration should match the overall system configuration, which includes the network infrastructure, the web server, the database server and any other components of the larger system, so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? – Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually presented as response time, throughput, hits per second and network delay graphs.
  29. If web server, database and Network are all fine where could be the problem? – The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? – Using Web Resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second and the number of downloaded pages per second.
  31. How did you find database related issues? – By running the Database monitor, with the help of the Database Resource graphs, we can find database related issues. For example, we can specify the resources we want to measure before running the Controller and then review the database related issues in the resulting graphs.
  32. Explain all the web recording options. – Here we specify HTML-based or URL-based recording.
  33. What is the difference between Overlay graph and Correlate graph? – Overlay Graph: overlays the contents of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph’s values and the right Y-axis shows the values of the graph that was merged. Correlate Graph: plots the Y-axes of two graphs against each other. The active graph’s Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph’s Y-axis.
  34. How did you plan the load? What are the criteria? – A load test is planned to decide the number of users, what kind of machines we are going to use and from where they will be run. It is based on two important documents, the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us the number of users for a particular transaction and the timing of the load; the peak usage and off-usage periods are decided from this diagram. The Transaction Profile gives us the transaction names and their priority levels with regard to the scenario we are designing.
  35. What does vuser_init action contain? – Vuser_init action contains procedures to login to a server.
  36. What does vuser_end action contain? – Vuser_end section contains log off procedures.
  37. What is think time? How do you change the threshold? – Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as the think time. Changing the threshold: the threshold level is the level below which recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen (see the think time sketch after this list).
  38. What is the difference between standard log and extended log? – The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about parameter substitution, data returned by the server, or an advanced trace.
  39. Explain the following functions: lr_debug_message – sends a debug message to the output log when the specified message class is set. lr_output_message – sends notifications to the Controller Output window and the Vuser log file. lr_error_message – sends an error message to the LoadRunner Output window. lrd_stmt – associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed. lrd_fetch – fetches the next row from the result set. (See the logging functions sketch after this list.)
  40. Throughput – If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of goals in a Goal-Oriented Scenario – LoadRunner provides five different types of goals in a goal-oriented scenario: 1) the number of concurrent Vusers, 2) the number of hits per second, 3) the number of transactions per second, 4) the number of pages per minute, 5) the transaction response time that you want your scenario to reach.
  42. Analysis scenario (bottlenecks): In the Running Vusers graph correlated with the Response Time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction increases very gradually; in other words, the average response time rises steadily as the load rises. At 56 Vusers there is a sudden, sharp increase in the average response time, and we say that the test broke the server; that is the mean time between failures (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
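
A minimal parameterization sketch for item 12, assuming a parameter named UserName has been defined in VuGen’s parameter list (for example, backed by a data file); the URL is illustrative:

    Action()
    {
        /* lr_eval_string substitutes the current value of the parameter,
           so each iteration (and each Vuser) can send different data. */
        lr_output_message("Logging in as: %s", lr_eval_string("{UserName}"));

        /* Parameters can also be embedded directly in web function arguments. */
        web_url("login",
                "URL=http://example.com/login?user={UserName}",
                LAST);

        return 0;
    }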
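
For items 13 and 16, a sketch of manual correlation with web_reg_save_param; the boundaries, the SessionId parameter name and the URLs are assumptions and would have to match the real server response:

    Action()
    {
        /* Register the capture BEFORE the step whose response contains the
           dynamic value (here, a hypothetical session id). */
        web_reg_save_param("SessionId",
                           "LB=name=\"session_id\" value=\"",
                           "RB=\"",
                           "Ord=1",
                           LAST);

        web_url("login_page", "URL=http://example.com/login", LAST);

        /* Reuse the captured value instead of the hard-coded one that was
           recorded, so each run sends its own fresh value to the server. */
        web_url("book_itinerary",
                "URL=http://example.com/book?session_id={SessionId}",
                LAST);

        return 0;
    }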
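
For item 18, a sketch that raises the debug message class around a single suspect step with lr_set_debug_message; the step and URL are illustrative:

    Action()
    {
        /* Turn on extended logging plus parameter substitution tracing
           only for the section being investigated. */
        lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                             LR_SWITCH_ON);

        web_url("suspect_step", "URL=http://example.com/report", LAST);

        /* Restore the normal logging level for the rest of the script. */
        lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS,
                             LR_SWITCH_OFF);

        return 0;
    }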
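
For item 19, a sketch of a user-defined function; GetVersion, the mylib.dll name and the returned string are illustrative. The exported function follows the char* (char*, char*) format mentioned above, and the DLL is built outside VuGen and copied to the VuGen bin directory:

    /* mylib.c - built separately into mylib.dll */
    #include <string.h>

    __declspec(dllexport) char* GetVersion(char* buffer, char* unused)
    {
        strcpy(buffer, "1.0.3");   /* illustrative value only */
        return buffer;
    }

Inside the Vuser script, lr_load_dll makes the exported function callable:

    extern char* GetVersion(char* buffer, char* unused);

    vuser_init()
    {
        char version[32];

        lr_load_dll("mylib.dll");
        lr_output_message("Build under test: %s", GetVersion(version, ""));
        return 0;
    }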
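
For item 25, a sketch of aborting on a specific error condition; lr_continue_on_error(0) is the scripted counterpart of unchecking the “Continue on error” option, web_get_int_property(HTTP_INFO_RETURN_CODE) reads the last HTTP status, and the URL and the 400 threshold are assumptions:

    Action()
    {
        int status;

        lr_continue_on_error(0);    /* do not keep going after errors */

        web_url("checkout", "URL=http://example.com/checkout", LAST);

        status = web_get_int_property(HTTP_INFO_RETURN_CODE);
        if (status >= 400) {
            lr_error_message("Checkout failed with HTTP %d, aborting Vuser", status);
            lr_abort();             /* runs vuser_end and marks the Vuser "Stopped" */
        }

        return 0;
    }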
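
For item 37, a sketch of explicit think time between two steps; whether lr_think_time is replayed as recorded, ignored or scaled is decided by the Think Time run-time settings, and the 8 seconds and URLs are illustrative:

    Action()
    {
        web_url("search_results", "URL=http://example.com/search?q=flights", LAST);

        /* Emulate the user reviewing the results before the next action. */
        lr_think_time(8);

        web_url("select_flight", "URL=http://example.com/select?id=1", LAST);

        return 0;
    }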
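
For item 39, a sketch that contrasts the three lr_* message functions listed above (the lrd_* database functions are omitted); the message texts are illustrative:

    Action()
    {
        /* Goes to the Vuser log and the Controller Output window. */
        lr_output_message("Iteration started");

        /* Printed only when the given message class is active (see item 18). */
        lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG, "Reached the payment step");

        /* Flagged as an error in the LoadRunner Output window. */
        lr_error_message("Payment gateway returned an unexpected response");

        return 0;
    }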

Analysis

Web application performance analysis plays an important role in quantifying a Web application’s quality of service and in designing the server’s configuration. The analysis method is based on comprehensive load testing: a series of load tests is run on the client side while resource consumption is measured on the server side. A performance model is then built from these test results, which gives us a way to quantify and analyze system performance. We can apply this method to a typical Web application system by comparing the results of the load tests.

To prepare the final performance analysis report, we collect data from all the servers related to capability and quality measures. The performance of a Web application depends on two factors: the quality of the web product itself and the server hardware configuration on which the application is deployed.

To measure the Web application’s performance we focus on the following components:

1. Processor

2. Memory

3. Hard drives

4. Network connections

After collecting the performance data for the above components, we create graphs to present the data in a more illustrative manner. From these graphs we can find the bottlenecks by comparing them.

Most of the time, bottlenecks are found where the wait queue is long or resource utilization is high.
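
As a rough illustration of that rule of thumb, the sketch below flags a component when its utilization or wait queue crosses a threshold; the sample numbers and the 80% and 2-request thresholds are assumptions, and in practice the values come from the performance monitors:

    #include <stdio.h>

    /* One aggregated sample of the counters collected for a server component. */
    struct sample {
        const char *component;     /* "Processor", "Memory", "Hard drive", "Network" */
        double      utilization;   /* percent busy */
        double      queue_length;  /* requests waiting for the resource */
    };

    int main(void)
    {
        /* Illustrative numbers only. */
        struct sample samples[] = {
            { "Processor",  92.0, 6.0 },
            { "Memory",     65.0, 0.0 },
            { "Hard drive", 40.0, 1.0 },
            { "Network",    30.0, 0.0 },
        };
        int i, n = sizeof(samples) / sizeof(samples[0]);

        for (i = 0; i < n; i++) {
            /* Flag a component that is close to saturation or where work queues up. */
            if (samples[i].utilization > 80.0 || samples[i].queue_length > 2.0)
                printf("Possible bottleneck: %s (utilization %.0f%%, queue %.1f)\n",
                       samples[i].component,
                       samples[i].utilization,
                       samples[i].queue_length);
        }
        return 0;
    }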