Monday, March 31, 2008

WebSphere Admin Console

Many thanks to Satya Dinesh Babu Manne, one of our customers, who found a new way to troubleshoot a WebSphere problem. Instead of reusing existing ports, which seemed to have conflicts, he defined new ports and transport chains. The solution is given below:

1) In WebSphere Admin Console, Navigate to Application Servers -> Server Name -> Web Container Settings -> Web Container Transport Chains
2) In the view that shows the current transport chains, click the New button
3) In the resulting wizard at step 1, give a new name to this chain (I gave it WC_CacheMonitor_Inbound), from the template drop-down box select Webcontainer (Chain 1), and click Next

4) In Step 2, give the port a new name to identify it, along with the host and port values. For the port I used 9030 when creating on instance 1 and 9032 when creating on instance 2. Click Next.
5) In Step 3, Click on Finish button.
6) Repeat the above steps for each server in the cluster (I have 4 servers)
7) Save Configuration Changes.
8) Navigate to Environment -> Virtual Hosts and click the New button
9) In the Wizard, give a new name and click on OK button.
10) In the resulting window click on the new Virtual Host created and click on Host Aliases for that Virtual Host.

11) Add the host aliases, making sure to reflect the host and port numbers (like 9030, 9032, etc.) which have already been created in the previous steps for the Web Container transport chains.
12) Save the Configuration Changes.
13) Navigate to Applications -> Enterprise Applications -> perfServletApp -> Map virtual hosts for Web modules
14) Select the newly created Virtual Host from the Drop Down.
15) Save the Configuration Changes, and restart all Servers.
16) The perfservlet is now accessible through ports 9030 and 9032 against the hosts configured.

I was able to configure and test a WebSphere monitor after making these changes.
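For anyone who wants to verify the setup from the command line, the performance servlet can simply be requested on one of the new ports once the servers are restarted. The host names below are placeholders, and /wasPerfTool/servlet/perfservlet is the usual default context path for perfServletApp; adjust both to match your installation:

# Fetch the PMI data through the new transport chains (a successful call
# returns an XML document of performance counters)
curl "http://appserver1.example.com:9030/wasPerfTool/servlet/perfservlet"
curl "http://appserver2.example.com:9032/wasPerfTool/servlet/perfservlet"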

Oracle Management

Take Control of Oracle Monitoring



Most business-critical applications are database driven. The Oracle database management capability helps database administrators seamlessly detect, diagnose and resolve Oracle performance issues and monitor Oracle 24x7. The database server monitoring tool is agentless monitoring software that provides out-of-the-box performance metrics and helps you visualize the health and availability of an Oracle database server farm. Database administrators can log in to the web client and visualize the status and Oracle performance metrics.


Applications Manager also provides out-of-the-box reports that help analyze the database server usage, Oracle database availability and database server health.

Additionally, the grouping capability helps group your databases based on the business processes supported and helps the operations team prioritize alerts as they are received.

Some of the components that are monitored in Oracle database are:

Response Time
User Activity
Status
Table Space Usage
Table Space Details
Table Space Status
SGA Performance
SGA Details
SGA Status
Performance of Data Files
Session Details
Session Waits
Buffer Gets
Disk Reads
Rollback Segment


Note: Oracle Application Server performance monitoring is also possible in Applications Manager.
Oracle Management Capabilities
Out-of-the-box management of Oracle availability and performance.
Monitors performance statistics such as user activity, status, table space, SGA performance, session details, etc. Alerts can be configured for these parameters.
Based on the thresholds configured, notifications and alerts are generated. Actions are executed automatically based on configurations.
Performance graphs and reports are available instantly. Reports can be grouped and displayed based on availability, health, and connection time.
Delivers both historical and current Oracle performance metrics, delivering insight into the performance over a period of time.
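As a rough illustration of the agentless approach (the actual queries Applications Manager runs are internal to the product; the sketch below uses standard Oracle dictionary views with an assumed read-only monitoring account and connect string):

# Sketch: collect tablespace usage the way an agentless monitor might
sqlplus -s monitor/secret@ORCL <<'EOF'
SET PAGESIZE 100 LINESIZE 120
COLUMN tablespace_name FORMAT A20
SELECT df.tablespace_name,
       ROUND(df.bytes / 1048576)                      AS total_mb,
       ROUND((df.bytes - NVL(fs.bytes, 0)) / 1048576) AS used_mb,
       ROUND(NVL(fs.bytes, 0) / 1048576)              AS free_mb
FROM   (SELECT tablespace_name, SUM(bytes) bytes
        FROM   dba_data_files GROUP BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) bytes
        FROM   dba_free_space GROUP BY tablespace_name) fs
WHERE  df.tablespace_name = fs.tablespace_name (+);
EOF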

Database Monitoring

Database Management - Made Easy

Applications Manager is a Database Server monitoring tool that can help monitor a heterogeneous database server environment that may consist of Oracle database, MS SQL, Sybase, IBM DB2 and MySQL databases. It also helps database administrators (DBAs) and system administrators by notifying about potential database performance problems. For database server monitoring, Applications Manager connects to the database and ensures it is up. Applications Manager is also an agentless monitoring tool that executes database queries to collect performance statistics and send alerts, if the database performance crosses a given threshold. With out-of-the box reports, DBAs can plan inventory requirements and troubleshoot incidents quickly.

Database Server Monitoring Software Needs to
ensure high availability of database servers
keep tabs on the database size, buffer cache size, and database connection time
analyze the number of user connections to the databases at various times
analyze usage trends
help take actions proactively before critical incidents occur.
Applications Manager supports the monitoring of the following databases out-of-the-box:

Oracle Management

MySQL Management

Sybase Management

MS SQL Management

DB2 Management


Oracle Management

Oracle Monitoring includes efficient and complete monitoring of performance, availability, and usage statistics for Oracle databases. It also includes instant notifications of errors and corrective actions, and provides comprehensive reports and graphs. More on Oracle Management >>

MySQL Management

MySQL is the most popular open source relational database system. Applications Manager MySQL Monitoring includes managing MySQL as part of your IT infrastructure, by diagnosing performance problems in real time. More on MySQL Management >>

MS SQL Server Management

Microsoft SQL Server is the enterprise database solution used most commonly on Windows. Applications Manager manages MS SQL Server databases through native Windows performance management interfaces. This ensures optimal and complete access to all the metrics that MS SQL Server exposes. More on MS SQL Management>>

DB2 Management

DB2 Monitoring includes effective monitoring of the availability and performance of DB2 databases. Applications Manager facilitates automated and on-demand monitoring tasks, which help keep DB2 databases running at their highest levels of performance. More on DB2 Management>>

Sybase Management

The availability and performance of Sybase ASE database servers are monitored by Applications Manager. Performance metrics such as memory usage, connection statistics, etc. are monitored. More on Sybase Management>>

WebSphere Monitoring

Take Control of WebSphere Management


WebSphere Application Server is one of the leading J2EE™ application servers in today's marketplace. Applications Manager, a tool for monitoring the performance and availability of applications and servers, helps with IBM WebSphere management.


Applications Manager automatically diagnoses, notifies, and corrects performance and availability problems not only with WebSphere Servers, but also with the servers and applications in the entire IT infrastructure.

WebSphere monitoring involves delivering comprehensive fault management and proactive alert notifications, checking for impending problems, triggering appropriate actions, and gathering performance data for planning, analysis, and reporting.

Some of the components that can be monitored in WebSphere are:

JVM Memory Usage
Server Response Time
CPU Utilization
Metrics of all web applications
User Sessions and Details
Enterprise JavaBeans (EJBs)
Thread Pools
Java Database Connectivity (JDBC) Pools
Custom Application MBeans (JMX) attributes
WebSphere Management Capabilities
Out-of-the-box management of WebSphere availability and performance - checks if it is running and executing requests.
WebSphere Monitoring in Network Deployment mode is provided
Monitors performance statistics such as database connection pool, JVM memory usage, user sessions, etc. Alerts can be configured for these parameters.
Based on the thresholds configured, notifications and alerts are generated. Actions are executed automatically based on configurations.
Performance graphs and reports are available instantly. Grouping of reports, customized reports and graphs based on date is available.

IBM AIX Monitoring

Monitoring AIX Made Easy

Applications Manager monitors the performance of IBM AIX Systems. First, Applications Manager discovers each AIX machine and then monitors the CPU activity, complete memory utilization, and local and remote system statistics.


The AIX Management feature optimizes AIX system performance, delivers comprehensive management reports and ensures availability through automated event detection and correction. Applications Manager also monitors processes running in the AIX system.

Some of the components that are monitored in IBM AIX are:

CPU Utilization - Monitor CPU usage and check whether CPUs are running at full capacity or are being underutilized.
Memory Utilization - Avoid the problem of your AIX system running out of memory. Get notified when memory usage is high (or free memory is dangerously low).
Disk I/O Stats - Reads/writes per second and transfers per second for each device.
Disk Utilization - Maintain a margin of available disk space. Get notified when the disk space falls below the margin. You can also run your own programs/scripts to clear disk clutter when thresholds are crossed.
Process Monitoring - Monitor critical processes running in your system. Get notified when a particular process fails.

IBM AIX Monitoring Capabilities
Out-of-the-box management of IBM AIX availability and performance.
Monitors performance statistics such as CPU utilization, memory utilization, disk utilization, Disk I/O Stats and response time.
Mode of monitoring includes Telnet and SSH.
Monitors processes running in AIX systems.
Based on the thresholds configured, notifications and alerts are generated if the AIX system or any specified attribute within the system has problems. Actions are executed automatically based on configurations.
Performance graphs and reports are available instantly. Reports can be grouped and displayed based on availability, health, and connection time.
Delivers both historical and current AIX performance metrics, delivering insight into the performance over a period of time.
Monitors memory usage and detects top consumers of memory.
For more information, refer to IBM AIX Monitoring Online Help.
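For context, the metrics above map onto standard AIX commands that any Telnet/SSH-based collector can run. The list below is only an illustrative sketch, not the exact commands Applications Manager uses:

# CPU and memory utilization
vmstat 2 3
# Per-disk I/O statistics (reads/writes and transfers per second)
iostat 2 3
# Disk space utilization per filesystem
df -k
# Running processes, for process-level monitoring
ps -ef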

Saturday, March 29, 2008

WPAR and LPAR comparison

IBM has taken a leadership role in innovation over the past fourteen years and has been number one in the patent technology race. Out of this has come a plethora of new and innovative products. In 2001 IBM announced the LPAR feature on IBM eServer pSeries, and then in 2004 Advanced Power Virtualization provided the micropartitioning feature. In 2007, IBM announced WPAR Mobility.
WPARs are not a replacement to LPARs. These two technologies are both key
components of IBM's virtualization strategy. The two technologies are
complementary, and can be used together to extend their individual values.
Providing both LPAR and WPAR technology offers a broad range of virtualization
choices to meet the ever changing needs in the IT world. Table 2-2 compares
and contrasts the benefits of the two technologies.
Table 2-2 Comparing WPAR and LPAR

Workload Partitions                                                                           | Logical Partitions
WPARs share OS images                                                                         | LPARs execute OS images
Finer-grained resource management, per-workload                                               | Resource management per LPAR; Capacity on demand
Security isolation                                                                            | Stronger security isolation
Easily shared files and applications                                                          | Supports multiple OSes; Tunable to applications
Lower administrative costs (1 OS to manage; easy create/destroy/configure; integrated tools)  | OS fault isolation
Figure 2-7 shows how LPAR and WPARs can be combined within the same
physical server, which also hosts the WPAR Manager and NFS server required to
support partition mobility.
Important: When considering the information in Table 2-2, you should keep the following guidelines in mind:
In general, when compared to WPARs, LPARs provide a greater amount of flexibility in supporting your system virtualization strategies. Once you have designed an optimal LPAR resourcing strategy, you then design your WPAR strategy within it to further optimize your overall system virtualization strategy in support of AIX 6 applications. See Figure 2-7 for an example of this strategy, where multiple LPARs are defined to support different OS and application hosting requirements, while a subset of those LPARs running AIX 6 is set up specifically to provide a global environment for hosting WPARs.
Because LPAR provisioning is hardware/firmware based, you should consider LPARs a more secure starting point than WPARs for meeting system isolation requirements.

WPAR mobility.

Live Application Mobility is the newest virtualization technology from IBM. It is a software approach that enhances the current line of technology and complements IBM's virtualization portfolio. The premise is to allow for planned migrations of workloads from one system to another while the application is not interrupted. This could be used, for example, to perform a planned firmware installation on the server. Most workloads do not
need to be aware of the WPAR relocation. But proper planning and testing are
always recommended before moving anything into a production environment.
WPAR mobility, also referred to as relocation, applies to both types of WPAR: application and system. The relocation of a WPAR consists of moving its
executable code from one LPAR to another one, while keeping the application
data on the same storage devices. It is therefore mandatory that these storage
devices are accessible from both the source and target LPARs hosting the
WPAR.
In the initial version of AIX 6, this dual access to the storage area is provided
thanks to NFS. As mentioned previously, the hosting global environment hides
the physical and logical device implementations from the hosted WPARs. The
WPAR only deals with data storage at the filesystem level. All files that need to be written by the application must be hosted on an NFS filesystem. All other files, including the AIX operating system files, can be stored in filesystems local to the hosting global environment. Table 2-1 helps in planning the creation of the filesystems for an application that requires WPAR mobility and writes only to filesystems dedicated to the application, whether it is hosted in an application or a system workload partition.
Table 2-1 Default filesystem location to enable partition mobility

Filesystem             | Application WPAR     | System WPAR
/                      | Global environment   | NFS mounted
/tmp                   | Global environment   | NFS mounted
/home                  | Global environment   | NFS mounted
/var                   | Global environment   | NFS mounted
/usr                   | Global environment   | Global environment
/opt                   | Global environment   | Global environment
application specific   | NFS mounted          | NFS mounted

Figure 2-5 on page 33 shows an example of a complete environment in which to deploy LPARs and WPARs on two p595 systems.
The first global environment, called saturn, is hosted in an LPAR of the first p595. It is a client of the NFS server, as is titian, the system WPAR inside of it. The second system is also a p595, but could be any system of the same class, from the p505 on up. One of its LPARs hosts a global environment called jupiter, which is also a client of the NFS server.
There is also a utility server; for this example it is a p550. On this system there is an NFS server, a NIM server, and a WPAR Manager for AIX to provide the single management point needed for all the WPARs. The NIM server is in the picture to represent how to load AIX images into the frame, which could have a large number of LPARs. The NFS server provides an outside-the-box filesystem solution to the WPARs and provides the vehicle to move them on the fly from one system to another without disrupting the application.
Figure 2-5 Overview of the topology requirements in a mobile WPAR solution
The NFS server is a standard configuration, using either NFS protocol version 3 or version 4. The /etc/exports file can be configured by editing it directly or by using SMIT.
Figure 2-6 is a representation of the relationship between the different views of
the same filesystems as seen:
from the NFS server where they are physically located,
from the global environments on which they are NFS-mounted, and
from the system WPAR that uses them.
In the WPAR, /opt, /proc and /usr are set up as namefs mounts with read-only permissions (exception: /proc is always read-write), mapping onto the global environment's /opt, /proc and /usr. The rest of the filesystems (/, /home, /tmp and /var) are set up as standard NFS mounts. The /etc/exports file on the NFS server must have permissions set for both the global environment (jupiter) and the system WPAR (ganymede) for the mobility to work.
Important: The NFS server must provide access to both the global
environment and the WPAR in order for the WPAR to work at all. In a mobility
scenario, access must be provided to the WPAR and all global environments
to which the WPAR may be moved. Furthermore, any time /, /var, /usr, or /opt
are configured as NFS mounts, the NFS server must provide root access (e.g.
via the -r option to mknfsexp) to all of the relevant hostnames.
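As an illustration of that requirement, the export for the WPAR root filesystem on the gracyfarms NFS server might be created along the following lines. The directory and host names come from the example environment (with saturn assumed here as a possible relocation target); check the mknfsexp documentation for the exact options on your AIX level:

# Export /big/ganymede/root read-write, granting root access to the WPAR
# and to every global environment that may host it
mknfsexp -d /big/ganymede/root -t rw -r jupiter,ganymede,saturn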
Figure 2-6 Filesystems from the NFS for a Mobile System WPAR
Using the df command as in Example 2-5 shows that the global environment jupiter has its own filesystems hosted on locally attached disks, as well as NFS filesystems mounted from the gracyfarms NFS server for use by the ganymede system WPAR.
Example 2-5 NFS server mountpoints for ganymede WPAR
root: jupiter:/wpars/ganymede --> df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 131072 66376 50% 1858 6% /
/dev/hd2 3801088 646624 83% 32033 7% /usr
/dev/hd9var 524288 155432 71% 4933 8% /var
/dev/hd3 917504 233904 75% 476 1% /tmp
/dev/hd1 2621440 2145648 19% 263 1% /home
/proc - - - - - /proc
/dev/hd10opt 1572864 254888 84% 7510 4% /opt
gracyfarms:/big/ganymede/root 131072 81528 38% 1631 16% /wpars/ganymede
gracyfarms:/big/ganymede/home 131072 128312 3% 5 1% /wpars/ganymede/home
/opt 1572864 254888 84% 7510 4% /wpars/ganymede/opt
/proc - - - - - /wpars/ganymede/proc
gracyfarms:/big/ganymede/tmp 262144 256832 3% 12 1% /wpars/ganymede/tmp
/usr 3801088 646624 83% 32033 7% /wpars/ganymede/usr
gracyfarms:/big/ganymede/var 262144 229496 13% 1216 5% /wpars/ganymede/var

Application WPARs

There are two different types of workload partitions. The simplest is the application WPAR. It can be viewed as a shell which spawns an application and can be launched from the global environment. This is a lightweight application resource: it does not provide remote login capabilities for end users. It only contains a small number of processes, all related to the application, and uses the services of the global environment daemons and processes.
It shares the operating system filesystems with the global environment. It can be set up to receive its application filesystem resources from disks owned by the hosting AIX instance, or from an NFS server.
Figure 2-3 on page 28 shows the relationship of an application WPAR's filesystems to the default global environment filesystems. The filesystems that
are visible to processes executing within the application WPAR are the global
environment filesystems shown by the relationships in the figure.
If an application WPAR accesses data on an NFS mounted filesystem, this
filesystem must be mounted in the global environment directory tree. The mount point is the same whether viewed from within the WPAR or from the global environment. The system administrator of the NFS server must
configure the /etc/exports file so that filesystems are exported to both the global
environment IP address and to the application WPAR IP address.
Processes executing within an application WPAR can only see processes that are
executing within the same WPAR. In other words, the use of Inter Process
Communication (IPC) by application software is limited to the set of processes
within the boundary of the WPAR.
Application WPARs are temporary objects. The life-span of an application WPAR is the life-span of the application it hosts. An application WPAR is created at the time the application process is instantiated. The application WPAR is destroyed when the last process running within the application partition exits. An application WPAR is a candidate for mobility. It can be started in one LPAR and relocated to other LPARs during the life of its hosted application process.
Figure 2-3 File system relationships from the global environment to the Application WPAR
2.5 System WPARs
The second type of WPAR is a system WPAR. A system WPAR provides a typical
AIX environment for executing applications, with some restrictions. A system
WPAR has its own runtime resources. It contains an init process that can spawn
daemons. For example, it has its own inetd daemon to provide networking services, and its own System Resource Controller (SRC).
Every system WPAR has its own unique set of users, groups and network
interface addresses. The users and groups defined within a system WPAR are
completely independent from the users and groups defined at the global
environment level. In particular, the root user of the WPAR only has superuser privileges within this WPAR, and has no privileges in the global environment (in fact, the root and other users defined within the WPAR cannot even access the global environment). In the case of a system partition hosting a database server, the DB administrator can, for example, be given root privileges within the DB WPAR, without being given any global environment privileges.
The environment provided by a system WPAR to its hosted applications and processes is a complete chrooted AIX environment, with access to all AIX system
files that are available in a native AIX environment. The creation of a system WPAR includes the creation of a base directory, referred to as the basedir in the WPAR documentation. This base directory is the root of the chrooted system WPAR environment. By default, the path to this base directory is /wpars/<wpar_name> in the global environment.
By default, the base directory contains 7 filesystems:
/, /home, /tmp and /var are real filesystems, dedicated to the system partition
use.
/opt and /usr are read-only namefs mounts over the global environment’s /usr
and /opt.
the /proc pseudo-filesystem maps to the global environment /proc
pseudo-filesystem (/proc in a WPAR only makes available process
information for that WPAR).
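As a minimal sketch, a system WPAR with this default filesystem layout can be created, started, and listed from the global environment command line (the WPAR name is illustrative):

# Create a system WPAR named titian with the default /wpars/titian base
# directory and default filesystems, then start it and check its state
mkwpar -n titian
startwpar titian
lswpar titian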
Figure 2-4 depicts an overview of these filesystems, viewed from the global
environment and from within the system WPAR. In this example, a WPAR called
titian is hosted in an LPAR called saturn. Although the diagram shows the global environment utilizing a VIOS with two vscsi adapters, virtual disks, and AIX native MPIO for a highly available rootvg, the system could also be set up with physical adapters and disks.
Figure 2-4 Filesystems relationship from the Global Environment to the System WPAR
In this figure, boxes with a white background symbolize real filesystems, while boxes with an orange background symbolize links. The gray box labeled titian shows the pathnames of the filesystems as they appear to processes executing within the system WPAR. The gray box labeled saturn shows the pathnames of the filesystems used within the global environment, as well as the basedir mount point below which the system WPAR's filesystems are created.
Example 2-1 shows the /wpars directory created within a global environment to host the base directories of the WPARs created in that global environment.
Example 2-1 Listing files in the global environment
root: saturn:/ --> ls -ald /wpars
drwx------ 5 root system 256 May 15 14:40 /wpars
root: saturn:/ -->
Looking inside the /wpars directory, there is now a directory for titian, as shown in Example 2-2.
Example 2-2 Listing /wpars in the global environment
root: saturn:/wpars --> ls -al /wpars
drwx------ 3 root system 512 May 1 16:36 .
drwxr-xr-x 23 root system 1024 May 3 18:06 ..
drwxr-xr-x 17 root system 4096 May 3 18:01 titian
In Example 2-3, we see the mount points for the operating system filesystems of titian, as created from saturn when generating this system WPAR.
Example 2-3 Listing the contents of /wpars/titian in the global environment
root: epp182:/wpars/titian --> ls -al /wpars/titian
drwxr-xr-x 17 root system 4096 May 3 18:01 .
drwx------ 3 root system 512 May 1 16:36 ..
-rw------- 1 root system 654 May 3 18:18 .sh_history
drwxr-x--- 2 root audit 256 Mar 28 17:52 audit
lrwxrwxrwx 1 bin bin 8 Apr 30 21:20 bin -> /usr/bin
drwxrwxr-x 5 root system 4096 May 3 16:41 dev
drwxr-xr-x 28 root system 8192 May 2 23:26 etc
drwxr-xr-x 4 bin bin 256 Apr 30 21:20 home
lrwxrwxrwx 1 bin bin 8 Apr 30 21:20 lib -> /usr/lib
drwx------ 2 root system 256 Apr 30 21:20 lost+found
drwxr-xr-x 142 bin bin 8192 Apr 30 21:23 lpp
drwxr-xr-x 2 bin bin 256 Mar 28 17:52 mnt
drwxr-xr-x 14 root system 512 Apr 10 20:22 opt
dr-xr-xr-x 1 root system 0 May 7 14:46 proc
drwxr-xr-x 3 bin bin 256 Mar 28 17:52 sbin
drwxrwxr-x 2 root system 256 Apr 30 21:22 tftpboot
drwxrwxrwt 3 bin bin 4096 May 7 14:30 tmp
lrwxrwxrwx 1 bin bin 5 Apr 30 21:20 u -> /home
lrwxrwxrwx 1 root system 21 May 2 23:26 unix -> /usr/lib/boot/unix_64
drwxr-xr-x 43 bin bin 1024 Apr 27 14:31 usr
drwxr-xr-x 24 bin bin 4096 Apr 30 21:24 var
drwxr-xr-x 2 root system 256 Apr 30 21:20 wpars
Example 2-4 shows the output of the df command executed from the saturn global
environment. It shows that one system WPAR is hosted within saturn, with its
filesystems mounted under the /wpars/titian base directory. The example shows
that the /, /home, /tmp and /var filesystems of the system WPAR are created on logical volumes of the global environment. It also shows that the /opt and /usr
filesystems of the WPAR are namefs mounts over the global environment /opt
and /usr.
Example 2-4 Listing mounted filesystem in the global environment
root: saturn:/wpars/titan --> df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 131072 66376 50% 1858 6% /
/dev/hd2 3801088 646624 83% 32033 7% /usr
/dev/hd9var 524288 155432 71% 4933 8% /var
/dev/hd3 917504 233904 75% 476 1% /tmp
/dev/hd1 2621440 2145648 19% 263 1% /home
/proc - - - - - /proc
/dev/hd10opt 1572864 254888 84% 7510 4% /opt
glear.austin.ibm.com:/demofs/sfs 2097152 1489272 29% 551 1% /sfs
/dev/fslv00 131072 81528 38% 1631 16% /wpars/titian
/dev/fslv01 131072 128312 3% 5 1% /wpars/titian/home
/opt 1572864 254888 84% 7510 4% /wpars/titian/opt
/proc - - - - - /wpars/titian/proc
/dev/fslv02 262144 256832 3% 12 1% /wpars/titian/tmp
/usr 3801088 646624 83% 32033 7% /wpars/titian/usr
/dev/fslv03 262144 229496 13% 1216 5% /wpars/titian/var

Understanding and Planning

This chapter describes what WPAR technology is and how it can be implemented
to work in your IT environment. The chapter is designed to provide system
architects and system administrators the level of knowledge required to plan the
deployment of WPARs in their IT infrastructure.
This chapter discusses the high-level positioning of WPARs and how this technology complements and works with the powerful suite of products for virtualization, high availability and server consolidation on System p, while helping to provide a higher level of service to the applications and ultimately the end users of these ever-growing and changing environments. This chapter includes the following sections:
2.1, "High-level planning information"
2.2, "General considerations"
2.3, "Global environment considerations"
2.4, "Application WPARs"
2.5, "System WPARs"
2.6, "WPAR mobility"
2.7, "WPAR and LPAR comparison"
2.1 High-level planning information
The WPAR technology is purely software based. It can therefore be deployed on
any hardware platform that supports AIX 6:
IBM eServer™ pSeries with POWER4 processors
IBM System p
IBM BladeCenter® JS21 with PPC 970 processors
The WPAR offering consists of two parts:
1. IBM AIX Version 6.1 contains the base support for WPAR technology. This
includes creation and management of both application and system workload
partitions within the LPAR where AIX 6 is installed. AIX provides WPAR
support and management through the AIX command line interface and SMIT
menus.
2. IBM Workload Partitions Manager for AIX is an optional, separately installable licensed program product that supports more advanced features:
– Graphical user interface including wizards for most management activities.
– Management of multiple WPARs on multiple servers from a single point of
control
– Enablement for WPAR mobility
– Automated and policy-based WPAR mobility.
The decision to use WPAR technology depends on the potential benefits that this
technology can yield to a specific user environment. These benefits have been
described in Section 1.5, “When to use workload partitions” on page 13.
Once the decision to use WPARs has been taken, the planning activity then consists of deciding:
which workload partition type is best suited: application or system WPARs?
whether application mobility will be required.
The answers to these questions have technical consequences that are described in the following sections.
2.2 General considerations
The WPAR provides isolation of software services, applications and
administration utilizing flexible software-defined boundaries within a single
instance of the AIX 6.1 operating system (Global). When building a WPAR from
the command line you can configure and start it within a few minutes.
This technology presents system administrators with new planning challenges concerning networking, filesystems and OS versions. Network aliases, shared or NFS filesystems, and the single shared kernel require a different approach to planning application deployment.
2.2.1 Networking
When planning for networks, one must understand how to get the most out of this technology. Using aliases decreases the number of adapters needed for communications but requires careful planning of bandwidth utilization, since several WPARs may share the same adapter.
NFS is a prerequisite to the WPAR mobility functionality. Three components are
involved in NFS communications for WPAR mobility:
the name and IP address of the global environment,
the name and IP address of the WPAR
and the name and IP address of the NFS server.
Since they all play a role in this communication they all must know each other.
Preferably put them all in the same subnet. For more detailed explanations check
Chapter 6, “IBM WPAR Manager for AIX” on page 125.
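For illustration, a system WPAR is typically given its own IP address as an alias on one of the global environment's existing adapters at creation time. The sketch below uses the mkwpar -N attribute=value form; the interface, address, and netmask are placeholder values, so check the mkwpar documentation for your AIX level:

# Create a WPAR whose address is configured as an alias on en0 of the
# global environment
mkwpar -n webwpar -N interface=en0 address=192.168.100.50 netmask=255.255.255.0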
2.2.2 Deployment of the WPAR manager
The WPAR Manager provides a central systems management solution by
providing a set of web-based systems management tools and tasks simplifying
the management of a customer server and WPAR infrastructure.
The WPAR Manager offers infrastructure resource optimization, service performance optimization and service availability optimization tools. Specific features of the Workload Partition Manager include:
– centralized, single point of administrative control for managing both system and application WPARs
– browser-based GUI for dynamic resource management
– system and application level workload management through WPARs
– role-based views and tasks
– dynamic allocation/reallocation and configuration of virtual servers, storage and network
– non-interruptive maintenance (zero downtime) for server fixes and upgrades through virtual server relocation
Using the WPAR Manager involves three roles:
– A WPAR Management Server, which is a Java application running on an AIX server. This can be a standalone server or an LPAR in a shared physical server (dedicated or micropartition).
– The WPAR Management clients, which are installed in each LPAR where WPARs are planned to be deployed and communicate with the WPAR Management Server.
– The WPAR Manager User Interface, a lightweight browser-based interface to the WPAR Management Server. The interface can be provided by any web browser with an IP connection to the WPAR Management Server. The UI allows for display of information that has been collected through the agents, and also provides management capabilities such as creation, deletion, and relocation of WPARs.
Figure 2-1 shows where the components of the WPAR manager execute.
Figure 2-1 Workload Partitions (WPARs) running in POWER4, 5, 5+, or 6 nodes
When planning the deployment of WPAR Manager components on different LPARs and workstations, the network firewalls must be configured to allow traffic to the specific ports listed in Figure 2-2.
Ports 14080 and 14443 are used for communication between the system
administrator workstation and the WPAR Manager.
Note: This figure contains the default value of ports used by the WPAR
Manager. The system administrator can modify these values when configuring
the WPAR Manager.
Ports 9510, 9511, 9512 and 9513 are used for communications between the
agents and managers.
Figure 2-2 TCPIP ports to configure on firewall to use the WPAR manager
2.2.3 Software prerequisites
Having a single instance of AIX simplifies the installation and general
administration of the WPARs. Software is installed once and used many times in
many WPARs. Although totally isolated from each other, these WPARs use the
same AIX kernel instance. This means that all WPARs use the exact same level
of AIX. When planning for WPARs, one must make sure that all application software supports the level of AIX of the global environment. More importantly, plan
for the future. Updating or upgrading AIX in the global environment means
updating or upgrading AIX in all hosted WPAR environments. If you have an
application that needs a specific version of AIX and cannot be updated, move it
to a different LPAR so that it does not prevent the other WPARs from updating.
2.2.4 File system considerations
System WPARs created with the default options have shared read-only /usr and
/opt filesystems. This speeds up the creation, installation and updating of WPARs
and also prevents the accidental removal of system software shared with other
WPARs. Having the read-only shared /usr and /opt filesystems may not suit every application: some applications are designed to write in the /usr or /opt filesystems. One solution is to define the needed application's writable directory as a different filesystem and link it to the mountpoint the application needs. Chapter 5.3.3, "Shared /usr with writable filesystem" on page 98 explains
how a WPAR can have a writable directory under a read-only /usr or /opt.
Another solution is for the application to not use the global environment shared
/usr or /opt filesystems. This solution requires extra disk space because it
duplicates the global environment's /usr or /opt into the WPAR's private and fully writable filesystems.
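A sketch of this second approach, assuming the mkwpar flag for private (non-shared) /usr and /opt filesystems (-l on AIX 6.1; verify against your documentation):

# Create a system WPAR with its own private, writable copies of /usr and
# /opt instead of the default read-only namefs mounts
mkwpar -n appwpar -l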
Consolidating many applications within one global environment changes the way
the system administrator manages filesystems. Instead of managing multiple
LPARs, each with a few filesystems, he now manages only one LPAR with many
filesystems. In both cases, the overall number of filesystems remains in the same
order of magnitude (although using WPARs slightly reduces this number), but
they are controlled within a single system. By default, a system WPAR has 4 dedicated filesystems, 2 shared read-only filesystems, and the /proc filesystem. For
example, deploying 200 system WPARs in one global environment will result by
default in a global environment with 800 separate filesystems, and 1200 mount
points in the /proc pseudo-filesystems. The WPAR technology provides an option
to reduce this number of filesystems. Instead of using the default filesystem
creation option, the system administrator can choose to create one single
filesystem per WPAR, as described in Chapter 5.1.4, “Creating WPARs:
advanced options” on page 75. Basically, this solution creates only one real
filesystem (the root “/” filesystem) for the WPAR, and subtrees /var, /tmp, /home
are just created as subdirectories of the “/” filesystem, instead of real filesystems
as they usually are in AIX instances and in default system WPARs.
Filesystems of each system WPAR are created in the global environment
directory tree and are mounted under the WPAR base directory. One base
directory is defined per WPAR. The default path of the base directory is /wpars/<wpar_name>. When planning to deploy several system partitions, the system administrator may want to organize the base directories in a different directory tree.
Sections 2.4 to 2.6 explain filesystem considerations in more detail for application WPARs, system WPARs, and mobility, respectively.

When to use workload partitions

Workload Partitions offer new possibilities for managing AIX environments. They
complement other virtualization solutions available for System p6 platforms. The
following scenarios are examples of when you could benefit from using WPARs.
1.5.1 Improvement of Service Level Agreements
Hardware components of an IT infrastructure may need to undergo maintenance
operations requiring the component to be powered off. If an application is not
part of a cluster of servers providing continuous availability, either for technical,
organizational or cost reasons, then WPARs can help to reduce the application
downtime. Using the live application mobility feature, the applications that are
executing on a physical server can be temporarily moved to another server,
without an application blackout period during the time required to perform the physical maintenance operations on the server.
Long running jobs can take advantage of the checkpoint/restart feature of
WPARs. It can be used to protect them against a failure which would require
restarting all computation from scratch. The checkpoint feature can be used to
regularly capture a snapshot of the application runtime environment, without
having to instrument the code. In the case where the job would need to be
stopped before reaching completion of the computation, the job can be resumed
in the state it was when the last checkpoint was saved.
The checkpoint/restart feature can also be used to execute long lasting batch
jobs on a system with limited resources. This job can be run at night time, be
paused during the daytime, when the computer resources have to be dedicated
to other applications, such as transaction handling or web serving, and then
resumed at the beginning of the next night.
The workload partition technology can also help in an environment where an
application needs to be started often, on-demand, and quickly. This may apply,
for example, in test environments where resources are too scarce to keep
multiple applications executing concurrently when not in use. Using WPARs, many
applications can be defined on a server, but not activated. Activation of the
workload partitions executing each of these applications can be performed only
when needed for a test.
1.5.2 Protection of legacy hardware investment
Although customers using POWER4 IBM eServer pSeries® servers cannot take advantage of physical or hypervisor-based virtualization technology, the
WPAR technology relies only on IBM AIX Version 6.1, with no dependency on the
underlying hardware. It can be used on POWER4, POWER5 and POWER6
based servers.
Customers having many applications, each running on a dedicated POWER-based server or dedicated partition and requiring only a fraction of the available processing power, can, thanks to the WPAR technology, consolidate these applications within one LPAR. Each application can be executed within one WPAR, providing a dedicated environment isolated from the other applications' environments, while all WPARs share the physical resources of one LPAR.
1.5.3 Optimization of resource usage
The IBM System p family offers many ways to optimize resource utilization
through virtualization technologies, such as LPARs, DLPARs, and
micropartitions. The WPAR technology complements the existing solution
offerings thanks to its unique characteristics.
The WPAR technology gives you additional flexibility in system capacity planning
as part of a strategy for maximizing system utilization and provisioning efficiency.
Due to the static allocation of partitions in physical servers, in a typical IT
environment, each server is sized with spare capacity to allow for resource
consumption increase of all applications executing within this server. Thanks to
the mobility feature of WPARs, the server sizing and planning can be based on
the overall resources of a group of servers, rather than being performed server by server. It is possible to allocate applications to one server up to 100% of its
resources. When an application grows and requires resources that can no longer
be provided by the server, the application can be moved to a different server with
spare capacity.
The same mobility feature, combined with the policy-based relocation functions of the WPAR Manager, allows a set of servers to be sized to handle the peak load based on the overall resource capacity of the set of servers, and not for each server. In a classical environment, each server must be able to support the peak
load of all partitions hosted within this server. Thanks to the WPAR mobility, it is
possible to take advantage of free resources in one physical server to offload
another physical server hosting applications that require more resources than
locally available.
AIX 6 provides very fine grained control of CPU and memory resource allocation
to workload partitions (down to 0.01% increments). This technology is therefore
suitable for server consolidation of very small workloads. This could be
particularly interesting for the replacement of old servers, for which even 10% of
one POWER5 or POWER6 processor (the smallest micropartition) exceeds the application's needs.
The theoretical upper limit on the number of workload partitions that can be
executed within one LPAR is 8192. In actual practice, your application environment will probably require far fewer than 8192 WPARs running within a single LPAR, and we would expect you to encounter other AIX system limitations preventing you from actually approaching this theoretical limit.
1.5.4 Fine grain control of resource allocation
When multiple applications are executing within the same AIX instance, the
system administrator may want to control how much CPU and memory resources are used by each application. One way to perform this control is to set up the Workload Manager (WLM) functions, which are part of the standard AIX feature set.
The WPAR technology provides a new way to perform this resource control. The
WPAR resource control reuses the WLM technology but encapsulates it in such a way that WLM is not visible to the system administrator. There is no need for the
system administrator to know about WLM. The resource control is available
through options of the WPAR command line and SMIT interfaces.
The WPAR resource control feature allows the system administrator to arbitrate
between applications competing for CPU and memory resources. This
guarantees that each application receives a share of the CPU and memory resources available from the global environment. These resources are separate
from the requirements of the other applications executing in WPARs within the
same operating system instance.
1.5.5 Control of security and privilege command
In large AIX environments, where a partition hosts many applications, it is not
unusual to have multiple people acting as system administrators. However, all of
them may not have the need for root or superuser privileges in all domains of
system administration. They can be specialized for activities such as user administration, network control, storage control, or software maintenance.
The WPAR technology supports this specialization of roles, and can help restrict
the privileges given to one person to just the scope he needs to control. System
workload partitions have their own user set, independent from the user set
defined at the global environment level.

Note: In practice, the number of WPARs which could be created and made active in an LPAR depends upon the capacity of the system, the configuration of the WPARs, and the characteristics of the applications being run in those WPARs.

An individual who is using root within a
system workload partition only has superuser privileges for the resources visible
within this WPAR. He cannot control global environment resources, such as
network adapters or physical devices, and he cannot act on resources belonging to other workload partitions. Many applications require the application administrator to use the root user to control the application, even if this person does not need to manage the operating system. The WPAR technology allows superuser privileges to be delegated to one individual and limited to an application environment, without jeopardizing the global environment.
The separation of user sets (or security domains) between different system
workload partitions also enables the system administrators to isolate groups of
users logging on in AIX environments according to their application access
control requirements. Users defined in one system WPAR are unaware of the
applications executing in the global environment or in other WPARs. They can’t
see the list of users or processes outside their WPAR.
IBM AIX Version 6.1 provides improvements over the previous AIX 5L™ Version 5.3 for role-based control of user privileges. This feature is known as Role Based
Access Control (RBAC). An exhaustive description of these new features is
available in IBM AIX V6.1 Security Enhancements, SG24-7430.
WPAR integrates the use of RBAC features for controlling privileges. A default
RBAC setting is provided with each WPAR, but the system administrator can also
further customize the RBAC configuration used in a WPAR context.
1.5.6 Simplified handling of software stack
The WPAR technology can help the system administrator simplify the way he
maintains the operating systems and application software stacks.
For a long time, the traditional approach to application deployment has been to dedicate one server to one application. With the advent of virtualization and
partitioning technologies, it has been possible to host multiple applications within
partitions of a physical server. But this solution still implies that the system
administrator needs to maintain one operating system instance for each
application. The WPAR technology allows an AIX instance to be shared between multiple applications, while still running each application within its own environment, providing isolation between applications. In this case, the more applications that are consolidated within one AIX instance, the less the system administrator has to perform OS fix installations, backups, migrations, and other
OS maintenance tasks. However, it must be noted that such a consolidation
requires that all applications can run under the same version and maintenance
level of the OS.
In addition to sharing the operating system, the system administrator can take
advantage of the WPAR technology to share application code. In a traditional AIX
environment, if several Apache web servers are needed, they would each be
deployed in a dedicated server or LPAR. In a WPAR environment, it is possible to
install Apache in one LPAR, and then execute multiple instances of the Apache
server within this LPAR, by starting multiple WPARs. Each WPAR runs its own
Apache server, with its own data in dedicated disk space, but shares the Apache
code with all other WPARs. Such a configuration optimizes memory utilization by
eliminating duplication of code, and reduces administration maintenance of the
Apache code, which only needs to be updated once for all server instances.
IBM AIX Version 6.1 introduces a new concept in software installation and
management: relocatable software packages. A relocatable application is an
application where the files can be installed relative to a base directory which is
different from the / root directory of the AIX environment. Using this feature, it is
possible to deploy multiple versions of the same application within one AIX
instance. The system administrator can take advantage of relocatable applications by starting each version of the application in a specific WPAR, therefore providing multiple servers with different server code versions from one LPAR.
1.5.7 Simplified handling of application OS environment.
The workload partition configuration can be stored in human-readable
specification files. These specification files can be generated by the operating
system from already existing workload partitions, or can be edited, created or
modified by hand. In an environment where a system administrator has to
manage several application environments, the WPAR technology can help him
quickly clone and define new application environments. These specification files
can be used as input to WPAR creation commands, allowing the system
administrator to automate through scripts and programs the startup and handling
of multiple workload partitions.
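As a sketch of how that might look (the -e, -o, -w, and -f options shown here are the usual mkwpar flags for writing and consuming specification files; the file path and WPAR names are placeholders, so verify the flags on your AIX level):

# Capture a specification file describing the existing WPAR titian,
# without creating anything
mkwpar -e titian -o /tmp/titian.spec -w
# After editing the name and network attributes in the file, create a
# new WPAR from it
mkwpar -f /tmp/titian.spec -n titian2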
1.5.8 Business continuity: disaster/failure recovery:
The WPAR technology can be integrated as one element of a solution to provide
a business continuity plan.
The checkpointing feature of WPAR allows a snapshot of an executing application to be captured without having to instrument the code. The application checkpoint
image is then saved to a file that can later be used to resume execution of an
application. Combined with a backup of the application data, the WPAR
checkpoint feature can provide an alternate disaster or failure recovery solution
for applications that do not use other solutions such as HACMP™ or server
clusters.
1.5.9 Supporting “Green” computing strategies
Using WPAR relocation features for live application mobility means you have the
flexibility to consolidate workloads during periods of low usage onto smaller
numbers of operating server platforms. In this strategy you still provide
continuous application availability, but you do so using a smaller number of
powered up servers. As you approach normal high usage periods you could then
power up additional peak demand server resources and relocate cyclic
workloads back to those machines during those peak demand periods. For
example, if your data center peak workload periods are 12 hours per day, 5 days per week, then peak-load systems would only need to be powered up roughly 35% of the time (60 of 168 hours).

Live application mobility

Both types of workload partitions, the system WPAR and the Application WPAR,
are capable of being configured to support mobility, or relocation.
The ability to move one WPAR from one LPAR to another, possibly from one physical system to another, can be exercised on active partitions. In this case the application undergoes active relocation (it is hot-migrated) without being stopped. The only visible effect for a user of the application is a slightly longer response time while the application is migrating.

Note: In 2007, IBM's System p6 and AIX 6 have two features that seem similar but are different: WPAR mobility and live partition mobility.
– WPAR mobility, which is discussed in this book, is a feature of AIX 6 and the WPAR Manager. It is available on POWER4, POWER5, and POWER6 systems.
– Live partition mobility relies on the POWER6 hardware and hypervisor technology (Advanced POWER Virtualization). It is available on POWER6 systems only. This feature is also available to AIX 5.3 LPARs.
Workload partition mobility uses checkpoint & restart features to move workload
partitions. The checkpoint saves the current status of the application and then
restarts it on a new system or OS instance at the previously saved state.
Partition mobility is not a replacement for a High Availability solution. The premise
is to allow for planned migrations of workloads from one system to another while
the application is not interrupted. This could be the case for hardware
maintenance or a firmware installation on the server. The workload does not
need to be aware of the migration for the most part. But proper planning and
testing are always recommended before moving anything into a production
environment.
Figure 1-4 depicts the use of WPAR relocation for workload balancing where two
applications are moved between two servers to balance the load of these servers. This figure also introduces the concept of the WPAR Manager, which will be described in
Chapter 2, “Understanding and Planning for WPARs” on page 19.
Figure 1-4 WPAR migration
Important: Workload partition mobility is a software solution that is dependent
on AIX 6 for execution. When used for the migration of a WPAR from one
LPAR to another or between physical systems, hardware and software compatibility is required.

Application WPARs

If an application or group of applications can be started with one command of the
AIX command line interface, it is a candidate to be hosted by an application
WPAR. This command is passed as an argument to the wparexec command, which
will create an application WPAR. As soon as the passed command exits, the
workload partition is terminated.
An application partition shares the file system of the global environment. It does
not own any dedicated storage.
An application partition can run daemons, but it will not run any of the system service daemons, such as inetd, srcmstr, and so on. It is not possible to remotely log into an application partition, or to remotely execute an action in an application WPAR.
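A minimal sketch of that usage (the WPAR name and the start script path are placeholders, not values from this book):

# Create and start an application WPAR around a single command; the WPAR
# exists only while that command and its child processes keep running
wparexec -n myappwpar -- /opt/myapp/start.sh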

System WPARs

A system WPAR is similar to a typical AIX environment. Each system WPAR has dedicated writable file systems (although it can share the global environment /usr and /opt filesystems in read-only mode). When a system WPAR is started, an init process is created for this WPAR, which in turn spawns other processes and daemons. For example, a system WPAR contains an inetd daemon to allow complete networking capability, making it possible to remotely log into a system WPAR. It also runs a cron daemon, so that execution of processes can be scheduled.

WPARs

To most applications the WPAR appears as a booted instance of AIX. In general,
applications can run without modification in a WPAR.
Inside the WPAR, the applications:
– have private execution environments
– are isolated from processes, signals, and file systems outside the WPAR
(file system isolation applies only to system WPARs)
– may have dedicated network addresses
– have interprocess communication that is restricted to processes
executing in the same workload partition.
There are two types of workload partitions that can reside in a global
environment.
– System WPAR - almost a full AIX environment.
– Application WPAR - light environment suitable for execution of one or more
processes.

Global Environment in an LPAR

As mentioned earlier, workload partitions are created within standard AIX 6
instances, and the global environment is the part of an AIX 6 instance which
does not belong to any workload partition. The global environment is therefore
similar to the operating system environment of earlier versions of AIX. This global
environment can be hosted within a dedicated LPAR or a micropartition.
A system administrator must be logged into the global environment to create,
activate, and manage workload partitions. Workload partitions cannot be created
within other workload partitions.
The global environment owns all the physical resources of the LPAR: network
adapters, disk adapters, disks, processors, and memory. It allocates CPU and
memory resources to the workload partitions. It provides them access to the
network and storage devices.
The global environment has visibility into the workload partitions. It is possible
from the global environment to see (and control) the processes executing within
the WPARs, and to see the filesystems used by the WPARs.
Most performance monitoring and tuning activities are performed from the global
environment.

What is a Workload Partition?

A WPAR is a software-created, virtualized OS environment within a single AIX 6
image. Each workload partition is a secure and isolated environment for the
application it hosts. The application in a WPAR thinks it is being executed in its
own dedicated AIX instance.
Figure 1-3 is a graphical overview of workload partitions in an AIX 6 environment.
Workload partitions can be created within an AIX 6 LPAR. At this point in the
book, WPARs can be considered as a boundary around a set of AIX processes.
The term global environment is introduced in the AIX terminology to refer to the
part of the AIX operating system that hosts workload partitions. Creating WPARs
within an LPAR does not restrict the use of the hosting AIX instance. It is possible
to log in to the global environment, to launch programs in the global environment,
and to perform exactly the same actions as on any AIX instance that does not
host WPARs.
Note: Throughout this book, we use the term LPAR to refer interchangeably
to a micropartition or dedicated partition of a POWER™ based server, or to a
full physical server that is not partitioned (also known as a full-system partition
in POWER4 terminology).
Figure 1-3 Global environment, System and Application WPARs
Figure 1-3 introduces new concepts such as application workload partitions or
system workload partitions.
An important feature of workload partitions is their ability to be relocated from
LPAR to LPAR, whether these LPARs are hosted on the same physical server or
on different physical servers. The most important new concepts described in the
following sections are:
the global environment
the differences between the two types of WPARs: Application and System
live application mobility (also referred to as workload partition mobility or
workload partition relocation).

AIX 6 and WPAR based system virtualization

With the release of AIX 6.1 in late 2007, IBM introduces a new
virtualization capability called the workload partition (WPAR). WPAR is a purely
software-based partitioning solution provided by the operating system. It has no
dependencies on hardware features.
AIX 6 is available for POWER4, POWER5, POWER5+, and POWER6. WPAR
can be created in all these hardware environments.
WPAR provides a solution for partitioning one AIX operating system instance into multiple
environments: each environment, called a workload partition, can host
applications and isolate them from applications executing within other WPARs.
Figure 1-2 shows that workload partitions can be created within multiple AIX
instances of the same physical server, whether they execute in dedicated LPARs
or micropartitions.


Introduction to Workload Partitions (WPAR) Technology in AIX 6

Overview of partitioning and virtualization capabilities prior to AIX 6
Today’s competitive corporate environment requires nimble IT departments with
the ability to respond quickly to changes in capacity requirements and to use innovative
methods to reduce the time to market for new applications and systems. Escalating
costs of power, raised-floor capacity, and administration also drive the need
to use technology in new ways to maximize a company’s IT investment.
Figure 1-1 on page 5 presents the various partitioning and virtualization
technologies that have been integrated within AIX. Of the different technologies
presented in Figure 1-1, Workload Manager (WLM) is the only one that is
software based.

AIX Workload Manager
Within AIX, Workload Manager has been part of the Operating System since
version 4.3. It allows multiple workloads to run under one AIX instance. The
system administrator builds rules based upon a user, process, or workload.
Based upon these rules, shares of CPU and/or memory are adjusted in favor of the
workload with peak demand.
If you have used Workload Manager in the past, then Chapter 7, “Resource
Control” on page 233 will be of interest to you to see the relationship between
Workload Manager and Workload Partitions.
Logical Partitions
With AIX 5.1 and POWER4™ technology, IBM announced Logical Partitions
(LPARs) as a way to provide greater flexibility and better utilization of large
systems. Systems could now run AIX and Linux® in separate partitions, starting
at a minimum of 1 CPU, 1 GB of memory, and 1 Ethernet adapter. However, a
reboot was required to move resources between LPARs.
Dynamic Logical Partitions
AIX 5.2 added more flexibility by making it possible to move CPU, I/O adapters,
and memory dynamically, without rebooting the LPARs. The combination of
firmware, the hypervisor, and AIX together supported this
innovation. It allowed IT environments to become more adaptable to their
customers’ needs.
Advanced POWER Virtualization
AIX 5.3 and POWER5™ brought the ability to virtualize CPUs, share Ethernet adapters,
and virtually slice disks for provisioning to client LPARs, giving IT environments new ways to
impress their customers and upper management. Virtualization is an excellent
vehicle for addressing business needs while controlling costs, and IBM’s System p
Advanced POWER Virtualization (APV) offers advanced technology to facilitate
server consolidation, reduce costs, provide redundancy, and adapt capacity to
quickly meet demand. APV can be used to reduce the need for static adapters,
can rapidly respond to increasing capacity demands, and generally allows
companies to use their purchasing dollars more effectively.
From this history of improving flexibility of system resources, we now have AIX 6.
AIX 6 is capable of running on POWER4, POWER5, POWER5+, PPC970, and
POWER6™-based servers.

Use the Performance Advisor in Tivoli Performance Viewer

Overview
The Performance Advisor in Tivoli Performance Viewer (TPV) provides advice to help tune systems for optimal performance and gives recommendations on inefficient settings by using collected Performance Monitoring Infrastructure (PMI) data. Advice is obtained by selecting the Performance Advisor icon in TPV. The Performance Advisor in TPV provides more extensive advice than the Runtime Performance Advisor. For example, TPV provides advice on setting the dynamic cache size, setting the JVM heap size and using the DB2 Performance Configuration Wizard.
1. Enable PMI services in the appserver
To monitor performance data through the PMI interfaces, you must first enable the performance monitoring service through the administrative console before restarting the server. If running in Network Deployment, you need to enable PMI services on both the server and on the node agent before restarting the server and the node agent.
2. Enable PMI services in Node Agent.
If running Network Deployment, you must enable PMI service on both the server and on the node agent, and restart the server and node agent.
3. Enable data collection.
The monitoring levels that determine which data counters are enabled can be set dynamically, without restarting the server. These monitoring levels and the data selected determine the type of advice you obtain. The Performance Advisor in TPV uses the standard monitoring level; however, the Performance Advisor in TPV can use a few of the more expensive counters (to provide additional advice) and provide advice on which counters can be enabled. This action can be completed in one of the following ways:
a. Setting performance monitoring levels.
b. Enabling performance monitoring services using the command line (a minimal sketch follows this list).
4. Start the Tivoli Performance Viewer.
5. Simulate a production level load.
Simulate a realistic production load for your application, if you are using the Performance Advisor in a test environment, or doing any other performance tuning. The application should run this load without errors. This simulation includes numbers of concurrent users typical of peak periods, and drives system resources such as CPU and memory to the levels expected in production. The Performance Advisor only provides advice when CPU utilization exceeds a sufficiently high level. For a list of IBM business partners providing tools to drive this type of load, see the article, Performance: Resources for learning in the sub-section of Monitoring performance with third party tools.
6. (Optional) Store data to a log file.
7. (Optional) Replay a performance data log file.
8. Refresh data.
Clicking refresh with server selected under the viewer icon causes TPV to:
o Query the server for new PMI and product configuration information.
Clicking refresh with server selected under the advisor icon causes TPV to:
o Refresh advice that is provided in a single instant in time.
o Not query the server for new PMI and product configuration information.
9. Tuning advice appears when the Advisor icon is chosen in the TPV Performance Advisor. Double-click an individual message for details. Since PMI data is taken over an interval of time and averaged to provide advice, details within the advice message appear as averages.
If the Refresh Rate is adjusted, the Buffer Size should also be adjusted to allow sufficient data to be collected for performing average calculations. Currently 2 minutes of data is required. Read more about adjusting the Refresh Rate or Buffer Size at:
o Change the refresh rate.
o Change the display buffer size.
10. Update the product configuration for improved performance, based on advice. Since Tivoli Performance Viewer refreshes advice at a single instant in time, take the advice from the peak load time.
Although the performance advisors attempt to distinguish between loaded and idle conditions, misleading advice might be issued if the advisor is enabled while the system is ramping up or down. This result is especially likely when running short tests. Although the advice helps in most configurations, there might be situations where the advice hinders performance. Due to these conditions, advice is not guaranteed. Therefore, test the environment with the updated configuration to ensure it functions and performs well.
Over a period of time the advisor may issue differing advice. This is due to load fluctuations and runtime state. When differing advice is received the user should look at all advice and the time period over which it was issued. Advice should be taken during the time that most closely represents peak production load.
Performance tuning is an iterative process. After applying advice, simulate a production load, update the configuration based on the advice, and retest for improved performance. This procedure should be continued until optimal performance is achieved.
11. Clearing values from tables and charts.
12. Resetting counters to zero.

Viewing and modifying performance chart data

Overview
The View Chart tab displays a graph with time as the x-axis and the performance value as the y-axis.
1. Click a resource in the Resource Selection panel.The Resource Selection panel, located on the left side, provides a hierarchical (tree) view of resources and the types of performance data available for those resources. Use this panel to select which resources to monitor and to start and stop data retrieval for those resources. See Tivoli Performance Viewer features for information on the Resource Selection panel.
2. Click the View Chart tab in the Data Monitoring panel. The Data Monitoring panel, located on the right side, enables the selection of multiple counters and displays the resulting performance data for the currently selected resource. It contains two panels: the Viewing Counter panel above and the Counter Selection panel below. If necessary, you can set the scaling factors by typing directly in the scale field. See Scaling the performance data chart display for more information.












Refreshing data

Overview
The refresh operation is a local, not global, operation that applies only to selected resources. The refresh operation is recursive; all subordinate or children resources refresh when a selected resource refreshes. To refresh data:
1. Click one or more resources in the Resource Selection panel.
2. Click File > Refresh. Alternatively, click the Refresh icon or right-click the resource and select Refresh.Clicking refresh with server selected under the viewer icon causes TPV to query the server for new PMI and product configuration information. Clicking refresh with server selected under the advisor icon causes TPV to refresh the advice provided, but will not refresh PMI or product configuration information.
Clearing values from tables and charts

Overview
Selecting Clear Values removes remaining data from a table or chart. You can then begin populating the table or chart with new data.

To clear the values currently displayed:
1. Click one or more resources in the Resource Selection panel.
2. Click Setting > Clear Buffer. Alternatively, right-click the resource and select Clear Buffer.

Resetting counters to zero

Overview
Some counters report relative values based on how much the value has changed since the counter was enabled. The Reset to Zero operation resets those counters so that they report changes in values since the reset operation. This operation also clears the buffer for the selected resources. See "Clearing values from tables and charts" in Related Links for more information about clearing the buffer for selected resources. Counters based on absolute values cannot be reset and are not affected by the Reset to Zero operation.
To reset the start time for calculating relative counters:
1. Click one or more resources in the Resource Selection panel.
2. Click Setting > Reset to Zero. Alternatively, right-click the resource and click Reset to Zero.

Storing data to a log file

Overview
You can save all data reported by the Tivoli Performance Viewer in a log file and write the data in binary format (serialized Java objects) or XML format.
To start recording data:
1. Click Logging > On or click the Logging icon.
2. Specify the name, location, and format type of the log file in the Save dialog box. The Files of type field allows an extension of *.perf for binary files or *.xml for XML format.
Note: The *.perf files may not be compatible between fix levels.
3. Click OK.

What to do next
To stop logging, click Logging > Off or click the Logging icon.














Replaying a performance data log file

Overview
You can replay both binary and XML logs by using the Tivoli Performance Viewer.

To replay a log file, do the following:
1. Click Data Collection in the navigation tree.
2. Click the Log radio button in the Performance data from field.
3. Click Browse to locate the file that you want to replay or type the file path name in the Log field.
4. Click Apply.
5. Play the log by using the Play icon or click Setting > Log Replay > Play.

Results
By default, the data replays at the same rate it was collected or written to the log. You can choose Fast Forward mode in which the log replays without simulating the refresh interval. To Fast Forward, use the button in the tool bar or click Setting > Log Replay > FF.
To rewind a log file, click Setting > Log Replay > Rewind or use the Rewind icon in the toolbar.
While replaying the log, you can choose different groups to view by selecting them in the Resource Selection pane. You can also view the data in either of the views available in the tabbed Data Monitoring panel.
You can stop and resume the log at any point. However, you cannot replay data in reverse.

View summary reports

Overview
Summary reports are available for each appserver. Before viewing reports, make sure data counters are enabled and monitoring levels are set properly. See Setting performance monitoring levels.
The standard monitoring level enables all reports except the report on Enterprise JavaBeans (EJB) methods. To enable an EJB methods report, use the custom monitoring setting and set the monitoring level to Max for the EJB module.

Tivoli Performance Viewer provides the following summary reports for each appserver:
Enterprise beans
Enterprise beans show the total number of method calls, average response time, and multiplication of total method calls by average response time for all the enterprise beans in a table. Enterprise beans provide a sorting feature to help you find which enterprise bean is the slowest or fastest and which enterprise bean is called most frequently.
EJB Methods
EJB Methods show the total number of method calls, average response time, and multiplication of total method calls by average response time for the individual EJB methods in a table. EJB Methods provide a sorting feature to help you find which EJB method is the slowest or fastest and which EJB method is called most frequently.
Servlets
Servlets show the total number of requests, average response time, and multiplication of total requests by average response time for all the servlets in a table. Servlets provide a sorting feature to help you find which servlet is the slowest or fastest and which servlet is called most frequently.
Web Container Pool
Web Container Pool shows charts of pool size, active threads, average response time, and throughput in the Web container thread pool.
Object Request Broker (ORB) Thread Pool
ORB Thread Pool shows charts of pool size, active threads, average response time, and throughput in the ORB thread pool.
Connection Pool
Connection Pool shows a chart of pool size and pool in use for each data source.
1. Click the appserver icon in the navigation tree.
2. Click the appropriate column header to sort the columns in the report.

Results
If the instrumentation level excludes a counter, that counter does not appear in the tables and charts of the performance viewer. For example, when the instrumentation level is set to low, the thread pool size is not displayed because that counter requires a level of high.
Note that monitoring levels can also be set through the administrative console.

Set performance monitoring levels

Overview
The monitoring settings determine which counters are enabled. Changes made to the settings from Tivoli Performance Viewer affect all applications that use the Performance Monitoring Infrastructure (PMI) data.
To view monitoring settings:
1. Choose the Data Collection icon on the Resource Selection panel. This selection provides two options on the Counter Selection panel. Choose the Current Activity option to view and change monitoring settings. Alternatively, use File > Current Activity to view the monitoring settings.
2. Set monitoring levels by choosing one of the following options:
None:
Provides no data collection
Standard:
Enables data collection for all modules with monitoring level set to high
Custom:
Allows customized settings for each module
3. These options apply to an entire appserver.
4. (Optional) Fine tune the monitoring level settings.
a. Click Specify. This sets the monitoring level to custom.
b. Select a monitoring level. For each resource, choose a monitoring level of None, Low, Medium, High, or Maximum. The dial icon changes to represent this level. Note: The instrumentation level is set recursively for all elements below the selected resource. You can override this by setting the levels for children AFTER setting their parents.
5. Click OK or Apply.

SOAP

SOAP is a specification for the exchange of structured information in a decentralized, distributed environment. As such, it represents the main way of communication between the three key actors in a service-oriented architecture (SOA): service provider, service requestor, and service broker. The main goal of its design is to be simple and extensible. A SOAP message is used to request a Web service.
WebSphere Application Server V5.0.2, 5.1 and 5.1.1 follow the standards outlined in SOAP 1.1.
SOAP was submitted to the World Wide Web Consortium (W3C) as the basis of the eXtensible Markup Language (XML) Protocol Working Group by several companies, including IBM and Lotus.
SOAP is an XML-based protocol that consists of three parts: an envelope that defines a framework for describing message content and process instructions, a set of encoding rules for expressing instances of application-defined data types, and a convention for representing remote procedure calls and responses.
SOAP is transport protocol-independent and can be used in combination with a variety of protocols. In Web services that are developed and implemented for use with WebSphere Application Server, SOAP is used in combination with HyperText Transport Protocol (HTTP), HTTP extension framework, and Java Messaging Service (JMS). SOAP is also operating system independent and not tied to any programming language or component technology.
Due to these characteristics, it does not matter what technology is used to implement the client, as long as the client can issue XML messages. Similarly, the service can be implemented in any language, as long as it can process XML messages. Also, both server and client sides can reside on any suitable platform.

Start the Tivoli Performance Viewer

1. Start the Tivoli Performance Viewer from the command line:
2. tperfviewer.bat (Windows) or tperfviewer.sh (UNIX) host_name port_number connector_type
For example:
tperfviewer.bat localhost 8879 SOAP
Connector_type can be either SOAP or RMI. The port numbers for the SOAP/RMI connectors can be configured in the Administrative Console under...
Servers > Application Servers > server_name > End Points
If you are connecting to WebSphere Application Server, use the appserver host and connector port. If additional servers have been created, then use the appropriate server port for which data is required. Tivoli Performance Viewer will only display data from one server at a time when connecting to WebSphere Application Server.
If you are connecting to WebSphere Application Server Network Deployment, use the deployment manager host and connector port. Tivoli Performance Viewer will display data from all the servers in the cell. Tivoli Performance Viewer cannot connect to an individual server in WebSphere Application Server Network Deployment.
Default ports:
8879 - SOAP connector port for WebSphere Application Server Network Deployment.
8880 - SOAP connector port for WebSphere Application Server.
9809 - RMI connector port for WebSphere Application Server Network Deployment.
2809 - RMI connector port for WebSphere Application Server.
You can also start the Tivoli Performance Viewer with security enabled.
On iSeries, you can connect the Tivoli Performance Viewer to an iSeries instance from either a Windows, an AIX, or a UNIX client as described above. To discover the RMI or SOAP port for the iSeries instance, start Qshell and enter the following command:
WAS_HOME/bin/dspwasinst -instance myInstance
3. Click...
Start > Programs > IBM WebSphere Application Server v5.0 > Tivoli Performance Viewer
Tivoli Performance Viewer detects which package of WebSphere Application Server you are using and connects using the default SOAP connector port. If the connection fails, a dialog is displayed to provide new connection parameters.
You can connect to a remote host or a different port number, by using the command line to start the performance viewer.
4. Adjust the data collection settings.

Monitoring performance with Tivoli Performance Viewer

Overview
The Resource Analyzer has been renamed Tivoli Performance Viewer.
Tivoli Performance Viewer (which is shipped with WebSphere) is a Graphical User Interface (GUI) performance monitor for WebSphere Application Server. Tivoli Performance Viewer can connect to a local or to a remote host. Connecting to a remote host will minimize performance impact to the appserver environment.
Monitor and analyze the data with Tivoli Performance Viewer with these tasks:
1. Start the Tivoli Performance Viewer.
2. Set performance monitoring levels .
3. View summary reports.
4. (Optional) Store data to a log file.
5. (Optional) Replay a performance data log file.
6. (Optional) View and modify performance chart data.
7. (Optional) Scale the performance data chart display.
8. (Optional) Refresh data.
9. (Optional) Clear values from tables and charts.
10. (Optional) Reset counters to zero.

Mirror Write Consistency

Mirror Write Consistency (MWC) ensures data consistency on logical volumes in case a system crash occurs during mirrored writes. The active method achieves this by logging when a write occurs. LVM makes an update to the MWC log that identifies what areas of the disk are being updated before performing the write of the data. Records of the last 62 distinct logical track groups (LTG) written to disk are kept in memory and also written to a separate checkpoint area on disk (the MWC log). This results in a performance degradation during random writes.
With AIX V5.1 and later, there are now two ways of handling MWC:
• Active, the existing method
• Passive, the new method

IBM System p 570 with POWER 6

* Advanced IBM POWER6™ processor cores for enhanced performance and reliability
* Building block architecture delivers flexible scalability and modular growth
* Advanced virtualization features facilitate highly efficient systems utilization
* Enhanced RAS features enable improved application availability

The IBM POWER6 processor-based System p™ 570 mid-range server delivers outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades and innovative virtualization technologies. This powerful 19-inch rack-mount system, which can handle up to 16 POWER6 cores, can be used for database and application serving, as well as server consolidation. The modular p570 is designed to continue the tradition of its predecessor, the IBM POWER5+™ processor-based System p5™ 570 server, for resource optimization, secure and dependable performance and the flexibility to change with business needs. Clients have the ability to upgrade their current p5-570 servers and know that their investment in IBM Power Architecture™ technology has again been rewarded.
The p570 is the first server designed with POWER6 processors, resulting in performance and price/performance advantages while ushering in a new era in the virtualization and availability of UNIX® and Linux® data centers. POWER6 processors can run 64-bit applications, while concurrently supporting 32-bit applications to enhance flexibility. They feature simultaneous multithreading,1 allowing two application “threads” to be run at the same time, which can significantly reduce the time to complete tasks.
The p570 system is more than an evolution of technology wrapped into a familiar package; it is the result of “thinking outside the box.” IBM’s modular symmetric multiprocessor (SMP) architecture means that the system is constructed using 4-core building blocks. This design allows clients to start with what they need and grow by adding additional building blocks, all without disruption to the base system.2 Optional Capacity on Demand features allow the activation of dormant processor power for times as short as one minute.
Clients may start small and grow with systems designed for continuous application availability. Specifically, the System p 570 server provides:

Common features
* 19-inch rack-mount packaging
* 2- to 16-core SMP design with building block architecture
* 64-bit 3.5, 4.2 or 4.7 GHz POWER6 processor cores
* Mainframe-inspired RAS features
* Dynamic LPAR support
* Advanced POWER Virtualization1 (option)
o IBM Micro-Partitioning™ (up to 160 micro-partitions)
o Shared processor pool
o Virtual I/O Server
o Partition Mobility2
* Up to 32 optional I/O drawers
* IBM HACMP™ software support for near continuous operation*
* Supported by AIX 5L (V5.2 or later) and Linux® distributions from Red Hat (RHEL 4 Update 5 or later) and SUSE Linux (SLES 10 SP1 or later) operating systems

Hardware summary
* 4U 19-inch rack-mount packaging
* One to four building blocks
* Two, four, eight, 12 or 16 3.5 GHz, 4.2 GHz or 4.7 GHz 64-bit POWER6 processor cores
* L2 cache: 8 MB to 64 MB (2- to 16-core)
* L3 cache: 32 MB to 256 MB (2- to 16-core)
* 2 GB to 192 GB of 667 MHz buffered DDR2, or 16 GB to 384 GB of 533 MHz buffered DDR2, or 32 GB to 768 GB of 400 MHz buffered DDR2 memory3
* Four hot-plug, blind-swap PCI Express 8x and two hot-plug, blind-swap PCI-X DDR adapter slots per building block
* Six hot-swappable SAS disk bays per building block provide up to 7.2 TB of internal disk storage
* Optional I/O drawers may add up to an additional 188 PCI-X slots and up to 240 disk bays (72 TB additional)4
* One SAS disk controller per building block (internal)
* One integrated dual-port Gigabit Ethernet per building block standard; one quad-port Gigabit Ethernet per building block available as optional upgrade; one dual-port 10 Gigabit Ethernet per building block available as optional upgrade
* Two GX I/O expansion adapter slots
* One dual-port USB per building block
* Two HMC ports (maximum of two), two SPCN ports per building block
* One optional hot-plug media bay per building block
* Redundant service processor for multiple building block systems2