Chapter 14. Data migration in zSeries environments 297
see on a single 3390-3. Or, to put it differently, we may see on a single volume as many
concurrent I/Os as we see on nine 3390-3 volumes. Despite PAV support, it might still be
necessary to balance disk storage activity across disk storage server images.
With the DS8000 you have the flexibility to define LSSs of exactly the size you desire,
rather than being constrained by the RAID rank topology. This means that you define the
number of PAVs, or alias devices, that you need, rather than a number dictated by the RAID
rank size. Assuming you decide to create LSSs with 256 devices each, the volume size you
decide upon determines how many alias devices to configure for WLM management.
Table 14-1 provides a proposal for a configuration with batch and transaction workload using
about 50 percent of the total disk space. It further assumes that all volumes are evenly and
horizontally spread across all LSSs and that all these volumes are system-managed.
Table 14-1 Suggested numbers of base and alias devices in an LSS with 256 devices
In a FICON environment this is a conservative ratio between base and alias volumes, which
offers a reasonable compromise between the number of device numbers consumed and the goal
of minimizing IOS queuing time. IOS queuing time is usually an indication of volume
contention, that is, more than one concurrent I/O request to the very same volume.
Note that the configured LSS capacity is determined by the number of base devices chosen
and is no longer tied to the actual rank size. The numbers in Table 14-1 are rounded.
Note also that the number of devices per LSS is still limited to a maximum of 256.
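As a rough cross-check of the Table 14-1 figures, the capacity of an LSS is simply the volume size multiplied by the number of base devices. The following minimal Python sketch illustrates this arithmetic; it assumes a 3390 geometry of 15 tracks of 56,664 bytes per cylinder, and the function name and constants are illustrative, not taken from any IBM tool:

```python
# Assumption: a 3390 cylinder holds 15 tracks x 56,664 bytes.
BYTES_PER_CYL = 15 * 56_664

def lss_capacity_gb(volume_cyls: int, base_devices: int) -> float:
    """Approximate usable capacity of one LSS in GB (10**9 bytes)."""
    return volume_cyls * BYTES_PER_CYL * base_devices / 10**9

# 3390-3 volumes (3,339 cyl), 170 base devices: roughly 480 GB per LSS.
print(round(lss_capacity_gb(3_339, 170)))        # 482
# 3390-9 volumes (10,017 cyl), 128 base devices: roughly 1 TB per LSS.
print(round(lss_capacity_gb(10_017, 128) / 1_000, 2))   # 1.09
```

The remaining device numbers in the 256-device LSS (256 minus the base count) are then available as alias devices.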
14.1.3 Keep source and target volume size at the current size
When the number of volumes does not reach the current zSeries limit of 64K devices, or is
significantly below this limit, you might stay with 3390-3 as a standard and avoid the
additional migration effort at this time. Note, however, that when you introduce solutions
based on Remote Mirror and Copy, the number of devices to plan for can quickly approach the
maximum number of devices supported by current zSeries servers.
Volume consolidation is still somewhat painful because, for most of the data, it requires
logical data set movement to properly maintain the catalog entries. Only the first volume
can be copied to a larger target volume through a full volume operation. After that full
copy operation, the VTOC on the target volume needs to be adjusted to hold many more
entries than the first source volume. Another consideration for the first full volume copy
operation is that the volume name must be maintained on the new volume, because full
physical volume operations do not maintain catalog entries. Otherwise you would no longer
be able to locate the data sets in a system-managed environment, which always goes through
the catalog to locate data sets and relies solely on volume serial numbers.
JCL that refers to specific volumes or volume lists may need to be changed to modify or
remove these references. Whether these volumes are system-managed with the guaranteed
space attribute or non-managed, the JCL most likely needs to be adjusted to reflect the
new volume names.
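As an illustration of this JCL cleanup, a small script can scan job streams for VOL=SER references to the volume serials being retired so that the affected jobs can be found before the migration. This is a hypothetical sketch: the volume serials and the helper name are invented for the example, and it does not handle parenthesized multi-volume lists:

```python
import re

# Assumed set of old volume serials that are being consolidated away.
OLD_VOLSERS = {"MIG001", "MIG002"}

def find_volser_refs(jcl: str) -> list[str]:
    """Return VOL=SER serials in the JCL text that match OLD_VOLSERS."""
    refs = re.findall(r"VOL=SER=(\w{1,6})", jcl)
    return [v for v in refs if v in OLD_VOLSERS]

jcl_text = "//SYSUT1  DD DSN=PROD.DATA,DISP=SHR,VOL=SER=MIG001,UNIT=3390"
print(find_volser_refs(jcl_text))        # ['MIG001']
```

Each hit identifies a DD statement whose volume reference must be modified or removed after the consolidation.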
Volume size in cyl    Base devices    Alias devices    Capacity/LSS
3,339 (3390-3)        170 - 192       86 - 64          480 - 550 GB
10,017 (3390-9)       128 - 170       128 - 86         1 - 1.5 TB
30,051 (3390-9+)      86 - 128        170 - 128        2.3 - 3.4 TB