
PrKB11417 : What are the Red Hat Enterprise Linux limits?

The tables below summarize the various limits that exist in recent Red Hat Enterprise Linux distributions.

 

Server version / architecture support comparison chart

RHEL version               3           4           5
Architecture               AS    ES    AS    ES    Advanced Platform   Base server
x86, Intel64, Itanium2     Yes   Yes   Yes   Yes   Yes                 Yes
IBM POWER                  Yes   No    Yes   No    Yes                 Yes

 

Server Support limits as defined by RHEL Product Subscription

RHEL version                                                  3                4                 5
                                                              AS         ES    AS         ES     Advanced Platform   Base server
Maximum physical CPUs/sockets                                 Unlimited  2     Unlimited  2      Unlimited           2
Maximum memory                                                Unlimited  8GB   Unlimited  16GB   Unlimited           Unlimited
Maximum virtualized guests/instances                          N/A        N/A   N/A        N/A    Unlimited           4
Storage virtualization (with Red Hat GFS and Cluster Suite)   N/A        N/A   N/A        N/A    Yes                 No

Red Hat defines physical CPUs equivalently to sockets: a multi-core and/or hyperthreaded CPU counts as a single socket when determining which subscription to purchase.
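
As a quick check, the socket count used for this purpose can be read from /proc/cpuinfo, where each socket reports one unique "physical id" (a minimal sketch; on very old single-socket CPUs the field may be absent):

  # Count physical sockets: one unique "physical id" per socket
  grep "physical id" /proc/cpuinfo | sort -u | wc -l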

 

Technology capabilities and limits

Note: where two numbers are given, they are the certified and theoretical limits (certified[/theoretical]).

Certified limits reflect the current state of system testing by Red Hat and its partners, and set the upper limit of support provided by any Red Hat Enterprise Linux subscription unless explicitly limited otherwise by the subscription terms. Certified limits are subject to change as ongoing testing completes.

 

Architecture / Maximum logical CPUs

RHEL version   3     4         5
x86            16    32        32
Itanium2       8     64/512    64/1024
Intel64        8     64/64     64/255
Power          8     64/128    128/128

Note: Red Hat defines a logical CPU as any schedulable entity, so every core/thread in a multi-core/multi-threaded processor is a logical CPU.
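
Accordingly, the number of logical CPUs the kernel sees can be counted from /proc/cpuinfo; for example:

  # Count logical CPUs: one "processor" entry per schedulable entity
  grep -c ^processor /proc/cpuinfo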

 

Architecture / Maximum memory

RHEL version   3       4              5
x86            64GB    64GB           16GB
Itanium2       128GB   256GB/1024TB   1TB/1024TB
Intel64        128GB   256GB/1TB      256GB/1TB
Power          64GB    128GB/1TB      512GB/1TB

Note: with RHEL 3 or RHEL 4 on the i386 architecture:

  • standard kernel supports 4 GB of physical memory
  • SMP kernel supports 16 GB of physical memory
  • hugemem kernel supports 64 GB of physical memory

Additionally, in certain workload scenarios it may be advantageous to use the "hugemem" kernel on systems with more than 12GB of main memory.
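
To check which kernel variant a system is currently running and how much memory the kernel sees, something like the following works (the version strings in the comment are only examples; hugemem and SMP kernels carry "hugemem" and "smp" suffixes):

  # Running kernel variant, e.g. 2.4.21-xx.ELsmp or 2.4.21-xx.ELhugemem
  uname -r
  # Physical memory visible to the kernel
  grep MemTotal /proc/meminfo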

 

Note: The x86 "Hugemem" kernel is not provided in Red Hat Enterprise Linux 5.

Miscellaneous

RHEL version                                    3            4            5
Maximum file size (Ext3)                        2TB          2TB          2TB
Maximum filesystem size (Ext3)                  2TB          8TB          8TB/16TB
Maximum file size (GFS)                         2TB          16TB/8EB     16TB/8EB
Maximum filesystem size (GFS)                   2TB          16TB/8EB     16TB/8EB
Maximum x86 per-process virtual address space   approx 4GB   approx 4GB   approx 3GB

Note: The maximum Ext3 filesystem capacity in RHEL 5 is 16TB (increased from 8TB). This enhancement was originally included in RHEL 5 as a Technology Preview and is fully supported as of update 5.1.

 

Note: If there are any 32-bit machines in the cluster, the maximum GFS filesystem size is 16TB. If all machines in the cluster are 64-bit, the maximum size is 8EB.

 

Kernel and OS features

RHEL version                                   3              4              5
Kernel foundation                              Linux 2.4.21   Linux 2.6.9    Linux 2.6.18
Compiler/toolchain                             GCC 3.2        GCC 3.4        GCC 4.1
Languages supported                            10             15             19
NIAP/CC certified                              Yes - 3+       Yes - 4+       Yes - 4+
Compatibility libraries                        V2.1           V2.1 and V3    V3 and V4
Common Operating Environment (COE) compliant   Yes            Yes            N/A
LSB compliant                                  Yes - 1.3      Yes - 3        Yes - 3.1

 

RHEL 32-bit memory management

PAE

In computing, Physical Address Extension (PAE) is a feature of some x86 and x86-64 processors that allows more than 4 gigabytes of physical memory to be used on 32-bit systems, given appropriate operating system support. PAE is provided by Intel Pentium Pro and later CPUs.

The x86 processor hardware is augmented with additional address lines used to select the additional memory, so the physical address size is increased from 32 bits to 36 bits. This theoretically increases the maximum physical memory size from 4 GB to 64 GB. The 32-bit size of the virtual address is not changed, so regular application software continues to use instructions with 32-bit addresses and (in a flat memory model) is limited to 4 gigabytes of virtual address space. The operating system uses page tables to map each process's 4 GB virtual address space onto the 64 GB of physical memory, and the mapping is typically different for each process. In this way, the extra memory is useful even though no single regular application can access it all simultaneously.
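
Whether a given CPU supports PAE can be checked from the flags line of /proc/cpuinfo; for example:

  # Check whether the CPU advertises the PAE feature flag
  grep -q '\bpae\b' /proc/cpuinfo && echo "PAE supported" || echo "PAE not supported"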

Hugemem

RHEL 4 includes a kernel known as the hugemem kernel. This kernel supports a 4GB per-process user space (versus 3GB for the other kernels) and a 4GB direct kernel space. Using this kernel allows RHEL to run on systems with up to 64GB of main memory. The hugemem kernel is required in order to use all the memory in system configurations containing more than 16GB of memory. The hugemem kernel can also benefit configurations running with less memory (for example, when running an application that could benefit from the larger per-process user space).

Note: To provide a 4GB address space for both kernel and user space, the kernel must maintain two separate virtual memory address mappings. This introduces overhead when transferring from user to kernel space; for example, in the case of system calls and interrupts. The impact of this overhead on overall performance is highly application dependent.
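
On RHEL 3 and RHEL 4 the hugemem variant ships as a separate kernel package, so a quick way to check whether it is installed is:

  # Is the hugemem kernel variant installed? (RHEL 3/4, x86 only)
  rpm -q kernel-hugemem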

RHEL 5 situation

The original i686 had only 32 bits with which to address the memory in a machine. Because 2^32 bytes == 4GB, the original i686 can only address up to 4GB of memory.

Some time later, Intel introduced the PAE extensions for i686. This means that although the processor still only runs 32-bit code, it can use up to 36 bits to address memory (more bits were added later, but that is not relevant here). So newer i686 processors could address 2^36 bytes of memory, or 64GB.
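
The arithmetic is easy to verify with shell arithmetic (assuming a shell with 64-bit integer arithmetic):

  # 2^32 bytes and 2^36 bytes expressed in GB
  echo $(( (1 << 32) / 1024 / 1024 / 1024 ))   # prints 4
  echo $(( (1 << 36) / 1024 / 1024 / 1024 ))   # prints 64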

However, the Linux kernel has limitations that make it essentially impossible to run reliably with 64GB of memory. Therefore, while running the Linux kernel on an i686 PAE machine, you can really only use up to about 16GB of memory reliably.

To work around that limitation, RHEL-4 (and RHEL-3) had a new mode called "hugemem".  Among other things, it allowed the kernel to reliably address all 64GB of memory.

The problem is that the hugemem patches were never accepted upstream. Because of that, and because 64-bit machines were so prevalent by the time RHEL-5 was released, Red Hat decided to drop the hugemem variant from RHEL-5.
So that leads us to the following situation:

RHEL-4 kernels:

  • i686 - no PAE, no hugemem patches, can address up to 4GB memory
  • i686-smp - PAE, no hugemem patches, can reliably run with around 16GB
  • i686-hugemem - PAE, hugemem patches, can reliably run with 64GB

RHEL-5 kernels:

  • i686 - no PAE, no hugemem patches, can address up to 4GB of memory
  • i686-PAE - PAE, no hugemem patches, can reliably run with around 16GB

So, in summary, if customers need to use more than 16GB of memory, the best option is RHEL-5 x86_64, which suffers from none of these limitations. A secondary option is RHEL-4 i686 with the hugemem kernel, but as RHEL-4 approaches end of life this becomes less and less advisable.

Memory size supported for RHEL 5 ia32 kernels

The support situation is correctly described at http://www.redhat.com/rhel/compare: the RHEL 5 32-bit kernels (and Xen) are officially supported with up to 16GB of memory only.

Whilst it is possible for the non-Xen kernel to access up to 32GB, ultimately the system will start hitting LOWMEM exhaustion problems once it is under load. This is because the kernel uses 32 bytes of LOWMEM to track the status of each page in the mem_map array. At 16GB of RAM, the kernel needs 128MB of memory just to track what it is doing. Whilst a PAE-enabled kernel can handle up to 64GB of physical address space, to manage and track that amount of memory the kernel needs 512MB, or half of the available LOWMEM.
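
Those figures can be reproduced from the numbers above (4KB pages, 32 bytes of LOWMEM per page):

  # LOWMEM consumed by the mem_map array for various RAM sizes
  for gb in 16 32 64; do
      pages=$(( gb * 1024 * 1024 * 1024 / 4096 ))
      echo "${gb}GB RAM: $(( pages * 32 / 1024 / 1024 ))MB of LOWMEM for mem_map"
  done
  # 16GB -> 128MB, 32GB -> 256MB, 64GB -> 512MB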

Whilst this will "work" in the sense that the system can boot up and idle without much trouble, once load is applied it will quickly run out of LOWMEM space and possibly need to OOM-kill processes to recover. This was worked around in RHEL 3 and RHEL 4 with the hugemem kernel, which provides more LOWMEM space (with the side effect of a performance hit due to the extra page-table work on each system call), but the overhead and effort of maintaining that solution is "expensive" when it is less "expensive" (and now almost unavoidable) to use 64-bit hardware, where this is not a problem.

The decision not to include the hugemem patches in RHEL 5 is one that Red Hat Product Management took a number of years ago, in the lead-up to the development of RHEL 5. As for the reasons, there were several considerations: 64-bit hardware was becoming the norm; it was an opportunity to encourage customers to move to x86_64 instead of staying on x86, simplifying long-term support goals; and the required patches were never accepted upstream, which made them difficult and 'risky' to maintain.

As for alternatives, a viable and supported solution is to run 32-bit applications directly on a 64-bit OS, making use of the appropriate 32-bit libraries. Otherwise, hosting the application in a 32-bit RHEL 4 or RHEL 5 virtualized guest on a 64-bit host OS is also supported.
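
For example, on a 64-bit RHEL 5 host the 32-bit runtime can be pulled in with yum (the package list below is illustrative; the exact packages and arch suffixes depend on what the application links against):

  # Install common 32-bit runtime libraries on an x86_64 host
  yum install glibc.i686 libstdc++.i386
  # Verify that a given binary is 32-bit (path is an example)
  file /path/to/application    # expect "ELF 32-bit LSB executable"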

 

LUN number supported

The maximum number of LUNs depends on two factors:
  • the SCSI mid-layer kernel module (scsi_mod)
  • the HBA (adapter) driver module

The SCSI mid-layer limit is the max_luns parameter of scsi_mod. Running the command "modinfo scsi_mod" shows:

  parm: max_luns: Last SCSI LUN (should be between 1 and 2^32-1) (int)

which works out to 4,294,967,295 LUNs in theory. (For the values in effect on a running system, check the files under /sys/module/scsi_mod/parameters.)

The limit can also be set by the HBA driver in use; most drivers, such as qla2xxx and lpfc, have module parameters that allow large numbers of devices. For example, "modinfo lpfc" shows:

  parm: lpfc_max_luns: Maximum allowed LUN (int)

In practice, these values range from 8, for older parallel SCSI drivers, up to the maximum number of supported SCSI devices for Fibre Channel.
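
To see the limits actually in effect on a running system, the module parameters are exposed under /sys (the lpfc path below is only present when that driver is loaded):

  # SCSI mid-layer LUN limit currently in effect
  cat /sys/module/scsi_mod/parameters/max_luns
  # Example HBA driver limit, if the lpfc driver is loaded
  cat /sys/module/lpfc/parameters/lpfc_max_luns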

 

More information

For more information, see the Red Hat web site: https://www.redhat.com/rhel/compare/

 
