bullx blade system

bullx blade chassis

The bullx blade chassis can host up to 18 compute blades in 7U. It also contains the first-level interconnect, a management unit, and all the components needed to power and cool the blades, interconnect and management unit.

It is the ideal foundation for building a medium- to large-scale HPC cluster, combining bullx B500 compute blades with R423 E2 service nodes.

The bullx blade chassis provides a peak performance of up to 1.69 Tflops with Nehalem-EP processors (Intel® Xeon® 5500 series) and up to 2.53 Tflops with Westmere-EP processors (Intel® Xeon® 5600 series). It also offers a fully non-blocking InfiniBand QDR interconnect.
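The quoted figures are consistent with a fully populated chassis of dual-socket blades. A minimal sketch of that arithmetic follows; the 2.93 GHz clock, the dual-socket blade layout, and the 4 double-precision flops per cycle per core (SSE) are assumptions for illustration and are not stated on this page.

```python
# Sanity check of the chassis peak-performance figures quoted above.
# Assumptions (not stated on this page): top-bin 2.93 GHz Xeon SKUs,
# dual-socket blades, and 4 DP flops per cycle per core via SSE.

BLADES_PER_CHASSIS = 18   # compute blades in the 7U chassis
SOCKETS_PER_BLADE = 2     # dual-socket blade (assumption)
CLOCK_GHZ = 2.93          # assumed SKU clock
FLOPS_PER_CYCLE = 4       # DP flops/cycle/core with SSE (assumption)

def peak_tflops(cores_per_socket: int) -> float:
    """Aggregate peak DP performance of a fully populated chassis, in Tflops."""
    gflops = (BLADES_PER_CHASSIS * SOCKETS_PER_BLADE * cores_per_socket
              * CLOCK_GHZ * FLOPS_PER_CYCLE)
    return gflops / 1000.0

print(round(peak_tflops(4), 2))  # Nehalem-EP, 4 cores/socket  -> 1.69
print(round(peak_tflops(6), 2))  # Westmere-EP, 6 cores/socket -> 2.53
```

Under these assumptions the four-core and six-core cases reproduce the 1.69 and 2.53 Tflops figures exactly.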


bullx B500 Nehalem Compute Blades (NCB)


bullx B505 Accelerator Blades (GPGPU) 


bullx B510 Sandy Bridge Compute Blades (SCB)


bullx B515 Accelerator Blades (I3GPU)


bullx B520 Haswell Compute Blades (HCB) 