The open archive for STFC research publications


Full Record Details

Persistent URL http://purl.org/net/epubs/work/40650651
Record Status Checked
Record Id 40650651
Title On the challenges of deploying an unusual high performance hybrid object/file parallel storage system in JASMIN
Abstract Data volume and velocity in the environmental sciences continue to increase, with higher-data-rate Earth Observation platforms coming online, such as ESA’s Sentinel satellites [1], and high-resolution climate modelling, such as the CMIP6 programmes [2], being just two examples. JASMIN [3], the UK’s super-data-cluster hybrid HPC/cloud platform for environmental data analysis, is at the forefront of this deluge, facing increasing demands from its more than 10,000 users and the science communities it supports. Since its inception in 2012, JASMIN has successfully supported this volume and velocity with just two tiers of storage: a single high-performance parallel file system (reaching ~20 Pbytes capacity to date) and tape (of around 80 Pbytes capacity). However, for the JASMIN Phase 4 upgrade installed in the first half of 2018, we needed to increase the number of tiers to straddle the capacity/performance/cost space more easily. To do so, we have extended from our well-known HPC parallel file system storage to add innovative software-defined hybrid object/file storage platforms with unusual choices and feature sets for HPC systems. In this WIP session, we briefly discuss our 42 Pbyte Quobyte [4] software-defined storage system, the largest-capacity deployment in the world and the first of its kind installed in an HPC context. We will discuss specifically some of the challenges in constructing a novel 5-tier non-blocking, low-latency, routed CLOS Ethernet network of sufficient bandwidth (tens of Tb/s) and scale (1,000 100/50/40 Gb ports and 1,000 10 Gb ports) to support JASMIN’s growth. Preliminary results will be presented, showing our first WIP benchmarking output reaching >200 Gbyte/s, and we hope to report a first-ever IO500 [5] benchmark result for Quobyte at SC18.
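The bandwidth figure quoted in the abstract can be reproduced with a back-of-envelope calculation. The sketch below is illustrative only: it assumes a hypothetical port mix loosely based on the counts in the abstract (taking 40 Gb/s as a conservative rate for the 100/50/40 Gb port group) and applies the non-blocking CLOS property that bisection bandwidth must match the edge capacity injected from one half of the hosts.

```python
def aggregate_bandwidth_tbps(port_counts):
    """Sum line-rate capacity across port groups, in Tb/s.

    port_counts maps per-port rate in Gb/s -> number of ports.
    """
    return sum(n * gbps for gbps, n in port_counts.items()) / 1000.0

# Hypothetical port mix based on the abstract's figures:
# ~1,000 ports taken at 40 Gb/s (the ports are 100/50/40 Gb, so this
# is a lower bound) and ~1,000 ports at 10 Gb/s.
ports = {40: 1000, 10: 1000}

edge = aggregate_bandwidth_tbps(ports)   # 50.0 Tb/s edge capacity
# In a non-blocking CLOS fabric, bisection bandwidth equals the
# aggregate edge bandwidth of one half of the endpoints:
bisection = edge / 2                     # 25.0 Tb/s
```

Even with the conservative 40 Gb/s assumption, this lands in the "tens of Tb/s" range the abstract quotes; using 100 Gb/s for the faster port group pushes the estimate well above it.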
Keywords JASMIN, Storage systems, HPC
Language English (EN)
Type Presentation
Details Presented at 3rd Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS 2018), Dallas, TX, 12 Nov 2018. doi:10.13140/RG.2.2.10395.31527
Year 2018