Hierarchical Collective I/O Scheduling for High-Performance Computing

Jialin Liu, Yu Zhuang, Yong Chen

Research output: Contribution to journal › Article

Abstract

The non-contiguous access patterns of many scientific applications result in a large number of I/O requests, which can seriously limit data-access performance. Collective I/O has been widely used to address this issue. However, the performance of collective I/O can be dramatically degraded on today's high-performance computing systems because of the growing shuffle cost caused by highly concurrent data accesses, and the problem worsens as applications become increasingly data intensive. Previous research has focused primarily on optimizing the I/O access cost in collective I/O while largely ignoring the shuffle cost involved. Prior work also assumes that the lowest average response time yields the best QoS and performance, but this does not always hold for collective requests once the additional shuffle cost is considered. In this study, we propose a new hierarchical I/O scheduling (HIO) algorithm to address the increasing shuffle cost in collective I/O. The fundamental idea is to schedule applications' I/O requests based on a shuffle-cost analysis to achieve the best overall performance, rather than optimizing I/O accesses alone. The algorithm is currently evaluated with MPICH3 and PVFS2. Both theoretical analysis and experimental tests show that the proposed hierarchical I/O scheduling has potential to address the degraded performance of collective I/O under highly concurrent accesses.
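
For readers unfamiliar with the access pattern the abstract refers to, the following is a minimal sketch, not taken from the paper, of a non-contiguous collective write in C using standard MPI-IO calls (as shipped with MPICH): each rank owns interleaved blocks of a shared file and writes them with MPI_File_write_all, so the two-phase implementation shuffles data to aggregator processes before issuing large contiguous writes to the underlying file system (e.g., PVFS2). The file name, block size, and stripe count below are illustrative assumptions.

/* Minimal sketch of a strided, non-contiguous collective write.
 * All parameters (file name, block size, number of stripes) are
 * illustrative assumptions, not values from the paper. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int block = 1024;     /* ints per block owned by one rank */
    const int nstripes = 4;     /* interleaved blocks each rank writes */

    int *buf = malloc((size_t)nstripes * block * sizeof(int));
    for (int i = 0; i < nstripes * block; i++)
        buf[i] = rank;

    /* File view: rank r owns every nprocs-th block, i.e. the interleaved
     * (non-contiguous) pattern that generates many small requests if each
     * rank writes independently. */
    MPI_Datatype filetype;
    MPI_Type_vector(nstripes, block, block * nprocs, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, (MPI_Offset)rank * block * sizeof(int),
                      MPI_INT, filetype, "native", MPI_INFO_NULL);

    /* Collective write: every rank participates, and the MPI-IO layer
     * exchanges (shuffles) data among aggregator processes before doing
     * a small number of large, contiguous file writes. */
    MPI_File_write_all(fh, buf, nstripes * block, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    free(buf);
    MPI_Finalize();
    return 0;
}

The data exchange to aggregators in this two-phase scheme is the shuffle cost that the proposed HIO scheduling accounts for when ordering concurrent collective requests.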

Original language: English
Pages (from-to): 117-126
Number of pages: 10
Journal: Big Data Research
Volume: 2
Issue number: 3
DOIs
State: Published - Sep 1 2015

Keywords

  • Big data
  • Collective I/O
  • Data intensive computing
  • High-performance computing
  • Scheduling
