Open Access System for Information Sharing


Conference
Full metadata record
Files in This Item:
There are no files associated with this item.
dc.contributor.author: SONG, WONJUN
dc.contributor.author: KIM, GWANGSUN
dc.contributor.author: JUNG, HYUNGJOON
dc.contributor.author: CHUNG, JONGWOOK
dc.contributor.author: AHN, JUNG HO
dc.contributor.author: LEE, JAE W.
dc.contributor.author: KIM, JOHN
dc.date.accessioned: 2018-12-04T02:57:00Z
dc.date.available: 2018-12-04T02:57:00Z
dc.date.created: 2018-11-12
dc.date.issued: 2017-04-12
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/94445
dc.description.abstract: NUMA (non-uniform memory access) servers are commonly used in high-performance computing and datacenters. Within each server, a processor-interconnect (e.g., Intel QPI, AMD HyperTransport) is used to communicate between the different sockets or nodes. In this work, we explore the impact of the processor-interconnect on overall performance -- in particular, the performance unfairness caused by processor-interconnect arbitration. It is well known that locally-fair arbitration does not guarantee globally-fair bandwidth sharing as closer nodes receive more bandwidth in a multi-hop network. However, this work demonstrates that the opposite can occur in a commodity NUMA server where remote nodes receive higher bandwidth (and perform better). We analyze this problem and identify that this occurs because of external concentration used in router micro-architectures for processor-interconnects without globally-aware arbitration. While accessing remote memory can occur in any NUMA system, performance unfairness (or performance variation) is more critical in cloud computing and virtual machines with shared resources. We demonstrate how this unfairness creates significant performance variation when a workload is executed on the Xen virtualization platform. We then provide analysis using synthetic workloads to better understand the source of unfairness and eliminate the impact of other shared resources, including the shared last-level cache and main memory. To provide fairness, we propose a novel, history-based arbitration that tracks the history of arbitration grants made in the previous history window. A weighted arbitration is done based on the history to provide global fairness. Through simulations, we show our proposed history-based arbitration can provide global fairness and minimize the processor-interconnect performance unfairness at low cost.
dc.language: English
dc.publisher: ACM
dc.relation.isPartOf: 2017 Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems
dc.relation.isPartOf: Proceedings of 2017 Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems
dc.title: History-Based Arbitration for Fairness in Processor-Interconnect of NUMA Servers
dc.type: Conference
dc.type.rims: CONF
dc.identifier.bibliographicCitation: 2017 Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, pp.765 - 777
dc.citation.conferenceDate: 2017-04-08
dc.citation.conferencePlace: CC
dc.citation.conferencePlace: Xi'an, China
dc.citation.endPage: 777
dc.citation.startPage: 765
dc.citation.title: 2017 Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems
dc.contributor.affiliatedAuthor: KIM, GWANGSUN
dc.description.journalClass: 1
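
The abstract describes the proposed mechanism only at a high level: an arbiter tracks the grants it has issued over a recent history window and weights future arbitration decisions against sources that have been granted more often. Below is a minimal sketch of that idea, assuming a sliding grant-history window and a "fewest recent grants wins" weighting; the names (HistoryBasedArbiter, grant, history_window) are hypothetical and do not reproduce the paper's actual router micro-architecture.

# Minimal sketch of a history-based arbiter; the window size and the
# weighting policy are illustrative assumptions, not the paper's design.
from collections import deque

class HistoryBasedArbiter:
    def __init__(self, num_inputs, history_window=128):
        self.num_inputs = num_inputs
        # Sliding window of the most recent grants (input port indices).
        self.history = deque(maxlen=history_window)

    def grant(self, requests):
        """Grant one requesting input, favoring the input that received
        the fewest grants within the recent history window."""
        requesting = [i for i, r in enumerate(requests) if r]
        if not requesting:
            return None
        # Count grants per input over the window.
        counts = [0] * self.num_inputs
        for port in self.history:
            counts[port] += 1
        # Weighted decision: fewest recent grants wins; ties go to the
        # lowest-numbered input.
        winner = min(requesting, key=lambda i: counts[i])
        self.history.append(winner)
        return winner

# Example: input 0 requests every cycle, input 1 only on even cycles.
# Contested cycles are steered by recent-grant counts rather than by a
# fixed local priority.
arb = HistoryBasedArbiter(num_inputs=2)
grants = [arb.grant([True, cycle % 2 == 0]) for cycle in range(1000)]
print("grants to input 0:", grants.count(0), "grants to input 1:", grants.count(1))

The intent mirrors the abstract's claim: biasing each arbitration decision by per-source grant history is what pushes a locally-fair arbiter toward globally-fair bandwidth sharing.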

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
