Item Information

Full metadata record
DC Field | Value | Language
dc.contributor.author | Haoran, Wang | -
dc.contributor.author | Thibaut, Tachon | -
dc.contributor.author | Chong, Li | -
dc.date.accessioned | 2023-04-24T02:02:17Z | -
dc.date.available | 2023-04-24T02:02:17Z | -
dc.date.issued | 2022 | -
dc.identifier.uri | https://link.springer.com/article/10.1007/s10766-022-00741-6 | -
dc.identifier.uri | https://dlib.phenikaa-uni.edu.vn/handle/PNK/8234 | -
dc.description | CC BY | vi
dc.description.abstract | The increasing size of deep neural networks (DNNs) creates a high demand for distributed training. An expert can find good hybrid parallelism strategies, but designing suitable strategies is time- and labor-consuming. Automating parallelism-strategy generation is therefore crucial and desirable for DNN designers. Several automatic search approaches have recently been studied to free experts from the heavy work of conceiving parallel strategies. However, these approaches all rely on a numerical cost model, which requires extensive profiling results that lack portability. Such profiling-based approaches cannot lighten the strategy-generation work because the profiling values are not reusable. Our intuition is that there is no need to estimate the actual execution time of distributed training, only to compare the relative costs of different strategies. | vi
dc.language.iso | en | vi
dc.publisher | Springer | vi
dc.subject | DNNs | vi
dc.subject | DNN designers | vi
dc.title | SMSG: Profiling-Free Parallelism Modeling for Distributed Training of DNN | vi
dc.type | Book | vi
Appears in Collections:
OER - Công nghệ thông tin (Information Technology)
