Document information
Record metadata
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Dolz, Manuel F. | - |
dc.contributor.author | Barrachina, Sergio | - |
dc.contributor.author | Martínez, Héctor | - |
dc.date.accessioned | 2023-04-25T01:26:40Z | - |
dc.date.available | 2023-04-25T01:26:40Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | https://link.springer.com/article/10.1007/s11227-023-05050-4 | - |
dc.identifier.uri | https://dlib.phenikaa-uni.edu.vn/handle/PNK/8247 | - |
dc.description | CC BY | vi |
dc.description.abstract | In this work, we assess the performance and energy efficiency of high-performance codes for the convolution operator, based on the direct, explicit/implicit lowering and Winograd algorithms used for deep learning (DL) inference on a series of ARM-based processor architectures. Specifically, we evaluate the NVIDIA Denver2 and Carmel processors, as well as the ARM Cortex-A57 and Cortex-A78AE CPUs as part of a recent set of NVIDIA Jetson platforms. The performance–energy evaluation is carried out using the ResNet-50 v1.5 convolutional neural network (CNN) on varying configurations of convolution algorithms, number of threads/cores, and operating frequencies on the tested processor cores. The results demonstrate that the best throughput is obtained on all platforms with the Winograd convolution operator running on all the cores at their highest frequency. However, if the goal is to reduce the energy footprint, there is no rule of thumb for the optimal configuration. | vi |
dc.language.iso | en | vi |
dc.publisher | Springer | vi |
dc.subject | ARM-based processor architectures | vi |
dc.subject | NVIDIA Denver2 | vi |
dc.title | Performance–energy trade-offs of deep learning convolution algorithms on ARM processors | vi |
dc.type | Book | vi |
Collection | |
OER - Information Technology |
List of attached files:
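The abstract above compares direct, explicit/implicit lowering, and Winograd convolution algorithms for DL inference. As an illustration of the explicit-lowering family it mentions, the classic im2col transformation rewrites a convolution as a single GEMM. This is a minimal sketch (stride 1, no padding, NCHW-style single image, illustrative names only), not the code evaluated in the paper:

```python
import numpy as np

def im2col_conv2d(x, w):
    """Convolution via explicit lowering (im2col) followed by one GEMM.
    Assumptions: stride 1, no padding, x is C x H x W, w is K x C x R x S."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = H - R + 1, W - S + 1
    # Lower all input patches into a (C*R*S) x (Ho*Wo) matrix.
    cols = np.empty((C * R * S, Ho * Wo))
    idx = 0
    for c in range(C):
        for r in range(R):
            for s in range(S):
                # Each row holds one (channel, offset) slice across all
                # output positions; element (i, j) maps to x[c, r+i, s+j].
                cols[idx] = x[c, r:r + Ho, s:s + Wo].reshape(-1)
                idx += 1
    # The whole convolution is now a single matrix multiply.
    return (w.reshape(K, -1) @ cols).reshape(K, Ho, Wo)
```

The memory cost of materializing `cols` (a factor of roughly R*S over the input) versus the GEMM efficiency gained is exactly the kind of throughput/energy trade-off the study quantifies against direct and Winograd variants.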