Search Results

Results 1331-1340 of 2278 (Search time: 0.007 seconds).
  • Authors: Gizem, Yalcin; Erlis, Themeli; Evert, Stamhuis;  Advisor: -;  Co-Author: - (2022)

    Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate the public perceptions of algorithmic judges. Across two experiments (N = 1,822), and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they trust human judges more and have greater intentions to go to court when a human (vs. an algorithmic) judge adjudicates.

  • Authors: Andrea, Tonon; Fabio, Vandin;  Advisor: -;  Co-Author: - (2023)

    The mining of time series data has applications in several domains, and in many cases the data are generated by networks, with time series representing paths on such networks. In this work, we consider the scenario in which the dataset, i.e., a collection of time series, is generated by an unknown underlying network, and we study the problem of mining statistically significant paths, which are paths whose number of observed occurrences in the dataset is unexpected given the distribution defined by some features of the underlying network. A major challenge in such a problem is that the underlying network is unknown, and, thus, one cannot directly identify such paths. We then propose CASPITA, an algorithm to mine statistically significant paths in time series data generated by an unkn...
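    The core idea of "unexpectedly frequent paths" can be illustrated with a toy sketch. All names here are hypothetical, and the null model below (independent empirical transition probabilities) is a drastic simplification of the statistical machinery an actual method like CASPITA needs (proper null distributions and multiple-hypothesis correction):

```python
from collections import Counter

# toy dataset: each time series is a path (sequence of nodes)
dataset = [("a", "b", "c"), ("a", "b", "c"), ("a", "b", "d"),
           ("a", "b", "c"), ("b", "c", "d")]

# observed counts of each full path
observed = Counter(dataset)

# toy null model: empirical edge frequencies of the (unknown) network
edges = Counter((p[i], p[i + 1]) for p in dataset for i in range(len(p) - 1))
out_deg = Counter()
for (u, _), c in edges.items():
    out_deg[u] += c

def expected_count(path, n_series):
    """Expected occurrences if each transition were taken independently
    with its empirical probability (illustrative null model only)."""
    prob = sum(1 for p in dataset if p[0] == path[0]) / n_series
    for u, v in zip(path, path[1:]):
        prob *= edges[(u, v)] / out_deg[u]
    return prob * n_series

for path, obs in observed.items():
    # a real method would turn this deviation into a corrected p-value
    print(path, "observed:", obs,
          "expected: %.2f" % expected_count(path, len(dataset)))
```

    Paths whose observed count deviates strongly from the expectation under the null model would be the candidates for statistical significance.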

  • Authors: Nina, Herrmann; Herbert, Kuchen;  Advisor: -;  Co-Author: - (2023)

    Contemporary HPC hardware typically provides several levels of parallelism, e.g. multiple nodes, each having multiple cores (possibly with vectorization) and accelerators. Efficiently programming such systems usually requires skills in combining several low-level frameworks such as MPI, OpenMP, and CUDA. This overburdens programmers without substantial parallel programming skills. One way to overcome this problem and to abstract from details of parallel programming is to use algorithmic skeletons. In the present paper, we evaluate the multi-node, multi-CPU and multi-GPU implementation of the most essential skeletons Map, Reduce, and Zip. Our main contribution is a discussion of the efficiency of using multiple parallelization levels and the consideration of which fine-tune settings ...
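    The skeleton idea can be conveyed with a minimal sequential sketch (hypothetical names; real skeleton libraries implement these same patterns across MPI, OpenMP, and CUDA backends). The programmer composes Map, Zip, and Reduce; the parallelization level is a backend concern:

```python
from functools import reduce as fold

def skel_map(f, xs):
    # Map skeleton: apply f independently to every element;
    # this independence is what lets a backend distribute the work.
    return [f(x) for x in xs]

def skel_zip(f, xs, ys):
    # Zip skeleton: combine corresponding elements of two sequences.
    return [f(x, y) for x, y in zip(xs, ys)]

def skel_reduce(f, xs):
    # Reduce skeleton: fold with an associative operator, so a backend
    # may combine partial results from different nodes or GPUs.
    return fold(f, xs)

# a dot product expressed purely with skeletons
a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
dot = skel_reduce(lambda u, v: u + v,
                  skel_zip(lambda x, y: x * y, a, b))
print(dot)  # 32.0
```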

  • Authors: Dominik, Raab; Andreas, Theissler; Myra, Spiliopoulou;  Advisor: -;  Co-Author: - (2022)

    In clinical practice, algorithmic predictions may seriously jeopardise patients’ health and thus are required to be validated by medical experts before a final clinical decision is made. Towards that aim, there is a need to incorporate explainable artificial intelligence techniques into medical research. In the specific field of epileptic seizure detection there are several machine learning algorithms but fewer methods for explaining them in an interpretable way. Therefore, we introduce XAI4EEG: an application-aware approach for an explainable and hybrid deep learning-based detection of seizures in multivariate EEG time series. In XAI4EEG, we combine deep learning models and domain knowledge on seizure detection, namely (a) frequency bands, (b) location of EEG leads and (c) temporal char...
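    One ingredient the abstract mentions, frequency bands, can be illustrated in isolation: summarising an EEG channel by its power in the classical bands. The band limits are the standard ones, but the code is a naive O(n²) DFT sketch for illustration, not XAI4EEG's implementation:

```python
import math, cmath

# classical EEG frequency bands (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_power(signal, fs, lo, hi):
    """Sum of squared DFT magnitudes over frequency bins in [lo, hi)."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq < hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            power += abs(coef) ** 2 / n
    return power

# a pure 10 Hz oscillation sampled at 256 Hz falls in the alpha band
sig = [math.sin(2 * math.pi * 10 * t / 256) for t in range(256)]
powers = {name: band_power(sig, 256, lo, hi)
          for name, (lo, hi) in BANDS.items()}
print(max(powers, key=powers.get))  # alpha
```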

  • Authors: Luca, Guarnera; Oliver, Giudice; Salvatore, Livatino;  Advisor: -;  Co-Author: - (2022)

    A crime scene can provide valuable evidence critical to explaining the reason for and modality of the crime, and it can also lead to the arrest of criminals. The type of evidence collected by crime scene investigators or by law enforcement may accordingly affect the cases involved. The examination of bullets and cartridge cases is of paramount importance in forensic science because they may contain traces of microscopic striations, impressions and markings, which are unique and reproducible as “ballistic fingerprints”. The analysis of bullets and cartridge cases is a complicated and challenging process, typically based on optical comparison, leading to the identification of the employed firearm. New methods have recently been proposed for more accurate comparisons, which rely on three-dimensi...

  • Authors: Vincenzo Eduardo, Padulano; Pablo Oliver, Cortés; Pedro, Alonso-Jordá;  Advisor: -;  Co-Author: - (2023)

    CERN (Conseil Européen pour la Recherche Nucléaire) is the largest research centre for high energy physics (HEP). It offers unique computational challenges as a result of the large amount of data generated by the Large Hadron Collider. CERN has developed and supports a software framework called ROOT, which is the de facto standard for HEP data analysis. This framework offers a high-level and easy-to-use interface called RDataFrame, which allows managing and processing large data sets. In recent years, its functionality has been extended to take advantage of distributed computing capabilities. Thanks to its declarative programming model, the user-facing API can be decoupled from the actual execution backend. This decoupling allows physics analyses to scale automatically to thousands of computat...

  • Authors: Sanghyub John, Lee; JongYoon, Lim; Leo, Paas;  Advisor: -;  Co-Author: - (2023)

    Tactics to determine the emotions of authors of texts such as Twitter messages often rely on multiple annotators who label relatively small data sets of text passages. An alternative method gathers large text databases that contain the authors’ self-reported emotions, to which artificial intelligence, machine learning, and natural language processing tools can be applied. Both approaches have strengths and weaknesses. Emotions evaluated by a few human annotators are susceptible to idiosyncratic biases that reflect the characteristics of the annotators. But models based on large, self-reported emotion data sets may overlook subtle, social emotions that human annotators can recognize.

  • Authors: Fabian, Knorr; Peter, Thoman; Thomas, Fahringer;  Advisor: -;  Co-Author: - (2022)

    Runtime systems can significantly reduce the cognitive complexity of scientific applications, narrowing the gap between systems engineering and domain science in HPC. One of the most important angles in this is automating data migration in a cluster. Traditional approaches require the application developer to model communication explicitly, for example through MPI primitives. Celerity, a runtime system for accelerator clusters heavily inspired by the SYCL programming model, instead provides a purely declarative approach focused around access patterns. In addition to eliminating the need for explicit data transfer operations, it provides a basis for efficient and dynamic scheduling at runtime.
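    The declarative, access-pattern-based idea can be sketched conceptually. Celerity itself is a C++/SYCL runtime; the Python below is a hypothetical toy (all names invented) showing how declared read/write ranges, rather than explicit transfers, let a runtime infer dependencies and hence data movement:

```python
# Conceptual sketch: tasks declare which buffer ranges they read and
# write; the runtime derives the task graph from these declarations.

class Task:
    def __init__(self, name, reads, writes):
        self.name = name
        self.reads = reads      # {buffer_name: (start, end)} half-open ranges
        self.writes = writes

def infer_dependencies(tasks):
    """A task depends on the most recent earlier task whose written
    range overlaps one of its read ranges (read-after-write)."""
    deps = {t.name: set() for t in tasks}
    for i, t in enumerate(tasks):
        for buf, (r0, r1) in t.reads.items():
            for prev in reversed(tasks[:i]):
                w = prev.writes.get(buf)
                if w and w[0] < r1 and r0 < w[1]:
                    deps[t.name].add(prev.name)
                    break
    return deps

tasks = [
    Task("init",    reads={},              writes={"A": (0, 100)}),
    Task("stencil", reads={"A": (0, 100)}, writes={"B": (0, 100)}),
    Task("sum",     reads={"B": (50, 100)}, writes={"C": (0, 1)}),
]
print(infer_dependencies(tasks))
```

    On a cluster, the same overlap information also tells the runtime which buffer regions must be transferred between nodes, which is why no explicit MPI-style communication appears in the program.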

  • Authors: Manuel F., Dolz; Sergio, Barrachina; Héctor, Martínez;  Advisor: -;  Co-Author: - (2023)

    In this work, we assess the performance and energy efficiency of high-performance codes for the convolution operator, based on the direct, explicit/implicit lowering and Winograd algorithms used for deep learning (DL) inference on a series of ARM-based processor architectures. Specifically, we evaluate the NVIDIA Denver2 and Carmel processors, as well as the ARM Cortex-A57 and Cortex-A78AE CPUs as part of a recent set of NVIDIA Jetson platforms. The performance–energy evaluation is carried out using the ResNet-50 v1.5 convolutional neural network (CNN) on varying configurations of convolution algorithms, number of threads/cores, and operating frequencies on the tested processor cores. The results demonstrate that the best throughput is obtained on all platforms with the Winograd con...
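    For reference, the operator being optimized can be written in its simplest "direct" form. This naive sketch computes the same result that the paper's direct, lowering, and Winograd codes compute efficiently (as in DL frameworks, it is technically cross-correlation):

```python
def conv2d_direct(x, w):
    """Valid 2-D convolution of an H x W input with a kh x kw filter,
    single channel, no padding or stride (naive reference version)."""
    H, W = len(x), len(x[0])
    kh, kw = len(w), len(w[0])
    out = [[0.0] * (W - kw + 1) for _ in range(H - kh + 1)]
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += x[i + a][j + b] * w[a][b]
            out[i][j] = s
    return out

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
w = [[1, 0], [0, 1]]   # picks x[i][j] + x[i+1][j+1]
print(conv2d_direct(x, w))  # [[6.0, 8.0], [12.0, 14.0]]
```

    Lowering rewrites these nested loops as a matrix multiplication, and Winograd trades multiplications for additions; both change the arithmetic cost and memory traffic, which is what the performance-energy evaluation measures.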

  • Authors: Adriano, Vogel; Gabriele, Mencagli; Dalvan, Griebler;  Advisor: -;  Co-Author: - (2021)

    Several real-world parallel applications are becoming more dynamic and long-running, demanding online (at run-time) adaptations. Stream processing is a representative scenario that computes data items arriving in real time and where parallel executions are necessary. However, it is challenging for humans to continuously monitor and manually optimize complex and long-running parallel executions. Moreover, although high-level and structured parallel programming aims to facilitate parallelism, several issues still need to be addressed to improve the existing abstractions. In this paper, we extend self-adaptiveness to support autonomous and online changes of the parallel pattern compositions.
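    The flavour of such online self-adaptation can be sketched with a toy threshold policy (all names and thresholds hypothetical, far simpler than a real autonomic stream-processing runtime): the number of replicas of a parallel stage is adjusted from the observed input queue length.

```python
def adapt_replicas(replicas, queue_len, target=100, min_r=1, max_r=16):
    """Toy adaptation policy: grow the stage when its input queue
    builds up, shrink it when the queue drains."""
    if queue_len > 2 * target and replicas < max_r:
        return replicas + 1
    if queue_len < target // 2 and replicas > min_r:
        return replicas - 1
    return replicas

# the runtime re-evaluates the policy as new measurements arrive
r = 4
for q in [300, 260, 90, 40, 30]:   # observed queue lengths over time
    r = adapt_replicas(r, q)
print(r)  # 4
```

    A real self-adaptive runtime would additionally have to reconfigure the running pattern composition safely, without losing in-flight items.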