Search Results

Results 1391-1400 of 2278.
  • Authors: Islam S., Fathi; Mohamed Ali, Ahmed; M. A., Makhlouf;  Advisor: -;  Co-Author: - (2022)

    Remote Healthcare Monitoring Systems (RHMs) that employ fetal phonocardiography (fPCG) signals are highly efficient technologies for continuous, long-term monitoring of the fetal heart rate. Wearable devices used in RHMs still face a challenge that decreases their efficacy in terms of energy consumption, because these devices have limited storage and are powered by batteries. This paper proposes an effective fPCG compression algorithm to reduce RHM energy consumption. In the proposed algorithm, Discrete Orthogonal Charlier Moments (DOCMs) are used to extract features of the signal. The Householder Orthonormalization Method (HOM) is used with the Charlier moments to overcome the propagation of numerical errors that occurs when computing high-order Charlier polynomials.
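
    A minimal sketch of the kind of moment-based compression described above, assuming a toy signal, the standard three-term Charlier recurrence, and NumPy's Householder-based QR for the orthonormalization step; parameter choices are illustrative and not the authors' implementation:

    import numpy as np

    def charlier_basis(length, order, a=10.0):
        # Charlier polynomial basis via the standard three-term recurrence
        # (normalization may differ from the one used in the paper).
        x = np.arange(length, dtype=float)
        C = np.zeros((length, order))
        C[:, 0] = 1.0
        if order > 1:
            C[:, 1] = 1.0 - x / a
        for n in range(1, order - 1):
            C[:, n + 1] = ((n + a - x) * C[:, n] - n * C[:, n - 1]) / a
        return C

    def compress(signal, order=30, a=10.0):
        # np.linalg.qr relies on Householder reflections, which keeps the
        # high-order columns numerically stable before projection.
        Q, _ = np.linalg.qr(charlier_basis(len(signal), order, a))
        return Q.T @ signal, Q            # moment vector = compressed representation

    def reconstruct(moments, Q):
        return Q @ moments

    rng = np.random.default_rng(0)
    fpcg = np.sin(2 * np.pi * 2.3 * np.linspace(0, 1, 500)) + 0.05 * rng.standard_normal(500)
    moments, Q = compress(fpcg, order=30)
    rel_err = np.linalg.norm(fpcg - reconstruct(moments, Q)) / np.linalg.norm(fpcg)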

  • Authors: Gianluigi, Folino; Massimo, Guarascio; Francesco, Chiaravalloti;  Advisor: -;  Co-Author: - (2023)

    Accurate rainfall estimation is crucial to adequately assess the risk associated with extreme events capable of triggering floods and landslides. Data gathered from Rain Gauges (RGs), sensors devoted to measuring the intensity of the rain at individual points, are commonly used to feed interpolation methods (e.g., the Kriging geostatistical approach) and estimate the precipitation field over an area of interest. However, the information provided by RGs can be insufficient to model complex phenomena, and computationally expensive interpolation methods may not be usable in real-time environments. Integrating additional data sources (e.g., radar and geostationary satellites) is an effective solution for improving the quality of the estimate, but it needs to cope with Big Data issues....
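
    A compact ordinary-kriging sketch of the gauge-interpolation step mentioned above, assuming synthetic gauge locations and an exponential semivariogram with made-up parameters (illustrative only, not the paper's model):

    import numpy as np

    def semivariogram(h, sill=1.0, corr_range=20.0, nugget=0.05):
        # Assumed exponential semivariogram; the parameters are placeholders.
        return nugget + sill * (1.0 - np.exp(-h / corr_range))

    def ordinary_kriging(coords, values, target):
        n = len(values)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))           # kriging system with unbiasedness constraint
        A[:n, :n] = semivariogram(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = semivariogram(np.linalg.norm(coords - target, axis=1))
        weights = np.linalg.solve(A, b)[:n]   # last entry is the Lagrange multiplier
        return weights @ values

    rng = np.random.default_rng(1)
    gauges = rng.uniform(0, 100, size=(12, 2))   # 12 rain gauges in a 100 km x 100 km area
    rain = rng.gamma(2.0, 3.0, size=12)          # toy rainfall intensities (mm/h)
    estimate = ordinary_kriging(gauges, rain, target=np.array([50.0, 50.0]))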

  • Authors: Gianira N., Alfarano; Karan, Khathuria; Violetta, Weger;  Advisor: -;  Co-Author: - (2021)

    In this paper, we present a new perspective on single-server private information retrieval (PIR) schemes by using the notion of linear error-correcting codes. Many of the known single-server schemes are based on taking linear combinations between database elements and the query elements. Using the theory of linear codes, we develop a generic framework that formalizes all such PIR schemes. This generic framework provides an appropriate setup to analyze the security of such PIR schemes. In fact, we describe some known PIR schemes with respect to this code-based framework, and present the weaknesses of the broken PIR schemes from a unified point of view.
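
    A deliberately insecure toy that only illustrates the "linear combination of database elements" structure the framework formalizes; a real single-server scheme would hide the query (e.g., with homomorphic encryption):

    import numpy as np

    p = 257                                    # toy prime modulus (assumption)
    db = np.array([12, 7, 201, 55, 98]) % p    # the server's database

    def make_query(index, n):
        # A plain standard-basis query; real PIR schemes disguise this vector.
        q = np.zeros(n, dtype=int)
        q[index] = 1
        return q

    def server_answer(query, database):
        # The server only ever returns one linear combination of its records.
        return int(query @ database) % p

    q = make_query(3, len(db))
    assert server_answer(q, db) == db[3]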

  • Authors: Guillermo, Iglesias; Edgar, Talavera; Ángel, González-Prieto;  Advisor: -;  Co-Author: - (2023)

    With the latest advances in deep learning-based generative models, it has not taken long to take advantage of their remarkable performance in the area of time series. Deep neural networks used for time series heavily depend on the size and consistency of the datasets used in training. Such data are not usually abundant in the real world, where they tend to be limited and often subject to constraints that must be guaranteed. Therefore, an effective way to increase the amount of data is to use data augmentation techniques, either by adding noise or permutations, or by generating new synthetic data.
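
    A minimal sketch of the two classical augmentation families mentioned above, noise injection and segment permutation, applied to a toy series (generative-model-based synthesis is not shown):

    import numpy as np

    def jitter(x, sigma=0.03, rng=None):
        # Noise-based augmentation: add small Gaussian perturbations.
        rng = rng or np.random.default_rng()
        return x + rng.normal(0.0, sigma, size=x.shape)

    def permute_segments(x, n_segments=4, rng=None):
        # Permutation-based augmentation: shuffle contiguous segments of the series.
        rng = rng or np.random.default_rng()
        segments = np.array_split(x, n_segments)
        order = rng.permutation(n_segments)
        return np.concatenate([segments[i] for i in order])

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 6 * np.pi, 240))          # toy univariate series
    augmented = [jitter(series, rng=rng), permute_segments(series, rng=rng)]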

  • Authors: Agus, Sudjianto; Jinwen, Qiu; Miaoqi, Li;  Advisor: -;  Co-Author: - (2023)

    A new ensemble framework for an interpretable model called linear iterative feature embedding (LIFE) has been developed to achieve high prediction accuracy, easy interpretation, and efficient computation simultaneously. The LIFE algorithm fits a wide single-hidden-layer neural network (NN) accurately in three steps: defining subsets of a dataset by the linear projections of neural nodes, creating features from multiple narrow single-hidden-layer NNs trained on the different subsets of the data, and combining the features with a linear model. The theoretical rationale behind LIFE is also provided by the connection to the loss ambiguity decomposition of stacking ensemble methods. Both simulation and empirical experiments confirm that LIFE consistently outperforms directly...
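
    A rough illustration of the three-step flavor described in the abstract, not the authors' LIFE algorithm; in particular, the subsets below are random rather than defined by neural-node projections, and all names are placeholders:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)   # toy target

    # Step 1 (simplified): subsets of the data (random here, projection-based in LIFE).
    subsets = [rng.choice(len(X), size=600, replace=False) for _ in range(4)]

    # Step 2: narrow single-hidden-layer NNs trained on different subsets;
    # their hidden-layer activations become the engineered features.
    blocks = []
    for idx in subsets:
        mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X[idx], y[idx])
        blocks.append(np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0]))  # ReLU features
    features = np.hstack(blocks)

    # Step 3: combine all features with a single linear model.
    final_model = LinearRegression().fit(features, y)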

  • Authors: Octavian, Machidon; Jani, Asprov; Tine, Fajfar;  Advisor: -;  Co-Author: - (2022)

    While the evolution of mobile computing is experiencing considerable growth, it is at the same time seriously threatened by the limitations of battery technology, which does not keep pace with the ever-growing energy requirements of mobile applications. Yet, given the limits of human perception and the diversity of requirements that individuals may have, the question arises of whether the effort should be made to always deliver the highest-quality result to a mobile user. In this work we investigate how a user’s physical activity, the spatial/temporal properties of the video, and the user’s personality traits interact and jointly influence the minimal acceptable playback resolution. We conduct two studies with 45 participants in total and find that the minimal acceptable...

  • Authors: Sebastian, Buschjäger; Katharina, Morik;  Advisor: -;  Co-Author: - (2023)

    Ensembles are among the state of the art in many machine learning applications. With the ongoing integration of ML models into everyday life, e.g., in the form of the Internet of Things, the deployment and continuous application of models become increasingly important issues. Therefore, small models that offer good predictive performance and use small amounts of memory are required. Ensemble pruning is a standard technique for removing unnecessary classifiers from a large ensemble; it reduces overall resource consumption and sometimes improves the performance of the original ensemble. Similarly, leaf-refinement is a technique that improves the performance of a tree ensemble by jointly re-learning the probability estimates in the leaf nodes of the trees, thereby allowing for ...
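
    A sketch of one standard greedy flavor of ensemble pruning (forward selection on a validation set), not the specific method studied in the paper; the leaf-refinement step, which jointly re-fits leaf probabilities, is omitted:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    probs = np.stack([t.predict_proba(X_val) for t in forest.estimators_])  # (trees, samples, classes)

    # Greedily add the tree that most improves validation accuracy of the
    # averaged sub-ensemble, up to a small memory budget.
    selected, budget = [], 10
    for _ in range(budget):
        best_tree, best_acc = None, -1.0
        for i in range(len(forest.estimators_)):
            if i in selected:
                continue
            acc = (probs[selected + [i]].mean(axis=0).argmax(axis=1) == y_val).mean()
            if acc > best_acc:
                best_tree, best_acc = i, acc
        selected.append(best_tree)

    pruned_ensemble = [forest.estimators_[i] for i in selected]  # small pruned ensemble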

  • Authors: Katharina, Hoedt; Verena, Praher; Arthur, Flexer;  Advisor: -;  Co-Author: - (2022)

    Given the rise of deep learning and its inherent black-box nature, the desire to interpret these systems and explain their behaviour has become increasingly prominent. The main idea of so-called explainers is to identify which features of particular samples have the most influence on a classifier’s prediction, and to present them as explanations. Evaluating explainers, however, is difficult, for reasons such as a lack of ground truth. In this work, we construct adversarial examples to check the plausibility of explanations, perturbing the input deliberately to change a classifier’s prediction. This allows us to investigate whether explainers are able to detect these perturbed regions as the parts of an input that strongly influence a particular classification. Our results from the audi...
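
    A toy version of the idea, assuming a linear "classifier" so the input gradient is exact: an FGSM-style perturbation confined to a small region flips the prediction, and a simple gradient-times-input "explanation" is checked for overlap with that region (the paper works with audio and deep models; this is only illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    w, b = rng.normal(size=n), 0.0                  # toy linear classifier
    x = rng.normal(size=n)
    logit = lambda v: w @ v + b
    pred = int(logit(x) > 0)

    # Adversarial perturbation restricted to a small "region" of the input:
    # push those features in the sign direction that flips the prediction.
    region = np.arange(5, 10)
    direction = -np.sign(w) if pred == 1 else np.sign(w)
    x_adv, eps = x.copy(), 0.0
    while int(logit(x_adv) > 0) == pred:
        eps += 0.1
        x_adv = x.copy()
        x_adv[region] = x[region] + eps * direction[region]

    # Gradient-times-input explanation of the adversarial sample, and a check of
    # whether the perturbed region is among the highest-ranked features.
    saliency = np.abs(w * x_adv)
    top5 = np.argsort(saliency)[-5:]
    overlap = len(set(top5) & set(region)) / len(region)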

  • Authors: M.A., Hernández-Verón; Nisha, Yadav; Eulalia, Martínez;  Advisor: -;  Co-Author: - (2022)

    We consider a generic type of nonlinear Hammerstein-type integral equation with the particularity of having a non-differentiable kernel of Nemytskii type. In order to solve it, we consider a uniparametric family of derivative-free iterative processes, with the main advantage that, for a special value of the involved parameter, the resulting iterative method coincides with Newton’s method; this is due to the divided difference operator being evaluated at two equal points. We perform a qualitative convergence study by choosing an auxiliary point, which allows us to obtain the existence and separation of solutions of the given equation; that is, local and semilocal convergence balls can be obtained.
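
    As a hedged illustration of the setting (a generic Hammerstein form and a Steffensen-type divided-difference family of the kind the abstract refers to; the exact family studied in the paper may differ):

    \[
      x(s) = f(s) + \int_a^b K(s,t)\,\mathcal{H}\bigl(x(t)\bigr)\,dt, \qquad s \in [a,b],
    \]
    where \(\mathcal{H}\) is the (possibly non-differentiable) Nemytskii operator. Writing the problem as \(F(x)=0\), a uniparametric derivative-free family replaces \(F'\) by a first-order divided difference,
    \[
      x_{n+1} = x_n - [\,x_n,\; x_n + \lambda F(x_n);\, F\,]^{-1} F(x_n), \qquad \lambda \in \mathbb{R},
    \]
    so that for \(\lambda = 0\) the two evaluation points coincide, \([x_n, x_n; F] = F'(x_n)\) when \(F\) is differentiable, and the iteration reduces to Newton’s method.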