Search Results

Results 271-280 of 324 (Search time: 0.008 seconds).
  • Authors: Noelia, Rico; Pedro, Alonso; Irene, Díaz;  Advisor: -;  Co-Author: - (2023)

    Ranking aggregation, studied in the field of social choice theory, focuses on combining information to determine a winning ranking among a set of alternatives when the voters express their preferences by ordering the possible alternatives from most to least preferred. One of the most famous ranking aggregation methods can be traced back to 1959, when Kemeny introduced a measure of distance between a ranking and the opinion of the voters gathered in a profile of rankings. Using this distance, he proposed to elect as the winning ranking of the election the one that minimizes the distance to the profile. Computing this ranking is factorial in the number of alternatives, which handicaps the runtime of the algorithms developed to find the winning ranking and prevents its use in real ...
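
The Kemeny rule this entry refers to is usually stated with the Kendall tau distance: count the pairwise disagreements between a candidate ranking and each ballot, and elect the ranking minimizing the total. A minimal brute-force sketch (assuming the Kendall tau distance; function and variable names are mine, not the authors') makes the factorial cost explicit:

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Count pairwise disagreements between two rankings (lists of alternatives)."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    alts = list(r1)
    return sum(
        1
        for i in range(len(alts))
        for j in range(i + 1, len(alts))
        if (pos1[alts[i]] - pos1[alts[j]]) * (pos2[alts[i]] - pos2[alts[j]]) < 0
    )

def kemeny_winner(profile):
    """Brute-force Kemeny aggregation: try every permutation of the alternatives.

    The search space has n! candidates, which is exactly the bottleneck the
    abstract points to.
    """
    alternatives = profile[0]
    best, best_dist = None, float("inf")
    for candidate in permutations(alternatives):
        dist = sum(kendall_tau(candidate, ballot) for ballot in profile)
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best, best_dist

# Toy profile: each ballot orders alternatives from most to least preferred.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(kemeny_winner(profile))   # (('a', 'b', 'c'), 2)
```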

  • Authors: Marcello, Zanardelli; Fabrizio, Guerrini; Riccardo, Leonardi;  Advisor: -;  Co-Author: - (2022)

    In recent years, due to the availability and ease of use of image editing tools, a large number of fake and altered images have been produced and spread through the media and the Web. Many different approaches have been proposed to assess the authenticity of an image and, in some cases, to localize the altered (forged) areas. In this paper, we conduct a survey of some of the most recent image forgery detection methods that are specifically designed upon Deep Learning (DL) techniques, focusing on commonly found copy-move and splicing attacks. DeepFake generated content is also addressed insofar as its application is aimed at images, achieving the same effect as splicing. This survey is especially timely because deep learning powered techniques appear to be the most relev...

  • Authors: Esraa, Hassan; Mahmoud Y., Shams; Noha A., Hikal;  Advisor: -;  Co-Author: - (2023)

    Optimization algorithms are used to improve model accuracy. The optimization process undergoes multiple cycles until convergence. A variety of optimization strategies have been developed to overcome the obstacles involved in the learning process. Some of these strategies have been considered in this study to learn more about their complexities. It is crucial to analyse and summarise optimization techniques methodically from a machine learning standpoint since this can provide direction for future work in both machine learning and optimization. The approaches under consideration include Stochastic Gradient Descent (SGD), Stochastic Optimization Descent with Momentum, Runge-Kutta, Adaptive Learning Rate, Root Mean Square Propagation, Adaptive Moment Estimation, Deep Ensembles, Feed...
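
For reference, the update rules of a few of the optimizers listed above can be sketched in a handful of lines. The NumPy snippet below writes SGD with momentum, RMSProp, and Adam from their standard textbook definitions; hyperparameter values are illustrative defaults and the function names are mine, not taken from the paper.

```python
import numpy as np

def sgd_momentum(w, grad, state, lr=0.01, beta=0.9):
    """SGD with (heavy-ball) momentum: v <- beta*v + grad; w <- w - lr*v."""
    v = beta * state.get("v", np.zeros_like(w)) + grad
    state["v"] = v
    return w - lr * v

def rmsprop(w, grad, state, lr=0.001, rho=0.9, eps=1e-8):
    """Root Mean Square Propagation: scale the step by a running average of squared gradients."""
    s = rho * state.get("s", np.zeros_like(w)) + (1 - rho) * grad ** 2
    state["s"] = s
    return w - lr * grad / (np.sqrt(s) + eps)

def adam(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adaptive Moment Estimation: bias-corrected first and second moment estimates."""
    t = state.get("t", 0) + 1
    m = beta1 * state.get("m", np.zeros_like(w)) + (1 - beta1) * grad
    v = beta2 * state.get("v", np.zeros_like(w)) + (1 - beta2) * grad ** 2
    state.update(t=t, m=m, v=v)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# One step on a toy quadratic loss f(w) = ||w||^2 / 2, whose gradient is w itself.
w, state = np.array([1.0, -2.0]), {}
w = adam(w, grad=w, state=state)
print(w)
```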

  • Authors: Felix, Kastner; Andreas, Rößler;  Advisor: -;  Co-Author: - (2023)

    For the approximation and simulation of twofold iterated stochastic integrals and the corresponding Lévy areas w.r.t. a multi-dimensional Wiener process, we review four algorithms based on a Fourier series approach. In particular, the very efficient algorithm due to Wiktorsson and a newly proposed algorithm due to Mrongowius and Rößler are considered. To put recent advances into context, we analyse the four Fourier-based algorithms in a unified framework to highlight differences and similarities in their derivation. A comparison of theoretical properties is complemented by a numerical simulation that reveals the order of convergence for each algorithm.
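
To illustrate the Fourier series approach the entry mentions, the sketch below implements only the basic truncated Fourier (Kloeden-Platen type) approximation of the twofold iterated Itô integrals for a two-dimensional Wiener process; it is a reference point, not the Wiktorsson or Mrongowius-Rößler variants reviewed in the paper, and the truncation level p and variable names are my own choices.

```python
import numpy as np

def iterated_integrals_fourier(h, p, rng=None):
    """Approximate the twofold iterated Ito integrals I_(1,2) and I_(2,1) for a
    two-dimensional Wiener process on a step of length h, using a Fourier
    series truncated after p terms (basic Kloeden-Platen construction)."""
    rng = rng or np.random.default_rng()
    xi = rng.standard_normal(2)           # scaled increments: dW_j = sqrt(h) * xi_j
    mu = rng.standard_normal(2)           # tail-correction Gaussians
    zeta = rng.standard_normal((2, p))    # Fourier coefficients
    eta = rng.standard_normal((2, p))
    r = np.arange(1, p + 1)
    rho_p = 1.0 / 12.0 - np.sum(1.0 / r ** 2) / (2.0 * np.pi ** 2)

    # a12 is (up to scaling by h) the Levy area between the two components.
    a12 = (np.sqrt(rho_p) * (mu[0] * xi[1] - mu[1] * xi[0])
           + np.sum((zeta[0] * (np.sqrt(2.0) * xi[1] + eta[1])
                     - zeta[1] * (np.sqrt(2.0) * xi[0] + eta[0])) / r) / (2.0 * np.pi))
    dW = np.sqrt(h) * xi
    I12 = h * (0.5 * xi[0] * xi[1] + a12)
    I21 = dW[0] * dW[1] - I12             # I_(1,2) + I_(2,1) = dW_1 * dW_2 for distinct indices
    return dW, I12, I21

print(iterated_integrals_fourier(h=0.01, p=100))
```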

  • Authors: François Le, Gall; Saeed, Seddighin;  Advisor: -;  Co-Author: - (2022)

    Longest common substring (LCS), longest palindrome substring (LPS), and Ulam distance (UL) are three fundamental string problems that can be classically solved in near linear time. In this work, we present sublinear time quantum algorithms for these problems along with quantum lower bounds. Our results shed light on a very surprising fact: Although the classic solutions for LCS and LPS are almost identical (via suffix trees), their quantum computational complexities are different. While we give an exact Õ(√n) time algorithm for LPS, we prove that LCS needs at least time Ω̃(n^(2/3)) even for 0/1 strings.
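
For readers unfamiliar with the classical baseline the abstract alludes to, here is a short sketch of longest common substring. It uses binary search over the answer length plus rolling hashes (expected O(n log n)) rather than the suffix trees mentioned above, and it says nothing about the quantum algorithms that are the paper's actual contribution.

```python
def longest_common_substring(s, t, base=257, mod=(1 << 61) - 1):
    """Classical LCS baseline: binary-search the answer length and compare
    polynomial rolling hashes of all substrings of that length."""

    def hashes(x, L):
        """Set of rolling hashes of all length-L substrings of x."""
        if L == 0 or L > len(x):
            return set()
        h, power, out = 0, pow(base, L - 1, mod), set()
        for i, c in enumerate(x):
            h = (h * base + ord(c)) % mod
            if i >= L:
                h = (h - ord(x[i - L]) * power * base) % mod
            if i >= L - 1:
                out.add(h)
        return out

    lo, hi, best = 0, min(len(s), len(t)), 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid == 0 or hashes(s, mid) & hashes(t, mid):
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

print(longest_common_substring("quantum algorithm", "algorithmic"))  # 9 ("algorithm")
```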

  • Authors: Gábor, Szűcs; Richárd, Kiss;  Advisor: -;  Co-Author: - (2022)

    The fast improvement of deep learning methods has led to breakthroughs in image classification; however, these models are sensitive to adversarial perturbations, which can cause serious problems. Adversarial attacks try to change the model output by adding noise to the input, and in our research we propose a combined defense method against them. Two defense approaches have evolved in the literature: one robustifies the attacked model for higher accuracy, and the other detects the adversarial examples. Only very few papers discuss both approaches, so our aim was to combine them to obtain a more robust model and to examine the combination, in particular the filtering capability of the detector. Our contribution was that the filtering based on the decision of the detector is ...
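
The abstract does not specify which attack or detector the authors use, so the sketch below only shows the filter-then-classify structure of a combined defense: an FGSM perturbation is generated against a toy logistic-regression victim, and a placeholder detector (just another logistic model here) decides whether an input is passed on to the robustified classifier. All weights and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression 'victim':
    step the input in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w              # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

def defended_predict(x, robust, detector, threshold=0.5):
    """Combined defense: a detector filters suspected adversarial inputs
    before the robustified classifier answers."""
    w_det, b_det = detector
    if sigmoid(x @ w_det + b_det) > threshold:
        return None                   # rejected as adversarial
    w_rob, b_rob = robust
    return int(sigmoid(x @ w_rob + b_rob) > 0.5)

# Toy stand-ins for trained models (random weights, for illustration only).
rng = np.random.default_rng(0)
victim = (rng.standard_normal(4), 0.0)
robust = (rng.standard_normal(4), 0.0)
detector = (rng.standard_normal(4), 0.0)

x_clean = rng.standard_normal(4)
x_adv = fgsm(x_clean, y=1.0, w=victim[0], b=victim[1], eps=0.3)
print(defended_predict(x_clean, robust, detector),
      defended_predict(x_adv, robust, detector))
```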

  • Authors: Islam S., Fathi; Mohamed Ali, Ahmed; M. A., Makhlouf;  Advisor: -;  Co-Author: - (2022)

    Remote Healthcare Monitoring Systems (RHMs) that employ fetal phonocardiography (fPCG) signals are highly efficient technologies for continuous, long-term monitoring of the fetal heart rate. Wearable devices used in RHMs still face a challenge that decreases their efficacy in terms of energy consumption, because these devices have limited storage and are powered by batteries. This paper proposes an effective fPCG compression algorithm to reduce RHM energy consumption. In the proposed algorithm, Discrete Orthogonal Charlier Moments (DOCMs) are used to extract features of the signal. The Householder orthonormalization method (HOM) is used with the Charlier moments to overcome the propagation of numerical errors that occur when computing high-order Charlier polynomials.
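
The general idea of moment-based compression with Householder orthonormalization can be sketched roughly as follows: build the Charlier polynomials by their three-term recurrence, weight them, re-orthonormalize with a QR factorization (NumPy's qr is Householder-based), and keep only the first K projection coefficients. The parameter choices (K, the Charlier parameter a) and the toy signal are my own assumptions; the paper's actual DOCM/HOM construction may differ in its details.

```python
import numpy as np

def charlier_basis(N, K, a):
    """First K Charlier polynomials on x = 0..N-1, weighted by the square root
    of the Poisson weight, then re-orthonormalized column by column with a QR
    factorization (Householder reflections)."""
    x = np.arange(N, dtype=float)
    C = np.zeros((N, K))
    C[:, 0] = 1.0
    if K > 1:
        C[:, 1] = 1.0 - x / a
    for n in range(1, K - 1):
        # three-term recurrence: a*C_{n+1} = (a + n - x)*C_n - n*C_{n-1}
        C[:, n + 1] = ((a + n - x) * C[:, n] - n * C[:, n - 1]) / a
    # sqrt of the Poisson weight w(x) = exp(-a) a^x / x!, computed in log space
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, N)))))
    log_w = -a + x * np.log(a) - log_fact
    Q, _ = np.linalg.qr(C * np.exp(0.5 * log_w)[:, None])
    return Q

def compress(signal, K, a=None):
    """Keep only the first K Charlier moments of the signal."""
    a = a if a is not None else len(signal) / 2.0   # centre the weight on the signal
    Q = charlier_basis(len(signal), K, a)
    return Q.T @ signal, Q                          # (moments, basis)

# Toy signal standing in for an fPCG trace.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
sig = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal(256)
moments, Q = compress(sig, K=64)
rel_err = np.linalg.norm(sig - Q @ moments) / np.linalg.norm(sig)
print(f"kept {len(moments)} of {len(sig)} coefficients, relative error {rel_err:.3f}")
```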

  • Authors: Gianluigi, Folino; Massimo, Guarascio; Francesco, Chiaravalloti;  Advisor: -;  Co-Author: - (2023)

    Accurate rainfall estimation is crucial to adequately assess the risk associated with extreme events capable of triggering floods and landslides. Data gathered from Rain Gauges (RGs), sensors devoted to measuring the intensity of the rain at individual points, are commonly used to feed interpolation methods (e.g., the Kriging geostatistical approach) and estimate the precipitation field over an area of interest. However, the information provided by RGs can be insufficient to model complex phenomena, and computationally expensive interpolation methods cannot be used in real-time environments. Integrating additional data sources (e.g., radar and geostationary satellites) is an effective solution for improving the quality of the estimate, but it needs to cope with Big Data issues....
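
To make the interpolation step concrete, here is an ordinary kriging sketch for a single target point with an exponential variogram. The variogram parameters and the toy gauge values are placeholders that would normally be fitted to real data; this is only an illustration of the geostatistical approach the abstract names, not the authors' system.

```python
import numpy as np

def exponential_variogram(h, sill=1.0, rng_param=50.0, nugget=0.0):
    """Exponential variogram model gamma(h); parameters would be fitted to
    the empirical variogram of the gauge data."""
    return nugget + sill * (1.0 - np.exp(-h / rng_param))

def ordinary_kriging(coords, values, target, variogram=exponential_variogram):
    """Ordinary kriging of one target point from scattered rain gauges.

    Solves the standard system [Gamma 1; 1^T 0] [lambda; mu] = [gamma_0; 1]
    and returns the weighted sum of the observed values.
    """
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(variogram(np.linalg.norm(coords - target, axis=1)), 1.0)
    weights = np.linalg.solve(A, b)[:n]
    return weights @ values

# Toy example: four gauges (x, y in km) with rainfall intensities in mm/h.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([2.0, 3.5, 1.0, 4.0])
print(ordinary_kriging(gauges, rain, target=np.array([5.0, 5.0])))
```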

  • Authors: Gianira N., Alfarano; Karan, Khathuria; Violetta, Weger;  Advisor: -;  Co-Author: - (2021)

    In this paper, we present a new perspective on single server private information retrieval (PIR) schemes by using the notion of linear error-correcting codes. Many of the known single server schemes are based on taking linear combinations of database elements and query elements. Using the theory of linear codes, we develop a generic framework that formalizes all such PIR schemes. This generic framework provides an appropriate setup to analyze the security of such PIR schemes. In fact, we describe some known PIR schemes with respect to this code-based framework, and present the weaknesses of the broken PIR schemes from a unified point of view.
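
The "linear combination" structure the abstract refers to can be illustrated with a toy, deliberately insecure example: the server's answer is just an inner product of the database with the query vector over a finite field. With a plain standard-basis query the client retrieves the element it wants but leaks the index, which is the kind of weakness a code-based security analysis is meant to expose. None of this corresponds to the actual schemes analyzed in the paper; the modulus and names are my own placeholders.

```python
import numpy as np

P = 2_147_483_647  # a prime modulus; database entries and queries live in GF(P)

def answer(database, query):
    """Server side: return a linear combination of database elements,
    with the query vector supplying the coefficients."""
    return int(np.dot(database, query) % P)

def trivial_query(index, size):
    """Toy (insecure) query: a standard basis vector. It retrieves the wanted
    element exactly but reveals the index to the server."""
    q = np.zeros(size, dtype=np.int64)
    q[index] = 1
    return q

db = np.array([17, 42, 7, 99, 3], dtype=np.int64)
q = trivial_query(index=3, size=len(db))
print(answer(db, q))   # 99 -- correct retrieval, but no privacy
```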

  • Authors: Guillermo, Iglesias; Edgar, Talavera; Ángel, González-Prieto;  Advisor: -;  Co-Author: - (2023)

    With the latest advances in deep learning-based generative models, it has not taken long to take advantage of their remarkable performance in the area of time series. Deep neural networks that work with time series depend heavily on the size and consistency of the datasets used in training. Such data are not usually abundant in the real world, where they are typically limited and often subject to constraints that must be guaranteed. Therefore, an effective way to increase the amount of data is to use data augmentation techniques, either by adding noise or applying permutations, or by generating new synthetic data.
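
The first two augmentation options mentioned above (noise and permutations) are simple enough to sketch directly; the third, generating new synthetic samples with a generative model such as a GAN, is the subject of the paper and is not implemented here. Function names and parameter values are illustrative assumptions.

```python
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Augment a time series by adding small Gaussian noise."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def window_permutation(x, n_segments=4, rng=None):
    """Augment by splitting the series into segments and shuffling their order."""
    rng = rng or np.random.default_rng()
    segments = np.array_split(x, n_segments)
    order = rng.permutation(n_segments)
    return np.concatenate([segments[i] for i in order])

# Toy series standing in for a real training example.
t = np.linspace(0.0, 1.0, 128)
series = np.sin(2 * np.pi * 5 * t)
augmented = [jitter(series), window_permutation(series)]
print([a.shape for a in augmented])
```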