Abstract

Hidden Markov Models (HMMs) play a pivotal role in fields such as speech recognition, spam detection, and autonomous vehicles, where reliable predictive capabilities are essential. However, the widespread adoption of HMMs has also increased their exposure to adversarial attacks. This research investigates inherent weaknesses in traditional HMMs by examining how adversarial manipulation of observable data impacts model performance. We address three core questions: How does varying HMM parameters influence the path attraction problem? Which metaheuristic methods most effectively optimize these attacks in a white-box scenario? What key vulnerabilities emerge in HMMs under adversarial manipulation? To explore these questions, we design HMMs with varying numbers of hidden and observable states and craft attacks that maximize shifts in the predicted hidden states while minimizing alterations to the observable data. Using the Viterbi algorithm to measure shifts in the hidden states, we employ genetic algorithms, simulated annealing, and tabu search to identify optimal attacks within the vast solution space. Preliminary findings indicate that specific model configurations and attack methods can severely degrade HMM performance, underscoring the need for more resilient models. This research highlights the importance of developing robust HMM frameworks, such as scaling model complexity or adding noise to the data, that can withstand adversarial interference, especially in critical applications where security and reliability are essential.
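
The attack objective described in the abstract can be made concrete with a small sketch: decode the most likely hidden-state path with the Viterbi algorithm, then score a candidate perturbation of the observations by how many decoded states it flips versus how many observations it alters. The following Python/NumPy code is an illustrative sketch only, not the implementation used in the thesis; the function names (`viterbi`, `attack_fitness`) and the simple linear penalty on observation edits are assumptions for demonstration.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence.

    obs: sequence of observation indices, length T
    pi:  initial state probabilities, shape (N,)
    A:   transition matrix, shape (N, N)
    B:   emission matrix, shape (N, M)
    """
    N, T = A.shape[0], len(obs)
    with np.errstate(divide="ignore"):          # log(0) -> -inf is fine here
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    delta = np.zeros((T, N))                    # best log-prob ending in state i at time t
    psi = np.zeros((T, N), dtype=int)           # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # scores[i, j]: come from i, move to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):              # trace backpointers to recover the path
        path[t] = psi[t + 1, path[t + 1]]
    return path

def attack_fitness(perturbed_obs, original_obs, original_path, pi, A, B, penalty=1.0):
    """Reward flips in the decoded hidden-state path, penalize edits to the observations.

    This is the kind of objective a genetic algorithm, simulated annealing, or tabu
    search could maximize over candidate perturbed observation sequences (hypothetical
    weighting; the thesis may trade off the two terms differently).
    """
    new_path = viterbi(perturbed_obs, pi, A, B)
    state_shifts = int(np.sum(new_path != np.asarray(original_path)))
    obs_changes = int(np.sum(np.asarray(perturbed_obs) != np.asarray(original_obs)))
    return state_shifts - penalty * obs_changes
```

In this framing, a metaheuristic proposes perturbed observation sequences, and `attack_fitness` rewards candidates that move the Viterbi-decoded path as far as possible from the original while touching as few observations as possible.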

Advisor

Rajeev Bukralia

Committee Member

William Caballero

Committee Member

Hongxia Yin

Date of Degree

2024

Language

English

Document Type

Thesis

Program of Study

Data Science

Department

Mathematics and Statistics

College

Science, Engineering and Technology

Included in

Data Science Commons


Rights Statement

In Copyright