Weak artificial intelligence uses encoded rules to process information. This kind of intelligence is competent but lacks consciousness, and therefore cannot comprehend what it is doing. Strong artificial intelligence, by contrast, is held to have a mind of its own that resembles a human mind. Many of the bots on Twitter are only following a set of encoded rules. Previous studies have created machine learning algorithms to determine whether a Twitter account is run by a human or a bot. Twitter bots are improving, however, and some are even fooling humans; creating a machine learning algorithm that differentiates a bot from a human becomes harder as bots behave more like humans. The present study focuses on the interaction between humans and computers, and on how some Twitter bots seem to trick people into thinking they are human. This thesis evaluates and compares decision trees, random forests of decision trees, and naïve Bayes classifiers to determine which of these machine learning approaches yields the best performance. Each algorithm uses features available to users on Twitter. Sentiment analysis of a tweet is used to determine whether or not bots are displaying any emotion, and the result is used as a feature in each algorithm. Results show that some of the most useful features for classifying bots versus humans are the total number of favorites a tweet received, the total number of tweets from the account, and whether or not the tweet contained a link. Random forests are able to detect bots 100% of the time on one of the datasets used in this work. Random forests gave the best overall performance of the machine learning approaches used; user-based features were identified as being the most important; and bots did not display high intensities of emotion.
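To illustrate the kind of classification the abstract describes, the sketch below trains a minimal Gaussian naïve Bayes classifier on toy feature vectors built from three of the features the thesis names: favorite count, total tweet count, and whether the tweet contains a link. This is a hedged illustration only: the data values, class labels, and variable names are invented for the example and are not the datasets, features, or models actually used in the thesis (which also evaluates decision trees and random forests).

```python
import math

# Hypothetical feature vectors: [favorite_count, total_tweets, contains_link].
# Toy data invented for illustration; not the thesis datasets.
humans = [[12, 3400, 0], [45, 5100, 1], [30, 2900, 0]]
bots   = [[0, 90000, 1], [1, 120000, 1], [0, 87000, 1]]

def stats(rows):
    """Per-feature mean and variance for one class (variance smoothed)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    var = [sum((x - m) ** 2 for x in c) / len(c) + 1e-6
           for c, m in zip(cols, means)]
    return means, var

def log_likelihood(x, means, var):
    """Sum of per-feature log Gaussian densities (naive independence)."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, means, var))

h_stats = stats(humans)
b_stats = stats(bots)

def classify(x):
    """Pick the class whose Gaussian model assigns the higher likelihood."""
    return "bot" if log_likelihood(x, *b_stats) > log_likelihood(x, *h_stats) else "human"

print(classify([0, 95000, 1]))   # bot-like: no favorites, huge tweet count, link
print(classify([40, 3000, 0]))   # human-like: some favorites, modest activity
```

The naïve Bayes assumption here is that the three features are conditionally independent given the class; decision trees and random forests drop that assumption, which is one reason the thesis compares all three approaches.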


Rebecca Bates
Committee Member

Dean Kelley
Committee Member

Julie Wulfmeyer

Date of Degree

Document Type

Cognitive Science

Bachelor of Science (BS)

Integrated Engineering

Social and Behavioral Sciences