Machine Learning

From Living Building Science


Welcome to the Machine Learning Team Page! One of our goals last semester was to create a bee recognition and classification project to complement our completed flower recognition model, and eventually to integrate both models with the previous ML team's bee classification project as well as the BeeKeeper GO app. Once integrated, our models could enhance the app and make it more educational, since flower classification/identification lets users learn while taking pictures. Another goal last semester was to use data analysis and computer vision to detect swarms in bee hives, along with the contributing factors and warning signs that precede a swarm. Our current goals are to improve and expand these past projects and, potentially, to build machine learning and analysis tools that help the other Living Building Science sub-teams with their goals.

Spring 2021 Semester Overview

We started off this semester by joining the LBS VIP. Because we were originally part of Beesnap, our projects had been geared toward machine learning applications intended to integrate with the BeeKeeper GO application. Since that is no longer the case, we have shifted our focus to issues more relevant to this VIP. Given the biodiversity team's focus on birds and the bird sound data Dr. Weigel has collected, we decided that analyzing and classifying birds would be helpful. We initially reached out to the biodiversity team, but they said there would probably not be anything for us to collaborate on, so we decided to work on bird classification on our own. Additionally, we are expanding the bee hive analysis to include varroa mite detection, with the intention of monitoring the health of the hives on campus.

Hive Analysis

In Fall 2020, we created a script and tracking algorithm to draw the paths of bee flight in a given video. In order to use this data to forecast bee swarming, we modified the algorithm to track the distance traveled by each bee in each frame and append it to a CSV file. This CSV file is intended to serve as a feature for analyzing bee movement on campus and determining the bees' relative speeds and active periods. We decided not to improve the CSV tracking further after we cut down on the number of projects this semester.
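A minimal sketch of this per-frame distance logging, assuming the tracker already yields an (x, y) centroid for each bee's track ID in every frame; the function name, CSV columns, and file name are illustrative, not the repository's actual interface.

```python
import csv
import math

def log_frame_distances(prev_positions, curr_positions, frame_idx,
                        csv_path="bee_distances.csv"):
    """Append each tracked bee's distance traveled in this frame to a CSV file.

    prev_positions / curr_positions: dicts mapping a bee's track ID to its
    (x, y) centroid in the previous and current frame (illustrative format).
    """
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for bee_id, (x, y) in curr_positions.items():
            if bee_id in prev_positions:
                px, py = prev_positions[bee_id]
                # Euclidean displacement in pixels between consecutive frames.
                distance = math.hypot(x - px, y - py)
                writer.writerow([frame_idx, bee_id, distance])
```

The resulting rows (frame, bee ID, distance) can then be aggregated per bee to estimate relative speeds and active periods.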


The code for this project can be found under the bee repository linked below in the Spring 2021 Work Section.

Swarm Time Series Analysis

This semester, we also worked on forecasting future hive weights for various hives using time series models from the ARIMA family. We used three primary models, ARIMA, ARIMAX, and SARIMAX, in order to make predictions on future hive weights based on current hive weight data.

Overview

ARIMA is a type of time series model that stands for Auto-Regressive Integrated Moving Average. ARIMA is commonly used for time series analysis because it is good at using information in the past lagged values of a time series (which is simply data mapped over time) to predict future values. As the name implies, the model consists of three primary components.

The first component, the Auto-Regressive (AR) component, involves regressing the time series data onto a previous version of itself using lags, or prior values. The AR component is measured by a parameter p, which is the number of lag observations in the model. The second component, the Integrated (I) component, involves differencing the raw data in order to make the time series stationary. A time series is stationary if there are no trend or seasonal effects, and the overall statistical properties (such as mean and variance) are constant over time. The I component is measured by a parameter d, which is the number of times the raw observations need to be differenced to become stationary. The final component, the Moving Average (MA) component, involves using the previous lagged errors to model the current error of the time series. The MA component is measured by a parameter q, which is the size of the moving average window.

Each ARIMA model is uniquely determined by its p, d, and q values. To determine which ARIMA(p, d, q) configuration is best for forecasting hive weights with our data, we ran statistical tests such as the Augmented Dickey-Fuller (ADF) test and examined plots such as the auto-correlation (ACF) and partial auto-correlation (PACF) plots.
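As an illustration of that order-selection workflow, the sketch below uses the statsmodels library to run the ADF test, draw the ACF/PACF plots, and fit a candidate ARIMA model; the file name, the column name "weight", and the order (1, 1, 1) are placeholders rather than the values we ultimately chose.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hive-weight series: a CSV with a datetime index and a "weight" column.
weights = pd.read_csv("hive_weights.csv", index_col=0, parse_dates=True)["weight"]

# Augmented Dickey-Fuller test: a small p-value suggests the series is already
# stationary, which informs how many times to difference it (the d parameter).
adf_stat, p_value, *_ = adfuller(weights.dropna())
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")

# ACF and PACF plots of the differenced series guide the choice of q and p.
plot_acf(weights.diff().dropna(), lags=30)
plot_pacf(weights.diff().dropna(), lags=30)
plt.show()

# Fit a candidate model; the order (1, 1, 1) is only a placeholder.
results = ARIMA(weights, order=(1, 1, 1)).fit()
print(results.summary())
```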

In addition to ARIMA, we also used two additional models, ARIMAX and SARIMAX, to forecast hive weights. Both ARIMAX and SARIMAX involve using exogenous (X) variables, which are essentially other variables in the time series that may be used to assist in forecasting the original variable. For our exogenous variables, we used hive temperature, hive humidity, ambient temperature, ambient humidity, and ambient rain. SARIMAX differs from ARIMAX in the sense that it takes seasonality (S) into account, as it is often used on datasets that have seasonal cycles.
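A hedged sketch of fitting SARIMAX with exogenous regressors using statsmodels; the column names, the seasonal order, and the forecast horizon below are illustrative assumptions, not our exact configuration.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical dataset containing hive weight plus the exogenous variables above.
df = pd.read_csv("hive_data.csv", index_col=0, parse_dates=True)
exog_cols = ["hive_temp", "hive_humidity", "ambient_temp",
             "ambient_humidity", "ambient_rain"]

# SARIMAX adds a seasonal (P, D, Q, s) term on top of the non-seasonal (p, d, q)
# order; omitting seasonal_order reduces this to a plain ARIMAX model.
results = SARIMAX(
    df["weight"],
    exog=df[exog_cols],
    order=(1, 1, 1),               # placeholder non-seasonal order
    seasonal_order=(1, 0, 1, 24),  # placeholder seasonal cycle (e.g. daily for hourly data)
).fit(disp=False)

# Out-of-sample forecasts also require future values of the exogenous variables;
# reusing the last observed rows here is only a stand-in for real future data.
forecast = results.forecast(steps=24, exog=df[exog_cols].tail(24))
```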

Results

After forecasting hive weights with all three models, we evaluated each set of predictions using the Mean Absolute Percentage Error (MAPE). We chose MAPE as our error metric because it is fairly intuitive, being simply the average percentage difference between actual and forecast values, and it adjusts for the scale of the data. The MAPE was 4.041% for ARIMA, 4.049% for ARIMAX, and 4.039% for SARIMAX. The improvement from ARIMA to SARIMAX was therefore marginal (a difference of only 0.002 in MAPE). Because of this, we determined that the best approach going forward would be the plain ARIMA model: it is still fairly accurate and requires only weight data, which makes it the most feasible model to apply to the Kendeda hive data, our ultimate goal for this sub-project.
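For reference, MAPE is simply the average of the absolute percentage differences between the actual and forecast values; a minimal implementation:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: mean of |actual - forecast| / |actual|, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# Example: mape([100, 102, 105], [101, 101, 104]) is roughly 0.98 (percent).
```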

Application

In order to put this model into production, we decided to serialize it and connect it to an AWS EC2 instance, which allows other users to access a basic web application where they can input a hive weight dataset and receive predictions for the next 5 days' worth of hive weight values. We serialized the model with the Python pickle package. The serialized model is then loaded in a Flask app, where it is fit to the input data in order to make predictions on that data. The Flask app is finally connected to the AWS EC2 instance, making the prediction process accessible to all users.
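A minimal sketch of that serving pipeline, assuming the fitted ARIMA results object was pickled to a file named arima_model.pkl; the route, the JSON payload format, and the use of statsmodels' apply() to refit the loaded model to the submitted weights are illustrative assumptions, not the app's actual interface.

```python
import pickle
import pandas as pd
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the pickled ARIMA results object once at startup (file name is an assumption).
with open("arima_model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"weights": [40.1, 40.3, ...]} of recent hive weights.
    weights = pd.Series(request.get_json()["weights"])
    # apply() refits the stored model specification to the submitted series
    # (one possible way to "fit the model to the input data").
    fitted = model.apply(weights)
    # Forecast the next 5 observations, i.e. 5 days of weights for daily data.
    forecast = fitted.forecast(steps=5)
    return jsonify({"forecast": forecast.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```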

Bird Classification

Background and Purpose

This semester we also decided to focus on classifying birds. A large amount of bird sound data is collected on campus, along with video footage. To make use of this data, we set out to construct two models, one to classify birds from images and one to classify birds from sound, with the intention of applying both to the campus data.


Methods

The first step was to collect the data. For bird images this was simple, since we found a dataset on Kaggle that met our needs. Bird sound data was a tougher task. Dr. Weigel suggested a few websites for us to look into, but those did not work out because sounds could only be downloaded one at a time, and a machine learning model needs far more data. We therefore turned to Kaggle and found a dataset, but it was too large for our purposes, occupying around 25 GB of space (which would lead to very slow upload/download times). We decided to use DGX (the supercomputer on campus) to download the files and extract only 10 bird species' worth of sound data so that we could train the model.

Next we had to actually construct the models. For the image data, we could preprocess the images and feed them into a convolutional neural network (CNN) for training and validation against held-out test data. We split the data into training and test sets with a test ratio of 0.25 and trained VGG- and MobileNet-based models, using the ReduceLROnPlateau callback to adjust the learning rate during training. Training takes around an hour per run, and we tuned the models based on their accuracy. The sound data required more work, since raw audio is not well suited to a CNN, so we converted all of our MP3 files to spectrograms using the Python Librosa module. Our research indicated that spectrograms cannot be preprocessed like traditional image data, so we instead processed the sound frequencies, removing the low-frequency components since bird songs are high frequency. We then stored both the unprocessed and the processed spectrograms as PNG image files and fed them into CNNs.
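A hedged sketch of the MP3-to-spectrogram step using standard Librosa calls; the mel-spectrogram representation, the 1 kHz cutoff for the low-frequency removal, and the file names are illustrative assumptions rather than the exact settings in our scripts.

```python
import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display

def mp3_to_spectrogram(mp3_path, png_path, min_freq_hz=1000, remove_low=True):
    """Convert an MP3 bird recording into a mel-spectrogram PNG for the CNN."""
    y, sr = librosa.load(mp3_path)

    # Mel-scaled spectrogram converted to decibels.
    S = librosa.feature.melspectrogram(y=y, sr=sr)
    S_db = librosa.power_to_db(S, ref=np.max)

    if remove_low:
        # Silence the low-frequency bands, since bird songs sit at higher frequencies.
        mel_freqs = librosa.mel_frequencies(n_mels=S_db.shape[0])
        S_db[mel_freqs < min_freq_hz, :] = S_db.min()

    # Save the spectrogram as a PNG image to be used as CNN input.
    fig, ax = plt.subplots(figsize=(4, 4))
    librosa.display.specshow(S_db, sr=sr, ax=ax)
    ax.axis("off")
    fig.savefig(png_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)
```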


This is an example of a spectrogram created with Python's Librosa module


This is an example of a processed spectrogram with the low-frequency sounds removed


Above are example images of the spectrograms produced when converting a given audio file. These images were used as training and test data for the CNN model.
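For context, the CNN setup described in Methods could look roughly like the sketch below, which uses Keras' VGG16 with ImageNet weights, a 0.25 validation split, and the ReduceLROnPlateau callback on a directory of spectrogram (or bird photo) PNGs; the directory layout, image size, epoch count, and classification head are assumptions, not our exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (224, 224)

# Load images from a directory tree with one sub-folder per species,
# holding out 25% of the images for validation.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "spectrograms/", validation_split=0.25, subset="training",
    seed=42, image_size=IMG_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "spectrograms/", validation_split=0.25, subset="validation",
    seed=42, image_size=IMG_SIZE)

# VGG16 pretrained on ImageNet as a frozen feature extractor, plus a small head.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False
model = models.Sequential([
    layers.Rescaling(1.0 / 255),             # simple rescaling stands in for VGG preprocessing
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 bird species
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# ReduceLROnPlateau lowers the learning rate when the validation loss stops improving.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                 factor=0.5, patience=3)
model.fit(train_ds, validation_data=val_ds, epochs=20, callbacks=[reduce_lr])
```

Swapping VGG16 for MobileNet only changes the base-model import; the rest of the pipeline stays the same.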

Results

The bird image classification model had around 96% training accuracy and 89% test accuracy.

The bird sound classification model had around 25% accuracy with the processed spectrograms and around 71.8% accuracy with the unprocessed spectrograms.

Discussion

When analyzing the images, we could tell that the processed spectrograms did not have many distinguishing features, whereas the unprocessed spectrograms show a clear contrast between the black background and the purple/yellow bars. This may be why the model achieved significantly higher accuracy on the unprocessed spectrograms, as it can distinguish the sound patterns much more clearly.

Future Work

For the bird sound model, we trained on only 10 species, so we would need to collect more data, increase the number of species, and retrain the model. Once that is completed, the next step would be to feed the raw data collected on campus into our model and evaluate its predictions.

Spring 2021 Work

Link to GitHub Repos: Hive Analysis, Varroa, Birds


Link to Team Drive: https://drive.google.com/drive/folders/1rBKT9Ntk3zafMeJoSVKFDgdkGeOm0VeH?usp=sharing

Week 1: Introduction to Living Building Science and new members

Week 2: Assignment to sub-teams

Week 4: Slides for 2/9/2021

Week 6: Slides for 2/23/2021

Week 8: Slides for 3/9/2021

Week 9: Wellness Day

Week 11: Slides for 3/30/2021

Week 13: Slides for 4/13/2021

Week 15: Final Presentation

Past Semester Projects

Fall 2020 Semester Poster

Spring 2020 Semester Poster


Team Members

Name Major Years Active
Sukhesh Nuthalapati Computer Science Spring 2020 - Present
Rishab Solanki Computer Science Spring 2020 - Present
Sneh Shah Computer Science Spring 2020 - Present
Daniel Tan Computer Science Fall 2020 - Present
Quynh Trinh Computer Science Fall 2020 - Present
Jonathan Wu Computer Science Spring 2020 - Present