This repository contains the work for my 2019 "Improving LSTM Neural Networks for Better Short-Term Wind Power Predictions" research project. Note: my work on this project started before I created this repository, so the commit history doesn't cover everything.
tensorflow v1.4, numpy, pickle (included with Python), os (included with Python), and assorted other utilities
Ok, so the deal is that this repository was mostly meant for my own use, so it's kinda convoluted. Let me do my best to make sense of the clutter.
This is where I run the...drumroll please...diagnostics programs. checkfreeze verifies that the frozen models are in good shape. datafeeder_test_program tests the dataset generation library that I made. The rest of the files make step plots, which are an important part of my research.
save_v4 is for the non-weather-forecast models; save_v6 is for the weather-forecast models. Both convert a trained model into a frozen .pb model for testing and deployment.
forecast_hyp_opt is the genetic algorithm that optimizes hyperparameters for all my models. The CSVs hold the optimal hyperparameters, as generated by forecast_hyp_opt.
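For readers unfamiliar with the approach, a genetic hyperparameter search in the spirit of forecast_hyp_opt can be sketched as below. The search space, mutation/crossover operators, and toy fitness function are all invented for illustration; they are not the actual ones the project uses (real fitness would be a trained model's validation loss).

```python
import random

# Hypothetical search space -- not the real forecast_hyp_opt space.
SPACE = {
    "hidden_units": [32, 64, 128, 256],
    "num_layers": [1, 2, 3],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(ind, rate=0.2):
    # Randomly re-draw each gene with probability `rate`.
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def evolve(fitness, pop_size=20, generations=20, elite=2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)          # lower fitness (e.g. val loss) is better
        parents = pop[:max(elite, 2)]  # keep the best as breeding stock
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = pop[:elite] + children   # elitism: the best survive unchanged
    return min(pop, key=fitness)

# Toy fitness standing in for "train the model, return validation loss".
def toy_fitness(ind):
    return (abs(ind["hidden_units"] - 128)
            + 50 * abs(ind["num_layers"] - 2)
            + 1000 * abs(ind["learning_rate"] - 1e-3))

best = evolve(toy_fitness)
```

In the real program, evaluating `fitness` is by far the expensive step (each call trains an LSTM), which is why the population and generation counts stay small.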
This is where all the performance analyses and trained models are stored. Each folder contains GRAPHS, models, and a frozen .pb file. GRAPHS is where performance metrics like training curves, accuracy analyses, and naive ratios are kept; models is where the trained (but not yet frozen) models are stored.
This is where the code for each of my models is stored.
This is where my dataset-maker libraries are kept. The data_feeder+SOMETHING files are low-level data parsers responsible for grabbing the data from the spreadsheet (kinda like a rudimentary SQL server). The dataset_maker+SOMETHING files are higher-level dataset builders that use data_feeder; the training programs in the Models folder rely on the dataset_maker programs. There are some other files here, but they're not that important.
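To make the two-layer design concrete, here is a minimal sketch of the split described above. The file format, function names, and window size are assumptions for illustration, not the actual data_feeder/dataset_maker APIs.

```python
import os
import tempfile
import numpy as np

def data_feeder(csv_path):
    """Low-level parser: yield one power reading per spreadsheet row."""
    with open(csv_path) as f:
        next(f)                                    # skip the header row
        for line in f:
            yield float(line.strip().split(",")[-1])

def dataset_maker(readings, window=6):
    """Higher level: turn the reading stream into (history, next-step)
    pairs suitable for training an LSTM."""
    series = list(readings)
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])  # `window` past readings
        ys.append(series[i + window])    # the reading to predict
    return np.array(xs), np.array(ys)

# Tiny demo file standing in for a real data spreadsheet.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("timestamp,power_kw\n")
    for i in range(10):
        f.write("t%d,%f\n" % (i, float(i)))
    path = f.name

xs, ys = dataset_maker(data_feeder(path), window=3)
os.unlink(path)
```

The point of the split is that the training programs only ever see windowed (input, target) arrays; all knowledge of the raw spreadsheet layout stays inside the feeder layer.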
This folder stores the stuff needed to maintain the dataset. normalize_data temporally normalizes the dataset.
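If "temporally normalize" here means resampling irregularly-timed readings onto a uniform time grid (an assumption on my part; normalize_data's actual behavior isn't shown), the core operation looks roughly like this, with made-up timestamps and a made-up step size:

```python
import numpy as np

def temporally_normalize(timestamps, values, step):
    """Resample readings onto a uniform grid by linear interpolation.

    timestamps: 1-D array of seconds, ascending; values: the readings.
    """
    grid = np.arange(timestamps[0], timestamps[-1] + 1e-9, step)
    return grid, np.interp(grid, timestamps, values)

# Readings with gaps and jitter, resampled to a 60-second grid.
t = np.array([0.0, 55.0, 130.0, 175.0])
v = np.array([1.0, 2.0, 3.0, 4.0])
grid, resampled = temporally_normalize(t, v, step=60.0)
```

Linear interpolation is the simplest choice; a real pipeline might also need to handle missing sensors or outliers before resampling.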
run_frozen_LSTM_v3 runs the non-forecast frozen model; run_frozen_LSTM_v4 runs the forecast frozen model. The other two files are meant to run data from a foreign location.
This is where the training data is located after being extracted from spinning disk on a dataserver. The actual dataset is around 4 TB in size, and I've downloaded it to my own server (a repurposed desktop running Linux). This server is responsible for extracting the right data for a given location.
This directory contains the code and data my dataserver needs to extract the right data for the locations where I tested my model.