
Stock Price Prediction Using Multi-Modal Features (Audio + Text Transcripts)

Made as part of the interview assignment for the RA position at IIIT-Delhi MIDAS Labs. (I got selected, but due to some unfortunate circumstances at my then workplace, I wasn't able to continue.)

Data

The data consists of audio recordings of quarterly board performance meetings where the stock performance of publicly traded companies is announced and predictions for future performance are made, along with the text transcripts of the same meetings. The target is the stock price for the next 30 days.

  • The audio and text data come from the dataset mentioned in the paper, consisting of a total of 575 meetings with corresponding audio recordings and text transcripts.

  • The paper mentions the use of CRSP data for the stock prices as the target, but I used data from the Yahoo Finance website instead. The adjusted closing price is used as the target (see the sketch after this list).

  • The data contains audio files for each sentence in the transcript.
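
A minimal sketch of pulling the adjusted-close targets, assuming the `yfinance` package (not necessarily what prepare_data.py actually uses; the ticker and date range are placeholders):

```python
# Hypothetical example of fetching the regression targets from Yahoo Finance.
import yfinance as yf

def adjusted_close(ticker, start, end):
    """Daily adjusted closing prices for one ticker.

    Note: the returned column layout can vary across yfinance versions.
    """
    prices = yf.download(ticker, start=start, end=end, auto_adjust=False)
    return prices["Adj Close"]

# e.g. prices covering the 30 trading days after an earnings call
target = adjusted_close("AAPL", "2017-01-01", "2017-03-01")
print(target.tail())
```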

Approach

The problem is a multimodal approach to stock price prediction, relying on the correlation between the text and audio data. The main idea is to take sentiment cues from the audio and contextual information from the text, and learn a joint representation for accurate prediction of stock prices.

  • To obtain speech features from the audio, I used a pretrained speech sentiment classifier and took the last hidden layer activations as the input feature representation for the audio. The speech sentiment classifier takes MFCC features as input and uses convolutional filters to learn the speech embeddings (see the feature-extraction sketch after this list).
  • To obtain word embeddings for the text data, I used a pretrained BERT model. Instead of pretrained GloVe encodings, I went with BERT because of its ability to capture contextual information in the embeddings.
  • The sentences from the text transcript are padded to the same length and converted to sequences through the BERT embeddings. These embedded sentence sequences serve as the input features for the text data.
  • Each of the text and audio input features is passed through a BiLSTM layer with an attention head to produce the within-modality encodings.
  • To learn the correlation between modalities, both encodings are concatenated and passed through a BiLSTM layer that produces the combined encoding of both modalities.
  • These encodings are then passed through a feed-forward layer with dropout regularization.
  • Finally, this is passed through a regression layer that predicts the next 30 days of stock prices (a model sketch follows this list).
  • I compare 3-day, 7-day, 15-day and 30-day prediction windows when generating the scores.
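
The sketch below shows how the two per-sentence feature extractors could look. It assumes librosa for MFCCs and Hugging Face transformers for BERT; the pretrained speech sentiment classifier itself is not shown (only the MFCC input it consumes), and the model names and shapes are illustrative, not necessarily what this repo uses.

```python
# Sketch of the per-sentence feature extractors.
# Assumes `librosa` and Hugging Face `transformers`; names/shapes are illustrative.
import librosa
import torch
from transformers import BertModel, BertTokenizer

def audio_features(wav_path, n_mfcc=13):
    """MFCCs that would be fed to the pretrained speech sentiment classifier."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the file's native sample rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def text_features(sentence, max_len=64):
    """Contextual BERT embeddings for one transcript sentence, padded to max_len."""
    enc = tokenizer(sentence, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state.squeeze(0)  # (max_len, 768)
```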

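A compact Keras sketch of the model described above (tf.keras is an assumption, and the layer sizes, dropout rate, feature dimensions and per-sentence alignment of the two modalities are illustrative choices, not the repo's actual values):

```python
# Illustrative Keras version of the multimodal architecture described above.
import tensorflow as tf
from tensorflow.keras import Model, layers

def modality_encoder(x, units=128):
    """BiLSTM with a simple self-attention head: the within-modality encoding."""
    h = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)
    return layers.Attention()([h, h])  # self-attention over the timesteps

text_in = layers.Input(shape=(None, 768), name="bert_sentence_embeddings")
# 128 is a placeholder for the sentiment classifier's hidden-layer width.
audio_in = layers.Input(shape=(None, 128), name="speech_sentiment_features")

text_enc = modality_encoder(text_in)
audio_enc = modality_encoder(audio_in)

# Concatenate the two encodings (assumes the audio and text sequences are
# aligned per sentence) and learn the cross-modal correlation with a BiLSTM.
joint = layers.Concatenate(axis=-1)([text_enc, audio_enc])
joint = layers.Bidirectional(layers.LSTM(128))(joint)

# Feed-forward layer with dropout, then the regression head.
joint = layers.Dense(128, activation="relu")(joint)
joint = layers.Dropout(0.3)(joint)
out = layers.Dense(30, name="next_30_day_prices")(joint)

model = Model([text_in, audio_in], out)
model.compile(optimizer="adam", loss="mse")
```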
The scores are generated as the MSE between the predicted and true stock volatility, following the paper:

$MSE = \frac{1}{M}\sum_{i=1}^{M}\left(f(X_{i}) - y_{i}\right)^{2}$

where $f(X_i)$ is the predicted volatility and $y_i$ is the true volatility for example $i$.

Volatility is defined as $v_{[t-\tau, t]} = \ln\left(\sqrt{\frac{\sum_{i=0}^{\tau}(r_{t-i} - \bar r)^2}{\tau}}\right)$,

where $r_t$ is the return price on day $t$ and $\bar r$ is the mean return price over the period from day $t - \tau$ to day $t$.

The return price is defined as $r_t = \frac{P_t}{P_{t-1}} - 1$,

where $P_t$ is the closing price on day $t$.
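
A worked numpy version of these scoring formulas (a sketch; the function names are mine, not the repo's):

```python
# numpy implementations of the return, volatility and MSE formulas above.
import numpy as np

def returns(prices):
    """r_t = P_t / P_{t-1} - 1 for a series of daily closing prices."""
    p = np.asarray(prices, dtype=float)
    return p[1:] / p[:-1] - 1.0

def log_volatility(r, tau):
    """v_[t-tau, t]: log of the spread of the last tau+1 daily returns."""
    window = np.asarray(r, dtype=float)[-(tau + 1):]
    return np.log(np.sqrt(np.sum((window - window.mean()) ** 2) / tau))

def mse(predicted, true):
    """Mean squared error between predicted and true volatilities."""
    predicted, true = np.asarray(predicted), np.asarray(true)
    return float(np.mean((predicted - true) ** 2))
```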

Model

(model architecture diagram)

Further improvements
  1. Instead of directly concatenating the audio and text encodings, use an approach similar to the paper's.
  2. Use a better speech encoder such as PASE.
  3. The documents contain three different types of statements: general statements, past or current performance, and future ambitions and predictions. These types can be modelled separately to learn weighted features from each kind of statement, both to improve the prediction and to give better insight into how the speech cues affect it.
References
  • Qin, Y. and Yang, Y. "What You Say and How You Say It Matters: Predicting Stock Volatility Using Verbal and Vocal Cues." ACL 2019. (The paper referenced throughout.)
Usage
  1. Clone the repo.
  2. Make a data folder in the root.
  3. Download the data from the link mentioned in the paper.
  4. Extract the data and move it to the features folder in the data folder.
  5. Run prepare_data.py: it will create the train, validation and test folders, convert the mp3 audio files to wav, and download the Yahoo Finance data.
  6. Run the training using train.py.
  7. To predict, pass the directory where the data is stored and run test.py.
Results
MSE Scores
| Model           | 3-day | 7-day | 15-day | 30-day |
|-----------------|-------|-------|--------|--------|
| Paper           | 1.371 | 0.420 | 0.300  | 0.217  |
| Past volatility | 1.389 | 0.517 | 0.292  | 0.254  |
| Text only       | 1.879 | 0.503 | 0.373  | 0.279  |
| Audio only      | 4.389 | 9.138 | 11.242 | 12.256 |
| Multimodal      | TBD   | TBD   | TBD    | TBD    |
