*This project has been paused, with no estimated resumption date, due to an internal data loss on 3/8/23.*
Originally designed to evolve into an academic paper, Albert (not to be confused with ALBERT or any other BERT-family models) is a seq2seq NLP machine learning model trained in two stages. In the first stage, Albert would learn the basics of English grammar and speech, aiming to hold intelligent conversation. In the second stage, Albert would use its conversations with larger models, such as OpenAI's ChatGPT, as input data to acquire both general and specific knowledge. GPT models provide strong conversational reinforcement as well as the factual information that makes a model appear "smart".
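The two-stage design can be sketched as a simple pipeline. This is a hypothetical illustration only: the class and corpus names below are invented for clarity and are not from the project's codebase; a toy model that merely records its training pairs stands in for the real seq2seq network.

```python
# Hypothetical sketch of Albert's two-stage training pipeline.
# All names here are illustrative, not from the actual project.

class ToySeq2Seq:
    """Stand-in for a seq2seq model: just records (input, target) pairs."""
    def __init__(self):
        self.examples = []

    def train(self, pairs):
        self.examples.extend(pairs)


# Stage 1: curated English data to learn grammar and conversational form.
grammar_corpus = [
    ("hello", "hi, how can I help?"),
    ("how are you", "I am well, thank you."),
]

# Stage 2: transcripts of conversations with a larger model (e.g. ChatGPT),
# used as distillation-style data to acquire general and specific knowledge.
teacher_transcripts = [
    ("what is a transformer?", "a neural architecture built on attention."),
]

model = ToySeq2Seq()
model.train(grammar_corpus)       # stage 1: language competence
model.train(teacher_transcripts)  # stage 2: knowledge transfer

print(len(model.examples))  # 3
```

In a real implementation each `train` call would be a full fine-tuning pass; the point of the sketch is only the ordering, with conversational competence established before knowledge is distilled from the larger model.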
In essence, this project sought to understand the transfer of communication knowledge from model to model, demonstrating the possibility of model "families" as an extension of the collective understanding of machine learning.