
RambaNet

The goal of this project is to perform authorship attribution on medieval Jewish Thought books using a modern Deep Learning approach. This is a classification problem where the challenge is modeling authorship characteristics.

The use of Neural Networks and semantically-inclined embeddings may make it possible to take into account not only stylometry but also content and ideas.

Dataset

We use the Hebrew JSON version of the Sefaria-Export dataset, graciously provided by Sefaria. The raw dataset is not included in this repository due to its size.

Data exploration and visualizations of the Sefaria dataset can be found in the data_exploration folder.
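As an illustration, loading one exported book might look like the minimal sketch below. The file path is hypothetical, and the assumption that each JSON file stores its text as nested lists of strings under a `text` key reflects the Sefaria-Export format, not this project's actual loader:

```python
import json

def flatten(node):
    """Recursively flatten Sefaria's nested text arrays into a flat list of strings."""
    if isinstance(node, str):
        return [node]
    return [segment for child in node for segment in flatten(child)]

def load_segments(path):
    """Load one exported book and return its text segments in reading order."""
    with open(path, encoding="utf-8") as f:
        book = json.load(f)
    return flatten(book["text"])

# Hypothetical path inside a local clone of Sefaria-Export.
segments = load_segments("Sefaria-Export/json/example_book/Hebrew/merged.json")
print(len(segments), segments[0][:80])
```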

Background and Methodology

Classifier

There are comparatively few papers researching neural networks for the purpose of authorship attribution. The existing literature (listed in the References below) can be divided into two main approaches:

  • Convolutional Neural Networks (CNN).
  • Recurrent Neural Networks (RNN) such as LSTM and GRU, with GRU consistently giving slightly better results.

An extensive literature review (see References) suggests that CNNs are better suited to this task. A possible explanation is that RNNs excel at learning temporal connections between words, which makes them appropriate for predicting the next word in a sentence; unlike CNNs, however, they are not designed to capture semantic or stylistic information.
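For concreteness, a character-level CNN classifier in Keras might look like the sketch below. This is a minimal illustration, not the architecture used in this project; the vocabulary size, sequence length, filter sizes and number of authors are all placeholder values:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 100   # distinct character IDs (placeholder)
SEQ_LEN = 1000     # characters per input sample (placeholder)
NUM_AUTHORS = 10   # candidate authors to classify (placeholder)

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    # Learn a dense vector for each character ID.
    layers.Embedding(VOCAB_SIZE, 32),
    # Convolutions over character windows pick up local stylistic patterns.
    layers.Conv1D(128, 7, activation="relu"),
    layers.MaxPooling1D(3),
    layers.Conv1D(128, 5, activation="relu"),
    # Keep the strongest activation of each filter, independent of position.
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_AUTHORS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```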

Embeddings

Generally in NLP, embeddings are created for words or sentences. However, recent studies [Zhang 2015] suggest that character-level embeddings provide competitive results while being less complex, requiring fewer parameters and less training time. In this model we choose to investigate character embeddings, for two main reasons.

First, an important drawback of word embeddings is that the vocabulary must be defined in advance, so out-of-vocabulary and less common words are left out of the embedding. Yet for authorship attribution, the study of hapax legomena (words occurring only once) and of uncommon expressions carries a lot of information.

Second, word embedding methods were crafted with English (or other Romance and Germanic languages) in mind. In English, prepositions, linking words, determiners and possessive adjectives are always separate words with low semantic content. Medieval Hebrew has properties which may challenge word embedding methods like GloVe and word2vec: prepositions, linking words, determiners and possessive adjectives are often affixed to other words rather than standing alone. Moreover, homonyms and homographs are far more frequent in Hebrew than in English (see this paper, pp. 3-4 and that paper, pp. 9-10), and the number of byforms (alternative spellings of the same word) is astonishingly high (more than 30% of Biblical Hebrew according to that paper).

Recent research in Bengali [Khatun 2020] suggests that character-level embeddings lead to better accuracy. Bengali shares some characteristics with Hebrew [SOURCE?].
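As a sketch of why no fixed word vocabulary is needed, character-level inputs can be built by mapping every character, including those in hapax legomena and rare byforms, to an integer ID. The alphabet below (Hebrew letters plus basic punctuation) and the fixed sample length are illustrative assumptions, not this project's actual preprocessing:

```python
import numpy as np

# Hebrew letters (including final forms) plus space and basic punctuation.
ALPHABET = "אבגדהוזחטיכךלמםנןסעפףצץקרשת .,:;-'"
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(ALPHABET)}  # 0 = padding/unknown

def encode(text, seq_len=1000):
    """Map a string to a fixed-length array of character IDs."""
    ids = [CHAR_TO_ID.get(c, 0) for c in text[:seq_len]]
    ids += [0] * (seq_len - len(ids))  # right-pad short samples
    return np.array(ids, dtype=np.int32)

x = encode("משנה תורה")  # ready to feed into an Embedding layer
```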

Results

Character-Level CNN

Requirements

  • Python 3.7
  • PyTables
  • TensorFlow

References

  1. Sebastian Ruder, Parsa Ghaffari, John G. Breslin, "Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution", arXiv:1609.06686 (2016).
  2. Chen Qian, Tianchang He, Rao Zhang, "Deep Learning based Authorship Identification", Stanford University (2017).
  3. Xiang Zhang, Junbo Zhao, Yann LeCun, "Character-level Convolutional Networks for Text Classification", arXiv:1509.01626 (2015).
  4. Aisha Khatun, Anisur Rahman, Md. Saiful Islam, Marium-E-Jannat, "Authorship Attribution in Bangla literature using Character-level CNN", arXiv:2001.05316 (2020).
