
Hi there, I'm George Kokush

Computer science student and ML engineering intern from Russia 🇷🇺


I'm George Kokush, a 19-year-old HSE student and junior ML/DL engineer from Russia.

Contact me on: Telegram | VK
CV

Stack: PyTorch, NumPy, Pandas, scikit-learn
Languages: Python, C++

My latest projects

  • Hierarchical Transformer research [ Presentation ] [ Repo ]
    • My task was to implement a Hierarchical Transformer architecture for the music domain and to experiment with its different components
    • The basic implementation matched Music Transformer (the baseline) in quality while being about 2x faster
    • I experimented with various hierarchical transformer features, such as shortening/upsampling functions, transformer depth, the number of layers, etc.
    • I also designed several losses specific to the HT architecture, and some of them look promising (a minimal sketch of the shortening/upsampling idea appears after this list)
  • "Semantically-Informed Regressive Encoder Score" submission for WMT23 Shared task workshop [ Paper ] [ Repo ]
    • Our task was to develop an NN-based metric for text evaluation (machine translation)
    • We built on our work from the AIRI research project, trying different approaches (including additional vector representations and contrastive learning)
    • Our approach placed 5th on the Chinese-English and Hebrew-English language pairs and 11th on the English-German pair
    • Our paper passed review, and we were invited to the EMNLP 2023 conference
  • Team submission for Eval4NLP Shared task workshop [ Paper ] [ Repo ]
    • Our task was to develop a metric for text evaluation (MT & summarization) using only prompt-engineering techniques and approaches
    • We tried a new approach based on the AutoMQM work
    • Our paper passed review, and we were invited to the IJCNLP-AACL 2023 conference
  • "Efficient LLM-based metrics for NLG" research project for AIRI Summer School [ Presentation ] [ Repo ]
    • Our task was to develop an NN-based metric for text evaluation (machine translation)
    • We tried to beat the GPT-4-based GEMBA metric by fine-tuning LLMs for translation evaluation
    • I implemented an LLM encoder + MLP decoder architecture, which achieved the best quality (see the sketch after this list)
  • "Multimodality in image2text tasks" research project for 1st year of HSE [ Poster ] [ Repo ]
    • Our task was to develop an image2text model for the Russian language
    • We implemented the BLIP-2 architecture and tested it in various configurations
    • We adapted the architecture to Russian and achieved acceptable quality
  • NTI ML contest, 2021 [ Repo ]
    • I used many classic ML algorithms (linear and logistic regression, trees, boosting, etc.), web scraping for data extraction, and grid search for hyperparameter tuning
    • We achieved one of the best scores in the final ranking
  • Toxic detector bot, pet project [ Repo ]
    • I trained CatBoost classifiers for toxicity prediction on word2vec embeddings (a minimal pipeline sketch appears after this list)
  • Other pet projects
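
A minimal sketch of the shortening/upsampling idea behind the Hierarchical Transformer project, assuming average pooling as the shortening function and simple repetition for upsampling; `d_model`, the layer counts, and the pooling factor are illustrative, not the project's actual settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalBlock(nn.Module):
    """Funnel-style block: shorten -> transform -> upsample.

    Average pooling and repetition are just one possible pair of
    shortening/upsampling functions (a hypothetical choice here).
    """
    def __init__(self, d_model=256, n_heads=4, n_layers=2, factor=2):
        super().__init__()
        self.factor = factor
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        residual = x
        # Shorten: average-pool over time -> (batch, seq_len // factor, d_model)
        s = F.avg_pool1d(x.transpose(1, 2), self.factor, self.factor).transpose(1, 2)
        # Run the transformer on the (cheaper) shortened sequence
        h = self.encoder(s)
        # Upsample: repeat each position back to the original length
        u = h.repeat_interleave(self.factor, dim=1)
        return u + residual                    # residual keeps fine-grained detail

x = torch.randn(2, 128, 256)
print(HierarchicalBlock()(x).shape)            # torch.Size([2, 128, 256])
```

Attention on the shortened sequence is roughly 4x cheaper per layer at factor 2, which is where the speedup over the flat baseline comes from.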
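A minimal sketch of the LLM encoder + MLP decoder idea from the AIRI project, assuming a Hugging Face encoder with a small regression head that predicts a translation quality score; the model name, mean pooling, and head sizes are placeholders rather than the project's actual configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # placeholder; the project fine-tuned larger LLMs

class QualityEstimator(nn.Module):
    """Encode a (source, translation) pair jointly; regress a score with an MLP."""
    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden // 2),
            nn.ReLU(),
            nn.Linear(hidden // 2, 1),
        )

    def forward(self, **enc):
        states = self.encoder(**enc).last_hidden_state
        pooled = states.mean(dim=1)            # mean-pool token states
        return self.head(pooled).squeeze(-1)   # one score per pair

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
batch = tokenizer(["Hello world"], ["Привет, мир"],
                  return_tensors="pt", padding=True, truncation=True)
model = QualityEstimator()
with torch.no_grad():
    print(model(**batch))                      # untrained -> arbitrary score
```

Trained with a regression loss (e.g. MSE against human quality judgments), this kind of head is a common way to turn an encoder into a reference-free MT metric.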
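A minimal sketch of the toxic-detector pipeline, assuming gensim word2vec embeddings averaged per message with a CatBoostClassifier on top; the corpus and hyperparameters are toy placeholders:

```python
import numpy as np
from gensim.models import Word2Vec
from catboost import CatBoostClassifier

# Toy corpus; the real bot was trained on labeled chat messages.
texts = ["you are awesome", "i hate you", "have a nice day", "you are stupid"]
labels = [0, 1, 0, 1]

tokens = [t.split() for t in texts]
w2v = Word2Vec(sentences=tokens, vector_size=32, min_count=1, epochs=50)

def embed(words):
    """Average word2vec vectors; zero vector if nothing is in vocabulary."""
    vecs = [w2v.wv[w] for w in words if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([embed(t) for t in tokens])
clf = CatBoostClassifier(iterations=50, verbose=False)
clf.fit(X, labels)
print(clf.predict(embed("you are stupid".split()).reshape(1, -1)))
```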
