Computer science graduate with over three years of experience developing deep learning and machine learning pipelines.
I aim to contribute to a dynamic team researching computer vision and/or NLP; I have a deep interest in attention mechanisms for cognitive computing, particularly machine vision.
For the past few years, I have been developing and maintaining Python libraries (TransformerX, Emgraph, and Bigraph) as well as a few other standalone projects.
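As a taste of the attention mechanisms mentioned above, here is a minimal scaled dot-product attention sketch in plain NumPy. It is illustrative only; the function name and toy shapes are my own and are not taken from any of the libraries below.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Row-wise softmax (shifted by the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # attention-weighted sum of the values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Each output row is a convex combination of the value vectors, with weights determined by how strongly the corresponding query matches each key.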
- TransformerX: a Python library for building transformer-based models. It provides the building blocks and layers you need to assemble a model. It currently supports TensorFlow; PyTorch and JAX support is planned.
- Emgraph: a Python library for developing, training, and evaluating knowledge graph representation learning models. It also ships a small model zoo, used primarily for benchmarking and comparing new models.
- Bigraph: if you work with n-partite graphs, you know how tricky it is to apply standard graph algorithms to them; Bigraph fills that gap and gives researchers an easy-to-use API. It can also run its algorithms on the GPU when CUDA and the graphics drivers are installed.
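One concrete reason standard algorithms do not transfer directly to bipartite graphs is that edges only run between the two node sets; a common workaround is projecting onto one set first. A toy sketch in plain Python (the user/item names and dict-based representation are illustrative, not Bigraph's actual API):

```python
# Hypothetical user-item bipartite graph as an edge list (illustrative data).
edges = [("u1", "i1"), ("u2", "i1"), ("u2", "i2"), ("u3", "i2")]

# Project onto the user side: connect two users whenever they
# share at least one item, yielding a homogeneous user-user graph
# that standard algorithms can then operate on.
item_to_users = {}
for user, item in edges:
    item_to_users.setdefault(item, set()).add(user)

projected = set()
for users in item_to_users.values():
    for a in users:
        for b in users:
            if a < b:  # each undirected edge once
                projected.add((a, b))

print(sorted(projected))  # [('u1', 'u2'), ('u2', 'u3')]
```

Here u1 and u2 are linked through the shared item i1, and u2 and u3 through i2; the projection discards which items created the links, which is exactly the kind of subtlety a dedicated bipartite library can handle more carefully.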
- A lightning-fast full-text search engine for audio, built on top of Telegram. It lets users quickly find content of genuine interest or value without wading through numerous irrelevant channels, and returns search results that point to relevant, high-quality audio files.
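At its core, full-text search of this kind rests on an inverted index mapping terms to the documents that contain them. A toy sketch (illustrative only; the data and function names are mine, not this project's implementation):

```python
from collections import defaultdict

# Toy corpus of audio captions keyed by document ID (illustrative data).
docs = {
    "a1": "live jazz concert recording",
    "a2": "jazz piano practice session",
    "a3": "tech podcast episode",
}

# Inverted index: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]  # AND semantics across query terms
    return results

print(sorted(search("jazz")))          # ['a1', 'a2']
print(sorted(search("jazz session")))  # ['a2']
```

A lookup is a set intersection over precomputed term postings rather than a scan of every document, which is what makes this style of search fast at scale.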