Evaluation Code for Text Generation Tasks
Updated Oct 3, 2018 · Python
A package of code from my ML projects with potential for reuse.
Official repository of the evaluation script and baselines for the EVALITA Dank Memes 2020 task
This task focuses primarily on Elasticsearch, an open-source search engine known for full-text search and analytics. The task is to convert documents into a structured index using information retrieval models and to evaluate those models.
Practice classification, evaluation, and clustering with the red-wine dataset.
Annotated corpus of 19th century classical commentaries. Supported tasks: named entity recognition, entity linking and citation mining.
Human-to-Robot Handovers | Evaluation toolkit | 9th Robotic Grasping and Manipulation Competition | IEEE ICRA 2024.
Evaluation of DCGAN
My final-year project analysing the performance of different TCP congestion control algorithms through extensive evaluations. The scripts and my essay on the topic are provided for your convenience.
Platform to evaluate index selection algorithms
Machine Learning - LEIC @ IST 2022/2023. Homework by Miguel Eleutério and Raquel Cardoso.
Activity and Sequence Detection Performance Measures: a package to evaluate activity detection results, including the sequence of events, across multiple activity types.
Meta-segmented cross-validation (Matlab & python implementation)
Scripts useful for various purposes in NLP
Collection of Python programs for cognac simulation.
Training code and public data for the paper "A Legal Approach to Hate Speech - Operationalizing the EU's Legal Framework against the Expression of Hatred as an NLP Task"
Model Evaluation using WANLI Test Set