
LangTest: Deliver Safe & Effective Language Models

Project's Website | Key Features | How To Use | Community Support | Contributing | Mission | License


Project's Website

Take a look at our official page for user documentation and examples: langtest.org

Key Features

  • Generate and execute more than 50 distinct types of tests with only one line of code.
  • Test all aspects of model quality: robustness, bias, representation, fairness, and accuracy (a configuration sketch follows this list).
  • Automatically augment training data based on test results (for select models).
  • Support for popular NLP frameworks for NER, Translation, and Text Classification: Spark NLP, Hugging Face & Transformers.
  • Support for testing LLMs (OpenAI, Cohere, AI21, Hugging Face Inference API, and Azure OpenAI) on question answering, toxicity, clinical tests, legal support, factuality, sycophancy, and summarization tasks.
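
As a rough illustration of how test selection works in practice, below is a hedged sketch of configuring a Harness with specific test categories and pass-rate thresholds. The configure() method, the config schema, and the individual test names used here are assumptions drawn from the public documentation and should be verified at langtest.org.

# A hedged sketch of selecting test categories and thresholds (assumed API).
from langtest import Harness

h = Harness(task='ner', model={"model": 'dslim/bert-base-NER', "hub": 'huggingface'})

# Assumed configure() schema: a 'tests' dict with per-category, per-test settings.
h.configure({
    'tests': {
        'defaults': {'min_pass_rate': 0.65},       # fallback threshold for all tests
        'robustness': {
            'uppercase': {'min_pass_rate': 0.60},  # perturb inputs to UPPERCASE
            'add_typo': {'min_pass_rate': 0.60},   # inject keyboard typos
        },
        'bias': {
            'replace_to_female_pronouns': {'min_pass_rate': 0.65},
        },
    }
})

h.generate().run().report()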

How To Use

# Install langtest
!pip install langtest[transformers]

# Import and create a Harness object
from langtest import Harness
h = Harness(task='ner', model={"model":'dslim/bert-base-NER', "hub":'huggingface'})

# Generate test cases, run them and view a report
h.generate().run().report()
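
To look at what was generated before (and after) the tests run, the intermediate results can be inspected. This is a hedged sketch: the testcases() and generated_results() method names are assumptions and should be checked against the API reference at langtest.org.

# A hedged sketch of inspecting intermediate results on the Harness above.
# testcases() and generated_results() are assumed method names -- verify on langtest.org.
h.generate()                          # create the perturbed test samples
print(h.testcases().head())           # assumed: DataFrame of generated test cases
h.run()                               # run the model on every test case
print(h.generated_results().head())   # assumed: per-sample outcomes before aggregation
report = h.report()                   # aggregated pass rate per test type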

Note: For extended usage examples and documentation, head over to langtest.org

Community Support

  • Slack: For live discussion with the LangTest community, join the #langtest channel
  • GitHub: For bug reports, feature requests, and contributions
  • Discussions: To engage with other community members, share ideas, and show off how you use LangTest!

Mission

While there is a lot of talk about the need to train AI models that are safe, robust, and fair, few tools have been made available to data scientists to meet these goals. As a result, the front line of NLP models in production systems reflects a sorry state of affairs.

We propose here an early-stage open-source community project that aims to fill this gap, and we would love for you to join us on this mission. We aim to build on the foundation laid by previous research such as Ribeiro et al. (2020), Song et al. (2020), Parrish et al. (2021), van Aken et al. (2021) and many others.

John Snow Labs has a full development team allocated to the project and is committed to improving the library for years, as we do with other open-source libraries. Expect frequent releases with new test types, tasks, languages, and platforms to be added regularly. We look forward to working together to make safe, reliable, and responsible NLP an everyday reality.

Comparing Benchmark Datasets: Use Cases and Evaluations

LangTest comes with a variety of benchmark datasets to test your models, covering a wide range of use cases and evaluation scenarios; a usage sketch follows the table below.

Dataset | Use Case | Notebook
BoolQ | Evaluate the ability of your model to answer boolean (yes/no) questions based on a given passage or context. | Open In Colab
NQ-open | Evaluate the ability of your model to answer open-ended questions based on a given passage or context. | Open In Colab
TruthfulQA | Evaluate the model's capability to answer questions accurately and truthfully based on the provided information. | Open In Colab
MMLU | Evaluate language understanding models' performance across different domains. It covers 57 subjects across STEM, the humanities, the social sciences, and more. | Open In Colab
NarrativeQA | Evaluate your model's ability to comprehend and answer questions about long and complex narratives, such as stories or articles. | Open In Colab
HellaSwag | Evaluate your model's ability to complete sentences. | Open In Colab
Quac | Evaluate your model's ability to answer questions given a conversational context, focusing on dialogue-based question answering. | Open In Colab
OpenBookQA | Evaluate your model's ability to answer questions that require complex reasoning and inference based on general knowledge, similar to an "open-book" exam. | Open In Colab
BBQ | Evaluate how your model responds to questions in the presence of social biases against protected classes across various social dimensions. Assess biases in model outputs with both under-informative and adequately informative contexts, aiming to promote fair and unbiased question-answering models. | Open In Colab
XSum | Evaluate your model's ability to generate concise and informative summaries for long articles. XSum consists of articles and corresponding one-sentence summaries, offering a valuable benchmark for text summarization models. | Open In Colab
Real Toxicity Prompts | Evaluate your model's accuracy in recognizing and handling toxic language. The dataset contains real-world prompts from online platforms, helping ensure that NLP models maintain safe environments. | Open In Colab
LogiQA | Evaluate your model's accuracy on machine reading comprehension with logical reasoning questions. | Open In Colab
BigBench Abstract narrative understanding | Evaluate your model's performance in selecting the most relevant proverb for a given narrative. | Open In Colab
BigBench Causal Judgment | Evaluate your model's ability to reason about cause and effect. | Open In Colab
BigBench DisambiguationQA | Evaluate your model's performance in determining the interpretation of sentences containing ambiguous pronoun references. | Open In Colab
BigBench DisflQA | Evaluate your model's performance in picking the correct answer span from the context, given a disfluent question. | Open In Colab
ASDiv | Evaluate your model's ability to answer questions based on math word problems. | Open In Colab
Legal-QA | Evaluate your model's performance on legal QA datasets. | Open In Colab
CommonsenseQA | Evaluate your model's performance on the CommonsenseQA dataset, which demands a diverse range of commonsense knowledge to accurately predict the correct answers in a multiple-choice question answering format. | Open In Colab
SIQA | Evaluate your model's performance by assessing its accuracy in understanding social situations, inferring the implications of actions, and comparing human-curated and machine-generated answers. | Open In Colab
PIQA | Evaluate your model's performance on the PIQA dataset, which tests its ability to reason about everyday physical situations through multiple-choice questions, contributing to AI's understanding of real-world interactions. | Open In Colab
MultiLexSum | Evaluate your model's ability to generate concise and informative summaries for legal case contexts from the Multi-LexSum dataset, with a focus on comprehensively capturing essential themes and key details within the legal narratives. | Open In Colab
FIQA | Evaluate your model's performance on the FiQA dataset, a comprehensive and specialized resource designed for finance-related question-answering tasks. | Open In Colab
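
To run one of the benchmarks above, the dataset is passed to the Harness through its data argument. The sketch below is hedged: the data_source and split keys, the "test-tiny" split name, and the OpenAI model name are assumptions to confirm against langtest.org.

# A hedged sketch of pointing the Harness at one of the benchmarks listed above.
# The data_source/split keys, the "test-tiny" split, and the OpenAI model name
# are assumptions -- confirm them on langtest.org.
import os
from langtest import Harness

os.environ["OPENAI_API_KEY"] = "<your key>"   # required for the "openai" hub

h = Harness(
    task="question-answering",
    model={"model": "gpt-3.5-turbo", "hub": "openai"},
    data={"data_source": "MMLU", "split": "test-tiny"},
)
h.generate().run().report()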

Note: For usage and documentation, head over to langtest.org

Blog

You can check out the following LangTest articles:

Blog | Description
Automatically Testing for Demographic Bias in Clinical Treatment Plans Generated by Large Language Models | Helps in understanding and testing demographic bias in clinical treatment plans generated by LLMs.
LangTest: Unveiling & Fixing Biases with End-to-End NLP Pipelines | The end-to-end language pipeline in LangTest empowers NLP practitioners to tackle biases in language models with a comprehensive, data-driven, and iterative approach.
Beyond Accuracy: Robustness Testing of Named Entity Recognition Models with LangTest | While accuracy is undoubtedly crucial, robustness testing takes the evaluation of natural language processing (NLP) models to the next level by ensuring that models can perform reliably and consistently across a wide array of real-world conditions.
Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance | In this article, we discuss how automated data augmentation may supercharge your NLP models and improve their performance, and how we do that using LangTest.
Mitigating Gender-Occupational Stereotypes in AI: Evaluating Models with the Wino Bias Test through Langtest Library | In this article, we discuss how we can test for "Wino Bias" using LangTest. It specifically refers to testing biases arising from gender-occupational stereotypes.
Automating Responsible AI: Integrating Hugging Face and LangTest for More Robust Models | In this article, we explore the integration between Hugging Face, your go-to source for state-of-the-art NLP models and datasets, and LangTest, your NLP pipeline's secret weapon for testing and optimization.
Detecting and Evaluating Sycophancy Bias: An Analysis of LLM and AI Solutions | In this blog post, we discuss the pervasive issue of sycophantic AI behavior and the challenges it presents in the world of artificial intelligence. We explore how language models sometimes prioritize agreement over authenticity, hindering meaningful and unbiased conversations. We also unveil a potential game-changing solution: synthetic data, which promises to make AI companions more reliable and accurate across various real-world conditions.
Unmasking Language Model Sensitivity in Negation and Toxicity Evaluations | In this blog post, we delve into language model sensitivity, examining how models handle negations and toxicity in language. Through these tests, we gain insights into the models' adaptability and responsiveness, emphasizing the continuous need for improvement in NLP models.
Unveiling Bias in Language Models: Gender, Race, Disability, and Socioeconomic Perspectives | In this blog post, we explore bias in language models, focusing on gender, race, disability, and socioeconomic factors. We assess this bias using the CrowS-Pairs dataset, designed to measure stereotypical biases. To address these biases, we discuss the importance of tools like LangTest in promoting fairness in NLP systems.
Unmasking the Biases Within AI: How Gender, Ethnicity, Religion, and Economics Shape NLP and Beyond | In this blog post, we tackle AI bias, examining how gender, ethnicity, religion, and economics shape NLP systems, and discuss strategies for reducing bias and promoting fairness in AI systems.

Note: To check out all blogs, head over to Blogs

Contributing

We welcome all sorts of contributions:

  • Ideas
  • Feedback
  • Documentation
  • Bug reports
  • Development and testing

Feel free to clone the repo and submit pull requests! You can also contribute by simply opening an issue or a discussion in this repo.

Contributors

We would like to acknowledge all contributors of this open-source community project.

License

LangTest is released under the Apache License 2.0, which permits commercial use, modification, distribution, patent use, and private use, and sets limitations on trademark use, liability, and warranty.
