LASER Language-Agnostic SEntence Representations

LASER is a library to calculate and use multilingual sentence embeddings.

NEWS

  • 2023/11/16 Released laser_encoders, a pip-installable package supporting LASER-2 and LASER-3 models
  • 2023/06/26 xSIM++ evaluation pipeline and data released
  • 2022/07/06 Updated LASER models with support for over 200 languages are now available
  • 2022/07/06 Multilingual similarity search (xSIM) evaluation pipeline released
  • 2022/05/03 Librivox S2S is available: Speech-to-Speech translations automatically mined in Librivox [9]
  • 2019/11/08 CCMatrix is available: Mining billions of high-quality parallel sentences on the WEB [8]
  • 2019/07/31 Gilles Bodard and Jérémy Rapin provided a Docker environment to use LASER
  • 2019/07/11 WikiMatrix is available: bitext extraction for 1620 language pairs in Wikipedia [7]
  • 2019/03/18 switch to BSD license
  • 2019/02/13 The code to perform bitext mining is now available

CURRENT VERSION:

  • We now provide updated LASER models that support over 200 languages. Please see here for more details, including how to download the models and perform inference.

In our experience, the sentence encoder also supports code-switching, i.e. the same sentence can contain words in several different languages.

We also have some evidence that the encoder can generalize to languages that were not seen during training, provided they belong to a language family covered by other training languages.

A detailed description of how the multilingual sentence embeddings are trained can be found here, together with an experimental evaluation.

The core sentence embedding package: laser_encoders

We provide a package laser_encoders with minimal dependencies. It supports LASER-2 (a single encoder for the languages listed below) and LASER-3 (147 language-specific encoders described here).

The package can be installed simply with pip install laser_encoders and used as below:

from laser_encoders import LaserEncoderPipeline
encoder = LaserEncoderPipeline(lang="eng_Latn")
embeddings = encoder.encode_sentences(["Hi!", "This is a sentence encoder."])
print(embeddings.shape)  # (2, 1024)

The laser_encoders readme file provides more examples of its installation and usage.
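The returned embeddings are plain 1024-dimensional float vectors, so downstream comparisons such as cross-lingual similarity search need no special machinery. As an illustrative sketch (using small stand-in vectors rather than real LASER output), cosine similarity can be computed in pure Python:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in 4-dimensional vectors; real LASER embeddings have 1024 dimensions.
emb_a = [0.1, 0.3, -0.2, 0.7]
emb_b = [0.12, 0.28, -0.18, 0.69]
print(cosine_similarity(emb_a, emb_b))
```

Because sentences with similar meaning map to nearby vectors regardless of language, ranking candidate pairs by such a similarity score is the basic operation behind the mining applications listed below.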

The full LASER kit

Apart from laser_encoders, we provide support for LASER-1 (the original multilingual encoder) and for various LASER applications listed below.

Dependencies

  • Python >= 3.7
  • PyTorch 1.0
  • NumPy, tested with 1.15.4
  • Cython, needed by Python wrapper of FastBPE, tested with 0.29.6
  • Faiss, for fast similarity search and bitext mining
  • transliterate 1.10.2 (pip install transliterate)
  • jieba 0.39, Chinese segmenter (pip install jieba)
  • mecab 0.996, Japanese segmenter
  • tokenization scripts from the Moses decoder (installed automatically)
  • FastBPE, fast C++ implementation of byte-pair encoding (installed automatically)
  • Fairseq, sequence modeling toolkit (pip install fairseq==0.12.1)
  • tabulate, pretty-print tabular data (pip install tabulate)
  • pandas, data analysis toolkit (pip install pandas)
  • Sentencepiece, subword tokenization (installed automatically)
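For convenience, the pip-installable entries above can be gathered into a single requirements file. The pins below simply restate the versions given in the list and should be treated as a starting point, not canonical constraints (packages marked "installed automatically" arrive as dependencies; Faiss and mecab are omitted because their installation is platform-specific):

```text
fairseq==0.12.1
transliterate==1.10.2
jieba==0.39
cython==0.29.6
tabulate
pandas
```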

Installation

  • install the laser_encoders package, e.g. pip install -e . to install it in editable mode
  • set the environment variable 'LASER' to the root of the installation, e.g. export LASER="${HOME}/projects/laser"
  • download encoders from Amazon S3, e.g. bash ./nllb/download_models.sh
  • download third-party software with bash ./install_external_tools.sh
  • download the data used in the example tasks (see description for each task)

Applications

We showcase several applications of multilingual sentence embeddings with code to reproduce our results (in the directory "tasks").

For all tasks, we use exactly the same multilingual encoder, without any task specific optimization or fine-tuning.

License

LASER is BSD-licensed, as found in the LICENSE file in the root directory of this source tree.

Supported languages

The original LASER model was trained on the following languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Aymara, Azerbaijani, Basque, Belarusian, Bengali, Berber languages, Bosnian, Breton, Bulgarian, Burmese, Catalan, Central/Kadazan Dusun, Central Khmer, Chavacano, Chinese, Coastal Kadazan, Cornish, Croatian, Czech, Danish, Dutch, Eastern Mari, English, Esperanto, Estonian, Finnish, French, Galician, Georgian, German, Greek, Hausa, Hebrew, Hindi, Hungarian, Icelandic, Ido, Indonesian, Interlingua, Interlingue, Irish, Italian, Japanese, Kabyle, Kazakh, Korean, Kurdish, Latvian, Latin, Lingua Franca Nova, Lithuanian, Low German/Saxon, Macedonian, Malagasy, Malay, Malayalam, Maldivian (Divehi), Marathi, Norwegian (Bokmål), Occitan, Persian (Farsi), Polish, Portuguese, Romanian, Russian, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Swahili, Swedish, Tagalog, Tajik, Tamil, Tatar, Telugu, Thai, Turkish, Uighur, Ukrainian, Urdu, Uzbek, Vietnamese, Wu Chinese and Yue Chinese.

We have also observed that the model seems to generalize well to other (minority) languages or dialects, e.g.

Asturian, Egyptian Arabic, Faroese, Kashubian, North Moluccan Malay, Nynorsk Norwegian, Piedmontese, Sorbian, Swabian, Swiss German or Western Frisian.

LASER3

Updated LASER models, referred to as LASER3, supplement the above list with support for 147 languages. The full list of supported languages can be seen here.

References

[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL workshop on Representation Learning for NLP, 2017.

[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.

[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.

[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.

[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.

[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.

[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.

[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB.

[9] Paul-Ambroise Duquenne, Hongyu Gong and Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.

[10] Kevin Heffernan, Onur Celebi and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.