This is a simple implementation of the BLEU (bilingual evaluation understudy) algorithm for evaluating the quality of machine-translated text. Please refer to http://www.aclweb.org/anthology/P02-1040.pdf for details on the algorithm.
Usage (Python 2.6):
python calculate_bleu_score.py candidate.txt reference.txt
candidate.txt is the machine-translated text file.
reference.txt is the reference text, typically a human-translated text file.
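At its core, BLEU combines clipped n-gram precisions (each candidate n-gram is credited at most as many times as it appears in the reference) with a brevity penalty that discounts candidates shorter than the reference. The sketch below is a minimal single-reference illustration of that idea, written for Python 3; it is not the repository's own `calculate_bleu_score.py`, and the function name `bleu` is an assumption for this example.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    # candidate, reference: lists of tokens (e.g. from str.split()).
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped counts: cap each n-gram by its reference frequency.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if overlap == 0 or total == 0:
            return 0.0  # any zero precision drives the geometric mean to 0
        precisions.append(overlap / total)
    # Brevity penalty: penalize candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    # Geometric mean of the n-gram precisions, scaled by the penalty.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

For example, a candidate identical to the reference scores 1.0, while a candidate sharing no words with it scores 0.0. A full implementation (like the one in the paper linked above) also handles multiple references and smoothing for short sentences.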