
Could not build data files #20

felixonmars opened this Issue Aug 22, 2012 · 6 comments



I'm trying to:

scons --prefix=/usr

mkdir -p $srcdir/$_gitname-build/raw
cd $srcdir/$_gitname-build/raw
tar xjvf ${srcdir}/
tar xjvf ${srcdir}/dict.utf8.tar.bz2

make -f ../doc/ slm_bin

but I got the following error when trying the genpyt part:

scons: done building targets.
slmbuild -n 3 -w 200000 -c 0,2,2 -d ABS,0.0005 -d ABS -d ABS -b 10 -e 9 \
        -o lm_sc.3gram
Parameter input_file error

  slmbuild options idngram

  This program generates a language model from an idngram file.

  -n --ngram     N            # 1 for unigram, 2 for bigram, 3 for trigram...
  -o --out       output       # output file name
  -l --log                    # using -log(pr), default use pr directly
  -w --wordcount N            # Lexicon size, number of different words
  -b --brk       id[,id...]   # set the ids which should be treated as breakers
  -e --exclude   id[,id...]   # set the ids which should not be put into LM
  -c --cut       c1[,c2...]   # k-grams whose freq <= c[k] are dropped
  -d --discount  method,param # the k-th -d parm specify the discount method
      for k-gram. Possible values for method/param:
          GT,R,dis  : GT discount for r <= R, r is the freq of a ngram.
                      Linear discount for those r > R, i.e. r'=r*dis
                      0 < dis < 1.0, for example 0.999
          ABS,[dis] : Absolute discount r'=r-dis. And dis is optional
                      0 < dis < cut[k]+1.0, normally dis < 1.0.
          LIN,[dis] : Linear discount r'=r*dis. And dis is optional
                      0 < dis < 1.0

      -n must be given before -c and -b. -c must give the right number of
  cut-offs, and -d must appear exactly N times to specify the discounts for
  1-gram, 2-gram..., N-gram.
      BREAKER-IDs could be SentenceTokens or ParagraphTokens. Conceptually,
  these ids have no meaning when they appear in the middle of an n-gram.
      EXCLUDE-IDs could be ambiguous-ids. Conceptually, n-grams which
  contain those ids are meaningless.
      We can not erase ngrams matching BREAKER-IDs and EXCLUDE-IDs directly
  from the IDNGRAM file, because some low-level information in it is still
  useful.

genpyt -i dict.utf8 -s  -l pydict3_sc.log -o pyd
      The following example reads 'all.id3gram' and writes the trigram model
  'all.slm'. At 1-gram level, use Good-Turing discount with cut-off 0, R=8,
  dis=0.9995. At 2-gram level, use Absolute discount with cut-off 3, dis
  auto-calc. At 3-gram level, use Absolute discount with cut-off 2, dis
  auto-calc. Word ids 10,11,12 are breakers (sentence/para/paper breaker,
  etc). Exclude-ID is 9. The lexicon contains 200000 words. The resulting
  language model uses -log(pr).

        slmbuild -l -n 3 -o all.slm -w 200000 -c 0,3,2 -d GT,8,0.9995
                 -d ABS -d ABS -b 10,11,12 -e 9 all.id3gram

make: *** [lm_sc.3gram] Error 100
make: *** Waiting for unfinished jobs....
Opening language -l: No such file or directory
make: *** [pydict3_sc.bin] Error 255
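Judging from the usage text, slmbuild expects the idngram file as a trailing positional argument; the failing invocation above ends at `-o lm_sc.3gram` with no input file, which matches the "Parameter input_file error". A hypothetical corrected invocation (the input name `lm_sc.id3gram` is an assumption, not something given in this thread):

```shell
# Same flags as the failing command, plus the positional idngram input.
# 'lm_sc.id3gram' is a guessed name -- substitute the idngram file your
# Makefile actually produces.
slmbuild -n 3 -w 200000 -c 0,2,2 -d ABS,0.0005 -d ABS -d ABS \
         -b 10 -e 9 -o lm_sc.3gram lm_sc.id3gram
```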

The data file corpus.utf8 is missing, so the rule

mmseg_ids: ${DICT_FILE} ${CORPUS_FILE}
    mmseg -f bin -s 10 -a 9 -d ${DICT_FILE} ${CORPUS_FILE} > ${IDS_FILE}

would just FAIL.
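Before running make, it can help to confirm which raw files are actually present. A small sketch, assuming the file names from this thread and a `raw` directory layout (adjust both to your tree):

```shell
#!/bin/sh
# Report which expected raw data files exist under a given directory.
# The file list comes from this thread; extend it to match your Makefile.
check_raw() {
    dir=$1
    for f in dict.utf8 corpus.utf8; do
        if [ -e "$dir/$f" ]; then
            echo "found:   $dir/$f"
        else
            echo "MISSING: $dir/$f"
        fi
    done
}

check_raw raw
```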


I used strace to find out which command would FAIL, and it turned out to be the genpyt part; the build never reaches mmseg at all.
In addition, where could I get the file corpus.utf8?
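Note that the transcript above comes from a parallel make run ("Waiting for unfinished jobs...."), so output from independent jobs, here genpyt and slmbuild, is interleaved, which is what makes the failing step hard to spot. A minimal self-contained illustration of the effect, with no sunpinyin tools involved:

```shell
#!/bin/sh
# Two independent make targets run under -j2; with parallel jobs their
# stdout may interleave, just as genpyt's command line was spliced into
# slmbuild's usage text in the transcript above.
tmpdir=$(mktemp -d)
cat > "$tmpdir/Makefile" <<'EOF'
all: a b
a: ; @echo "output of target a"
b: ; @echo "output of target b"
EOF
make_out=$(make -C "$tmpdir" -j2 all)
echo "$make_out"
rm -r "$tmpdir"
```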

sunpinyin developers member
yongsun commented Aug 22, 2012
  1. download the *.tar.bz2 files from the open-gram project, and place them under the 'raw' folder,
  2. download this Makefile to the 'data' folder,
  3. run make under the 'data' folder

this is a temporary solution while we wait for a proper fix ...
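The three steps above might look like this in the shell. The download URLs are not given in this thread, so the fetch steps are left as comments; the directory layout follows the earlier build commands:

```shell
#!/bin/sh
# Workaround layout: raw tarballs under data/raw, the posted Makefile in data/.
mkdir -p data/raw
# 1. place the open-gram *.tar.bz2 files under data/raw/ (URLs not given here)
# 2. save the posted Makefile as data/Makefile
# 3. then build:
#    make -C data
```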

@CasperVector was assigned Aug 22, 2012

Two problems:

  1. executable files such as ./genpyt are located in the 'src' folder, so running make with the Makefile in 'data' won't work, but it works correctly if the Makefile is put into 'src'.
  2. there's no install target in the Makefile, so there is no way to 'make install' the built files; I've put them into '/usr/lib/sunpinyin/data/' as before.

many thanks.
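For the missing install step, a minimal hand-rolled sketch; the destination mirrors the '/usr/lib/sunpinyin/data/' path mentioned above, the data file names are assumptions based on this thread, and a proper fix belongs in the Makefile itself:

```shell
#!/bin/sh
# Install built data files into a destination directory.
# Usage: install_data DESTDIR FILE...
install_data() {
    destdir=$1
    shift
    install -d "$destdir"
    install -m 644 "$@" "$destdir"/
}

# Example (file names guessed from the build log above):
# install_data /usr/lib/sunpinyin/data lm_sc.3gram pydict3_sc.bin
```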

sunpinyin developers member

Fixed in 65be3e1.
The issue was caused by make(1)'s dependency mechanism; separating the lexicon installation code into another Makefile fixes it easily.
Sorry for not testing before committing, and I will try to avoid similar mistakes in the future :(

sunpinyin developers member

BTW, users can now refer to doc/README[.in] for instructions on installation of lexicon data files.
