Link Grammar Parser
-------------------
Version 5.3.4
The Link Grammar Parser implements the Sleator/Temperley/Lafferty
theory of natural language parsing. This version of the parser is
an extended, expanded version of the last official CMU release, and
includes many enhancements and fixes created by many different
developers.
This code is released under the LGPL license, making it freely
available for both private and commercial use, with few restrictions.
The terms of the license are given in the LICENSE file included with
this software.
Please see the web page http://www.abisource.com/projects/link-grammar/
for more information. This version is a continuation of the original
parser posted at http://www.link.cs.cmu.edu/link
CONTENTS of this directory:
---------------------------
LICENSE                   The license describing terms of use
link-grammar/*.c          The program. (Written in ANSI-C)
link-grammar/corpus/*.c   Optional corpus statistics database.
link-grammar/minisat/*    Optional SAT solver (bundled MiniSat copy; written in C++)
link-grammar/sat-solver   Optional SAT-solver-based parser. (Written in C++)
link-grammar/viterbi      Experimental Viterbi algorithm parser.
bindings/autoit/*         Optional AutoIt language bindings.
bindings/java/*           Optional Java language bindings.
bindings/lisp/*           Optional Common Lisp language bindings.
bindings/ocaml/*          Optional OCaml language bindings.
bindings/python/*         Optional Python language bindings.
bindings/swig/*           SWIG interface file, for other FFI interfaces.
data/en/*                 English language dictionaries.
data/en/4.0.dict          The file containing the dictionary definitions.
data/en/4.0.knowledge     The post-processing knowledge file.
data/en/4.0.constituents  The constituent knowledge file.
data/en/4.0.affix         The affix (prefix/suffix) file.
data/en/4.0.regex         Regular-expression-based morphology guesser.
data/en/tiny.dict         A small example dictionary.
data/en/words/*           A directory full of word lists.
data/en/4.0*.batch        Files containing sentences (both grammatical and
                          ungrammatical) used for testing the link parser.
                          These can be run through the parser with the
                          command "./link-parser < 4.0.*.batch".
data/ru/*                 A full-fledged Russian dictionary.
data/ar/*                 An Arabic dictionary.
data/fa/*                 A Persian (Farsi) dictionary.
data/de/*                 A small prototype German dictionary.
data/lt/*                 A small prototype Lithuanian dictionary.
data/id/*                 A small prototype Indonesian dictionary.
data/he/*                 An experimental Hebrew dictionary.
data/tr/*                 An experimental Turkish dictionary.
morphology/ar             An Arabic morphology analyzer.
morphology/fa             A Persian morphology analyzer.
COPYING                   The license for this code and data.
ChangeLog                 A compendium of recent changes.
configure                 The GNU configuration script.
autogen.sh                Developer's configure maintenance tool.
msvc9, msvc12             Microsoft Visual C/C++ project files.
UNPACKING and signature verification:
-------------------------------------
The system is distributed using the normal tar.gz format; it can be
extracted using the "tar -zxf link-grammar.tar.gz" command at the
command line.
The files have been digitally signed to make sure that there was no
corruption of the dataset during download, and to help ensure that
no malicious changes were made to the code internals by third
parties. The signatures can be checked with the gpg command:
gpg --verify link-grammar-5.3.4.tar.gz.asc
which should generate output identical to (except for the date):
gpg: Signature made Thu 26 Apr 2012 12:45:31 PM CDT using RSA key ID E0C0651C
gpg: Good signature from "Linas Vepstas (Hexagon Architecture Patches) <linas@codeaurora.org>"
gpg: aka "Linas Vepstas (LKML) <linasvepstas@gmail.com>"
Alternately, the md5 check-sums can be verified. These do not provide
cryptographic security, but they can detect simple corruption. To
verify the check-sums, issue "md5sum -c MD5SUM" at the command line.
CREATING the system:
--------------------
To compile the link-grammar shared library and demonstration program,
at the command line, type:
./configure
make
To install, change user to "root" and say
make install
ldconfig
This will install the liblink-grammar.so library into /usr/local/lib,
the header files in /usr/local/include/link-grammar, and the
dictionaries into /usr/local/share/link-grammar. Running 'ldconfig'
will rebuild the shared library cache.
Editline
--------
If libedit-dev is installed, then the arrow keys can be used to edit
the input to the link-parser tool; the up and down arrow keys will
recall previous entries. You want this; it makes testing and
editing much easier. Note, however, that most versions of editline are
not UTF-8 capable, and so won't work, for example, with the Russian
dictionaries. A UTF-8 enabled version of libedit can be found here:
http://www.thrysoee.dk/editline/
If you use the above, be sure to say:
./configure --enable-widec
when building it, otherwise you won't actually get the UTF8 support!
Attention: the above configure is for libedit, not for link-grammar!
(In addition, you will need to uninstall the system default editline
in order to get the above. You may also need to set the environment
variable PKG_CONFIG_PATH to include /usr/local/lib/pkgconfig)
Use of editline in the link-parser can be disabled by saying:
./configure --disable-editline
Note: UTF-8 support for libedit is still missing in Ubuntu 14.04 and
Mint 17 Qiana. See https://bugs.launchpad.net/linuxmint/+bug/1389438
and https://bugs.launchpad.net/ubuntu/+source/libedit/+bug/1375921
Java Bindings
-------------
By default, the Makefiles attempt to build the Java bindings.
The use of the Java bindings is *OPTIONAL*; you do not need these if
you do not plan to use link-grammar with Java. You can skip building
the Java bindings by disabling them as follows:
./configure --disable-java-bindings
If JAVA_HOME isn't set, if jni.h isn't found, or if ant isn't found,
then the java bindings will not be built.
Python Bindings
---------------------
The python bindings are NOT built by default. To enable this, run
configure as follows:
./configure --enable-python-bindings
The use of the Python bindings is *OPTIONAL*; you do not need these if
you do not plan to use link-grammar with python. If you do enable the
python bindings, be sure to install the python-devel package.
The linkgrammar.py module provides a high-level interface in Python.
The example.py script provides a demo, and tests.py runs unit tests.
Install location
----------------
The /usr/local install target can be over-ridden using the
standard GNU configure --prefix option, so for example:
./configure --prefix=/opt/link-grammar
By using pkg-config (see below), non-standard install locations
can be automatically detected.
Configure help
--------------
Additional config options are printed by
./configure --help
The system has been tested and works well on 32- and 64-bit Linux
systems, FreeBSD, and MacOSX, as well as on many Microsoft Windows
systems under various Windows development environments.
Specific OS-dependent notes follow.
BUILDING on MacOS:
------------------
Plain-vanilla Link Grammar should compile and run on Apple MacOSX
just fine, as described above. At this time, there are no reported
issues.
The language bindings for python and java may require additional
packages to be installed. A working editline is nice, since it
allows you to use the arrow keys in the command-line client.
See http://www.macports.org/ to find these.
You almost surely do not need a Mac portfile; but you can still
find one here:
http://trac.macports.org/browser/trunk/dports/textproc/link-grammar/Portfile
It does not currently specify any additional steps to perform.
If you do NOT need the java bindings, you should almost surely
configure with
./configure --disable-java-bindings
By default, java requires a 64-bit binary, and not all MacOS systems
have a 64-bit devel environment installed.
If you do want Java bindings, be sure to set the JDK_HOME environment
variable to wherever <Headers/jni.h> is. Set the JAVA_HOME variable
to the location of the java compiler. Make sure you have ant
installed.
BUILDING on Windows
-------------------
There are three different ways in which link-grammar can be compiled
on Windows. One way is to use Cygwin, which provides a Linux
compatibility layer for Windows. Unfortunately, the Cygwin system
is not compatible with Java for Windows. Another way is to use the
MSVC system. A third way is to use the MinGW system, which uses the
GNU toolset to compile Windows programs.
Link-grammar requires a working version of POSIX-standard regex
libraries. Since these are not provided by Microsoft, a copy must
be obtained elsewhere. One popular choice is TRE, available at:
http://laurikari.net/tre/
Another popular choice is PCRE, 'Perl-Compatible Regular Expressions',
available at:
http://www.pcre.org/
Recent 32 and 64-bit binaries can be found at:
http://www.airesoft.co.uk/pcre
Older 32-bit binaries are at:
http://gnuwin32.sourceforge.net/packages/regex.htm
See also:
http://ftp.gnome.org/pub/gnome/binaries/win32/dependencies/regex.README
By default, the library is configured to create a DLL. If you want
to build a static library instead, the macro LINK_GRAMMAR_STATIC must
be defined before the inclusion of any header files, both when compiling
the link-grammar library and when compiling the application that uses it.
Other compiler settings will, of course, also have to be changed to
create a static library.
The different build methods below are NOT regularly tested, and
some link-grammar versions may have build issues. If you are an
experienced Windows developer who knows how to make things work
in the Microsoft environment, your help would be appreciated!
BUILDING on Windows (Cygwin)
----------------------------
The easiest way to have link-grammar working on MS Windows is to
use Cygwin, a Linux-like environment for Windows that makes it possible
to port software running on POSIX systems to Windows. Download and
install Cygwin from http://www.cygwin.com/
Unfortunately, the Cygwin system is not compatible with Java, so if
you need the Java bindings, you must use MSVC or MinGW, below.
BUILDING on Windows (MinGW)
---------------------------
Another way to build link-grammar is to use MinGW/MSYS, which
uses the GNU toolset to compile programs for Windows. This
is probably the easiest way to obtain workable Java bindings for
Windows. Download and install MinGW, MSYS and MSYS-DTK from
http://mingw.org.
Then build and install link-grammar with
./configure
make
make install
If you used the standard installation paths, the directory /usr/ is
mapped to C:\msys\1.0, so after 'make install', the libraries and
executable will be found at C:\msys\1.0\local\bin and the dictionary
files at C:\msys\1.0\local\share\link-grammar.
In order to use the Java bindings you'll need to build two extra
DLLs, by running the following commands from the link-grammar base
directory:
cd link-grammar
gcc -g -shared -Wall -D_JNI_IMPLEMENTATION_ -Wl,--kill-at \
.libs/analyze-linkage.o .libs/and.o .libs/api.o \
.libs/build-disjuncts.o .libs/constituents.o \
.libs/count.o .libs/disjuncts.o .libs/disjunct-utils.o \
.libs/error.o .libs/expand.o .libs/extract-links.o \
.libs/fast-match.o .libs/idiom.o .libs/massage.o \
.libs/post-process.o .libs/pp_knowledge.o .libs/pp_lexer.o \
.libs/pp_linkset.o .libs/prefix.o .libs/preparation.o \
.libs/print-util.o .libs/print.o .libs/prune.o \
.libs/read-dict.o .libs/read-regex.o .libs/regex-morph.o \
.libs/resources.o .libs/spellcheck-aspell.o \
.libs/spellcheck-hun.o .libs/string-set.o .libs/tokenize.o \
.libs/utilities.o .libs/word-file.o .libs/word-utils.o \
-o /usr/local/bin/link-grammar.dll
gcc -g -shared -Wall -D_JNI_IMPLEMENTATION_ -Wl,--kill-at \
.libs/jni-client.o /usr/local/bin/link-grammar.dll \
-o /usr/local/bin/link-grammar-java.dll
This will create link-grammar.dll and link-grammar-java.dll in the
directory c:\msys\1.0\local\bin . These files, together with
link-grammar-*.jar, will be used by Java programs.
Make sure that this directory is in the %PATH% setting, as otherwise
the DLLs will not be found.
BUILDING on Windows (MSVC)
--------------------------
Microsoft Visual C/C++ project files can be found in the msvc12
and msvc9 directories. MSVC6 build files are also provided;
however, this compiler is deprecated due to the lack of locale
support. In particular, the Russian dictionaries cannot work
with MSVC6!
Please note that the regex package, which includes libraries and
header files, must be separately downloaded and installed, as
described above. The MSVC project files *MUST* be modified to
indicate the correct location of the regex libraries.
The build files make use of two environment variables, GNUREGEX and
JAVA_HOME.
-- GNUREGEX must be pointing to an unzipped gnuwin32-regex
distribution.
-- JAVA_HOME must be pointing to a locally installed JDK.
Those two can be set either as system environment variables (Windows
users are supposed to know how to do this :) or as MSVC12/MSVC9 user
macros. But just in case you don't, here's how:
1) Start > Control Panel > System (Remember, in Vista or Windows 7,
you need to switch to "Classic View" or "Large icons",
respectively to see the System icon).
2) "Advanced system settings" (or "Advanced" tab under XP)
3) On all versions you will see a button with the caption
"Environment Variables"; press it. (ALL REMAINING STEPS
ARE THE SAME ON XP, VISTA, AND 7)
4) You now see two lists of environment variables... the top one
says "User variables for <yourusernamehere>" and is localized to
your user account, the other says "System variables" and applies
to ALL user accounts on that computer.
5) Press the "New ..." button under the list for which you want the
variables to be valid: ALL accounts or just your own
(either way the following steps remain the same)
6) In the "Variable name:" box, enter "GNUREGEX".
7) In the "Variable value" box, enter the path to your installation
of GNUREGEX (on my system this is "C:\Program Files (x86)\GnuWin32"
as I am on Windows 7 Ultimate x64) then press "OK"
8) Press the same "New ..." button and this time in the "Variable
name" box enter "JAVA_HOME", and in the "Variable value" box
enter the path to your Java SDK root folder. (IMPORTANT NOTE: On
some systems this variable may already be defined automatically
by the JAVA SDK installation! You should check the variables
lists before creating a new one to avoid any conflict).
9) Press "OK" and close all Windows opened during the above steps.
If you were running MSVC++ or your chosen development environment
whilst performing the above steps, you should restart it! Once
restarted you should be able to build the latest version of the
code.
RUNNING the program:
--------------------
To run the program issue the Unix command:
./link-parser
This starts the program. The program has many user-settable variables
and options. These can be displayed by entering !var at the link-parser
prompt. Entering !help will display some additional commands.
The dictionaries contain some utf-8 punctuation. These may generate
errors for users in a non-utf-8 locale, such as the "C" locale.
The locale can be set, for example, by saying
export LANG=en_US.UTF-8
at the shell prompt.
By default, the parser will use dictionaries at the installed location
(typically in /usr/local/share). Other locations can be specified on
the command line; for example:
link-parser ../path/to-my/modified/data/en
When accessing dictionaries in non-standard locations, the standard
file-names are still assumed (i.e. 4.0.dict, 4.0.affix, etc.)
The Russian dictionaries are in data/ru. Thus, the Russian parser
can be started as:
link-parser data/ru
If you see errors similar to this:
Warning: The word "encyclop" found near line 252 of en/4.0.dict
matches the following words:
encyclop
This word will be ignored.
then your UTF-8 locales are either not installed or not configured.
The shell command `locale -a` should list en_US.utf8 as a locale.
If not, then you need to `dpkg-reconfigure locales` and/or run
`update-locale` or possibly `apt-get install locales`, or
combinations or variants of these, depending on your operating
system.
TESTING the program:
--------------------
The program can run in batch mode for testing the system on a large
number of sentences. The following command runs the parser on
a file called 4.0.batch
./link-parser < 4.0.batch
The line "!batch" near the top of 4.0.batch turns on batch mode. In
this mode sentences labeled with an initial "*" should be rejected
and those not starting with a "*" should be accepted. The current
batch file does report some errors, as do the files "4.0.biolg.batch"
and "4.0.fixes.batch". Work is ongoing to fix these.
The "4.0.fixes.batch" file contains many thousands of sentences that
have been fixed since the original 4.1 release of link-grammar. The
"4.0.biolg.batch" contains biology/medical-text sentences from the
BioLG project.
The following numbers are subject to change, but, at this time, the
number of errors one can expect to observe in each of these files
is as follows:
en/4.0.batch: 61 errors
en/4.0.fixes.batch: 401 errors
lt/4.0.batch: 17 errors
ru/4.0.batch: 31 errors
The bindings/python directory contains a unit test for the python
bindings. It also performs several basic checks that stress the
link-grammar libraries.
USING the parser in your own applications:
------------------------------------------
There is an API (application program interface) to the parser. This
makes it easy to incorporate it into your own applications. The API
is documented on the web site.
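As a rough illustration (not part of the official API documentation),
a minimal C program using the library might look like the sketch
below; the function names follow the 5.x C API, but check the exact
signatures against the installed link-includes.h header and the web
site documentation.

    #include <stdio.h>
    #include <link-grammar/link-includes.h>

    int main(void)
    {
        /* Load the English dictionary and create default parse options. */
        Dictionary    dict = dictionary_create_lang("en");
        Parse_Options opts = parse_options_create();

        /* Tokenize and parse one sentence. */
        Sentence sent = sentence_create("This is a test.", dict);
        int num_linkages = sentence_parse(sent, opts);

        if (num_linkages > 0)
        {
            /* Extract the lowest-cost linkage and print its ASCII diagram. */
            Linkage linkage = linkage_create(0, sent, opts);
            char *diagram = linkage_print_diagram(linkage, 1, 80);
            printf("%s", diagram);
            linkage_free_diagram(diagram);
            linkage_delete(linkage);
        }

        sentence_delete(sent);
        parse_options_delete(opts);
        dictionary_delete(dict);
        return 0;
    }

Such a program can be compiled and linked using the pkg-config flags
described further below, for example:

    cc example.c -o example `pkg-config --cflags --libs link-grammar`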
USING CMake:
------------
The FindLinkGrammar.cmake file can be used to test for and set up
compilation in CMake-based build environments.
USING pkg-config:
-----------------
To make compiling and linking easier, the current release uses
the pkg-config system. To determine the location of the link-grammar
header files, say `pkg-config --cflags link-grammar`. To obtain
the location of the libraries, say `pkg-config --libs link-grammar`.
Thus, for example, a typical makefile might include the targets:
    .c.o:
        cc -O2 -g -Wall -c $< `pkg-config --cflags link-grammar`

    $(EXE): $(OBJS)
        cc -g -o $@ $^ `pkg-config --libs link-grammar`
JAVA bindings:
--------------
This release includes Java bindings. Their use is optional.
The bindings will be built automatically if jni.h can be found.
Some common java JVM distributions (most notably, the ones from Sun)
place this file in unusual locations, where it cannot be
automatically found. To remedy this, make sure that JAVA_HOME is
set. The configure script looks for jni.h in $JAVA_HOME/Headers
and in $JAVA_HOME/include; it also examines corresponding locations
for $JDK_HOME. If jni.h still cannot be found, specify the location
with the CPPFLAGS variable: so, for example,
export CPPFLAGS="-I/opt/jdk1.5/include/ -I/opt/jdk1.5/include/linux"
or
export CPPFLAGS="-I/c/java/jdk1.6.0/include/ -I/c/java/jdk1.6.0/include/win32/"
Please note that the use of /opt is non-standard, and most system
tools will fail to find packages installed there.
The building of the Java bindings can be disabled by configuring as
below:
./configure --disable-java-bindings
Using JAVA
----------
This release provides java files that offer three ways of accessing
the parser. The simplest way is to use the org.linkgrammar.LinkGrammar
class; this provides a very simple Java API to the parser.
The second possibility is to use the LGService class. This implements
a TCP/IP network server, providing parse results as JSON messages.
Any JSON-capable client can connect to this server and obtain parsed
text.
The third possibility is to use the org.linkgrammar.LGRemoteClient
class, and in particular, the parse() method. This class is a network
client that connects to the JSON server, and converts the response
back to results accessible via the ParseResult API.
The above-described code will be built if Apache 'ant' is installed.
Using the Network Server
------------------------
The network server can be started by saying:
java -classpath linkgrammar.jar org.linkgrammar.LGService 9000
The above starts the server on port 9000. If the port is omitted,
help text is printed. This server can be contacted directly via
TCP/IP; for example:
telnet localhost 9000
(Alternately, use netcat instead of telnet). After connecting, type
in:
text: this is an example sentence to parse
The returned bytes will be a JSON message providing the parses of
the sentence. By default, the ASCII-art parse of the text is not
transmitted. This can be obtained by sending messages of the form:
storeDiagramString:true, text: this is a test.
Spell Checking:
---------------
The parser will run a spell-checker at an early stage if it
encounters a word that it does not know and cannot guess based on
morphology. The configure script looks for the aspell or hunspell
spell-checkers; if the aspell devel environment is found, then
aspell is used, otherwise hunspell is used.
Spell checking may be disabled at runtime, in the link-parser client
with the !spell flag. Enter !help for more details.
Corpus Statistics:
------------------
The parser now contains some experimental code for using corpus
statistics to provide a parse ranking, and to assign WordNet word
senses to words, based on their grammatical usage. An overview of
the idea is given on the OpenCog blog, here:
http://brainwave.opencog.org/2009/01/12/determining-word-senses-from-grammatical-usage/
It is planned that the Corpus statistics database will be used to
guide the SAT solver.
To enable the corpus statistics, specify
./configure --enable-corpus-stats
prior to compiling. The database itself can be downloaded from
http://www.abisource.com/downloads/link-grammar/sense-dictionary/
or
http://gnucash.org/linas/nlp/data/linkgrammar-wsd/
The data is contained in an sqlite3 database file,
disjuncts.20090430.db.bz2
Unzip this file (using bunzip2), rename it to "disjuncts.db", and
place it in the subdirectory "sql", in the same directory that
contains the "en" directory. For default unix installations, the
final location would be
/usr/local/share/link-grammar/sql/disjuncts.db
where, by comparison, the usual dictionary would be at
/usr/local/share/link-grammar/en/4.0.dict
After this is installed, parse ranking scores should be printed
automatically, as floating-point numbers: for example:
Unique linkage, cost vector = (CORP=4.4257 UNUSED=0 DIS=1 AND=0 LEN=5)
Lower numbers are better. The scores can be interpreted as -log_2
of a certain probability, so the lower the number, the higher the
probability.
The display of disjunct scores can be enabled with the !disjuncts
flag, and senses with the !senses flag, at the link-parser prompt.
Entering !var and !help will show all flags. Multiple parses are
sorted and displayed in order from lowest to highest cost; the sort
order can be set by saying !cost=1 for the traditional sort, and
!cost=2 for corpus-based cost. Output similar to the below should
be printed:
linkparser> !disjunct
Showing of disjunct used turned on.
linkparser> !cost=2
cost set to 2
linkparser> !sense
Showing of word senses turned on.
linkparser> this is a test
Found 1 linkage (1 had no P.P. violations)
Unique linkage, cost vector = (CORP=4.4257 UNUSED=0 DIS=1 AND=0 LEN=5)
+--Ost--+
+-Ss*b+ +-Ds-+
| | | |
this.p is.v a test.n
2 is.v dj=Ss*b- Ost+ sense=be%2:42:02:: score=2.351568
2 is.v dj=Ss*b- Ost+ sense=be%2:42:05:: score=2.143989
2 is.v dj=Ss*b- Ost+ sense=be%2:42:03:: score=1.699292
4 test.n dj=Ost- Ds- sense=test%1:04:00:: score=0.000000
this.p 0.0 0.695 Wd- Ss*b+
is.v 0.0 7.355 Ss*b- Ost+
a 0.0 0.502 Ds+
test.n 1.0 9.151 Ost- Ds-
Note that the sense labels are not terribly accurate; the verb "to be"
is particularly hard to tag correctly.
MULTI-THREADED USE:
-------------------
It is safe to use link-grammar for parsing in multiple threads, once
the dictionaries have been loaded. The dictionary loading itself is
not thread-safe; it is not protected in any way. Thus, link-grammar
should not be used from multiple threads until the dictionary has
been loaded. Different threads may use different dictionaries.
Parse options can be set on a per-thread basis, with the exception
of verbosity, which is a global, shared by all threads. It is the
only global, outside of the Java bindings.
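As an illustration of this usage pattern (a sketch, not code from
this package; check the exact function signatures against
link-includes.h), the following program loads the dictionary once in
the main thread and then parses from two worker threads, each with
its own Parse_Options and Sentence objects:

    #include <stdio.h>
    #include <pthread.h>
    #include <link-grammar/link-includes.h>

    static Dictionary dict;   /* shared; treated as read-only once loaded */

    static void *worker(void *arg)
    {
        const char *text = (const char *) arg;
        Parse_Options opts = parse_options_create();  /* per-thread options */
        Sentence sent = sentence_create(text, dict);
        if (sentence_parse(sent, opts) > 0)
            printf("parsed: %s\n", text);
        sentence_delete(sent);
        parse_options_delete(opts);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        dict = dictionary_create_lang("en");  /* load before starting threads */
        pthread_create(&t1, NULL, worker, "this is a test");
        pthread_create(&t2, NULL, worker, "the cat sat on the mat");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        dictionary_delete(dict);
        return 0;
    }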
For multi-threaded Java use, a per-thread variable is needed. This
must be enabled during the configure stage:
./configure --enable-pthreads
The following exceptions and special notes apply:
utilities.c -- has global "verbosity". Memory usage code (disabled
by default) also has a global, and so requires
pthreads for tracking memory usage.
jni-client.c - uses a per-thread struct. This should somehow be
attached to JNIEnv. A Java JNI expert is
needed.
malloc-dbg.c - not thread safe, not normally used;
only for debugging.
prefix.c - not thread-safe, but doesn't need to be; used only
during initialization, and only if binreloc is turned
on. But binreloc is never used by anyone!?
pp_lexer.c -- autogened code, original lex sources lost.
This is only used when reading dictionaries,
during initialization, and so doesn't need
to be thread safe.
BioLG merger:
-------------
As of version 4.5.0 (April 2009), the most important parts of the
BioLG project have been merged. The current version of link-grammar
has superior parse coverage to BioLG on all texts, including
biomedical texts. The original BioLG test suite can be found in
data/en/4.0.biolg.batch.
The following changes in BioLG have NOT been merged:
-- Part of speech hinting. The BioLG code can accept part-of-speech
hints for unknown words.
-- XML I/O. The BioLG code can output parsed text in a certain
idiosyncratic XML format.
-- "term support". Link-grammar does support "entity placeholders",
which provides an equivalent function.
-- The link type CH. This was a large, intrusive, incompatible change
to the dictionary, and it is not strictly required -- there is a
better, alternative way of handling adj-noun-adj-noun chains commonly
seen in biomedical text, and this has been implemented.
All other BioLG changes, and in particular, extensive dictionary fixes,
as well as regex morphology handling, have been incorporated.
Medical Terms Merger
--------------------
Many, but not all, of the "medical terms" from Peter Szolovits have
been merged into version 4.3.1 (January 2008) of link-grammar. The
original project page was at:
http://groups.csail.mit.edu/medg/projects/text/lexicon.html
The following "extra" files were either merged directly, renamed, or
skipped (omitted):
/extra.1: -- merged
/extra.2: -- skip, too big
/extra.3: -- skip, too big
/extra.4: -- /en/words/words-medical.v.4.2:
/extra.5: -- /en/words/words-medical.v.4.1:
/extra.6: -- /en/words/words-medical.adj.2:
/extra.7: -- /en/words/words-medical.n.p
/extra.8: -- skip, too big
/extra.9: -- skip, random names
/extra.10: -- /en/words/words-medical.adv.1:
/extra.11: -- /en/words/words-medical.v.4.5:
/extra.12: -- skip, too big
/extra.13: -- /en/words/words-medical.v.4.3:
/extra.14: -- /en/words/words-medical.prep.1
/extra.15: -- /en/words/words-medical.adj.3:
/extra.16: -- /en/words/words-medical.v.2.1:
/extra.17: -- skip, too big
To make use of the "skipped" files, download the original extension,
gut the contents of "extra.dict" except for the parts referring to the
skipped files above, and then append to 4.0.dict (as per original
instructions).
It's not at all clear that the "skipped" files improve parse accuracy
in any way; they may, in fact, damage accuracy.
Fat Links:
----------
As of version 4.7.0 (September 2010), parsing using "fat links" has
been disabled by default, and is now deprecated. The function is
still there, and can be turned on by specifying the !use-fat=1 command,
or by calling parse_options_use_fat_links(TRUE) from programs.
As of version 4.7.12 (May 2013), the "fat link" code is no longer
compiled by default. To obtain the fat-link version, ./configure
must be run with the --enable-fat-links --disable-sat-solver flags.
Enabling this will generate a lot of warning messages during
compilation.
As of version 5.2.0 (December 2014) the "fat link" code has been
removed. The fat-link code consisted of about 5 KLOC or about 1/6th
of the total code. About 23 KLOC of the core parser code remains.
Users of the Russian dictionaries must use versions prior to this one
to get Russian sentences with conjunctions in them to parse.
Older versions of the link-grammar parser used "fat links" to
support conjunctions (and, or, but, ...). However, this leads
to a number of complications, including poor performance due to
a combinatorial explosion of linkage possibilities, as well as
an excessively complex parse algorithm.
SAT solver:
-----------
The current parser uses an algorithm that runs in O(N^3) time, for
a sentence containing N words.
The SAT solver aims to replace this parser with an algorithm based
on Boolean Satisfiability Theory; specifically using the MiniSAT
solver. The SAT solver has a bit more overhead for shorter sentences,
but is faster for long sentences. To work properly, it needs to be
attached to a parse ranking system. This work is incomplete,
although the prototype works. It is not yet well-integrated with
the system, and needs cleanup. In particular, it fails to handle
morphemes correctly (i.e. it does not use compute_chosen_words() in
SATEncoder::create_linkage()); this needs fixing. The
chosen_disjuncts array is not filled out, and thus there is no
awareness of disjunct costs, which is the most basic parse ranking
that we've got.
The SAT solver is enabled by default. It can be disabled by specifying
./configure --disable-sat-solver
prior to compiling.
Directional Links
-----------------
Directional links are needed for some languages, such as Lithuanian,
Turkish and other free word-order languages. The goal is to have
a link clearly indicate which word is the head word, and which is
the dependent. This is achieved by prefixing connectors with
a single *lower case* letter: h or d, indicating 'head' or 'dependent'.
The linkage rules are such that h matches either nothing or d, and
d matches h or nothing. This is a new feature in version 5.1.0.
Phonetics
---------
A/An phonetic determiners before consonants/vowels are handled by a
new PH link type, linking the determiner to the word immediately
following it. Status: mostly done, more testing needed. The rules
could be simplified. Many special-case nouns are unfinished.
ADDRESSES
---------
If you have any questions, or find any bugs, please feel free
to send a note to the mailing list:
link-grammar@googlegroups.com
Although all messages should go to the mailing list, the current
maintainers can be contacted at:
Linas Vepstas - <linasvepstas@gmail.com>
Dom Lachowicz - <domlachowicz@gmail.com>
A complete list of authors and copyright holders can be found in the
AUTHORS file. The original authors of the Link Grammar parser are:
Daniel Sleator sleator@cs.cmu.edu
Computer Science Department 412-268-7563
Carnegie Mellon University www.cs.cmu.edu/~sleator
Pittsburgh, PA 15213
Davy Temperley dtemp@theory.esm.rochester.edu
Eastman School of Music 716-274-1557
26 Gibbs St. www.link.cs.cmu.edu/temperley
Rochester, NY 14604
John Lafferty lafferty@cs.cmu.edu
Computer Science Department 412-268-6791
Carnegie Mellon University www.cs.cmu.edu/~lafferty
Pittsburgh, PA 15213
Mathematical Theory:
--------------------
The mathematical theory of link-grammar is deeper and more
interesting than the original, foundational papers on it let on.
This section provides a random list of remarks about this.
-- Although link-grammar links are unoriented, a de facto direction
can be given to them that is completely consistent with a
dependency grammar.
-- Dependency-grammar arrows are (see the formal statement at the
end of this section):
* anti-reflexive (a word cannot depend on itself)
* anti-symmetric (if Word1 depends on Word2, then Word2 cannot
depend on Word1; so e.g. determiners depend on nouns)
* anti-transitive (if Word1 depends on Word2 and Word2 depends
on Word3, then Word1 cannot depend directly on Word3)
The last property means that dependency graphs are always
skeletons of limits, where by "limit" (in the sense of category
theory) we mean that the dependency arrow is the universal, unique
arrow through which all other dependencies must pass.
Note, however, that skeletons are not categories: the first property
means there are no identity morphisms, and the third property says
that arrows are not composable.
-- Link types can be handled with "type theory". In particular,
link types can be mapped to types that appear in categorial
grammars. The nice thing about link-grammar is that the link
types form a type system that is much easier to use and comprehend
than that of categorial grammar, and yet can be directly converted
to that system! That is, link-grammar is completely compatible
with categorial grammar, and is easier to use.
See, for example, the work by Bob Coecke on category theory and
grammar; there, it becomes abundantly clear that the category
theoretic approach is equivalent to link-grammar, even though
this is not stated anywhere.
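For concreteness, writing w1 -> w2 for "w1 depends on w2" (our own
shorthand, not notation from the original papers), the three arrow
properties listed above can be stated formally as:

    \forall w:              \neg (w \to w)
    \forall w_1, w_2:       (w_1 \to w_2) \Rightarrow \neg (w_2 \to w_1)
    \forall w_1, w_2, w_3:  (w_1 \to w_2) \wedge (w_2 \to w_3)
                              \Rightarrow \neg (w_1 \to w_3)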
TODO -- Working Notes:
----------------------
Some working notes.
Easy to fix: provide a more uniform API to the constituent tree.
i.e. provide a word index. Also, provide a clear word API
showing word extent, suffix, etc.
Capitalized first words:
There are subtle technical issues for handling capitalized first
words. This needs to be fixed. See tokenize.c circa line 586 for
details. Also line 1131.
Maybe capitalization could be handled in the same way that a/an
could be handled! After all, it's essentially a nearest-neighbor
phenomenon!
Zero/phantom words: Expressions such as "Looks good" have an implicit
"it" (also called a zero-it or phantom-it) in them; that is, the
sentence should really parse as "(it) looks good". The dictionary
could be simplified by admitting such phantom words explicitly,
rather than modifying the grammar rules to allow such constructions.
Other examples, with the phantom word in parenthesis, include:
* I ate all (of) the cookies.
* I taught him (how) to swim.
* I told him (that) it was gone.
* (It) looks good.
* (You) go home!
One possible solution to the unvoiced-word problem might be to
allow the LG rules to insert alternatives during the early culling
stages. This avoids the need to pre-insert all possible
alternatives during tokenization...
See github https://github.com/opencog/link-grammar/issues/224
punctuation, zero-copula, zero-that:
Poorly punctuated sentences cause problems: for example:
"Mike was not first, nor was he last."
"Mike was not first nor was he last."
The one without the comma currently fails to parse. How can we
deal with this in a simple, fast, elegant way? Similar questions
for zero-copula and zero-that sentences.
Bad grammar: When a sentence fails to parse, look for:
* confused words: its/it's, there/their/they're, to/too, your/you're ...
* missing apostrophes in possessives: "the peoples desires"
* determiner agreement errors: "a books"
* aux verb agreement errors: "to be hooks up"
Poor linkage choices:
Compare "she will be happier than before" to "she will be more happy
than before." Current parser makes "happy" the head word, and "more"
a modifier w/EA link. I believe the correct solution would be to
make "more" the head (link it as a comparative), and make "happy"
the dependent. This would harmonize rules for comparatives... and
would eliminate/simplify rules for less,more.
However, this idea needs to be double-checked against, e.g. Hudson's
word grammar. I'm confused on this issue ...
Stretchy links:
Currently, some links can act at "unlimited" length, while others
can only be finite-length. e.g. determiners should be near the
noun that they apply to. A better solution might be to employ
a 'stretchiness' cost to some connectors: the longer they are, the
higher the cost. (This eliminates the "unlimited_connector_set"
in the dictionary).
Repulsive parses: Sometimes, the existence of one parse should suggest
that another parse must surely be wrong: if one parse is possible,
then the other parses must surely be unlikely. For example: the
conjunction and.j-g allows "The Great Southern and Western
Railroad" to be parsed as the single name of an entity. However,
it also provides a pattern match for "John and Mike" as a single
entity, which is almost certainly wrong. But "John and Mike" has
an alternative parse, as a conventional-and -- a list of two people,
and so the existence of this alternative (and correct) parse suggests
that perhaps the entity-and is really very much the wrong parse.
That is, the mere possibility of certain parses should strongly
disfavor other possible parses. (Exception: Ben & Jerry's ice
cream; however, in this case, we could recognize Ben & Jerry as the
name of a proper brand; but this is outside of the "normal"
dictionary (?) (but maybe should be in the dictionary!))
More examples: "high water" can have A joining high.a and AN joining
high.n; these two should either be collapsed into one, or one should
be eliminated.
WordNet hinting:
Use WordNet to reduce the number of parses for sentences containing
compound verb phrases, such as "give up", "give off", etc.
incremental parsing: to avoid a combinatorial explosion of parses,
it would be nice to have an incremental parsing, phrase by phrase,
using a Viterbi-like algorithm to obtain the parse. Thus, for example,
the parse of the last half of a long, run-on sentence should not be
sensitive to the parse of the beginning of the sentence.
Doing so would help with combinatorial explosion. So, for example,
if the first half of a sentence has 4 plausible parses, and the
last half has 4 more, then link-grammar reports 16 parses total.
It would be much, much more useful to instead be given the
factored results: i.e. the four plausible parses for the
first half, and the four plausible parses for the last half.
The lower combinatoric stress would ease the burden on
downstream users of link-grammar.
(This somewhat resembles the application of construction grammar
ideas to the link-grammar dictionary).
Caution: watch out for garden-path sentences: