
Commit ab79d57

mjfang27 authored and Stanford NLP committed
added own project
1 parent f18a0fb commit ab79d57

1,391 files changed: +301,307 / -439,118 lines changed


CONTRIBUTING.md

Lines changed: 1 addition & 5 deletions
@@ -8,13 +8,9 @@ However, Stanford CoreNLP is copyright by Stanford. (Technically, by The Board o
 In order for us to continue to be able to dual-license Stanford CoreNLP, we need to make sure that contributions from others do not restrict Stanford from separately licensing the code.
 
 Therefore, we can accept contributions on any of the following terms:
-
 * If your contribution is a bug fix of 6 lines or less of new code, we will accept it on the basis that both you and us regard the contribution as de minimis, and not requiring further hassle.
 * You can declare that the contribution is in the public domain (in your commit message or pull request).
 * You can make your contribution available under a non-restrictive open source license, such as the Revised (or 3-clause) BSD license, with appropriate licensing information included with the submitted code.
-* You can sign and return to us a contributor license agreement (CLA), explicitly licensing us to be able to use the code.
-There is a [Contributor License Agreement for Individuals](http://nlp.stanford.edu/software/CLA/individual.html) and
-a [Contributor License Agreement for Corporations](http://nlp.stanford.edu/software/CLA/corporate.html).
-You can send them to us or contact us at: java-nlp-support@lists.stanford.edu .
+* You can sign and return to us a contributor license agreement (CLA), explicitly licensing us to be able to use the code. You can find these agreements at http://nlp.stanford.edu/software/CLA/ . You can send them to us or contact us at: java-nlp-support@mailman.stanford.edu .
 
 You should do development against our master branch. The project's source code is in utf-8 character encoding. You should make sure that all unit tests still pass. (In general, you will not be able to run our integration tests, since they rely on resources in our filesystem.)

LICENSE.txt

Lines changed: 282 additions & 617 deletions
Large diffs are not rendered by default.

README.md

Lines changed: 5 additions & 40 deletions
@@ -1,52 +1,17 @@
 Stanford CoreNLP
 ================
 
-[![Build Status](https://travis-ci.org/stanfordnlp/CoreNLP.svg?branch=master)](https://travis-ci.org/stanfordnlp/CoreNLP)
+Stanford CoreNLP provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, and mark up the structure of sentences in terms of phrases and word dependencies, and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for Arabic, (mainland) Chinese, French, German, and Spanish. Stanford CoreNLP is an integrated framework, which make it very easy to apply a bunch of language analysis tools to a piece of text. Starting from plain text, you can run all the tools on it with just two lines of code. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, government, and industry.
 
-Stanford CoreNLP provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize and interpret dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases or word dependencies, and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, and Spanish. Stanford CoreNLP is an integrated framework, which make it very easy to apply a bunch of language analysis tools to a piece of text. Starting from plain text, you can run all the tools with just two lines of code. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, industry, and government. The tools variously use rule-based, probabilistic machine learning, and deep learning components.
+The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v3 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute.
 
-The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v3 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute to others.
-
-#### Build Instructions
-
-Several times a year we distribute a new version of the software, which corresponds to a stable commit.
-
-During the time between releases, one can always use the latest, under development version of our code.
-
-Here are some helpful instructions to use the latest code:
-
-#### build with Ant
-
-1. Make sure you have Ant installed, details here: [http://ant.apache.org/](http://ant.apache.org/)
-2. Compile the code with this command: `cd CoreNLP ; ant`
-3. Then run this command to build a jar with the latest version of the code: `cd CoreNLP/classes ; jar -cf ../stanford-corenlp.jar edu`
-4. This will create a new jar called stanford-corenlp.jar in the CoreNLP folder which contains the latest code
-5. The dependencies that work with the latest code are in CoreNLP/lib and CoreNLP/liblocal, so make sure to include those in your CLASSPATH.
-6. When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-models](http://nlp.stanford.edu/software/stanford-english-corenlp-models-current.jar), and [english-models-kbp](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
-
-#### build with Maven
-
-1. Make sure you have Maven installed, details here: [https://maven.apache.org/](https://maven.apache.org/)
-2. If you run this command in the CoreNLP directory: `mvn package` , it should run the tests and build this jar file: `CoreNLP/target/stanford-corenlp-3.7.0.jar`
-3. When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-models](http://nlp.stanford.edu/software/stanford-english-corenlp-models-current.jar), and [english-models-kbp](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
-4. If you want to use Stanford CoreNLP as part of a Maven project you need to install the models jars into your Maven repository. Below is a sample command for installing the Spanish models jar. For other languages just change the language name in the command. To install `stanford-corenlp-models-current.jar` you will need to set `-Dclassifier=models`. Here is the sample command for Spanish: `mvn install:install-file -Dfile=/location/of/stanford-spanish-corenlp-models-current.jar -DgroupId=edu.stanford.nlp -DartifactId=stanford-corenlp -Dversion=3.7.0 -Dclassifier=models-spanish -Dpackaging=jar`
-
-You can find releases of Stanford CoreNLP on [Maven Central](http://search.maven.org/#artifactdetails%7Cedu.stanford.nlp%7Cstanford-corenlp%7C3.6.0%7Cjar).
+You can find releases of Stanford CoreNLP on [Maven Central](http://search.maven.org/#browse%7C11864822).
 
 You can find more explanation and documentation on [the Stanford CoreNLP homepage](http://nlp.stanford.edu/software/corenlp.shtml#Demo).
 
 The most recent models associated with the code in the HEAD of this repository can be found [here](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar).
 
-Some of the larger (English) models -- like the shift-reduce parser and WikiDict -- are not distributed with our default models jar.
-The most recent version of these models can be found [here](http://nlp.stanford.edu/software/stanford-english-corenlp-models-current.jar).
-
-We distribute resources for other languages as well, including [Arabic models](http://nlp.stanford.edu/software/stanford-arabic-corenlp-models-current.jar),
-[Chinese models](http://nlp.stanford.edu/software/stanford-chinese-corenlp-models-current.jar),
-[French models](http://nlp.stanford.edu/software/stanford-french-corenlp-models-current.jar),
-[German models](http://nlp.stanford.edu/software/stanford-german-corenlp-models-current.jar),
-and [Spanish models](http://nlp.stanford.edu/software/stanford-spanish-corenlp-models-current.jar).
-
-For information about making contributions to Stanford CoreNLP, see the file [CONTRIBUTING.md](CONTRIBUTING.md).
+For information about making contributions to Stanford CoreNLP, see the file `CONTRIBUTING.md`.
 
-Questions about CoreNLP can either be posted on StackOverflow with the tag [stanford-nlp](http://stackoverflow.com/questions/tagged/stanford-nlp),
+Questions about CoreNLP can either be posted on StackOverflow with the tag [stanford-nlp](http://stackoverflow.com/questions/tagged/stanford-nlp),
 or on the [mailing lists](http://nlp.stanford.edu/software/corenlp.shtml#Mail).
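The "two lines of code" mentioned in both README versions refer to constructing a `StanfordCoreNLP` pipeline and calling `annotate` on an `Annotation`. A minimal sketch of that usage follows; the annotator list and sample sentence are chosen here purely for illustration, and the relevant models jar is assumed to be on the classpath.

```java
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.util.Properties;

public class PipelineSketch {
  public static void main(String[] args) {
    // Illustrative annotator list; trim it down if you only need tagging or NER.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");

    // The "two lines": build the pipeline, then annotate the raw text.
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation document = new Annotation("Stanford CoreNLP analyzes raw English text.");
    pipeline.annotate(document);

    // Dump the computed annotations in a human-readable form.
    pipeline.prettyPrint(document, System.out);
  }
}
```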

build.gradle

Lines changed: 3 additions & 10 deletions
@@ -11,7 +11,7 @@ sourceCompatibility = 1.8
 targetCompatibility = 1.8
 compileJava.options.encoding = 'UTF-8'
 
-version = '3.7.0'
+version = '3.4.1'
 
 // Gradle application plugin
 mainClassName = "edu.stanford.nlp.pipeline.StanfordCoreNLP"
@@ -41,15 +41,12 @@ sourceSets {
 }
 }
 
-task listDeps {
-doLast {
-configurations.compile.each { File file -> println file.name }
-}
+task listDeps << {
+configurations.compile.each { File file -> println file.name }
 }
 
 dependencies {
 compile fileTree(dir: 'lib', include: '*.jar')
-testCompile fileTree(dir: 'liblocal', include: '*.jar')
 }
 
 // Eclipse plugin setup
@@ -63,7 +60,3 @@ eclipse {
 }
 }
 }
-
-task wrapper(type: Wrapper) {
-gradleVersion = '3.2'
-}

build.xml

Lines changed: 5 additions & 21 deletions
@@ -26,10 +26,6 @@
 <include name="*.jar"/>
 <exclude name="javanlp*"/>
 </fileset>
-<fileset dir="${basedir}/liblocal">
-<include name="*.jar"/>
-<exclude name="javanlp*"/>
-</fileset>
 </path>
 </target>
 
@@ -128,16 +124,6 @@
 <compilerarg value="-Xmaxwarns"/>
 <compilerarg value="10000"/> -->
 </javac>
-<copy todir="${build.path}/edu/stanford/nlp/pipeline/demo">
-<fileset dir="${source.path}/edu/stanford/nlp/pipeline/demo">
-<exclude name="**/*.java"/>
-</fileset>
-</copy>
-<copy todir="${build.path}/edu/stanford/nlp/pipeline">
-<fileset dir="${source.path}/edu/stanford/nlp/pipeline">
-<exclude name="**/*.java"/>
-</fileset>
-</copy>
 </target>
 
 <target name="test" depends="classpath,compile"
@@ -160,7 +146,7 @@
 <target name="itest" depends="classpath,compile"
 description="Run core integration tests">
 <echo message="${ant.project.name}" />
-<junit fork="yes" maxmemory="12g" printsummary="off" outputtoformatters="false" forkmode="perTest" haltonfailure="true">
+<junit fork="yes" maxmemory="8g" printsummary="off" outputtoformatters="false" forkmode="perTest" haltonfailure="true">
 <classpath refid="classpath"/>
 <classpath path="${build.path}"/>
 <classpath path="${data.path}"/>
@@ -178,7 +164,7 @@
 <target name="slowitest" depends="classpath,compile"
 description="Run really slow integration tests">
 <echo message="${ant.project.name}" />
-<junit fork="yes" maxmemory="12g" printsummary="off" outputtoformatters="false" forkmode="perTest" haltonfailure="true">
+<junit fork="yes" maxmemory="8g" printsummary="off" outputtoformatters="false" forkmode="perTest" haltonfailure="true">
 <classpath refid="classpath"/>
 <classpath path="${build.path}"/>
 <classpath path="${data.path}"/>
@@ -313,7 +299,7 @@
 <include name="commons-lang3-3.1.jar"/>
 <include name="xom-1.2.10.jar"/>
 <include name="joda-time.jar"/>
-<include name="jollyday-0.4.9.jar"/>
+<include name="jollyday-0.4.7.jar"/>
 </lib>
 <zipfileset prefix="WEB-INF/data"
 file="/u/nlp/data/pos-tagger/distrib/english-left3words-distsim.tagger"/>
@@ -389,9 +375,7 @@
 <zipfileset prefix="WEB-INF/data"
 file="/u/nlp/data/lexparser/arabicFactored.ser.gz"/>
 <zipfileset prefix="WEB-INF/data"
-file="/u/nlp/data/lexparser/frenchFactored.ser.gz"/>
-<zipfileset prefix="WEB-INF/data"
-file="/u/nlp/data/lexparser/chineseFactored.ser.gz"/>
+file="/u/nlp/data/lexparser/xinhuaFactored.ser.gz"/>
 <zipfileset prefix="WEB-INF/data/chinesesegmenter"
 file="/u/nlp/data/gale/segtool/stanford-seg/classifiers-2010/05202008-ctb6.processed-chris6.lex.gz"/>
 <zipfileset prefix="WEB-INF/data/chinesesegmenter"
@@ -448,7 +432,7 @@
 <include name="xom-1.2.10.jar"/>
 <include name="xml-apis.jar"/>
 <include name="joda-time.jar"/>
-<include name="jollyday-0.4.9.jar"/>
+<include name="jollyday-0.4.7.jar"/>
 </lib>
 <!-- note for John: c:/Users/John Bauer/nlp/stanford-releases -->
 <lib dir="/u/nlp/data/StanfordCoreNLPModels">

data/README

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Small data files (test files, properties files, etc) can go here.
 Don't put large models here. That would cause git to bloat too much.
 
 Open source: Don't put here any file that is copyright or that we
-don't have the rights to redistribute. The contents of this
+don't have the rights to rights to redistribute. The contents of this
 directory appear in our public github repository.
 
 Contact: John Bauer (horatio@gmail.com)

data/edu/stanford/nlp/patterns/surface/example.properties

Lines changed: 5 additions & 5 deletions
@@ -17,16 +17,16 @@ outDir=SPIEDPatternsout
 #Number of threads available on the machine
 numThreads=1
 #***Use these options if you are limited by memory
-batchProcessSents = false
+batchProcessSents = true
 #This name is a misnomer. Max number of *lines* per batch file. Works only for text file format; ser files cannot be broken down
 numMaxSentencesPerBatchFile=100
-saveInvertedIndex=false
+saveInvertedIndex=true
 invertedIndexDirectory=${outDir}/invertedIndex
 #Loading index from invertedIndexDirectory
 #loadInvertedIndex=true
 
 #Useful for memory heavy apps.
-#invertedIndexClass=edu.stanford.nlp.patterns.LuceneSentenceIndex
+invertedIndexClass=edu.stanford.nlp.patterns.LuceneSentenceIndex
 
 
 ### Example for running it on presidents biographies. For more data examples, see the bottom of this file
@@ -43,7 +43,7 @@ saveSentencesSerDir=${outDir}/sents
 #fileFormat=ser
 #file=${outDir}/sents
 
-#We are learning names of presidential candidates, places, and other names. In each line, all text after tabs are ignored in these seed files
+#We are learning names of presidential candidates, places, and other names
 seedWordsFiles=NAME,${DIR}/names.txt;PLACE,${DIR}/places.txt;OTHER,${DIR}/otherpeople.txt
 #Useful for matching lemmas or spelling mistakes
 fuzzyMatch=false
@@ -103,7 +103,7 @@ targetAllowedTagsInitialsStr=NAME,N;OTHER,N
 computeAllPatterns = true
 
 #Options: MEMORY, DB, LUCENE. If using SQL for storing patterns for each token --- populate SQLConnection class, that is provide those properties!
-storePatsForEachToken=MEMORY
+storePatsForEachToken=LUCENE
 #***If your code is running too slow, try to reduce this number. Samples % of sentences for learning patterns
 sampleSentencesForSufficientStats=1.0
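The keys toggled here (`batchProcessSents`, `saveInvertedIndex`, `invertedIndexClass`, `storePatsForEachToken`) are SPIED's memory-related switches, stored as flat `key=value` pairs. Purely as an illustration of reading that format, and not as the loading code CoreNLP itself uses, a sketch with plain `java.util.Properties` might look like this:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SpiedConfigPeek {
  public static void main(String[] args) throws IOException {
    // Path as it appears in this repository.
    String path = "data/edu/stanford/nlp/patterns/surface/example.properties";

    Properties props = new Properties();
    try (FileInputStream in = new FileInputStream(path)) {
      props.load(in);  // note: plain Properties does not expand ${outDir}-style placeholders
    }

    // The memory-related switches changed in this commit.
    for (String key : new String[]{"batchProcessSents", "saveInvertedIndex",
                                   "invertedIndexClass", "storePatsForEachToken"}) {
      System.out.println(key + " = " + props.getProperty(key));
    }
  }
}
```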

data/edu/stanford/nlp/process/ptblexer.gold

Lines changed: 1 addition & 1 deletion
@@ -885,7 +885,7 @@ origins
 ''
 Libyan
 ruler
-Mu`ammar
+Muammar
 al-Qaddafi
 referred
 to
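ptblexer.gold stores expected tokenizer output, one token per line, and this change updates the expected spelling of a single token. A sketch of producing that one-token-per-line layout with `PTBTokenizer` follows; the input sentence is made up for illustration and is not the test document behind the gold file.

```java
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;

import java.io.StringReader;

public class TokenizeSketch {
  public static void main(String[] args) {
    // Illustrative input only; the gold file covers a much larger test text.
    String text = "Libyan ruler Muammar al-Qaddafi referred to it.";

    // Empty options string keeps the default PTB tokenization behavior.
    PTBTokenizer<CoreLabel> tokenizer =
        new PTBTokenizer<>(new StringReader(text), new CoreLabelTokenFactory(), "");

    // One token per line, matching the layout checked by ptblexer.gold.
    while (tokenizer.hasNext()) {
      System.out.println(tokenizer.next().word());
    }
  }
}
```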

data/edu/stanford/nlp/ud/feature_map.txt

Lines changed: 27 additions & 71 deletions
@@ -6,25 +6,27 @@
 * VBD VerbForm=Fin|Mood=Ind|Tense=Past
 * VBN Tense=Past|VerbForm=Part
 * VBP VerbForm=Fin|Mood=Ind|Tense=Pres
-* MD VerbForm=Fin
 * JJ Degree=Pos
 * JJR Degree=Cmp
 * JJS Degree=Sup
+* RB Degree=Pos
+* RBR Degree=Cmp
+* RBS Degree=Sup
 * CD NumType=Card
 am VBP VerbForm=Fin|Mood=Ind|Tense=Pres|Person=1|Number=Sing
 was VBD VerbForm=Fin|Mood=Ind|Tense=Past|Number=Sing
-i PRP Number=Sing|Person=1|PronType=Prs|Case=Nom
+i PRP Number=Sing|Person=1|PronType=Prs
 you PRP Person=2|PronType=Prs
-he PRP Number=Sing|Person=3|Gender=Masc|PronType=Prs|Case=Nom
-she PRP Number=Sing|Person=3|Gender=Fem|PronType=Prs|Case=Nom
+he PRP Number=Sing|Person=3|Gender=Masc|PronType=Prs
+she PRP Number=Sing|Person=3|Gender=Fem|PronType=Prs
 it PRP Number=Sing|Person=3|Gender=Neut|PronType=Prs
-we PRP Number=Plur|Person=1|PronType=Prs|Case=Nom
-they PRP Number=Plur|Person=3|PronType=Prs|Case=Nom
-me PRP Number=Sing|Person=1|PronType=Prs|Case=Acc
-him PRP Number=Sing|Person=3|Gender=Masc|PronType=Prs|Case=Acc
-her PRP Number=Sing|Person=3|Gender=Fem|PronType=Prs|Case=Acc
-us PRP Number=Plur|Person=1|PronType=Prs|Case=Acc
-them PRP Number=Plur|Person=3|PronType=Prs|Case=Acc
+we PRP Number=Plur|Person=1|PronType=Prs
+they PRP Number=Plur|Person=3|PronType=Prs
+me PRP Number=Sing|Person=1|PronType=Prs
+him PRP Number=Sing|Person=3|Gender=Masc|PronType=Prs
+her PRP Number=Sing|Person=3|Gender=Fem|PronType=Prs
+us PRP Number=Plur|Person=1|PronType=Prs
+them PRP Number=Plur|Person=3|PronType=Prs
 my PRP$ Number=Sing|Person=1|Poss=Yes|PronType=Prs
 mine PRP$ Number=Sing|Person=1|Poss=Yes|PronType=Prs
 your PRP$ Person=2|Poss=Yes|PronType=Prs
@@ -37,70 +39,24 @@ our PRP$ Number=Plur|Person=1|Poss=Yes|PronType=Prs
 ours PRP$ Number=Plur|Person=1|Poss=Yes|PronType=Prs
 their PRP$ Number=Plur|Person=3|Poss=Yes|PronType=Prs
 theirs PRP$ Number=Plur|Person=3|Poss=Yes|PronType=Prs
-myself PRP Number=Sing|Person=1|PronType=Prs
-yourself PRP Number=Sing|Person=2|PronType=Prs
-himself PRP Number=Sing|Person=3|Gender=Masc|PronType=Prs
-herself PRP Number=Sing|Person=3|Gender=Fem|PronType=Prs
-itself PRP Number=Sing|Person=3|Gender=Neut|PronType=Prs
-ourselves PRP Number=Plur|Person=1|PronType=Prs
-yourselves PRP Number=Plur|Person=2|PronType=Prs
-themselves PRP Number=Plur|Person=3|PronType=Prs
+myself PRP Number=Sing|Person=1|Reflex=Yes|PronType=Prs
+yourself PRP Person=2|Reflex=Yes|PronType=Prs
+himself PRP Number=Sing|Person=3|Reflex=Yes|Gender=Masc|PronType=Prs
+herself PRP Number=Sing|Person=3|Reflex=Yes|Gender=Fem|PronType=Prs
+itself PRP Number=Sing|Person=3|Reflex=Yes|Gender=Neut|PronType=Prs
+ourselves PRP Number=Plur|Person=1|Reflex=Yes|PronType=Prs
+themselves PRP Number=Plur|Person=3|Reflex=Yes|PronType=Prs
 the DT Definite=Def|PronType=Art
 a DT Definite=Ind|PronType=Art
 an DT Definite=Ind|PronType=Art
+some DT Definite=Ind|PronType=Art
+any DT Definite=Ind|PronType=Art
 this DT PronType=Dem|Number=Sing
 that DT PronType=Dem|Number=Sing
 these DT PronType=Dem|Number=Plur
 those DT PronType=Dem|Number=Plur
-here RB PronType=Dem
-there RB PronType=Dem
-then RB PronType=Dem
-whose WP$ Poss=Yes
-hard RB Degree=Pos
-fast RB Degree=Pos
-late RB Degree=Pos
-long RB Degree=Pos
-high RB Degree=Pos
-easy RB Degree=Pos
-early RB Degree=Pos
-far RB Degree=Pos
-soon RB Degree=Pos
-low RB Degree=Pos
-close RB Degree=Pos
-well RB Degree=Pos
-badly RB Degree=Pos
-little RB Degree=Pos
-harder RBR Degree=Cmp
-faster RBR Degree=Cmp
-later RBR Degree=Cmp
-longer RBR Degree=Cmp
-higher RBR Degree=Cmp
-easier RBR Degree=Cmp
-quicker RBR Degree=Cmp
-earlier RBR Degree=Cmp
-further RBR Degree=Cmp
-farther RBR Degree=Cmp
-sooner RBR Degree=Cmp
-slower RBR Degree=Cmp
-lower RBR Degree=Cmp
-closer RBR Degree=Cmp
-better RBR Degree=Cmp
-worse RBR Degree=Cmp
-less RBR Degree=Cmp
-hardest RBS Degree=Sup
-fastest RBS Degree=Sup
-latest RBS Degree=Sup
-longest RBS Degree=Sup
-highest RBS Degree=Sup
-easiest RBS Degree=Sup
-quickest RBS Degree=Sup
-earliest RBS Degree=Sup
-furthest RBS Degree=Sup
-farthest RBS Degree=Sup
-soonest RBS Degree=Sup
-slowest RBS Degree=Sup
-lowest RBS Degree=Sup
-closest RBS Degree=Sup
-best RBS Degree=Sup
-worst RBS Degree=Sup
-least RBS Degree=Sup
+
+
+
+
+
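Each line of this feature map is whitespace-separated: a word form (or `*` as a tag-level wildcard), a Penn Treebank POS tag, and a `|`-separated list of `Key=Value` morphological features. The sketch below is only a generic illustration of reading that layout into a lookup table, not the code CoreNLP's UD feature annotator actually uses; the class and map names are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class FeatureMapReader {
  public static void main(String[] args) throws IOException {
    // Keyed by "word/TAG" (word may be "*" for a tag-level default); value maps feature names to values.
    Map<String, Map<String, String>> features = new HashMap<>();

    for (String line : Files.readAllLines(Paths.get("data/edu/stanford/nlp/ud/feature_map.txt"))) {
      if (line.trim().isEmpty()) continue;
      String[] cols = line.trim().split("\\s+");   // word (or *), POS tag, feature string
      if (cols.length < 3) continue;

      Map<String, String> feats = new HashMap<>();
      for (String kv : cols[2].split("\\|")) {      // e.g. "Number=Sing|Person=1"
        String[] pair = kv.split("=", 2);
        if (pair.length == 2) feats.put(pair[0], pair[1]);
      }
      features.put(cols[0] + "/" + cols[1], feats);
    }

    // Example lookup: features for the reflexive pronoun "themselves" tagged PRP.
    System.out.println(features.get("themselves/PRP"));
  }
}
```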
