Commit 03c4f5a

started tuto on semantic networks

seinecle committed Mar 7, 2017
1 parent a6e7977 commit 03c4f5a
Showing 14 changed files with 1,826 additions and 227 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,2 +1,3 @@
 target/
-src/main/asciidoc/subdir/
+src/main/asciidoc/subdir/
+properties/
899 changes: 899 additions & 0 deletions docs/generated-html/working-with-text-en.html

Large diffs are not rendered by default.

Binary file added docs/generated-pdf/working-with-text-en.pdf
Binary file not shown.
@@ -8,6 +8,7 @@ last modified: {docdate}
 :iconsfont: font-awesome
 :revnumber: 1.0
 :example-caption!:
+:sourcedir: ../../../main/java

 :title-logo-image: gephi-logo-2010-transparent.png[width="450" align="center"]
@@ -53,7 +53,7 @@ video::Y3jk-_QaFx4[youtube, height=315, width=560, align="center"]

 //ST: !

-For this tutorial you need:
+For this tutorial you will need:

 - some knowledge of Java.
@@ -53,7 +53,7 @@ link to animated version: https://www.youtube.com/watch?v=Y3jk-_QaFx4

 //ST: !

-For this tutorial you need:
+For this tutorial you will need:

 - some knowledge of Java.
222 changes: 222 additions & 0 deletions docs/generated-slides/subdir/working-with-text-en_temp_common.md
@@ -0,0 +1,222 @@
= Working with text in Gephi
Clément Levallois <clementlevallois@gmail.com>
2017-02-28

last modified: {docdate}

:icons!:
:iconsfont: font-awesome
:revnumber: 1.0
:example-caption!:

:title-logo-image: gephi-logo-2010-transparent.png[width="450" align="center"]

image::gephi-logo-2010-transparent.png[width="450" align="center"]
{nbsp} +

//ST: 'Escape' or 'o' to see all sides, F11 for full screen, 's' for speaker notes

== Presentation of this tutorial
//ST: Presentation of this tutorial

//ST: !
This tutorial explains how to draw "semantic networks" like this one:

image::en/cooccurrences-computer/gephi-result-1-en.png[align="center", title="a semantic network"]
{nbsp} +

We call "semantic network" a visualization where textual items (words, expressions) are connected to each others, like above.

//ST: !
We will see in turn:

- why semantic networks are interesting
- how to create a semantic network
- tips and tricks to visualize semantic networks in the best possible way in Gephi


== Why semantic networks?
//ST: Why semantic networks?

A text, or many texts, can be hard to summarize.

Drawing a semantic network highlights the most frequent terms, shows how they relate to each other, and reveals the different groups or "clusters" they form.

Often, a cluster of terms characterizes a topic. Hence, converting a text into a semantic network helps detect topics in the text, from micro-topics to the general themes discussed in the documents.

//ST: !

Semantic networks are regular networks, where:

- nodes are words ("USA") or groups of words ("United States of America")

- relations usually signify co-occurrence: two words are connected if they appear together.

This means that if you have a textual network, you can visualize it with Gephi just like any other network. Yet not everything is the same, and this tutorial provides tips and tricks on how textual data can differ from other data.
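
//ST: !
To make this concrete, here is a minimal Java sketch of the co-occurrence counting (the class and method names are purely illustrative, not part of Gephi's API): for each pair of words, it counts the number of sentences in which the two words appear together.

[source,java]
----
import java.util.*;

// Illustrative sketch only: not part of Gephi's API.
public class CooccurrenceSketch {

    // Counts, for each pair of words, the number of sentences
    // in which the two words appear together.
    public static Map<String, Integer> countPairs(List<List<String>> sentences) {
        Map<String, Integer> edges = new HashMap<>();
        for (List<String> sentence : sentences) {
            // TreeSet removes duplicates and sorts the words,
            // so each pair is counted once per sentence, in a stable order
            List<String> words = new ArrayList<>(new TreeSet<>(sentence));
            for (int i = 0; i < words.size(); i++) {
                for (int j = i + 1; j < words.size(); j++) {
                    String key = words.get(i) + "|" + words.get(j);
                    edges.merge(key, 1, Integer::sum);
                }
            }
        }
        return edges; // each entry is a weighted edge of the semantic network
    }
}
----

Each entry of the resulting map can then be imported into Gephi as a weighted edge, for example through a CSV edge list.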

//ST: !
== Choosing what a "term" is in a semantic network
//ST: !

A simple starting point is to say that a term is a single word. So in this sentence, we would have 7 terms:


My sister lives in the United States (7 words -> 7 terms)

This amounts to assuming that each single word is a meaningful semantic unit.

This approach is simple but not great. Look again at the sentence:

//ST: !

My sister lives in the United States

1. `My`, `in`, `the` are frequent terms with no special significance: they should probably be discarded.
2. `United` and `States` are meaningful separately, but here they should probably be considered together: `United States`.
3. `lives` is the conjugated form of the verb `to live`. In a network, it would make sense to regroup `live`, `lives` and `lived` as a single node.

Analysts, facing each of these issues, have imagined several solutions:

//ST: !
==== 1. Removing "stopwords"
//ST: !

To remove these little words without informational value, the most basic approach is to keep a list of them and remove from the text any word that belongs to this list.

You can find lists of these words, called "stopwords", for many languages http://www.ranks.nl/stopwords/[on this website].
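
//ST: !
Here is a minimal sketch of this filtering step in Java; the stopword set below is a tiny illustrative sample, not a real list.

[source,java]
----
import java.util.*;
import java.util.stream.*;

public class StopwordFilter {

    // a tiny sample; in practice, load a full list such as the one linked above
    static final Set<String> STOPWORDS =
            new HashSet<>(Arrays.asList("my", "in", "the", "of", "a"));

    // keeps only the words which are not in the stopword list
    public static List<String> removeStopwords(List<String> words) {
        return words.stream()
                .filter(w -> !STOPWORDS.contains(w.toLowerCase()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("My", "sister", "lives", "in", "the", "United", "States");
        System.out.println(removeStopwords(words));
        // prints: [sister, lives, United, States]
    }
}
----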

//ST: !
==== 2. Considering "n-grams"
//ST: !

So, `United States` should probably be a meaningful unit, not just `United` and `States`. Because `United States` is composed of 2 words, it is called a "bi-gram".

Tri-grams are obviously interesting as well (e.g., `chocolate ice cream`).

People often stop there, but I find that quadri-grams can be meaningful as well, if less frequent: `United States of America`, `functional magnetic resonance imaging`, `The New York Times`, etc.

Many tools exist to extract n-grams from texts, for example http://homepages.inf.ed.ac.uk/lzhang10/ngram.html[these programs which are under a free license].
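
//ST: !
To make the idea concrete, here is a minimal Java sketch (not one of the programs linked above) extracting all n-grams of a given size from a list of words:

[source,java]
----
import java.util.*;

public class NgramSketch {

    // returns all sequences of n consecutive words
    public static List<String> ngrams(List<String> words, int n) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i <= words.size() - n; i++) {
            result.add(String.join(" ", words.subList(i, i + n)));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("United", "States", "of", "America");
        System.out.println(ngrams(words, 2)); // [United States, States of, of America]
        System.out.println(ngrams(words, 4)); // [United States of America]
    }
}
----

In practice, you would keep only the n-grams which occur frequently in the corpus, and treat each of them as a single node.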

//ST: !
==== 2 bis. Considering "noun phrases"
//ST: !

Another way to go beyond single-word terms (`United`, `States`) takes a different route than n-grams. It says:

"delete everything in the text except groups of words made of nouns and adjectives, ending with a noun"

-> (these are called, a bit improperly, "noun phrases")

Take `United States`: it is a noun (`States`) preceded by an adjective (`United`). It will be considered a valid term.

//ST: !

This approach is interesting (it is implemented, for example, in the software http://www.vosviewer.com[Vosviewer]; see the sketch after this list for the matching rule itself), but it has drawbacks:

- you need to detect adjectives and nouns in your text. This is very language-dependent, and slow for large corpora.

- what about verbs, and noun phrases comprising non-adjectives, such as "United States *of* America"? These are not included in the network.
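
//ST: !
Here is a minimal Java sketch of the matching rule itself, assuming a part-of-speech tagger has already labeled each word as noun, adjective, or other (the tagging is precisely the hard, language-dependent part):

[source,java]
----
import java.util.*;

public class NounPhraseSketch {

    enum Tag { NOUN, ADJ, OTHER }

    static final class Word {
        final String text;
        final Tag tag;
        Word(String text, Tag tag) { this.text = text; this.tag = tag; }
    }

    // keeps maximal runs of adjectives and nouns, truncated to end with a noun
    public static List<String> nounPhrases(List<Word> words) {
        List<String> phrases = new ArrayList<>();
        List<String> current = new ArrayList<>();
        int lastNoun = -1; // position (exclusive) of the last noun in 'current'
        for (Word w : words) {
            if (w.tag == Tag.NOUN || w.tag == Tag.ADJ) {
                current.add(w.text);
                if (w.tag == Tag.NOUN) lastNoun = current.size();
            } else {
                flush(phrases, current, lastNoun);
                current.clear();
                lastNoun = -1;
            }
        }
        flush(phrases, current, lastNoun);
        return phrases;
    }

    // emits the current run, cut after its last noun, if it contains a noun at all
    private static void flush(List<String> phrases, List<String> current, int lastNoun) {
        if (lastNoun > 0) {
            phrases.add(String.join(" ", current.subList(0, lastNoun)));
        }
    }
}
----

On the tagged sequence `the(OTHER) United(ADJ) States(NOUN) of(OTHER) America(NOUN)`, this rule extracts `United States` and `America`, and indeed misses `United States of America`, as noted above.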

//ST: !
==== 3. Stemming and lemmatization
//ST: !

`live`, `lives`, `lived`: in a semantic network, it is probably useless to have 3 nodes, one for each of these 3 forms of the same root.

- Stemming consists in chopping off the ends of words, so that here, we would keep only `live`.
- Lemmatization does the same, but in a more subtle way: it takes grammar into account. So, `good` and `better` would both be reduced to `good`, because the same basic semantic unit lies behind these two words, even though their spellings differ completely.

A tool performing lemmatization is https://textgrid.de/en/[TextGrid]. It has many functions for textual analysis, and its lemmatizer https://wiki.de.dariah.eu/display/TextGrid/The+Lemmatizer+Tool[is explained there].
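
//ST: !
To illustrate how crude stemming can be, here is a deliberately naive suffix-chopping stemmer in Java (real stemmers, such as the Porter stemmer, apply much more careful rules):

[source,java]
----
public class NaiveStemmer {

    // chops common English suffixes; a crude illustration, not a real stemmer
    public static String stem(String word) {
        String w = word.toLowerCase();
        for (String suffix : new String[] {"ing", "ed", "s"}) {
            if (w.length() > suffix.length() + 2 && w.endsWith(suffix)) {
                return w.substring(0, w.length() - suffix.length());
            }
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(stem("lives")); // live
        System.out.println(stem("lived")); // liv  (a real stemmer would give "live")
        System.out.println(stem("live"));  // live
    }
}
----

Note how `lived` comes out as `liv` while `lives` comes out as `live`: the two forms still end up as different nodes, which is exactly why the more subtle, grammar-aware lemmatization is often preferable.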


//ST: !
== Should we represent all terms in a semantic network?

//ST: !
We have seen that some words are more interesting than others in a corpus:

- stopwords should be removed,
- inflected forms of a word (`lived`, `lives`) can be grouped together (`live`),
- sequences of words (`baby phone`) can be added, because they mean more than their words taken separately (`baby`, `phone`).

//ST: !
Once this is done, we have transformed the text into plenty of terms to represent. Should they all be included in the network?

Imagine a word appearing just once, in a single footnote of a 2,000-page text. Should this word appear? Probably not.

Which rule should we apply to keep or leave out a word?

//ST: !
==== 1. Start with: how many words can fit in your visualization?
//ST: !

A starting point can be the number of words you would like to see on a visualization. *A ballpark figure is 300 words max*:

- it already fills all the space of a computer screen.
- 300 words provide enough information to allow the micro-topics of a text to be distinguished.

More words can be crammed in, but in this case the viewer has to spend time zooming in and out, and panning, to explore the visualization. The viewer turns into an analyst, instead of a regular reader.

//ST: !
==== 2. Representing only the most frequent terms
//ST: !

If ~300 words fit in the visualization of the network, and the text you start with contains 5,000 different words: which 300 words should be selected?

To visualize the semantic network *of a single, long text*, the straightforward approach consists in picking the 300 most frequent words (or n-grams, see above).
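
//ST: !
A minimal Java sketch of this selection step: count each word's frequency, then keep the top n (the word list is assumed to be already cleaned, as described above).

[source,java]
----
import java.util.*;
import java.util.stream.*;

public class TopTerms {

    // returns the n most frequent words, most frequent first
    public static List<String> topTerms(List<String> words, int n) {
        Map<String, Long> counts = words.stream()
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
----

Calling `topTerms(words, 300)` on the cleaned word list gives the nodes to keep.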


In the case of a collection of texts to visualize (several documents instead of one), there are two possibilities:
//ST: !

1. Either you also take the most frequent terms across these documents, like before.

2. Or you can apply a more subtle rule called "tf-idf", detailed below.

//ST: tf-idf

The idea with tf-idf is that terms which appear in all documents are not interesting, because they are so ubiquitous.

Example: you retrieve all the webpages mentioning the word `Gephi`, and then want to visualize the semantic network of the texts contained in these webpages.

//ST: !

-> by definition, all these webpages will mention Gephi, so Gephi will probably be the most frequent term.

-> so your network will end up with a node "Gephi" connected to many other terms, but you actually knew that. Boring.

-> terms used in all web pages are less interesting to you than terms which are used frequently, but not uniformly across webpages.

//ST: !

Applying the tf-idf correction will highlight terms which are frequently used within some texts, but not used in many texts.

(to go further, here is a webpage giving a simple example: http://www.tfidf.com/)
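
//ST: !
Here is a minimal Java sketch of the computation, using the common formula tf-idf = tf × log(N / df), where tf is the term's frequency in the document, N the total number of documents, and df the number of documents containing the term:

[source,java]
----
import java.util.*;

public class TfIdfSketch {

    // tf-idf of a term in one document, given the whole corpus:
    // tf = how often the term occurs in the document,
    // df = how many documents of the corpus contain the term
    public static double tfIdf(String term, List<String> document, List<List<String>> corpus) {
        long tf = document.stream().filter(term::equals).count();
        long df = corpus.stream().filter(doc -> doc.contains(term)).count();
        if (df == 0) return 0.0;
        return tf * Math.log((double) corpus.size() / df);
    }
}
----

With this formula, a term like `Gephi` which is present in every document gets log(N / N) = log(1) = 0: it sinks to the bottom of the ranking, however frequent it is.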

//ST: !
Should you visualize the most frequent words in your corpus, or the words which rank highest according to tf-idf?

Both are interesting, as they show different information. I'd suggest that the simple frequency count is easier to interpret.

tf-idf can be left for specialists of the textual data under consideration, after they have been presented with the simple frequency-count version.

== (to be continued)
//ST: (to be continued)


== More tutorials on importing data to Gephi
//ST: More tutorials on importing data to Gephi
//ST: !

- https://github.com/gephi/gephi/wiki/Import-CSV-Data[The Gephi wiki on importing csv]
- https://www.youtube.com/watch?v=3Im7vNRA2ns[Video "How to import a CSV into Gephi" by Jen Golbeck]

== The end

//ST: The end!
Visit https://www.facebook.com/groups/gephi/[the Gephi group on Facebook] to get help,

or visit https://seinecle.github.io/gephi-tutorials/[the website for more tutorials]