
Almost-but-not-quite #10

Open
MichaelPaulukonis opened this issue Oct 24, 2016 · 13 comments

MichaelPaulukonis commented Oct 24, 2016

My main project will be to complete an npm module for getting texts that are almost-but-not-quite the same as the source text.

The idea is roughly the same as @dariusk's Harpooners and Sailors (here (source) and here (output+notes)) from last year, but wrapped up into a nice reusable package.

I think I would like to use such a module for other projects, so this is a good time to git-r-done.

Plus, I've been holding off the implementation of it until November, anyway.

MichaelPaulukonis (Author) commented

Link Dump

MichaelPaulukonis (Author) commented

Start of crude proof-of-concept code here.

Includes some not-quite-as-crude code from another project I've done, which uses the nlp-compromise package instead of natural. I'm going to look into swapping those out.

MichaelPaulukonis (Author) commented

Sooooooo.... the light dawns on Marblehead: I'm using Levenshtein (edit-distance), whereas Kazemi used word2vec, which gives a semantic distance. Edit-distance is purely an accident of orthography.

So, what I've got is not nearly as interesting as I was hoping for (as usual).

It is of some interest, and I'll post some examples later this week (I'm desperately short on time this year, le sigh).
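For reference, the edit-distance half of that comparison is easy to sketch in plain Node — this is a hand-rolled illustration of what packages like natural provide, not the project's actual code:

```javascript
// Levenshtein edit distance: counts single-character insertions,
// deletions, and substitutions needed to turn string a into string b.
function levenshtein(a, b) {
  var rows = a.length + 1;
  var cols = b.length + 1;
  var d = [];
  for (var i = 0; i < rows; i++) {
    d[i] = [i];
    for (var j = 1; j < cols; j++) {
      d[i][j] = i === 0 ? j : 0;
    }
  }
  for (i = 1; i < rows; i++) {
    for (j = 1; j < cols; j++) {
      var cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,        // deletion
        d[i][j - 1] + 1,        // insertion
        d[i - 1][j - 1] + cost  // substitution
      );
    }
  }
  return d[rows - 1][cols - 1];
}

// Purely orthographic, not semantic: "cat"/"car" are close in spelling,
// "cat"/"kitten" are far apart, despite the semantic relationship.
console.log(levenshtein('cat', 'car'));    // 1
console.log(levenshtein('cat', 'kitten')); // 5
```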


enkiv2 commented Nov 8, 2016

If you could normalize both to a scale between 0 and 1 you could multiply them :)


@MichaelPaulukonis MichaelPaulukonis changed the title Close-but-not-quite Almost-but-not-quite Nov 9, 2016
@MichaelPaulukonis
Copy link
Author

MichaelPaulukonis commented Nov 9, 2016

I think I'm going to do some overkill and play with retext and the nodes of its natural language concrete syntax tree, which has some charms: paragraph and sentence tokenization, and the ability to recreate the original text.

I find the online examples of using retext and nlcst to be sub-optimal.

Also, I'm curious why the project works asynchronously, when there are no asynchronous sub-elements.

MichaelPaulukonis (Author) commented

@enkiv2 - What would that do? Pretend I'm almost statistically innumerate....

There are libs that provide a 0..1 edit distance; I happened to pick a package that didn't.
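One common way those libs get a 0..1 value (a sketch, not any particular package's API — `similarity` is an illustrative helper name) is to divide the raw distance by the longer string's length and subtract from 1, so 1 means identical and 0 means nothing in common:

```javascript
// Memory-light Levenshtein using two rolling rows.
function levenshtein(a, b) {
  var prev = [];
  for (var j = 0; j <= b.length; j++) prev[j] = j;
  for (var i = 1; i <= a.length; i++) {
    var curr = [i];
    for (j = 1; j <= b.length; j++) {
      var cost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost);
    }
    prev = curr;
  }
  return prev[b.length];
}

// Normalize into [0, 1]: 1 = identical, 0 = completely different.
function similarity(a, b) {
  var maxLen = Math.max(a.length, b.length);
  return maxLen === 0 ? 1 : 1 - levenshtein(a, b) / maxLen;
}

console.log(similarity('kitten', 'sitting')); // 1 - 3/7 ≈ 0.571
```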

We've got a baby coming in < 3 weeks, so I'm not going to get into too much craziness. Figuring out how to get retext going seems to be the high-point of the month for me.


enkiv2 commented Nov 11, 2016

If you had the two factors scaled the same way, and multiplied them, you would rank words that are a good match on both factors much higher than ones that are a good match on one but a poor match on the other. So you'd get a lot of heavily related words. The results might be much more interesting, or much less interesting; I'm not sure.
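A minimal sketch of that combination — the candidate words and their scores below are made-up illustration data, and `combinedScore` is an invented helper, not part of any library:

```javascript
// Multiply two scores that are each already on a 0..1 scale; only
// candidates strong on BOTH factors end up with a high product.
function combinedScore(orthographic, semantic) {
  return orthographic * semantic;
}

// Hypothetical candidates with made-up orthographic and semantic scores.
var candidates = [
  { word: 'boat', orthographic: 0.9, semantic: 0.2 },
  { word: 'ship', orthographic: 0.3, semantic: 0.9 },
  { word: 'bolt', orthographic: 0.8, semantic: 0.6 }
];

candidates
  .map(function (c) {
    c.score = combinedScore(c.orthographic, c.semantic);
    return c;
  })
  .sort(function (x, y) { return y.score - x.score; })
  .forEach(function (c) {
    console.log(c.word, c.score.toFixed(2));
  });
// "bolt" (0.48) outranks "ship" (0.27) and "boat" (0.18):
// good on both factors beats excellent on just one.
```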


MichaelPaulukonis (Author) commented

@enkiv2 we're ranking sentences, not words. I'm still not clear on what I would multiply.

Here is some sample output: https://gist.github.com/MichaelPaulukonis/2b2d47a5e22066e950c39841b9a6c889

It only took 11 hours, but that's also because the computer slept for much of that time.


enkiv2 commented Nov 18, 2016

I guess if we're ranking sentences, that's a much harder problem. I don't know how to get, say, a word2vec-style location in semantic space for a whole sentence. Adding all the vectors would probably produce some unrelated word, if anything.
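For what it's worth, one crude baseline is to average a sentence's word vectors and compare the averages by cosine similarity — a toy sketch with invented 3-dimensional vectors (real word2vec vectors have hundreds of dimensions), and it is lossy in exactly the way described above:

```javascript
// Made-up toy vectors standing in for real word2vec output.
var toyVectors = {
  cat: [0.9, 0.1, 0.0],
  dog: [0.8, 0.2, 0.1],
  sat: [0.1, 0.9, 0.0],
  ran: [0.2, 0.8, 0.1]
};

// A sentence's "location" as the mean of its word vectors.
function sentenceVector(words) {
  var sum = [0, 0, 0];
  words.forEach(function (w) {
    var v = toyVectors[w] || [0, 0, 0]; // unknown words contribute nothing
    for (var i = 0; i < sum.length; i++) sum[i] += v[i];
  });
  return sum.map(function (x) { return x / words.length; });
}

// Cosine similarity: 1 = same direction, 0 = orthogonal.
function cosine(a, b) {
  var dot = 0, na = 0, nb = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

var s1 = sentenceVector(['cat', 'sat']);
var s2 = sentenceVector(['dog', 'ran']);
console.log(cosine(s1, s2).toFixed(3)); // close to 1: similar toy sentences
```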



ikarth commented Nov 18, 2016

There's been some work with vectors at the sentence, paragraph, and document level. Look into doc2vec.

MichaelPaulukonis (Author) commented

Kazemi's project last year used word2vec, which I missed when I started the project. I was trying to do a single-language (NodeJS) solution. Not quite possible.

michelleful commented

@enkiv2, you may want to give skip-thought vectors a try.

MichaelPaulukonis (Author) commented

@ikarth part of this was NOT using doc2vec since that's not NodeJS. Another part was thinking that Kazemi had not used it, either.

Something I did discover is some word-vectors as JSON - https://igliu.com/word2vec-json/


I'm going to call it quits for the month. I didn't hit my objective of a nicely packaged npm module, but I did generate a novel and learned new things.

We've got another baby due on Dec 1, so I'm going to finish off the month focusing on that!

The entire novel has been appended to the gist at https://gist.github.com/MichaelPaulukonis/2b2d47a5e22066e950c39841b9a6c889
