
False positives #33

Closed
kvz opened this Issue Sep 10, 2015 · 10 comments

Comments

6 participants
@kvz

kvz commented Sep 10, 2015

Thanks for this tool! I think it will be wonderful to run our blog posts & documentation through this in our CI before deployment.

However, I have the case where

Authentication is disabled by default

red flags `disabled`

1. [Osama Mahmood](https://twitter.com/OsamaMahmood007) (1)

red flags `Osama`

and

Felix, (alumnus of both Transloadit and Node.js nowadays

red flags `alumnus`

The documents with the false positives are fairly big and subject to change, so ideally I'd have a whitelist of sorts to mark these exceptions, and still be able to fail on new violations.

What's the best way to achieve this?
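
In case it helps frame what I'm after: essentially a thin wrapper around alex's Node API that filters out messages we've already reviewed and fails only on new ones. A minimal sketch, assuming `alex(value)` returns a vfile whose `messages` array carries a human-readable `reason` per warning; the file path and whitelist entries below are hypothetical:

```js
// Hypothetical CI wrapper (not an alex feature): fail only on warnings that
// are not in a hand-maintained whitelist of reviewed false positives.
var fs = require('fs');
var alex = require('alex');

// Reason fragments we have reviewed and accepted (example entries).
var whitelist = [
  '`disabled` may be insensitive',
  '`alumnus` may be insensitive'
];

var doc = fs.readFileSync('post.md', 'utf8');

var unreviewed = alex(doc).messages.filter(function (message) {
  // Keep only warnings that no whitelist entry matches.
  return !whitelist.some(function (allowed) {
    return String(message.reason).indexOf(allowed) !== -1;
  });
});

unreviewed.forEach(function (message) {
  console.error(message.line + ':' + message.column + ' ' + message.reason);
});

if (unreviewed.length) {
  process.exit(1); // only new, unreviewed warnings break the build
}
```

That would keep the accepted exceptions documented in one place while anything new still breaks the build.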

@kvz kvz changed the title from False positive to False positives Sep 10, 2015

@wooorm

wooorm (Member) commented Sep 10, 2015

Thanks!

They could be detected by #16 and #18, probably.
Also: it’s a really personal question; some people would still prefer “graduate” over “alumnus” in the last example, and the same goes for “disabled” versus “turned off” in the first.
Your second example could probably be ignored by alex itself, because it’s (clearly) a name.

I always meant for alex to be a help when writing, not something in a CI that blocks publishing. Humans are smarter than alex. But I do see that people use it, or want to use it, that way.

Short version: #16 and #18 need to be fixed!

@joshhunt

joshhunt commented Sep 11, 2015

> Alex: Catch insensitive, inconsiderate writing

"Authentication is disabled by default" isn't insensitive or inconsiderate writing.

@jdalton

jdalton (Contributor) commented Sep 11, 2015

@joshhunt

> Alex: Catch insensitive, inconsiderate writing
>
> "Authentication is disabled by default" isn't insensitive or inconsiderate writing.

Alex reports that it *may* be insensitive:

    warning  `disabled` may be insensitive, use `person with disabilities` instead

I think the warning is qualified enough here, and besides the false positives it does happen to catch insensitive, inconsiderate writing.

@wooorm

wooorm (Member) commented Sep 11, 2015

Thanks @jdalton, I agree with you! Alex isn’t as smart as a human, but it tries its best and is sometimes overly happy to let you know something may be insensitive.

@wooorm

wooorm (Member) commented Oct 7, 2015

@ixti I can see how the bot seems offensive here. However, “pancake face” is an actual ethnic slur (according to Wikipedia). Therefore, I think alex is operating according to its byline, “Catch insensitive, inconsiderate writing”, by warning about it.

Please open a new issue with more information if you’d like to discuss this further, or discuss the slackbot at keoghpe/alex-slack#1.

@wooorm wooorm referenced this issue Oct 7, 2015: Profanities #46 (closed)

@yoshuawuyts

yoshuawuyts (Collaborator) commented Oct 7, 2015

@ixti Welp, covering all possible contexts of a slur is hard. It correctly caught "pancake face" as being used as a derogatory term; the suggestion could probably use some work though. Could you clarify in what context it was used so we can add an appropriate suggestion?

Also: I understand that you're upset, but please refrain from dropping all-caps f-bombs in issues. Everyone involved with Alex does so in their spare time and has the best intentions. It'd be appreciated if you'd be considerate of that in future interactions.

@ixti

ixti commented Oct 7, 2015

I apologize for my pretty rude and aggressive previous comment (I deleted it myself).
In that particular case the issue was definitely not with alex but with the integration itself.

@wooorm

wooorm (Member) commented Oct 7, 2015

Thanks for being so considerate @ixti 😄

@wooorm wooorm added the enhancement label Jan 11, 2016

@wooorm wooorm added this to the 2.0.0 milestone Jan 11, 2016

wooorm added a commit to retextjs/retext-equality that referenced this issue Jan 16, 2016

@wooorm

wooorm (Member) commented Feb 3, 2016

Been a while. Thanks for hanging on. The just-released 2.0.0 version fixes this by introducing several mechanisms to control the messages reported by alex.
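
For anyone landing here later: the documented controls include inline comment markers in the text itself and, in later releases, an `allow` list in an `.alexrc` file. The snippet below follows the `<!--alex ignore ruleId-->` marker syntax from alex's README; the exact rule identifier is illustrative (use whatever `ruleId` alex reports for the warning), and details may differ per version, so check the README of the release you run:

```md
<!--alex ignore disabled-->

Authentication is disabled by default.

<!-- the `disabled` id above is an example; use the ruleId alex reports -->
```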

@wooorm wooorm closed this Feb 3, 2016

@kvz

kvz commented Feb 4, 2016

Awesome!
