
More elegant oneboxing #135

Open
allquixotic opened this issue Nov 26, 2013 · 16 comments

Comments

@allquixotic

In Root Access, we have taught our bot a ridiculous number of commands whose sole functionality is to link to an image somewhere. We love us some image macros.

My enhancement request is this: modify !!tell so that !!tell UserName cmd args finds the most recent message posted by a user matching that user name (within reason; searching the entire history of chat is impractical, but the chat the bot remembers seeing, or a finite number of messages back, would do) and replies to that message. If the bot can't find a message by UserName, it would just fall back to the old behavior. This would eliminate the issue where calling !!tell UserName cmd args does not onebox whatever the result of !!cmd args might be.

Also, this isn't part of this issue, but my resident browser extension factory, Oliver Salzburg, a SU moderator, created an extension that makes it easier to reply to messages with !!tell: all you have to do is type !!tell, hit the arrow keys to pick a message ID, then type your command. Check it out here.

@allquixotic
Author

Update: TIL (thanks @rlemon) that simply running !!learn whatever '<>http://whatever.url' will make !!whatever onebox the URL. Nice :) Now I don't need to worry about that. Edited the main issue.

@Zirak
Owner

Zirak commented Nov 29, 2013

Yeah, oneboxing is a bitch. But changing /tell that way makes it a bit...unintuitive. If it's still an issue, we can consider an alternate syntax for it, maybe tell :UserName or something like that.

Also, the extension looks handy!

@allquixotic
Author

Adding a new syntax for the functionality I am asking for would be fine; I'm not dead-set on changing the behavior of the existing syntax. Putting : in front of the user name works for me.

@FirstWhack
Contributor

Just remember, oneboxing opens doors whose contents you can't unsee.
If you plan to add a /googleImages or similar you may want to rethink it.
I'm guessing this is more for things like /wiki and /amazon (do we have that? We should).

@allquixotic
Author

The bot is unable to do any oneboxing that a regular user can't do manually. The behavior of what to onebox (or not onebox) is controlled server-side by StackExchange, and there is nothing the bot can do to onebox things that are not possible to onebox manually by pasting a URL into chat. Even if absolutely no code changes result from this issue, a user can still use !!learn to teach the bot a command that oneboxes arbitrary images. But users can already post these links themselves without interacting with the bot, so the bot is not providing any sort of bypass or enabling any sort of misbehavior that isn't already possible.

@FirstWhack
Contributor

Aside from the fact that you have to ignore the bot as well as the user if someone is posting images you would rather not see. That is, by definition, a bypass of the block list.


@rlemon
Collaborator

rlemon commented Dec 5, 2013

You would also have to block two users if two people were in on it. Considering most of us are regulars, that isn't far-fetched.

@allquixotic
Author

@Jhawins That would appear to be an argument against having bots at all, not against any particular feature. If you're in that camp, one wonders why you are here on the GitHub repo for a bot... Also, be careful not to share such an opinion with someone at SE; they might next decide to come down from on high and declare that all bots are disallowed from chat, permanently. Wouldn't put it past them.

The only way to have a bot that couldn't be made to say objectionable things is if nothing the user says is ever "reflected" (repeated back) into chat in any way. Additionally, things like Google searches would have to be disallowed, because objectionable content could come from third-party sites as well. To lock it down properly against any potential abuse, just for the sake of only having to block one user, you would have to hard-code all possible inputs and responses to the bot, modifiable only by the sysadmin or by users who are sufficiently trustworthy not to add abusive commands.

Locking a bot down to that degree to guarantee an absence of malicious content is kind of silly when it takes about 5 minutes to get enough rep to join chat, and about as long to change one's IP address. A lone individual with the goal of posting offensive content in chat could quite trivially and repeatedly (until they got bored) make new email addresses, register new SE accounts, earnestly attempt to answer a few questions until receiving 20 rep (really, it's not hard at all to get 2 upvotes), then join chat and fire off a dozen oneboxes. The bot does not make accomplishing their goal significantly easier, because instead of ignoring N users, you have to ignore N+1. For sufficiently high N, the 1 is negligible.

There is typically only one SO-ChatBot instance per channel. (Or, I should say, most channels' regulars actively try to keep it to exactly one SO-ChatBot per channel, to avoid having bots talking over one another, or to one another, which could get spammy very fast...) So I really don't see the issue with that.

@FirstWhack
Contributor

Tl;dr for now. Not in entirety anyway.

Your argument is that of an extremist, taking things too literally and assuming the absolute worst. No text could possibly be offensive enough to justify removing bots or to get someone in trouble at work, although images can. Therefore you cannot stretch this point all the way to the text-based replies of Google searches.

I am done with this discussion as it will soon turn into a bitch fight (because I know myself).


@allquixotic
Author

I'm the extremist? It seems pretty extreme already to presuppose that there might be users actively attempting to get other users in trouble at work by posting offensive images, who you then add to your ignore list, but who are able to bypass your ignore by having the bot onebox the images, thus exacerbating the problem. Doesn't SE have a design philosophy of not designing for the edge cases? That's quite an edge case you have there, and we may as well follow SE's design philosophy if we're building something that integrates with their services... (the philosophy makes sense, too, to me at least.)

On an unrelated note, I completely disagree that "no text could possibly be offensive enough ... to get someone in trouble at work". Just throw in a few red-flag words, tack on the fact that chat is unencrypted, and you could very quickly get a network administrator at some big corporation or government entity to block chat.SO or chat.SE due to perceived bad content. In fact, chat.SO is blocked where I work for this very reason, and I had nothing to do with getting it blocked (it has been blocked since before I visited chat for the first time).

Think about it logically: oneboxing "makes" your client HTTP GET an arbitrary URL. If "bad stuff" is getting oneboxed, and it's coming from a third-party domain, which domain or IP would your network administrator block: chat, or the third-party domain hosting the offensive image? Seems pretty obvious to me that the image-hosting site would be blocked, not chat. On the other hand, the chat text comes directly from the chat.SO/SE domain, so if offensive text is streaming from there, the network administrator would block that domain.

BTW, the only reason I brought up the extreme example was to illustrate how SE might someday reason about bots in general. You may or may not already be aware that they cracked down on a few of the commands that were available in the past. All I'm saying is that the type of reasoning you used to argue against oneboxing could very easily be twisted around to justify getting rid of bots altogether, and it's certainly within the realm of possibility that SE could in fact do that.

Now, having said that, if you want to debate something with someone and then "tl;dr" their response, feel free, but I still can't say I understand what exact action you are proposing we take in terms of modifying the chat bot. Or did you actually want us to eliminate it entirely? I'd really rather bypass the whole political discussion and focus on what technical modifications you think would be reasonable, code-wise, for the bot, that would support your ideas/rhetoric. We can then discuss the code on its own merits.

I'll conclude by pointing out that you can very easily get the bot to onebox any image, right now, with existing code; no /googleImages command is necessary:

!!learn somecmd '<>http://example.com/something.jpg'

followed by

!!somecmd

will cause the bot to splat the image given in the !!learn statement into chat as a onebox.

So, continuing to focus on the technical aspect of your point: do you feel that the above command sequence should not cause the bot to onebox the image URL provided?

@FirstWhack
Contributor

I can't help it.

This is not an edge case; things very similar to this have happened already, though not (I don't think) with the intent of getting people in trouble. But I personally got to see 2 girls 1 cup oneboxed as a GIF by the bot.
I would love for that to NEVER happen again.

I believe the premise behind the user using the bot to onebox said image was so that he/she would not be flagged/suspended. Allowing users to onebox through the bot eliminates their accountability for the content they post, because only users in the room will know whose fault it is. Other users with 10k rep in other rooms will see the flag on the bot's message but no context.

This allows users to bypass the flagging system. Normally this isn't a huge issue, but there are many times throughout the day when there are no truly active owners who can deal with the issue. Of course the flags will draw moderator attention and lead to the user being reprimanded in the end instead of the bot, but it adds unnecessary "clunkiness", if you will, to the flagging system (which is insanely flawed on its own).

If it were my choice, the bot would not be allowed to onebox images in user-created commands.

(This is unrelated to the time our favorite resident of India made a similar post)


@allquixotic
Author

@Jhawins You're completely entitled to hold that viewpoint. Be that as it may, I think you should take up that mantle either in a new issue on this issue tracker, or with the owners/maintainers of the bot in the specific room you hang out in that contains a bot. I'm active in this space because I maintain a bot lightly forked from Zirak's for the Root Access chat, which is the chat channel for SuperUser.com. I admit to having little to no knowledge of the daily politics and behavior of users in the JavaScript chatroom, if that's the room you're alluding to.

Come to think of it, I would actually be in favor of committing a change to Zirak's master bot repository that disables user-injected link oneboxing by default, but then provides some mechanism for the sysadmin to re-enable it if he or she so desires. I would of course enable it for Root Access, but then the issue of whether to disable the bot's oneboxing in the JavaScript chatroom would be punted to the owner of that bot, who (thankfully) is not me. Since I don't hang out very often in the JS chat, I have no particular opinion on whether or not it should be disabled in there.

Disabling it by default has a similar flavor to the way many GNU/Linux server-side daemons ship with a config that only listens on localhost by default. "Secure out of the box" and similar principles apply. That maps well onto this idea, which would make Zirak's bot work-friendly out of the box, with the sysadmin deciding whether or not to allow features that carry some risk.
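
To make that concrete, here is a minimal sketch of what such a default-off switch might look like. It is only an illustration of the idea: the flag name, the sanitizer function, and the point where it would hook into the /learn output path are assumptions, not options that exist in the bot today.

```javascript
// Hypothetical config flag; off by default, to be flipped by the room's sysadmin.
var config = {
    allowUserOneboxing: false
};

// Would run on the output of user-defined (/learn) commands before posting.
// SE chat generally oneboxes a message only when it consists solely of a
// supported URL, so prefixing any text keeps it a plain clickable link.
function sanitizeLearnedOutput(text) {
    if (config.allowUserOneboxing) {
        return text;
    }
    return /^https?:\/\/\S+$/.test(text.trim()) ? 'link: ' + text : text;
}
```

With the flag left at its default, a learned command whose output is a bare image URL would post as a normal link rather than a onebox; a room that wants the old behavior just flips the flag to true.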

@FirstWhack
Contributor

I apologize for rash statements made earlier. Cynically, I did not expect you to be sensible and respond in a civil manner (most don't, myself obviously included).

I agree 100% with the above statement.


@Zirak
Owner

Zirak commented Dec 13, 2013

uh...ok?

Anyway... After re-reading the original issue, am I to take it that it's a relative non-issue, since we have the /learn output modifiers (<> and such)?

@allquixotic
Author

Relatively, sure, it's low priority. But I'd still like to see:

!!tell :Bob no

do the following:

  • Search a reasonable number of messages into the chat history for a user with a display name of "Bob"
  • If found:
    • Get the most recent message ID of a message entered by that user in the current chatroom
    • Perform the same action as if the requester had typed !!tell <msgid> no
  • If not found:
    • Perform the same action as if the requester had typed !!tell Bob no
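
For concreteness, a rough sketch of that lookup-and-fallback logic follows. It is only an illustration: recentMessages, its username/id fields, and the helpers around it are hypothetical placeholders, not the bot's actual internals.

```javascript
// Hypothetical resolution of the target in `!!tell :Bob cmd args`.
// `recentMessages` stands in for whatever bounded window of messages the
// bot remembers for the current room.
function resolveTellTarget(target, recentMessages) {
    if (target.charAt(0) !== ':') {
        // No colon prefix: behave exactly as `!!tell UserName` does today.
        return { username: target };
    }

    var name = target.slice(1).toLowerCase();

    // Walk backwards so the newest matching message wins.
    for (var i = recentMessages.length - 1; i >= 0; i--) {
        if (recentMessages[i].username.toLowerCase() === name) {
            // Found: reply to this message id, as if `!!tell <msgid> ...`
            // had been typed, so the response can onebox.
            return { messageId: recentMessages[i].id };
        }
    }

    // Not found: fall back to the old `!!tell Bob ...` behavior.
    return { username: target.slice(1) };
}
```

So resolveTellTarget(':Bob', history) would yield a message ID when Bob has spoken recently, and plain old { username: 'Bob' } otherwise.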

@rlemon
Collaborator

rlemon commented Dec 13, 2013

I approve this idea.
