User Script Sandbox for detection of potentially malicious scripts #53

Open

sizzlemctwizzle opened this issue Nov 24, 2013 · 6 comments

Labels: enhancement, tracking upstream

Comments

@sizzlemctwizzle (Member)

One of the things I've been toying with is the idea of running scripts, when they're uploaded, inside a sandbox on the server that would replicate the Tampermonkey environment (since Node.js uses V8, it would be trivial for a malicious author to detect the difference between V8 and SpiderMonkey, e.g. if (typeof [] === 'object') return;) in order to detect potentially malicious code. Here's my logic to do this:

  • get the source of a page matching a random @include rule of the script using the Google Search API. For @include *, just use http://google.com
  • use jsdom with the most advanced options to provide a DOM for the page. I might need to make some modifications to make it more believable, like setting the request headers jsdom uses when fetching remote scripts and images in the page to look like Google Chrome requests.
  • replicate the GM_* APIs and the metadata behavior (@require and @resource) that Tampermonkey supports
  • run the script in a sandbox with the window from jsdom as its global object and detect potentially malicious calls using __defineGetter__ for eval and document.cookie (see the sketch after this list)
  • provide warnings for scripts that use these risky functions and suggest safer alternatives
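
Here's a rough sketch of what the last few steps might look like (assuming the callback-style `jsdom.env` API from that era and Node's `vm` module; `analyze`, the lone `GM_xmlhttpRequest` stub, and the warning strings are placeholder names for illustration, not code that exists in the repo):

```js
var vm = require('vm');
var jsdom = require('jsdom');

// Run an uploaded userscript against a jsdom window and collect warnings
// about risky calls instead of blocking the upload outright.
function analyze(scriptSource, pageUrl, done) {
  // Old callback-style jsdom API; later jsdom releases replaced jsdom.env.
  jsdom.env(pageUrl, function (errors, window) {
    if (errors) { return done(errors); }

    var warnings = [];

    // Trap reads of document.cookie with a getter, as proposed above.
    window.document.__defineGetter__('cookie', function () {
      warnings.push('script reads document.cookie');
      return '';
    });

    // Use the jsdom window as the sandbox's global object.
    var sandbox = vm.createContext(window);

    // Shadow the global eval so any use of it is recorded
    // (a fuller version would still delegate to the real eval).
    sandbox.eval = function () {
      warnings.push('script calls eval()');
    };

    // One stub of the GM_* API surface; a real sandbox would stub them all.
    sandbox.GM_xmlhttpRequest = function () {
      warnings.push('script calls GM_xmlhttpRequest()');
    };

    try {
      vm.runInContext(scriptSource, sandbox, { timeout: 1000 });
    } catch (e) {
      warnings.push('script threw: ' + e.message);
    }

    done(null, warnings);
  });
}
```

Whatever ends up in `warnings` could then drive the notices mentioned in the last step.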

Note: I'm also doing this because it fills one of my requirements to get this project accepted by my professor as my capstone project.

Edit (4/10): Fleshed things out a little more.
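
To make the V8 vs. SpiderMonkey point concrete: a script could fingerprint the engine with checks along these lines (just an illustration using historically engine-specific extensions; not something any known script does):

```js
// toSource()/uneval() were SpiderMonkey-only extensions at the time.
function probablySpiderMonkey() {
  return typeof ({}).toSource === 'function' || typeof uneval === 'function';
}

// Error.captureStackTrace has historically been a V8-only extension.
function probablyV8() {
  return typeof Error.captureStackTrace === 'function';
}

// A Greasemonkey-targeted payload could simply refuse to run when it finds
// itself on V8 (e.g. inside a Node.js sandbox), which is why the sandbox
// replicates the Tampermonkey (Chrome/V8) environment instead.
if (!probablySpiderMonkey() && probablyV8()) {
  // return;  // inside the userscript's own wrapper function
}
```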

@cletusc (Contributor) commented Nov 24, 2013

I wouldn't refuse the upload or even unlist the script, but slap it with a warning so users know it is potentially unsafe. This way, if there is a false positive, users who know what they are doing can still install it.

Speaking of false positives: what happens when a script author uploads a version that triggers a false positive? How would the author go about removing the notice, if at all?

@sizzlemctwizzle (Member, Author)

slap it with a warning so users know it is potentially unsafe.

Works for me. I'm really just doing this to meet a requirement lol

what happens when a script author uploads a version that triggers a false positive? How would the author go about removing the notice, if at all?

By uploading a version that doesn't trigger a false positive. Scripts are not just checked on initial upload, but for every new version. Perhaps we could have moderators look over scripts that are caught by this method and give the script an exception so the warning is removed and doesn't show up in any future versions of the script. Still seems like a good way to game the system: 1) Upload a script with non-malicious code that triggers a false positive. 2) Get warning removed by a moderator. 3) Upload malicious script and receive no warning.

I don't think the system should have any human element; it should just automatically put a warning on any script that fails the test.

@jerone (Contributor) commented Nov 24, 2013

@sizzlemctwizzle Maybe off topic, but what about using data mining to prevent script copies, as an assignment?

@sizzlemctwizzle (Member, Author)

I'd like scripts to be forkable on the site, and of course that would mean giving proper attribution to the original author (perhaps an author could opt out of this). I guess we could look for copies that weren't forked. It's not something that really interests me, but if someone wants to do that, I wouldn't be opposed to it.

sizzlemctwizzle modified the milestones: 0.4, 0.3 on May 7, 2014
Martii pushed a commit to Martii/OpenUserJS.org that referenced this issue Jul 8, 2014
* Some adherence to STYLEGUIDE.md
* Some routine optimization
* Additional parameter for `parseMeta` to not clobber a parsed metadata block e.g. not normalized
* Default to normalized meta with existing stored Object on S3
* Add normalized flag to all existing `parseMeta` calls

Applies to OpenUserJS#232 and eventually OpenUserJS#243 and OpenUserJS#53
@Martii (Member) commented Nov 26, 2014

Just wanted to update everyone on this one... since node 0.10.x doesn't fully support ES6 yet, we will never be able to do this 100% effectively: `let` statements, and related identifiers/operators/etc., will always throw whatever "warning" or exception we put in... some document body parsers are also limited to ES5.1 strict. This of course could change when V8 actually supports full Harmony. tick tock tick tock... going on ~8 years of waiting now. :)
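
As a rough sketch of the limitation (assuming we only lean on Node's own compile step; the `compiles()` helper is just for illustration), merely compiling an ES6 userscript already trips the check on an ES5-only engine:

```js
var vm = require('vm');

// Attempt to compile (not run) an uploaded script; on node 0.10.x's V8 an
// ES6 construct like `let` already throws a SyntaxError at this stage, so
// every ES6 script would get flagged regardless of what it actually does.
function compiles(source) {
  try {
    new vm.Script(source); // vm.createScript(source) on node 0.10.x
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) {
      return false;
    }
    throw e;
  }
}

console.log(compiles('let x = 1;')); // false on node 0.10.x, true once V8 supports it
```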

Martii added the tracking upstream label on Jan 24, 2015
Martii pushed a commit to Martii/OpenUserJS.org that referenced this issue Jan 21, 2016
* Add in a basic handler for compiling/minification errors and output to stdout with *express-minify* ... not doing `stack` as that consumes a bit... and `body` can be up to 1MiB... so that will never get logged. Feature request of breezewish/express-minify#44
* *compression* is just dep updates according to their changelog

**NOTE**:
* This can still use some TLC but hacking it in just so we get some of this data to start

Also applies to OpenUserJS#432 and very loosely OpenUserJS#53
@Martii (Member) commented Nov 23, 2017

Update ref:
