User Script Sandbox for detection of potentially malicious scripts #53
I wouldn't block them from being uploaded, or even unlist them, but I'd slap them with a warning so users know they are potentially unsafe. That way, if there is a false positive, users who know what they are doing can still install the script. Speaking of false positives: what happens when a script author uploads a version that is a false positive? How would the author go about removing the notice, if at all?
Works for me. I'm really just doing this to meet a requirement lol
By uploading a version that doesn't trigger a false positive. Scripts aren't just checked on initial upload, but for every new version. Perhaps we could have moderators look over scripts that are caught by this method and give the script an exception, so the warning is removed and doesn't show up on any future versions of the script. Still, that seems like a good way to game the system:

1. Upload a script with non-malicious code that triggers a false positive.
2. Get the warning removed by a moderator.
3. Upload a malicious version and receive no warning.

I don't think the system should have any human element; it should automatically put a warning on any script that fails the test.
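The fully automated policy argued for above could be sketched roughly as follows. This is a hypothetical illustration, not actual OpenUserJS code: `reviewVersion`, `scanScript`, and the `flagged` field are all made-up names.

```javascript
'use strict';

// Hypothetical sketch: every uploaded version is re-scanned, and any
// version that fails the check is flagged. There is no moderator
// exception list, so a previously cleared script cannot sneak a
// malicious update past the check.
function reviewVersion(version, scanScript) {
  var verdict = scanScript(version.source); // { passed: bool, reasons: [] }
  version.flagged = !verdict.passed;        // recomputed for every version
  version.flagReasons = verdict.reasons;
  return version;
}

// Toy scanner that flags any use of eval().
function toyScanner(src) {
  var bad = /\beval\s*\(/.test(src);
  return { passed: !bad, reasons: bad ? ['uses eval()'] : [] };
}

var clean = reviewVersion({ source: 'console.log("hi");' }, toyScanner);
var dirty = reviewVersion({ source: 'eval(payload);' }, toyScanner);
console.log(clean.flagged, dirty.flagged); // false true
```

The point of the sketch is that the warning is a pure function of the uploaded source, never of moderator state, which closes the three-step loophole above.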
@sizzlemctwizzle Maybe off topic, but what about using data mining to prevent script copies as an assignment?
I'd like scripts to be forkable on the site, and of course that would mean giving proper attribution to the original author (perhaps an author could opt out of this). I guess we could look for copies that weren't forked. It's not something that really interests me, but if someone wants to do that, I wouldn't be opposed to it.
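Looking for copies that weren't forked could start from a plain text-similarity measure. The sketch below uses Jaccard similarity over token shingles, which is one common approach to near-duplicate detection; it is illustrative only and not anything the site implements.

```javascript
'use strict';

// Build the set of k-token shingles (overlapping token windows) of a source.
function shingles(source, k) {
  var tokens = source.split(/\W+/).filter(Boolean);
  var set = {};
  for (var i = 0; i + k <= tokens.length; i++) {
    set[tokens.slice(i, i + k).join(' ')] = true;
  }
  return set;
}

// Jaccard similarity of two scripts' shingle sets: |A ∩ B| / |A ∪ B|.
// A score near 1.0 suggests a likely unattributed copy.
function jaccard(a, b, k) {
  var sa = shingles(a, k || 3);
  var sb = shingles(b, k || 3);
  var inter = 0, union = 0, key;
  for (key in sa) { union++; if (sb[key]) inter++; }
  for (key in sb) { if (!sa[key]) union++; }
  return union === 0 ? 0 : inter / union;
}

var original = 'function greet(name) { return "Hello, " + name; }';
console.log(jaccard(original, original));     // 1 (identical)
console.log(jaccard(original, 'var x = 1;')); // 0 (unrelated)
```

A real detector would also want normalization (whitespace, comments, minification) before shingling, since trivial edits otherwise defeat the comparison.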
* Some adherence to STYLEGUIDE.md
* Some routine optimization
* Additional parameter for `parseMeta` to not clobber a parsed metadata block, e.g. not normalized
* Default to normalized meta with existing stored Object on S3
* Add normalized flag to all existing `parseMeta` calls

Applies to OpenUserJS#232 and eventually OpenUserJS#243 and OpenUserJS#53
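A normalization flag on a metadata parser along the lines described above might look like the sketch below. The signature, the lowercasing rule, and the line format are assumptions for illustration; they are not the project's actual `parseMeta` implementation.

```javascript
'use strict';

// Sketch of the idea in the notes above: parseMeta gains a flag so callers
// can get either the raw (unclobbered) metadata keys or normalized ones.
// The normalization shown (lowercasing keys) is an illustrative assumption.
function parseMeta(blockText, normalize) {
  var meta = {};
  blockText.split('\n').forEach(function (line) {
    var m = line.match(/\/\/\s*@(\S+)\s+(.*)/);
    if (!m) return;
    var key = normalize ? m[1].toLowerCase() : m[1];
    if (!meta[key]) meta[key] = [];
    meta[key].push(m[2].trim());
  });
  return meta;
}

var block = '// @Name   Example\n// @include *';
console.log(parseMeta(block, false)); // keys: Name, include
console.log(parseMeta(block, true));  // keys: name, include
```

Keeping the raw form available matters because the un-normalized block is what actually shipped in the script file, while the normalized form is what lookup code wants to key on.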
Just wanted to update everyone on this one... since node 0.10.x doesn't fully support ES6 yet, we will never be able to do this 100% effectively.
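The limitation above can be made concrete by probing at runtime whether the engine parses a given syntax feature. This is a generic feature-detection sketch, not project code:

```javascript
'use strict';

// Probe whether the running engine can parse a given snippet.
// Under node 0.10.x (V8 3.14), ES6 syntax such as `class` throws a
// SyntaxError, which is why a server-side sandbox there cannot
// faithfully execute scripts written against modern engines.
function supportsSyntax(snippet) {
  try {
    new Function(snippet); // parsed but never invoked
    return true;
  } catch (e) {
    return false;
  }
}

console.log(supportsSyntax('var a = 1;')); // true everywhere
console.log(supportsSyntax('class A {}')); // true on modern engines, false on node 0.10.x
```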
* Add in a basic handler for compiling/minification errors and output to stdout with *express-minify*... not doing `stack` as that consumes a bit, and `body` can be up to 1 MiB, so that will never get logged. Feature request of breezewish/express-minify#44
* *compression* is just dep updates according to their changelog

**NOTE**:
* This can still use some TLC, but hacking it in just so we get some of this data to start

Also applies to OpenUserJS#432 and very loosely OpenUserJS#53
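The logging constraint described above (message to stdout, no `stack` since it's verbose, no `body` since it can be up to 1 MiB) could be expressed as a small helper, independent of *express-minify*'s actual hook API, which is not shown here:

```javascript
'use strict';

// Illustrative sketch of the logging policy in the notes above: report a
// minification error with its message only -- omit `stack` (verbose) and
// `body` (up to 1 MiB) so the log line stays small.
function logMinifyError(err, assetPath) {
  var line = 'minify error: ' + assetPath + ': ' + (err && err.message);
  console.log(line); // stdout, per the note above
  return line;       // returned so the behavior is testable
}

var out = logMinifyError(new Error('Unexpected token'), '/js/app.js');
```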
Update ref:
One of the things I've been toying with is the idea of running scripts, when they're uploaded, inside a sandbox on the server that would replicate the Tampermonkey environment (since Node.js uses V8, it would be trivial for a malicious author to detect the difference between V8 and SpiderMonkey, e.g. `if (typeof [] === 'object') return;`) in order to detect potentially malicious code. Here's my logic to do this:

* Fetch a page matching the `@include` rule of the script using the Google search API; for an `@include *`, just use http://google.com
* [...] (`@require` and `@resource`) that Tampermonkey supports

Note: I'm also doing this because it fills one of my requirements to get this project accepted by my professor as my capstone project.
Edit (4/10): Fleshed things out a little more.