
Exploits and malware policy updates #397

Merged
merged 12 commits into from
Jun 4, 2021

Conversation

Contributor

@vollmera vollmera commented Apr 29, 2021

5/20 update -- Based on feedback here, we pushed out a second iteration of changes - more details in #397 (comment)

5/3 update -- Based on feedback here, we pushed out a first iteration of changes - more details in
#397 (comment)


We're opening this PR to update our policies around security research, malware, and exploits, so that the security community can collaborate on GitHub under a clearer set of terms.

Read more at https://github.blog/2021-04-29-call-for-feedback-policies-exploits-malware

We are also making some general updates for clarity and documentation (not new policy or practice):

  • moving up the section on Spam and Inauthentic Activity in our Acceptable Use Policies so it directly follows the sections on content and conduct restrictions
  • documenting our existing appeals process in our Community Guidelines by adding an Appeal and Reinstatement section

We'll keep this PR open for 30 days and welcome your feedback.

-- updating this comment to add more context about our intent and goals from @mph4 👇 --

Hello again. I wanted to chime in with a few additional thoughts based on the response so far from the community, as well as clarify a few points, particularly that our intention is not to introduce a policy change, but rather to clarify existing language. We’re reading every piece of feedback and are grateful that you’re taking the time to comment here. We are listening and learning from it.

First, I want to say that it’s quite clear from the feedback that our attempt to provide clarity here has fallen short and has led to some interpretations that don’t match our intention. While we are continuing to work toward a next iteration, I wanted to emphasize a few points:

  1. We allow, support, and promote software that is dual use. That is not changing. We understand that many security research projects on GitHub are dual use and broadly beneficial to the security community. We assume positive intention and use of these projects to promote and drive improvements across the ecosystem. While many of these tools can be abused, we do not intend or want to adjudicate intent or solve the question of abuse of dual use projects that are hosted on GitHub. Many of the projects cited in this ongoing discussion, such as mimikatz and metasploit, are incredibly valuable tools, and our goal is to protect them from what we felt was overly broad language in our existing AUP that could be viewed as hostile toward these projects as written. We will work to communicate and clarify that such projects are welcome.
  2. Our intent is to clarify what crosses the line into a policy violation. Our existing language qualified on “active malware and exploits”, which was too broad in practice. Our intent is to narrow the scope to “malware and exploits that are directly supporting unlawful activity”: for example, a repository purposely created to deliver exploits or malware into specific victim infrastructure, as opposed to the unintended abuse of a dual use project. We do not allow anyone to use our platform in support of unlawful active attacks.
  3. We acknowledge your feedback that the language around “harm” is too broad and the concern that implementing this as-is would be a regression. Our intent is to capture situations where content on the platform is posted in direct support of unlawful attack or malware campaigns that are causing technical harm, such as overconsumption of resources, physical damage, downtime, denial of service, or data loss. For example, using GitHub as a malware CDN. We want to clarify and narrow our policy scope with regards to restricting content, not increase it.
  4. Our updated language asks that you provide a security contact for your dual use projects, but does not require it. The intent is to enable 3rd parties to reach out directly and ideally resolve with the researcher prior to escalating and filing abuse reports with GitHub.
  5. We also appreciate the feedback that disclosure methodology is a choice for the security research community. We do not aim to dictate how vulnerability disclosure occurs on GitHub as policy, but do encourage a maintainer-centric approach. We take our role as an impartial and trusted code custodian with all the gravity it deserves, and welcome debate and feedback as your voice is central to our mission of ensuring GitHub is a home for developers and security researchers alike.

We are taking your continued feedback into careful consideration and are actively working to incorporate it, along with your suggestions, into revised language that better reflects this goal in the coming days.

@curi0usJack

By using verbiage such as "contains or installs malware or exploits that are in support of ongoing and active attacks that are causing harm" in your use policy, you are effectively designating yourselves as the police of what constitutes "causing harm". By one person's definition, that may just be an exploit proof of concept, by another that may be the whole metasploit framework. How do you plan on judging this, and whose criteria do you plan on using? What definitions are you proposing for these terms? As with most sites these days, good intentions for content moderating will likely just end up in unnecessary censorship of content that the loudest group objects to.

@edoardottt

here I smell the scent of gitlab...

@strazzere

https://github.com/github/site-policy/pull/397/files#diff-0444222c00da7f2dbfae079e3792bc2bfaa66939ca95891e3c9e4145655905dbR89

Is there a defined disclosure timeline framework for people to know how these disputes will be handled? How does someone "complain" that they're trying to fix things, and what burden of proof must they show?

Conversely - when you have decided to remove content due to this "risk" will the decision and proof used be made public or shared with the content creator? Is there a timeline added for when the content can be re-released?

This seems very vague and arbitrary currently.

@@ -31,7 +31,7 @@ Under no circumstances will Users upload, post, host, execute, or transmit any C

- is or contains false, inaccurate, or intentionally deceptive information that is likely to adversely affect the public interest (including health, safety, election integrity, and civic participation);

- contains or installs any active malware or exploits, or uses our platform for exploit delivery (such as part of a command and control system); or
- contains or installs malware or exploits that are in support of ongoing and active attacks that are causing harm; or


This statement should not be modified from the original.


Agreed. What does "active attack" even mean? Some viruses from the 90s are still attacking the few vulnerable devices that remain (like old industrial machines). Can that be considered an "active attack"?


If your goal is to clarify, removing specifics and replacing them with vague handwaving doesn't do it. Using GitHub as a command-and-control system is a very specific example where it should be clear when somebody has violated the rule. But "support of ongoing and active attacks" is a vague catchall that's impossible to determine if somebody has violated. Hackers have already automated downloading of my code in their attacks, meaning that I'm technically violating the new rules. I probably wouldn't get canceled/censored, but for those you do choose to cancel/censor, it would be an arbitrary and prejudicial decision, not one based upon facts.


How will you be the arbiters of what is, and is not causing harm? What will the threshold be? My local coffee shop (with online ordering) might see a simple exploit kit as something that could ruin their business completely, while my bank is likely to have an appropriate D&D strategy to defend against this.

When considering an ongoing attack, is there some sort of limitation, or is the expectation that GitHub will maintain its own threat intelligence and make decisions on this basis ad infinitum?



MSFT's intentions are quite clear here: first they deleted the proof of concept for Exchange, and now this policy.
Anything like Responder, CrackMapExec, Empire, etc. can be banned tomorrow, because it's inconvenient for MSFT to have tools on GitHub that exploit 35-year-old vulnerabilities they don't want to patch. When they bought GitHub, it was clear to me that it was not to promote open source (they always complained about it and argued against it) but to control what kind of code is convenient, and which kind of code should be online and which should not.
In any case, it won't have any effect on malware, viruses, spyware, ransomware, or state-sponsored malware, as they are all closed source ;)


@KOLANICH KOLANICH Apr 30, 2021


I probably wouldn't get canceled/censored, but for those you do choose to cancel/censor, it would be an arbitrary and prejudicial decision, not one based upon facts.

BTW, as the owner of the resource, M$ ₲H has the right to block any user (even one who has never violated the rules) and any repo at its own discretion, without disclosing the actual criteria of discrimination. That includes criteria like being a person whose political position and/or activity (i.e. being a proponent of free software, being a Stallman sympathizer, developing free software that competes with M$'s, or just not using Windows) is potentially harmful to the business in the long term, being not profitable enough ("free-rider"), or being of a specific nationality/religion/geographical location/political orientation/sexual orientation/Hinduism caste/prison caste/occupation/age/employer/luck level (at random). So de facto the proposed terms are already active; they just were not codified.


This is a good change only in that the original text said:

 Under no circumstances will Users upload, post, host, execute, or transmit any Content that

[... SNIP ...]

- contains or installs any active malware or exploits [...]

This old text completely forbids the publication of malware and exploits via GitHub. On one hand this is a bad policy, as "malware and exploits" is a broad term that covers professional penetration testing tools such as Metasploit Framework. On the other hand, the old text was clearly not being enforced or policed. In light of the recent ProxyLogon incident and this policy change being put forth, we can expect that any policy (either the old one or a new one) will be more actively enforced.

The change to the text does not go far enough in fixing this issue, as raised by others. In my opinion, at the very least the proposed policy must be updated to provide clarity on:

  • What "ongoing and active attacks that are causing harm" means, who makes this determination, and how they make the determination
  • The transparency that GitHub will provide when removing content under this policy
  • The option for individuals and the community to appeal decisions made to remove content under this policy
  • An assurance (codified in policy) that a declaration of an "ongoing and active attack that [is] causing harm" will be short and sharp, and will be lifted as soon as possible so that professional security tooling can be made available to detect and mitigate the "harm"

For what it's worth, I am in support of a policy that forbids the use of GitHub as a direct malware delivery mechanism (i.e. a CDN) or as a malware command and control mechanism. In my opinion, GitHub should generally permit collaboration on and publication of exploit code and malware, and the original text as it relates to this activity should simply be struck from the policy entirely.


@KOLANICH KOLANICH Apr 30, 2021


The "old" text disallowed hosting files on GH that would infect users when a web page on GH is opened or when a repo is cloned. The proposed text forbids any code that can potentially be used in attacks, e.g. if some malware infects a machine, then downloads the source code of, say, hashcat, builds it, and then uses it to brute-force hashes on infected servers to advance the attack, then hashcat is forbidden too.


The "old" text disallowed hosting files on GH that would infect users when a web page on GH is opened or when a repo is cloned

The full text was:

Under no circumstances will Users upload, post, host, execute, or transmit any Content to any repositories that:
[...]
contains or installs any active malware or exploits, or uses our platform for exploit delivery (such as part of a command and control system); or
[...]

I disagree with your reading of the old text. Take for example Metasploit Framework. IMO it "contains" "active malware [and] exploits".

Granted I'm not a lawyer. If you aren't either, all we have is our own understandings of the text.



It's worth considering some cases here. If we take GH at its word that it's trying to protect the good pentesting and research tools, then the language should be clear enough to allow both of these:

  1. mimikatz - This tool has been instrumental in helping to fix vulnerabilities in how Windows handles passwords. It's also used by basically every attacker out there. Will this be taken down? It's used in an active attack every day. This is where the "active attacks that are causing harm" language falls short.
  2. Repos like malware-samples or Phishing Kit Tracker. These host real-world malware used by live attackers, in order to shed light on them and enable defenders and researchers.

@Technetium1

This effectively allows GitHub to become the arbiter of what is good and bad. Changing this phrasing will be a regression for the information security community here. There is no clear burden of proof for this process, nor does there seem to be a straightforward public dispute process. If someone distributes some of my innocent code in malware in a malicious way, this seems to imply that my code would be removed because someone else used it in an attack? What is the definition of causing harm? Does this mean that malware that's been dead for a decade will be removed retroactively? GitHub is really going to lose a lot of support when the infosec community gets shoved away because of these actions.

TL;DR
How about listening to your users at least as much as you listen to your lawyers?

@ceramicskate0

ceramicskate0 commented Apr 29, 2021

Please don't make the cyber security community move off the platform due to something like a bad idea for a policy. We like using GitHub, but other options do exist. I vote it's not broken: this platform's current policy pushed the entire cyber security space light-years ahead in the short time it's been mainstream. It's not a bug, it's a feature.

### 4. Spam and Inauthentic Activity on GitHub
Automated excessive bulk activity and coordinated inauthentic activity, such as spamming, are prohibited on GitHub. Prohibited activities include:
* bulk distribution of promotions and advertising prohibited by GitHub terms and policies
* inauthentic interactions, such as fake accounts and automated inauthentic activity

@tunip3 tunip3 Apr 29, 2021


https://github.com/gelstudios/gitfiti
Will this block commit history modification and git commit art?
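For context on the question above: tools like gitfiti draw on the contribution graph by creating empty commits with backdated timestamps, since git reads the author and committer dates from environment variables. A minimal sketch of the mechanism (repository path and identity are illustrative):

```shell
# How commit-art tools place "pixels" on the contribution graph:
# git lets the author/committer dates be set via environment variables,
# so empty commits can be created on arbitrary past days.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "art@example.com"   # illustrative identity
git config user.name "art"
GIT_AUTHOR_DATE="2021-01-04T12:00:00" \
GIT_COMMITTER_DATE="2021-01-04T12:00:00" \
git commit --allow-empty -q -m "pixel"
git log -1 --date=short --format=%ad   # prints 2021-01-04
```

Whether this counts as "automated inauthentic activity" is exactly what the commenters below are asking.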



gelstudios/gitfiti
Will this block commit history modification and git commit art?

It certainly would! That's automated and inauthentic activity.
Edit: It seems that GitHub only selectively enforces this. You probably won't get in trouble unless you routinely abuse it.


This is actually not a new statement. If you notice, GitHub moved it up from bullet point 9 to bullet point 4, but the same provision exists.


IMO, this should remain. GitHub has the right to defend its systems from abuse conditions.


@fos111 fos111 May 1, 2021


It was never abuse. "Security researchers do not 'support ongoing and active attacks that are causing harm'; they instead raise awareness of those risks and help the concerned parties defend against them."

@joshfaust

I'm not sure what the goal of this policy change is, other than to possibly hinder public and highly beneficial security research and development. By your definition, several of my repositories could be removed, including several that have aided in active detection engineering, threat hunting, and yes, red teaming to help build better security programs. Please don't limit and police our code when, more times than not, it's beneficial to the infosec community as a whole, not just attackers.


@TACIXAT TACIXAT left a comment


I appreciate the spirit of "publishing code is OK but don't use us as a CNC or CDN for malware". I think the language could be refined to clarify this distinction.

I usually only hear about 0day when the take down occurs, and even then, you can hit up your friends to see who grabbed it. I think this measure is pointless and bends the knee to ethical hacking just being in service of corporations. Additionally, a contact should not be required for publishing. This again just feels in service to corporations who can use legal pressure to attack researchers.

Note, however, that GitHub supports the posting of content which is used for research into vulnerabilities, malware or exploits, as the publication and distribution of such content has educational value and provides a net benefit to the security community. We ask that repository owners take the following steps when posting potentially harmful content for the purposes of security research:

* Clearly identify and describe any potentially harmful content in a disclaimer in the project’s README.md file.
* Provide a designated security contact through a SECURITY.md file in the repository.
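As a reference point, a minimal SECURITY.md meeting this ask might look like the following (all contents, including the contact address, are hypothetical):

```markdown
# Security Policy

This repository contains proof-of-concept exploit code published for
security research purposes; see the disclaimer in README.md.

## Contact

To raise a concern about this project's content, or to reach the
maintainer before filing an abuse report, email security@example.com
(example address) or open a private security advisory on this repository.
```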


As publishing exploits can be a legal grey area, in certain situations someone may wish to publish anonymously (e.g. whistle blowing, forcing a fix with public disclosure). Requiring a contact opens an avenue for legal action against researchers.



This will simply result in an army of strawmen appearing as "security contacts" and nothing more. GitHub has all the means to get in touch with the user who published the content, why introduce this measure? Who exactly does this serve the most and which problem does this solve? Please explain how this possibly personal information will be used and why the currently existing means are not sufficient.


This change essentially mandates that people publish PII somewhere it can trivially be scraped.

That seems like a really, really poor policy change. As @dev-zzo says, GH already has all that's needed to make contact (as well as the ability to take stuff down).

This feels like an example of really, really bad practice - this isn't an organisation exposing services (i.e. the use-case for .well-known/security.txt)

* Clearly identify and describe any potentially harmful content in a disclaimer in the project’s README.md file.
* Provide a designated security contact through a SECURITY.md file in the repository.

Please also note, GitHub will generally not remove exploits in support of vulnerability reporting or security research into known vulnerabilities. However, GitHub may restrict content if we determine that it still poses a risk where we receive active abuse reports and maintainers are working toward resolution.


Once a 0day is published, have take downs been shown to be effective in mitigating spread? My assumption would be that malicious actors have access to it via personal networks at that point. The take down likely just draws more attention to it. I recommend leaving things up.


Not only that, the take down also hinders detection and response teams. Look, the malware is going to exist and is going to be deployed. If we decide to remove it from public eye, we're also choosing to remove our capability to quickly respond.


Depends on how operationalized/weaponizable the exploit is, and how easily it can fit into the attacker's toolkits and operational tempo.

Contributor


This is a large part of what informed the current language in this policy. Take downs tend to just push exploit/0day code into less-public and less-accessible spaces, where defenders and researchers are less likely to see it (and, thus, less likely to be able to create patches or fixes).

Ease of access by researchers becomes more, not less, important when an exploit is being actively exploited.



Ease of access by researchers becomes more, not less, important when an exploit is being actively exploited.

I think this sentence might go against that spirit - However, GitHub may restrict content if we determine that it still poses a risk where we receive active abuse reports and maintainers are working toward resolution. Companies will want 0day or something being actively exploited (read: hurting their reputation) taken down and will push hard for this.



In my experience, by the time an exploit hits public repos, it has either been depleted of most of its hack value and/or spread over "the forums" and whoever wanted has got a copy already or getting it is within direct reach. Removing content does not really help curb any ongoing attacks as they are, well, already ongoing, but it does hinder or prevent future analysis and other research work. If GitHub has any statistical data that would back the contrary, please share.

@vladionescu

If a proof of concept exploit is removed under this policy because it is actively being used in a widespread attack, will it be automatically restored by GitHub once the attack tapers off? Or will the repo owner have to appeal?

Since GitHub is making the determination when a repo's code is being (ab)used, I recommend GitHub have a way to notify the repo owner or the public once it's "all clear".

GitHub should be transparent about takedowns related to this policy section. Publishing metrics or a transparency report on actions regarding this policy would help the community have a more informed conversation. I'm not sure that mandating such transparency in the policy itself is necessary, so long as it happens.

@securesean

How about we follow the responsible vulnerability disclosure method? For the first 30 days after the patch/update is released, a basic PoC exploit is allowed to be hosted on GitHub, but without hosting:

  • raw version - to avoid PowerShell/Bash scripts just being downloaded and executed as one-liners where orgs trust GitHub's domain
  • gist - same reason as above
  • pre-compiled - same reason as above
  • Weaponized/Operationalized versions/wrappers - for example Metasploit modules, support for different OS's, support for different languages, or various prevention or detection bypasses such as AMSI/UAC/App whitelisting, shellcode encoding, automated payload generation, AV evasion, etc.

Then after 30 days, those exploits can be added to kits/scanners/evasion tools like Metasploit. I have seen businesses literally have a vulnerability management program that raises the priority of patching based on whether or not there is a Metasploit module.

And add:

  • GitHub reserves the right to remove code/binaries/scripts/resources if they are being used for active command and control, given a sample of malware or a memory dump that demonstrates the malware is downloading content from GitHub.

@johnjhacking

GitHub is owned by Microsoft; this will go sideways fast. The potential for collusion is considerable: Microsoft's transparency about its own vulnerabilities, and its willingness to allow exploits for them to be published, will be reduced, if not eradicated.

Why is this problematic? Easy. It's problematic because blue teamers rely on exploits/malware/tooling to do their jobs and to emulate threat actors to better protect the environment. Giving GitHub this power will just create an environment where APTs purposefully use any hacking-based tooling or APT-like framework in order to get it removed from GitHub, meanwhile spinning up their own variants.

Do not take this away from the Information Security Community. If Microsoft thinks it's tired of exploitation now, imagine how much worse the situation is going to get if we are taking the exploits out of the hands of the good guys.

@0xGilda

0xGilda commented Apr 29, 2021

approved ✔️

@moorer2k

Ridiculous..

@attritionorg

I don't think it can be automated realistically, but deleting malware that claims to be something else (e.g. a PoC for a known vulnerability with a CVE ID) seems like a good idea. The trick is, exploits and security tools that are clear in what they are and what they do should not be deleted.


@jimdotcom jimdotcom left a comment


Suggest further revision before merge, based on feedback from community.

* using GitHub as a platform for propagating abuse on other platforms
* phishing or attempted phishing

GitHub reserves the right to remove any Content in violation of this policy.


In the event GitHub removes content (for the reasons above, or any reason for that matter), what is the expectation for the end user? Is it just... gone forever? Is it possible to recover the data and move it somewhere else? Do we get a warning or anything? Deprecation?


Note, however, that GitHub supports the posting of content which is used for research into vulnerabilities, malware or exploits, as the publication and distribution of such content has educational value and provides a net benefit to the security community. We ask that repository owners take the following steps when posting potentially harmful content for the purposes of security research:

* Clearly identify and describe any potentially harmful content in a disclaimer in the project’s README.md file.


This and the following line imply that a repository must use Markdown. Not all users wish to use Markdown for their repositories, and any suitable file format should be acceptable.

@nickvangilder

Play stupid (Microsoft) games and win stupid (Microsoft) prizes. This is a terrible idea that simply provides Microsoft with an opportunity to step in at any point and do whatever they deem best in the moment. This will result in a mass exodus from the platform.

@fashionproof

I learn so much from people who share their source and ideas here. I support 3 researchers here that make the world more secure by pointing out issues publicly. They could sell their research privately and only purchasers would know the issues exist.

I’m all for responsible disclosure, but why don’t you spend more money to find flaws and fix your Microsoft code to not be so exploitable, rather than trying to shut down people who publicly point out flaws?

@sickcodes

sickcodes commented Apr 30, 2021

Seeing as the CVE project uses GitHub as an automated registrar of CVEs (https://github.com/CVEProject/cvelist), the very thing that protects GitHub and the entire Internet-connected community, I strongly oppose this PR. Do not merge.

Without Proof of Concepts and the tools to find such vulnerabilities, the Internet as a whole would rewind 10 years, and all PoCs and exploits would be hosted on untrusted networks.

I publish all my advisories, including Proof of Concepts where appropriate, so that even developers who have abstracted code from a given software, under any license can protect themselves.

https://github.com/sickcodes/security/tree/master/advisories

GitHub even awarded one of them a GHSA. Since I have an exploitable PoC attached to that repository, and less than half of affected projects have updated, it would have to be removed. I don’t plan on counting how many examples of that there would be in the GitHub official advisories; however, I assume it would be most.
GHSA-4c7m-wxvm-r7gc

Just the sheer knowledge that an exploit exists is enough to wake up a CEO in the middle of the night, instead of relying on hearsay.

I vehemently oppose the restriction of the ability to both publish & investigate exploits and ongoing cyber threats on GitHub and attest that it would ultimately divide developers and hackers, both good and bad in ways that we may not even know are possible yet.

* Clearly identify and describe any potentially harmful content in a disclaimer in the project’s README.md file.
* Provide a designated security contact through a SECURITY.md file in the repository.

Please also note, GitHub will generally not remove exploits in support of vulnerability reporting or security research into known vulnerabilities. However, GitHub may restrict content if we determine that it still poses a risk where we receive active abuse reports and maintainers are working toward resolution.


The “…and maintainers are working toward resolution” guideline is vague. Which maintainers’ involvement satisfies this prong of the guidelines? The maintainers of a GitHub-hosted project providing the proof of concept? The maintainers of a closed-source project that is the target of the PoC?

Note, however, that GitHub supports the posting of content which is used for research into vulnerabilities, malware or exploits, as the publication and distribution of such content has educational value and provides a net benefit to the security community. We ask that repository owners take the following steps when posting potentially harmful content for the purposes of security research:

* Clearly identify and describe any potentially harmful content in a disclaimer in the project’s README.md file.
* Provide a designated security contact through a SECURITY.md file in the repository.


A SECURITY.md security policy seems positioned to identify how a project wishes to receive responsible disclosures. What would an example SECURITY.md look like for a project documenting a proof of concept?
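One hypothetical sketch of what such a SECURITY.md could look like for a proof-of-concept repository (the CVE placeholder, contact address, and response time are all invented; this is not an official GitHub template):

```markdown
# Security Policy

This repository contains a proof-of-concept exploit for CVE-XXXX-YYYYY,
published for research and defensive purposes. See README.md for the
full disclaimer.

## Reporting a concern

If you believe this code is being abused in an active attack, or you are
a maintainer of an affected project, contact:

- Email: security@example.org (PGP key in pgp-key.asc)
- Expected response time: 72 hours
```

On this reading, the designated contact serves abuse reports and affected maintainers, rather than acting as a disclosure channel for the PoC itself.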


More so, what would a SECURITY.md look like for projects like Metasploit that are actually used with malicious intent?

@mph4
Copy link

mph4 commented Apr 30, 2021

Thanks for the comments so far, really appreciate the interest in this topic.

As I mentioned in the blog post, our intent with this PR is to get your feedback as to how we can better clarify expectations on the platform in service of promoting security research on GitHub. We’re hearing that the changes we’ve asked for feedback on both aren’t providing that clarity yet and are being heard as a proposed policy change, which is not our intention.

We see your comments and would love to collect more feedback on what would lead to clearer guidelines in our policies. Also want to make sure you saw the proposed updates to the Community Guidelines, which elaborate on the Acceptable Use Policies. For example, this PR would move some of the language that was in the AUP (around command and control) to the Community Guidelines, not delete it altogether.

Suggested commits and comments in the files themselves are welcome, along with the continued feedback and conversation in the PR.  We look forward to working with the community to either make changes that improve the clarity of policies or continuing to collaborate under the existing language. Thanks again.

@skorov
Copy link

skorov commented Apr 30, 2021

There is a lot to unpack here.

Firstly, let me say that I don't believe any one organisation or person has the right to determine whether a piece of code is inherently malicious. Least of all a subsidiary of Microsoft, which has had a sketchy track record, at best, of (not) patching vulnerabilities in its software. So, I appreciate the opportunity to add my voice to the discussion.

Having been involved in many penetration tests and many incident response cases, I can tell you that my experience with proofs of concept, C2 infrastructure and other exploitation tools has been a net positive, by far. It gives people a chance to learn (both red and blue teams), it raises the bar for security in general, and it showcases the types of things to expect from actual adversaries.

Policing this type of content will simply create demand in black markets, where these PoCs and attack tools will be available to criminals, but not defenders. Transparency of all things is the best solution here.

I say this as a member of the infosec community for 10 years, but this is my opinion and I won't speak for everyone.

@chadbrewbaker
Copy link

As a Microsoft shareholder, I think this really hurts the GitHub brand. Microsoft shouldn't even take down Azure 0-days. Dogfood the ownership: bugs will always exist, and controls need to be put in place. I could see repos that are being actively pulled by malware getting throttled or requiring GitHub user auth, but that is about it.

@vanhauser-thc
Copy link

These changes clarify GitHub's stance, and IMHO they are community/security interest driven.

What is unclear to me, though, is the (not so hypothetical) 0-day drop of a full-blown exploit in a GitHub repo.
One could argue that this supports unlawful attacks; whether it is "direct" is unclear, because that is not defined in the policy.
One could also argue that this is not dual use and only a PoC would be (not my opinion, but people have different views).

If such an exploit is made available in Metasploit, it would be clear: Metasploit = dual use = OK.
But if the repository just contains this new exploit ...
Maybe I overlooked something in the updated policy, but I think this could use clarification.

@anticomputer
Copy link

These changes clarify GitHub's stance, and IMHO they are community/security interest driven.

What is unclear to me, though, is the (not so hypothetical) 0-day drop of a full-blown exploit in a GitHub repo.
One could argue that this supports unlawful attacks; whether it is "direct" is unclear, because that is not defined in the policy.
One could also argue that this is not dual use and only a PoC would be (not my opinion, but people have different views).

If such an exploit is made available in Metasploit, it would be clear: Metasploit = dual use = OK.
But if the repository just contains this new exploit ...
Maybe I overlooked something in the updated policy, but I think this could use clarification.

There is no qualification on the status of vulnerability knowledge, and no intent to qualify disclosure methodology, in this iteration (in my read). In that sense (and in my understanding), a 0-day drop is essentially just a full disclosure as far as this policy clarification is concerned.

The only qualifier for incident response would be if that 0-day exploit was posted by the project owner in support of an unlawful attack campaign, as evidenced by abuse reports, and prior to any dual use purpose (e.g. notification of a vulnerability is a dual use purpose, network penetration testing tooling is a dual use purpose, etc.). The vulnerability knowledge status alone (i.e. it being 0-day) does not meet the bar, since it's no different from posting a PR containing vulnerability details for a trivially exploitable vulnerability (i.e. no difference between a PoC and a full exploit).

This is my personal read and I do not speak for GitHub, but I believe that is the intent/spirit of the policy clarification.

@github github deleted a comment May 4, 2021
Copy link

@santosomar santosomar left a comment


I completely agree with @justinsteven. Keep in mind that "unlawful active attacks" can use legitimate tools and living-off-the-land resources (e.g., WMI, Python, nmap, PowerShell commands), and those should not be removed from a repo. The use of GitHub as a C2 or exfiltration platform is what should be strictly forbidden, including from private repositories (since I can do something like my-malware/master/install.sh?token=DEADBEEF1337 from a private repo as part of my payload).

@chadbrewbaker
Copy link

chadbrewbaker commented May 4, 2021

These changes clarify GitHub's stance, and IMHO they are community/security interest driven.

What is unclear to me, though, is the (not so hypothetical) 0-day drop of a full-blown exploit in a GitHub repo.
One could argue that this supports unlawful attacks; whether it is "direct" is unclear, because that is not defined in the policy.
One could also argue that this is not dual use and only a PoC would be (not my opinion, but people have different views).

If such an exploit is made available in Metasploit, it would be clear: Metasploit = dual use = OK.
But if the repository just contains this new exploit ...
Maybe I overlooked something in the updated policy, but I think this could use clarification.

Metasploit is not a unicorn. Even an exploit in Metasploit could be downloaded by lazy malware.

I was proposing that only active attack vectors be network throttled, and only in the worst case gated so that you need GitHub authentication. An easy two-level approach that isn't censorship, just latency to hinder bot downloads. It applies to everyone, even Metasploit.
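As a rough illustration of that two-level idea, the gating decision might look like the following sketch (purely hypothetical; the signals, function name, and threshold are invented, and nothing like this appears in the proposed policy):

```python
def serving_tier(bot_downloads_per_hour: int, confirmed_active_campaign: bool) -> str:
    """Pick how to serve a repo flagged as an active attack vector.

    Hypothetical two-level scheme: add latency first to hinder bot
    downloads, and only in the worst case gate behind authentication.
    The threshold below is invented for illustration.
    """
    if confirmed_active_campaign:
        return "require-auth"   # last resort: must be signed in to GitHub
    if bot_downloads_per_hour > 1000:
        return "throttle"       # slow responses; humans barely notice
    return "normal"             # no restriction for everyone else


# The same rules would apply to everyone, Metasploit included.
print(serving_tier(12, False))     # normal
print(serving_tier(5000, False))   # throttle
print(serving_tier(5000, True))    # require-auth
```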

In fact, as a Microsoft shareholder, I want them to make a product out of this. Some sort of dashboard showing active malware, with honeypots in GCP/AWS/Android so it is truly multi-platform and global. Good eBPF traces so sysadmins can patch at the kernel level. Optimyze.ai has a working product if they want a quick investment, but Thomas is focused on energy performance, so Microsoft would have to staff a malware fingerprinting/remediation team on its own dime.

TL;DR: censorship is the old way. Automation to accelerate patching exploits is the new way.

@anticomputer
Copy link

I completely agree with @justinsteven. Keep in mind that "unlawful active attacks" can use legitimate tools and living-off-the-land resources (e.g., WMI, Python, nmap, PowerShell commands), and those should not be removed from a repo. The use of GitHub as a C2 or exfiltration platform is what should be strictly forbidden, including from private repositories (since I can do something like my-malware/master/install.sh?token=DEADBEEF1337 from a private repo as part of my payload).

👍 Responded on this in #397 (comment)


We allow dual use content and assume positive intention and use of these projects to promote and drive improvements across the ecosystem. In rare cases of very widespread abuse of dual use content, we may restrict access to that specific instance of the content to disrupt an ongoing unlawful attack or malware campaign. Restriction is aimed at disrupting ongoing attack or malware campaigns and, where possible, takes the form of putting the content behind authentication. It may, as an option of last resort, involve full removal where this is not possible (e.g. when posted as a gist) or if the content is posted by the account owner as part of direct participation in unlawful attack or malware campaigns that are causing technical harms. We will contact the project owner in an effort to discuss and collaborate on any such response. The goal is to hinder the proliferation of a specific unlawful active attack or malware campaign that is causing technical harm; it is not to purge or restrict any specific dual use content, or copies of that content, from the platform in perpetuity. While we aim to make these rare cases of restriction a collaborative process with project owners, if you do feel your content was unduly restricted, we have an appeals process in place (see "Appeal and Reinstatement").

*GitHub considers the npm registry to be a platform used primarily for installation and run-time use of code, and not for research.*


Is this intended to discourage research like "Dependency Confusion" or would this kind of research still be allowed under this policy change?

https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610


Is this intended to discourage research like "Dependency Confusion" or would this kind of research still be allowed under this policy change?

https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610

GitHub will actively remove typosquatting and dependency confusion attacks from package registries to protect end users. The implication here is that researchers should not expect dependency confusion and typosquatting research to stay up for any prolonged time in package ecosystems such as npm. Over the next few months, GitHub is working to increase the scope of our bug bounty program to include core npm infrastructure and services. This program will provide a clear path to share future research and vulnerabilities in the npm platform, while also offering a way to reward researchers for their work.

@tenable-inc
Copy link

GitHub’s Role in and Responsibility to the Security Community

This is an important discussion and one that has real-world implications for organizations, researchers, defenders and everyday consumers.

When GitHub removed the ProxyLogon exploit from the platform, the security community was prevented from analyzing it — its implications, mitigations, detections and so on. Meanwhile, attackers were busy infiltrating Microsoft Exchange servers across the globe en masse. It would be foolish to think that removing the PoC from GitHub meant that no one would have access to it. It’s quite the opposite, actually. It meant that defenders — providers of essential services, critical industries and the everyday security engineer — would lose the access they needed to understand the PoC even as attackers moved to underground forums to share it widely.

GitHub is an important platform for collaborating and sharing vulnerability intelligence. It is one of the most popular platforms in the security community for a reason. With that kind of power comes responsibility to continue to share information openly, transparently and quickly. However, when implicit trust in a platform is shaken, it takes a lot more than post-facto justification of previous actions for it to be regained and maintained.

There is a path forward by ensuring that material which can be used for defensive purposes is not lumped in the same bucket as weaponized malware. GitHub’s responsibility here is to ensure that the defenders stay ahead in the game and not cause information asymmetry by making it more difficult for security professionals to access this type of sensitive information.

Security through obscurity will never work. GitHub could and should be used by the security community to coordinate defense more easily.

The revisions in the latest iteration of the policy are a good start. However, there are still multiple caveats that could put the security community at a disadvantage, especially when there is an instance of widespread exploitation. We recommend Microsoft remove any verbiage around actions that would censor dual use content on GitHub in any form.

We strongly urge GitHub’s owner, Microsoft, to reconsider its position and realize the power — for good or bad — that GitHub holds. It can be a great asset to secure our global ecosystem, if handled responsibly.

@JasonKeirstead
Copy link

This is not directly related to this issue, but I figure that as of now it may be the simplest way to issue an RFE to the GitHub security team.

Please allow repo owners to mark a repo as requiring 2FA for PRs, commits, and issues.

Currently one can do this at the organization level, but there is no way to do it for a repository in order to cover third-party PRs.

@vollmera
Copy link
Contributor Author

Thanks very much for the continued contributions to this PR. As we continue to listen and iterate based on community feedback, we’ve incorporated many of your code review suggestions in this latest set of revisions to our proposed updates to the Acceptable Use Policies (AUP) and Community Guidelines, to:

- More clearly and narrowly define the scope of platform abuse
- Move examples of abusive content (such as using the platform to manage command and control servers) back to the Acceptable Use Policies
- Broadly and explicitly exempt dual use technology and elaborate on this exemption early on in our Community Guidelines
- Remove redundancy in the elaboration of the policy in the Community Guidelines
- Move examples of what we mean by technical harms to the Community Guidelines
- Clearly indicate in which cases of platform abuse restrictions might apply, and what form those restrictions take
- Further clarify that our SECURITY.md contact recommendation is not a requirement

👀 at the changes https://github.com/github/site-policy/pull/397/files and read below for more details.

For a full summary of previous iterations and updates based on your feedback so far, please see the opening PR comment, which we've updated with that history.


Again, the goal of these updates is to remove any overly broad restrictions on dual use technology on GitHub as it exists in our current policy, and to provide clear guidelines for both ourselves and the security community as a whole that enable, welcome and encourage security research and collaboration on our platform.

As we draw closer to the end of our 30-day comment period on June 1, 2021, we invite your continued discussion and feedback on these changes. If you have direct updates to the AUP or Community Guidelines language you’d like to propose, we strongly encourage the use of commit suggestions in your PR comments.

We would like to thank the community members, project maintainers, and developers who have shared feedback with us in the PR and have reached out for live discussions on this topic. Your feedback and suggestions have been tremendously valuable throughout this process. ✨

@TACIXAT
Copy link

TACIXAT commented May 21, 2021

This reads really well to me. Thank you for iterating with the community on these changes.

@vanhauser-thc
Copy link

I am happy with the newest change.
It overall improves the situation for security researchers and developers.

@Technetium1
Copy link

I think it's a good thing that feedback from the community was taken into account. I feel like this has somewhat extinguished a fire.

Copy link

@santosomar santosomar left a comment


This is a great improvement! Thank you for all your efforts, transparency, and for allowing others in the industry to provide feedback and collaborate! Great leadership!

Copy link
Contributor Author

@vollmera vollmera left a comment


I made some non-substantive changes in f0740c7 and 9cc8c70

  • fixed typos
  • fixed indentation
  • updated front matter

@vollmera
Copy link
Contributor Author

vollmera commented Jun 2, 2021

We’ve reached the end of the comment period on this PR, and want to thank the security community, project maintainers, and interested stakeholders for the feedback and discussion. We’re taking stock of the latest feedback we’ve received relative to the most current revisions, and will be following up with our closing comments and merging this PR in the coming days. 🙇‍♀️

@vollmera vollmera merged commit bec164f into main Jun 4, 2021
@vollmera vollmera deleted the security-research-malware-exploits-policy-update branch June 4, 2021 16:06
@vollmera
Copy link
Contributor Author

vollmera commented Jun 4, 2021

Thanks again for all your feedback! ✨

The policy changes are now live, and you can learn more in our blog post here.

