
Defensive Guidance


Counterfit can help organizations baseline their machine learning models against known public attacks, and hopefully provides a gentle entry point for security practitioners to start exploring machine learning security. Counterfit addresses a single aspect of the security of an ML system: the model itself. There is still a lot to be done. Fortunately, the security principles you know and love apply to machine learning. While there is still research to be done on the best techniques for protecting the underlying algorithms, organizations can and should start fact-finding exercises into where ML is being used in the organization, including by third-party vendors - not just ML vendors, but any vendor that could be using the organization's data to train and deploy models for its customers.

In this early stage of ML security, implementation is less important than developing the processes. Keep the information in an Excel sheet or a wiki, or ask the data science team to collect it and give security access to it. The important thing is to start building awareness inside the security organization and to ensure ML operations do not expose the organization to unnecessary risk. The guidance below describes some specific ML security concerns, along with basic security processes that organizations can start with and may already have in place.

Inventory ML components and systems

Awareness of these systems is the first step. You can’t secure what you don’t know you have. Most organizations already have asset inventories, and it is likely ML systems are already in this inventory. The organization should look through existing inventories and mark systems that are part of ML operations as such.

Where ML inventories differ from traditional asset inventories is that ML models require data to be trained. That data comes from somewhere, is stored somewhere, and is summarized by a model (saved to disk as a file) that is deployed somewhere. The organization should find where production datasets are kept, what type of data (PII, intellectual property, etc.) is in them, and which models are trained on which datasets. In addition to documenting datasets and models, document the services and accounts associated with them.
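
An inventory entry does not need to be elaborate to be useful. As a minimal sketch, assuming a Python-based workflow, the record below shows the kind of fields worth capturing; the field names and values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLAssetRecord:
    """One row of a minimal ML asset inventory (illustrative fields only)."""
    model_name: str                 # e.g. "fraud-scoring-v3"
    model_artifact: str             # where the serialized model file lives
    training_datasets: List[str] = field(default_factory=list)       # paths/URIs of training data
    data_classifications: List[str] = field(default_factory=list)    # e.g. ["PII", "IP"]
    serving_endpoint: str = ""      # where the model is deployed
    owning_team: str = ""           # who to contact
    service_accounts: List[str] = field(default_factory=list)        # identities with access

# Example entry (hypothetical values)
record = MLAssetRecord(
    model_name="fraud-scoring-v3",
    model_artifact="s3://ml-artifacts/fraud/v3/model.pkl",
    training_datasets=["s3://datalake/transactions/2020/"],
    data_classifications=["PII"],
    serving_endpoint="https://internal.example.com/score",
    owning_team="fraud-ds",
    service_accounts=["svc-fraud-train", "svc-fraud-serve"],
)
```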

Additionally, ML workspaces in cloud environments should be added to this inventory. For example, some workspaces expose JupyterLab to the internet and may have access to cloud storage, compute, or the ability to deploy a model to a public IP. While attackers might not care about the machine learning aspects of the workspace, compromising a workspace accidentally exposed to the internet still offers a foothold into the network, with unknown impact.
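
A quick, low-effort check during this inventory is whether a discovered workspace endpoint answers without authentication. The sketch below is an illustration only: the URL is a placeholder, and exact responses vary by Jupyter version and any proxy in front of it.

```python
import requests

# Placeholder URL for a workspace endpoint discovered during inventory
JUPYTER_URL = "https://example-workspace.example.com"

def appears_unauthenticated(base_url: str) -> bool:
    """Return True if the Jupyter contents API answers without a token.

    A 200 response here usually means anyone who can reach the endpoint can
    list and read notebooks; a 401/403 or a redirect to a login page suggests
    authentication is at least being enforced.
    """
    resp = requests.get(f"{base_url}/api/contents", timeout=10, allow_redirects=False)
    return resp.status_code == 200

if __name__ == "__main__":
    if appears_unauthenticated(JUPYTER_URL):
        print("WARNING: workspace appears reachable without authentication")
    else:
        print("Endpoint requires authentication or is otherwise gated")
```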

Ensure that ML systems adhere to current secure configuration policies

Often organizations use their asset inventories to ensure hosts meet secure configuration standards. The organization should ensure that ML systems are included in these policies and are kept up to date. Much like developers, data scientists and ML engineers require administrative privileges over ML systems or services to perform their job function. Permissions that violate existing guidance regarding privilege tiering should be found and remediated immediately. Minimum secure configurations for ML systems should include:

  • Logging production model inference telemetry and performance data to a central location.
  • File Integrity Monitoring for production model files and associated production datasets (a minimal sketch follows this list).
  • Logging access to ML systems to a central location.
  • Adequately gating models and associated resources with proper authentication.
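
Dedicated file integrity monitoring tooling is the better long-term answer, but the idea can be sketched in a few lines. The paths and baseline file below are hypothetical; in practice, alerts would be forwarded to the same central logging location mentioned above.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical locations of production model artifacts and the stored hash baseline
MODEL_PATHS = [Path("/srv/models/fraud-scoring-v3/model.pkl")]
BASELINE_FILE = Path("/srv/models/baseline_hashes.json")

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large model files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_integrity() -> None:
    baseline = json.loads(BASELINE_FILE.read_text())
    for path in MODEL_PATHS:
        current = sha256_of(path)
        expected = baseline.get(str(path))
        if current != expected:
            # Forward this to central logging/alerting in a real deployment
            print(f"ALERT: {path} changed (expected {expected}, got {current})")

if __name__ == "__main__":
    check_integrity()
```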

Ensure that ML systems adhere to compliance requirements

While it might not be immediately obvious how a production model (or the associated data) could be subject to compliance requirements, it has been shown that models can memorize training data; if the training data contained PII, it is possible that PII can be recovered during inference. For example, in very large datasets like CommonCrawl, it is not known how much PII exists in the dataset, nor how much PII models trained on it "remember". Moreover, in the event PII is exposed via an ML model, it may not be clear whether the PII came from a public dataset or a private one. It could be that public PII memorized by a model overlaps with current customer information.
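
One simple fact-finding exercise ahead of any formal compliance review is to scan exported training data for obvious PII patterns. The sketch below is intentionally crude: the file path is hypothetical, and two regexes will miss most real-world PII, but it can flag datasets that warrant a closer look.

```python
import re
from pathlib import Path

# Illustrative patterns only; real PII discovery needs far more than two regexes
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_file(path: Path) -> dict:
    """Count apparent PII matches per pattern in a text export of a dataset."""
    text = path.read_text(errors="ignore")
    return {name: len(pattern.findall(text)) for name, pattern in PII_PATTERNS.items()}

if __name__ == "__main__":
    counts = scan_file(Path("exports/training_data_sample.csv"))  # hypothetical path
    print(counts)
```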

The prevalence and impact of this phenomenon is unknown, and it will be different for each organization. There are a lot of open security questions that need to be resolved. Again, in this early stage of ML security it is important to enumerate the risks associated with ML systems. These security activities help ensure the organization remains, and plans to remain, in good standing with existing compliance requirements. Additionally, there is movement on AI governance that ML-heavy organizations can get a head start on by applying foundational security principles to their ML operations.

Perform Technical Assessments

Most organizations are well equipped to point offensive security resources toward ML systems and environments. Traditional vulnerabilities and their associated risks can be found in ML systems and should be remediated accordingly. Counterfit aims to help organizations assess their machine learning models, and a successful blending of both disciplines (infosec and ML) is key to protecting ML. Offensive teams should consider collaborative exercises with data science teams for the greatest understanding and impact.
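
As one illustration of what such an assessment can look like, the sketch below uses the Adversarial Robustness Toolbox (ART), one of the attack frameworks Counterfit builds on, rather than Counterfit's own interface. It trains a throwaway scikit-learn classifier and runs ART's black-box HopSkipJump attack against it; in a real assessment the target would be the organization's own model or a query interface to its endpoint.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Throwaway stand-in for a production model
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model for ART, giving it the valid feature range
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# HopSkipJump is a black-box, decision-based attack: it only needs predicted labels
attack = HopSkipJump(classifier, targeted=False, max_iter=10, max_eval=1000)

x_benign = X[:5]
x_adv = attack.generate(x=x_benign)

print("original predictions:   ", model.predict(x_benign))
print("adversarial predictions:", model.predict(x_adv))
print("mean L2 perturbation:   ", np.linalg.norm(x_adv - x_benign, axis=1).mean())
```

If the adversarial predictions flip while the perturbation stays small, that is the kind of baseline finding worth recording alongside traditional vulnerabilities and discussing with the data science team.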