
Tactics and Techniques from MITRE ATLAS #245

Open
aamedina opened this issue May 8, 2024 · 5 comments

Comments


aamedina commented May 8, 2024

Overview

This proposal seeks to integrate MITRE ATLAS tactics and techniques into the existing D3FEND ontology to enhance the representational fidelity of AI threats within the model. The ATLAS framework, which extends the MITRE ATT&CK Enterprise Matrix to AI-specific contexts, models adversarial goals and methods for compromising AI systems. By keeping D3FEND up to date with ATLAS, we could provide the public with a more comprehensive ontology that better guides the development of defenses against evolving AI-centric cyber threats.

Discussion

  • We need to extend the ATT&CK mappings to support the ATLAS STIX data (see the sketch after this list).
  • Should ATLAS content be a subclass of d3f:ATTACKEnterpriseThing?
  • How should the ATLAS-specific annotations on MITRE tactics (e.g. "Discovery") be handled in D3FEND? At the moment, d3f:DiscoveryTechnique is a subclass of d3f:ATTACKEnterpriseTechnique. There are a number of ways to go about this, and I'm not sure how to model each derived ATT&CK matrix we want to track.
  • From my perspective, nearly every business wants to add AI to its products, and staying competitive will only make this more common. Semantic modeling of the adversary's tactics and techniques is an important step toward understanding the threat landscape and mapping effective countermeasures.
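
A rough sketch of the first item, for discussion only: pull the ATLAS STIX bundle and enumerate its tactics, techniques, and mitigations before any ontology mapping happens. The URL, object types, and external_references field usage here are assumptions modeled on the ATT&CK STIX conventions, not something the current build code provides.

import requests

# Hypothetical location of the published ATLAS STIX bundle; adjust as needed.
ATLAS_STIX_URL = "https://raw.githubusercontent.com/mitre-atlas/atlas-navigator-data/main/dist/stix-atlas.json"

ATLAS_STIX_TYPES = {"x-mitre-tactic", "attack-pattern", "course-of-action"}

def atlas_objects(url=ATLAS_STIX_URL):
    """Yield (external_id, stix_type, name) for ATLAS tactics, techniques, and mitigations."""
    bundle = requests.get(url, timeout=30).json()
    for obj in bundle.get("objects", []):
        if obj.get("type") not in ATLAS_STIX_TYPES:
            continue
        # ATT&CK-style bundles carry the human-readable ID (e.g. AML.T0043)
        # in external_references; assuming ATLAS does the same.
        ext_id = next((ref.get("external_id")
                       for ref in obj.get("external_references", [])
                       if ref.get("external_id")), None)
        yield ext_id, obj["type"], obj.get("name")

if __name__ == "__main__":
    for ext_id, stix_type, name in atlas_objects():
        print(f"{ext_id}\t{stix_type}\t{name}")

The same enumeration could then feed whatever mapping layer D3FEND already uses for the ATT&CK STIX data.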

References


netfl0 commented May 8, 2024

This would hang right under External Threat Model Thing. It would not be considered an Attack Thing.


aamedina commented May 8, 2024

> This would hang right under External Threat Model Thing. It would not be considered an Attack Thing.

Is this right?

:ATLASThing a owl:Class ;
    rdfs:label "ATLAS Thing" ;
    rdfs:subClassOf :ExternalThreatModelThing .

and then define ATLASTactic, ATLASTechnique, and ATLASMitigation as subclasses of that?


netfl0 commented May 8, 2024

Yes, it should mirror the ATT&CK tree structure.


aamedina commented May 8, 2024

Assuming the following is added to the ontology:

:ATLASThing a owl:Class ;
    rdfs:label "ATLAS Thing" ;
    rdfs:subClassOf :ExternalThreatModelThing .

:ATLASTactic a owl:Class ;
    rdfs:label "ATLAS Tactic" ;
    rdfs:subClassOf :ATLASThing,
        :Goal,
        [ a owl:Restriction ;
            owl:onProperty :enabled-by ;
            owl:someValuesFrom :ATLASTechnique ] ;
    rdfs:seeAlso <https://atlas.mitre.org/tactics> ;    
    :definition "An ATLAS Tactic is a categorical classification of techniques within the MITRE ATLAS™ framework, representing adversarial goals particular to artificial intelligence systems. It also adapts MITRE ATT&CK® Enterprise Matrix tactics by integrating machine learning concepts, thus capturing the unique motives behind actions in AI-specific operations." .

:ATLASTechnique a owl:Class ;
    rdfs:label "ATLAS Technique" ;
    rdfs:subClassOf :ATLASThing,
      :Action,
      :Technique,
      [ a owl:Restriction ;
          owl:onProperty :enables ;
          owl:someValuesFrom :ATLASTactic ] ;
    rdfs:seeAlso <https://atlas.mitre.org/techniques> ;
    :definition "An ATLAS Technique is an action conducted by adversaries to accomplish tactical goals within the context of artificial intelligence systems. These techniques articulate both 'how' adversaries execute these actions to reach their objectives and 'what' outcomes are achieved from these maneuvers." .

:ATLASMitigation a owl:Class ;
    rdfs:label "ATLAS Mitigation" ;
    rdfs:subClassOf :ATLASThing,
        [ a owl:Restriction ;
            owl:onProperty :semantic-relation ;
            owl:someValuesFrom :DefensiveTechnique ] .

:atlas-id a owl:DatatypeProperty,
        owl:FunctionalProperty ;
    rdfs:label "atlas-id" ;
    rdfs:subPropertyOf :d3fend-kb-data-property ;
    rdfs:domain :ATLASThing ;
    rdfs:range xsd:string ;
    :definition "x atlas-id y: The ATLAS thing x is identified by string y." .

@netfl0 I will port this to Python, but does this Turtle output look sensible for the tactics mapping? https://gist.github.com/aamedina/7580cb202173e2a34a3d9e69e7316dc8


netfl0 commented May 8, 2024

That looks pretty much perfect to me :)

I am thinking we ought to just duplicate the Python update-attack code. (Some of it is ugly; a bash script comes to mind :)

make update-atlas

With that as a new target, we'll know if something breaks with the ATLAS data.
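
For illustration only, here is a minimal sketch of what a script behind a new update-atlas target could do with the classes proposed above. This is not the existing update-attack code; the namespace IRI, input path, IRI-minting scheme, and STIX field names are assumptions.

import json

from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

# Assumed D3FEND namespace IRI and local input file; adjust to the real build.
D3F = Namespace("http://d3fend.mitre.org/ontologies/d3fend.owl#")

STIX_TYPE_TO_CLASS = {
    "x-mitre-tactic": D3F.ATLASTactic,
    "attack-pattern": D3F.ATLASTechnique,
    "course-of-action": D3F.ATLASMitigation,
}

def build_graph(stix_path="stix-atlas.json"):
    g = Graph()
    g.bind("d3f", D3F)
    with open(stix_path) as f:
        bundle = json.load(f)
    for obj in bundle.get("objects", []):
        cls = STIX_TYPE_TO_CLASS.get(obj.get("type"))
        if cls is None:
            continue
        ext_id = next((ref.get("external_id")
                       for ref in obj.get("external_references", [])
                       if ref.get("external_id")), None)
        if ext_id is None:
            continue
        # Hypothetical IRI-minting scheme: AML.T0043 -> d3f:AML_T0043.
        node = D3F[ext_id.replace(".", "_")]
        g.add((node, RDF.type, cls))
        g.add((node, RDFS.label, Literal(obj.get("name", ext_id))))
        g.add((node, D3F["atlas-id"], Literal(ext_id, datatype=XSD.string)))
    return g

if __name__ == "__main__":
    build_graph().serialize(destination="atlas.ttl", format="turtle")

Wiring something like this into the Makefile next to the existing update-attack target would surface breakage in the ATLAS data the same way it does for ATT&CK.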
