
Service Cutter Library


The Service Cutter tool provides a structured approach to service decomposition. It is based on the Bachelor's thesis by Lukas Kölbener and Michael Gysel.

The original tool repository is located under

This fork aims to provide the Service Cutter engine as a library to be used within other tools such as Context Mapper.


The library is published to the Maven central repository, so you can easily include it in your Gradle or Maven build:


implementation 'org.contextmapper:service-cutter-library:1.2.1'
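For a Maven build, the equivalent dependency declaration (using the same coordinates as the Gradle snippet above) is:

```xml
<dependency>
  <groupId>org.contextmapper</groupId>
  <artifactId>service-cutter-library</artifactId>
  <version>1.2.1</version>
</dependency>
```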




Since we use the library inside our Context Mapper Eclipse plugin, we also need it as an Eclipse feature (OSGi bundle) in a P2 repository. Thus, we provide the following two Eclipse update sites for snapshots (master builds) and releases:


As in the original Service Cutter tool, you need two input models to generate service decompositions with the library:

  • Domain Model (Entity Relation Model); required
  • User Representations; optional

Originally, these input models are provided as JSON files. You can find explanations and examples on the Service Cutter page, and the same examples are included in our test resources. However, if you use this library, you only have to provide the models with the classes offered in the package ch.hsr.servicecutter.api.model. You can create the models manually in Java or generate them in any way you like.

Solver Configuration

In the original Service Cutter tool you can provide parameters and priorities for the coupling criteria via the user interface. In this library version, the configuration has to be provided with the ch.hsr.servicecutter.solver.SolverConfiguration class. The configuration can be created with the ch.hsr.servicecutter.api.SolverConfigurationFactory, which returns the default configuration that you can then adjust (the factory also supports JSON import).

Usage Example

Usage of Service Cutter within your application, if you provide the input models (entity relations and user representations) as JSON files:

public static void main(String[] args) throws IOException {
    // create ERD and user representations from JSON files
    File erdFile = new File("./src/test/resources/booking_1_model.json");
    File urFile = new File("./src/test/resources/booking_2_user_representations.json");
    EntityRelationDiagram erd = new EntityRelationDiagramImporterJSON().createERDFromJSONFile(erdFile);
    UserRepresentationContainer userRepresentations = new UserRepresentationContainerImporterJSON()
            .createUserRepresentationContainerFromJsonFile(urFile);

    // build solver context (user representations are optional)
    ServiceCutterContext context = new ServiceCutterContextBuilder(erd)
            .withUserRepresentations(userRepresentations)
            .build();

    // generate service decompositions
    SolverResult result = new ServiceCutter(context).generateDecomposition();
    // analyze and do something with the result ...
}
If you don't want to work with JSON files, you can construct the models manually with the classes provided in the package ch.hsr.servicecutter.api.model.

If you want to create a SolverConfiguration and change the priorities of coupling criteria, you can do so as follows:

// create configuration
SolverConfiguration configuration = new SolverConfigurationFactory().createDefaultConfiguration();
// change parameters
configuration.setPriority(CouplingCriterion.PREDEFINED_SERVICE, SolverPriority.XS);
// build solver context (user representations are optional)
ServiceCutterContext context = new ServiceCutterContextBuilder(erd)
        .withUserRepresentations(userRepresentations)
        .withCustomSolverConfiguration(configuration)
        .build();

Note: If you don't pass a configuration to the ServiceCutterContextBuilder, it will use the default configuration.

With the ch.hsr.servicecutter.api.SolverConfigurationFactory you can also create the SolverConfiguration from a JSON file. The default configuration looks as follows (JSON):

  "algorithmParams": {
    "leungM": 0.1,
    "leungDelta": 0.55,
    "cwNodeWeighting": 1.0,
    "mclExpansionOperations": 2.0,
    "mclPowerCoefficient": 2.0
  "priorities": {
    "Identity & Lifecycle Commonality": "M",
    "Semantic Proximity": "M",
    "Shared Owner": "M",
    "Structural Volatility": "XS",
    "Latency": "M",
    "Consistency Criticality": "XS",
    "Availability Criticality": "XS",
    "Content Volatility": "XS",
    "Consistency Constraint": "M",
    "Storage Similarity": "XS",
    "Predefined Service Constraint": "M",
    "Security Contextuality": "M",
    "Security Criticality": "XS",
    "Security Constraint": "M"
  "algorithm": "Leung"


The solver currently supports the following priorities: IGNORE, XS, S, M, L, XL, XXL.


We currently only support the "Epidemic Label Propagation" algorithm by Leung et al. The "Girvan-Newman" algorithm by M. Girvan and M. E. J. Newman was supported by the original Service Cutter, but we have not included it in the library due to its license. We plan to add further algorithms in the future.
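To give an intuition for how label-propagation clustering groups nanoentities into service candidates, here is a minimal, self-contained sketch of plain synchronous label propagation. Note that this is not the library's implementation: the variant by Leung et al. additionally applies hop attenuation and degree-based node weighting (the leungDelta and leungM parameters in the default configuration above), which this sketch omits for brevity.

```java
import java.util.Arrays;

// Minimal sketch of plain (synchronous) label propagation, NOT the library's
// Leung implementation: Leung et al. extend this core idea with hop
// attenuation (leungDelta) and degree-based node weighting (leungM).
public class LabelPropagation {

    // adj[i] lists the neighbors of node i; returns one community label per node.
    public static int[] propagate(int[][] adj, int maxIterations) {
        int n = adj.length;
        int[] labels = new int[n];
        for (int i = 0; i < n; i++) labels[i] = i; // every node starts in its own community

        // synchronous updates can oscillate on some graphs, hence the iteration cap
        for (int iter = 0; iter < maxIterations; iter++) {
            int[] next = new int[n];
            boolean changed = false;
            for (int i = 0; i < n; i++) {
                // count how often each label occurs among the neighbors
                int[] count = new int[n];
                for (int neighbor : adj[i]) count[labels[neighbor]]++;
                // adopt the most frequent neighbor label (smallest label wins ties)
                int best = labels[i];
                int bestCount = 0;
                for (int label = 0; label < n; label++) {
                    if (count[label] > bestCount) { best = label; bestCount = count[label]; }
                }
                next[i] = best;
                if (next[i] != labels[i]) changed = true;
            }
            labels = next;
            if (!changed) break; // converged
        }
        return labels;
    }

    public static void main(String[] args) {
        // two 4-cliques connected by a single bridge edge (3 -- 4);
        // the two cliques end up in two different communities
        int[][] adj = {
            {1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2, 4},
            {3, 5, 6, 7}, {4, 6, 7}, {4, 5, 7}, {4, 5, 6}
        };
        System.out.println(Arrays.toString(propagate(adj, 20)));
        // prints [0, 0, 0, 0, 4, 4, 4, 4]
    }
}
```

In the library, the graph nodes are nanoentities and the edge weights are derived from the coupling criteria and their priorities; the communities found by the algorithm become the suggested service cuts.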


If you want to check out the library and build it yourself, you can do so with the following Gradle command (prerequisite: JDK 1.8):

./gradlew clean build


Contributions are always welcome! Here are some ways you can contribute:

  • Create GitHub issues if you find bugs or want to suggest improvements.
  • This is an open source project: if you want to contribute code, create pull requests from forks of this repository. Please refer to a GitHub issue when you contribute this way.


Service Cutter and this library version are released under the Apache License, Version 2.0.