docs: enhance technical documentation (#159)
jjeroch committed Sep 19, 2023
1 parent 6cda266 commit 3330fe3
Showing 5 changed files with 315 additions and 29 deletions.
Beyond this view, the portal allows for component integration of other (sub-)products.

Currently integrated (or in the process of being integrated) products are:

* Semantic Hub
* BPDM-Pool
* Managed Identity Wallet
* Self-Description Factory
* Clearing House / Gaia-X
* Digital Twin Registry (until the decentralized DTR is in place)

<br>
<br>
170 changes: 169 additions & 1 deletion developer/Technical Documentation/Architecture/Development Concept.md
The API uses OpenAPI annotations to describe the endpoints with all necessary information.

<br>
<br>

#### API Dev Guidelines

###### Implement authorization
APIs need to ensure that they only grant access to authorized requesters. For example, a user might be approved to access the API, but if they’re not allowed to add information to the application’s database via the POST method, any request to do so should be rejected. Authorization information can also be contained within a request as a token.

Unlike some other API types, REST APIs must authenticate and authorize each request made to the server, even if multiple requests come from the same user. This is because REST communications are stateless — that is, each request can be understood by the API in isolation, without information from previous requests.

Authorization can be governed by user roles, where each role comes with different permissions. Generally, API developers should adhere to the principle of least privilege, which states that users should only have access to the resources and methods necessary for their role, and nothing more. Predefined roles make it easier to oversee and change user permissions, reducing the chance that a bad actor can access sensitive data.

In terms of implementation, all endpoints should be secured with the highest restrictions by default. Restrictions should only be lessened through explicit exemptions. This ensures that, in case of oversights, an endpoint can be more secured than intended but never less secured.
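
A minimal sketch of this default-deny setup in an ASP.NET Core API secured with Keycloak-issued JWT bearer tokens is shown below; the endpoint and the role name `add_apps` are illustrative placeholders, not necessarily actual portal roles or routes.

```csharp
// Sketch only: assumes the Microsoft.AspNetCore.Authentication.JwtBearer package and a
// configured Keycloak realm; the role name "add_apps" is an illustrative placeholder.
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(); // token validation settings come from configuration

builder.Services.AddAuthorization(options =>
{
    // Default deny: every endpoint requires an authenticated caller unless it is
    // explicitly exempted with [AllowAnonymous].
    options.FallbackPolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
});

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

// Restrictions are lessened only through an explicit, role-based exemption.
app.MapPost("/api/apps", [Authorize(Roles = "add_apps")] () => Results.Ok());

app.Run();
```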

<br>
<br>

###### Validate all requests
As mentioned, sometimes requests from perfectly valid sources may be hacking attempts. Therefore, APIs need rules to determine whether a request is friendly, friendly but invalid, or harmful, like an attempt to inject harmful code.

An API request is only processed once its contents pass a thorough validation check — otherwise, the request should never reach the application data layer.

Validation also includes sanity checks: define sensible value ranges for the parameters a user provides. This applies especially to the size of the request and the response. APIs should limit the possible number of records to process in order to prevent intentional or unintentional overloads of the system.
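
As a sketch, a list endpoint might validate its query parameters before touching the data layer; the endpoint, parameter names and limits below are illustrative, not the portal's actual values.

```csharp
// Sketch only: an ASP.NET Core minimal API with simple sanity checks on a list endpoint.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

const int maxPageSize = 100; // hard upper bound to prevent oversized responses

app.MapGet("/api/apps", (int page, int size) =>
{
    // Sanity checks: sensible value ranges, validated before any data access happens.
    if (page < 0)
        return Results.BadRequest("page must be zero or positive");
    if (size < 1 || size > maxPageSize)
        return Results.BadRequest($"size must be between 1 and {maxPageSize}");

    // Only now is the request passed on towards the data layer, and never more
    // than 'size' records are processed for the response.
    return Results.Ok(Array.Empty<object>());
});

app.Run();
```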

<br>
<br>

###### Encrypt all requests and responses
To prevent MITM attacks, any data transfer from the user to the API server or vice versa must be properly encrypted. This way, any intercepted requests or responses are useless to the intruder without the right decryption method.

Since REST APIs use HTTP, encryption can be achieved by using the Transport Layer Security (TLS) protocol or Secure Sockets Layer (SSL) protocol. These protocols supply the S in “HTTPS” (“S” meaning “secure”) and are the standard for encrypting web pages and REST API communications.

TLS/SSL only encrypts data while it is being transferred. It doesn’t encrypt data sitting behind your API, which is why sensitive data should also be encrypted in the database layer.
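
At the application level this can be reinforced as sketched below; in an actual deployment TLS is typically terminated at the ingress or reverse proxy, so treat this only as a safety net, not as the portal's actual configuration.

```csharp
// Sketch only: enforce HTTPS at the application level.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseHsts();              // tell browsers to use HTTPS only for future requests
app.UseHttpsRedirection();  // redirect any plain HTTP request to its HTTPS equivalent

app.Run();
```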

<br>
<br>

###### Only include necessary information in responses
Like you might unintentionally let a secret slip when telling a story to a friend, it’s possible for an API response to expose information hackers can use. To prevent this, all responses sent to the end-user should include only the information to communicate the success or failure of the request, the resource requested (if any), and any other information directly related to these resources.

In other words, avoid “oversharing” data — the response is a chance for you to inadvertently expose private data, either through the returned resources or verbose status messages.

=> This is in the ownership of every API developer.
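
One way to enforce this is to map internal entities to dedicated response models, as sketched below; the entity and property names are illustrative, not the portal's actual classes.

```csharp
// Sketch only: internal entity vs. trimmed response model.
public class CompanyUser
{
    public Guid Id { get; set; }
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
    public string Email { get; set; } = "";
    public string PasswordHash { get; set; } = ""; // must never leave the service
    public Guid CompanyId { get; set; }            // internal foreign key
}

// Response model: only what the caller actually needs.
public record CompanyUserResponse(Guid Id, string FirstName, string LastName, string Email);

public static class CompanyUserMapping
{
    public static CompanyUserResponse ToResponse(CompanyUser user) =>
        new(user.Id, user.FirstName, user.LastName, user.Email);
}
```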

<br>
<br>

###### Throttle API requests and establish quotas
To prevent flooding and brute-force attacks such as DDoS, an API can impose rate limiting, a way to control the number of requests reaching the API server at any given time.

There are two main ways to rate-limit API requests: quotas and throttling. Quotas limit the number of requests allowed from a user over a span of time, while throttling slows a user’s connection while still allowing them to use your API.

Both methods should allow normal API requests but prevent floods of traffic intended to disrupt, as well as unexpected request spikes in general.
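
A minimal quota-style sketch using the rate-limiting middleware built into ASP.NET Core 7+ is shown below; the policy name and limits are illustrative, not the portal's actual configuration.

```csharp
// Sketch only: fixed-window quota, rejecting further requests with 429 Too Many Requests.
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;

    // Quota: at most 100 requests per minute for this policy.
    options.AddFixedWindowLimiter("api-quota", limiterOptions =>
    {
        limiterOptions.PermitLimit = 100;
        limiterOptions.Window = TimeSpan.FromMinutes(1);
        limiterOptions.QueueLimit = 0; // reject instead of queueing once the quota is used up
    });
});

var app = builder.Build();
app.UseRateLimiter();

app.MapGet("/api/apps", () => Results.Ok())
   .RequireRateLimiting("api-quota");

app.Run();
```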

<br>
<br>

###### Log API activity
Logging API activity is extremely important for tracing user activity and, in the worst case, malicious activity.
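
A simple request-logging middleware could look like the sketch below; which fields are logged is an illustrative choice, the important point being that tokens and request bodies stay out of the logs.

```csharp
// Sketch only: log who called which endpoint with which outcome.
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestLoggingMiddleware> _logger;

    public RequestLoggingMiddleware(RequestDelegate next, ILogger<RequestLoggingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        await _next(context);

        // No request bodies or tokens are logged, to avoid leaking sensitive data.
        _logger.LogInformation("{Method} {Path} returned {StatusCode} for user {User}",
            context.Request.Method,
            context.Request.Path,
            context.Response.StatusCode,
            context.User.Identity?.Name ?? "anonymous");
    }
}

// Registered in Program.cs via: app.UseMiddleware<RequestLoggingMiddleware>();
```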

<br>
<br>

###### Conduct security tests
=> see the Tests section below

<br>
<br>

###### Pagination
API pagination is essential if you're dealing with a lot of data and endpoints. Pagination automatically implies adding order to the query result.

All APIs that list a number of response elements should be considered for pagination. Whether an actual pagination implementation is needed depends on the realistic response volume.

E.g. if we have an API which could theoretically respond with 1000 elements but will realistically never have more than 20 elements, pagination is not relevant.

<br>
<br>

Parameters for pagination implementation

* Type of API: whether the API returns a list of elements
* Standard volume: whether the API is expected to often respond with more than 50 elements
* Max volume: whether there is a chance that the API will at least once have to respond with more than 100 elements


Implementation

* APIs relevant for pagination get a meta area added at the top, containing "totalElements", "totalPages", "page" and "contentSize"
* After that, the content area starts


Example

<img width="606" alt="image" src="https://github.com/catenax-ng/tx-portal-assets/assets/94133633/52971680-ea8c-4edc-8d6d-0b52d4b67b23">
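
As a textual sketch of the same envelope, the response could be modeled as below; the type and property names are assumptions for illustration, not the portal's actual classes.

```csharp
// Sketch only: pagination envelope with a meta area followed by the content area.
public record PaginationMetadata(int TotalElements, int TotalPages, int Page, int ContentSize);

public record PaginationResponse<T>(PaginationMetadata Meta, IEnumerable<T> Content);

// A GET /api/apps?page=2&size=20 call would then return something like:
// {
//   "meta": { "totalElements": 53, "totalPages": 3, "page": 2, "contentSize": 13 },
//   "content": [ ... ]
// }
```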

<br>
<br>

###### Error Handling


The simplest way we handle errors is to respond with an appropriate status code.

Commonly agreed response codes:

* 400 Bad Request – the client sent an invalid request, such as one lacking a required request body or parameter.
Example: adding a user role to a user. The user ID (primary key of the resource to be modified) is submitted as a path parameter and the user role as a query parameter or in the body. If the user role is not found inside the portal db, a 400 error is expected. If the user ID could not be found, a 404 would be returned.
* 401 Unauthorized – the client failed to authenticate with the server.
Example: token expired or invalid login.
* 403 Forbidden – the user is authenticated but doesn't have permission to access the requested resource.
Example: the user token doesn't include the relevant service permission, or the user doesn't have access to the resource (e.g. not under the same company id).
* 404 Not Found – the requested resource does not exist.
Example: user details are requested via a GET API with the user ID (as path parameter). The user ID is not found in the portal db table.
* 409 Conflict – the request conflicts with the current state of the resource.
* 412 Precondition Failed – one or more conditions in the request header fields evaluated to false - currently not used
* 415 Unsupported Media Type – the uploaded media type is not supported.
Example: only pdf document upload is allowed and the user is trying to upload a png file.
* 500 Internal Server Error – a generic error occurred in the internal system logic.
Example: adding a user role to a user. The user role is found in the portal db and the service tries to add the role to the user in Keycloak. If Keycloak cannot find this role or user ID inside its db (due to inconsistent data), a 500 error is returned. The logic/service breaks due to inconsistent data!
* 502 Bad Gateway – an upstream/dependent service returned an invalid response or could not be reached.
Example: Keycloak is not responding / down.
* 503 Service Unavailable – the requested service is not available
Example: tbd

In addition to the generic error code, a detailed message/error is needed to ensure that the issue can be validated and resolved quickly.
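
A sketch of such a detailed error payload accompanying the status code is shown below; the shape is illustrative (close to RFC 7807 problem details), not the portal's actual error model.

```csharp
// Sketch only: status code plus a machine-readable error detail.
public record ErrorResponse(string Type, string Title, int Status, IDictionary<string, string[]> Errors);

// Example body for a 400 Bad Request when the submitted user role is unknown:
// {
//   "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
//   "title": "One or more validation errors occurred.",
//   "status": 400,
//   "errors": { "role": [ "user role 'XYZ' does not exist" ] }
// }
```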

<br>
<br>

###### Repository Pattern

The repositories are used via the Factory PortalRepositories, which ensures that the same database instance is used for all repositories.

Furthermore, it provides an implicit transaction functionality.

The repositories themselves must not be registered for dependency injection in the corresponding startup; the method PortalRepositories.GetInstance<RepositoryType> provides the instance of a requested repository.

In the repository itself, you should not call SaveChanges; it should only be called via PortalRepositories.SaveChanges to ensure that any transaction dependencies can be rolled back.

Since EF Core offers a change-tracking feature, the database objects are modified in the business logic.
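
A usage sketch of this pattern is shown below; the repository interface, its create method and the async save call are illustrative stand-ins for the portal's actual PortalRepositories.GetInstance&lt;RepositoryType&gt; and SaveChanges members.

```csharp
// Sketch only: illustrative interfaces following the pattern described above; the
// actual portal backend defines its own versions of these.
public interface IPortalRepositories
{
    TRepository GetInstance<TRepository>();
    Task<int> SaveAsync();
}

public interface IUserRolesRepository
{
    void CreateCompanyUserAssignedRole(Guid companyUserId, Guid userRoleId);
}

public class UserRolesBusinessLogic
{
    private readonly IPortalRepositories _portalRepositories;

    public UserRolesBusinessLogic(IPortalRepositories portalRepositories)
    {
        _portalRepositories = portalRepositories;
    }

    public async Task AddUserRoleAsync(Guid companyUserId, Guid userRoleId)
    {
        // Repositories are resolved via the factory, never registered for DI themselves.
        var userRolesRepository = _portalRepositories.GetInstance<IUserRolesRepository>();

        // EF Core change tracking picks up the new entity in the business logic...
        userRolesRepository.CreateCompanyUserAssignedRole(companyUserId, userRoleId);

        // ...and the single save call on PortalRepositories commits everything in one
        // transaction (or lets it be rolled back as a whole).
        await _portalRepositories.SaveAsync();
    }
}
```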



<br>
<br>

##### Tests

###### User Authentication Test
If authentication mechanisms are implemented incorrectly, attackers can compromise authentication tokens or exploit implementation flaws to assume other users’ identities and gain access to your API’s endpoints.

To test your authentication mechanisms, try sending API requests without proper authentication (either no tokens or credentials, or incorrect ones) and see if your API responds with the correct error and messaging.
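
A minimal sketch of such tests using xUnit and HttpClient is shown below; the base address and endpoint path are placeholders, not necessarily existing portal endpoints.

```csharp
// Sketch only: requests without or with an invalid token must be rejected with 401.
using System.Net;
using System.Net.Http.Headers;
using Xunit;

public class AuthenticationTests
{
    private static readonly HttpClient Client = new() { BaseAddress = new Uri("https://portal.example.org") };

    [Fact]
    public async Task GetUsers_WithoutToken_Returns401()
    {
        // No Authorization header at all - the API must not fall back to any default identity.
        var response = await Client.GetAsync("/api/administration/user/ownCompany/users");

        Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
    }

    [Fact]
    public async Task GetUsers_WithInvalidToken_Returns401()
    {
        using var request = new HttpRequestMessage(HttpMethod.Get, "/api/administration/user/ownCompany/users");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", "not-a-valid-jwt");

        var response = await Client.SendAsync(request);

        Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
    }
}
```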

###### Parameter Tampering Test
To run a parameter tampering test, try various combinations of invalid query parameters in your API requests and see if it responds with the correct error codes. If not, then your API likely has some backend validation errors that need to be resolved.

###### Injection Test
To test if your API is vulnerable to injections, try injecting SQL, NoSQL, LDAP, OS, or other commands in API inputs and see if your API executes them. These commands should be harmless, like reboot commands or cat commands.

###### Unhandled HTTP Methods Test
Most APIs have various HTTP methods that are used to retrieve, store, or delete data. Sometimes web servers will give access to unsupported HTTP methods by default, which makes your API vulnerable.

To test for this vulnerability, you should try all the common HTTP methods (POST, GET, PUT, PATCH, and DELETE) as well as a few uncommon ones. Try sending an API request with the HEAD verb instead of GET, for example, or a request with an arbitrary method like FOO. You should get an error code, but if you get a 200 OK response, then your API has a vulnerability.
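
A sketch of such a test is shown below; the base address and path are placeholders.

```csharp
// Sketch only: arbitrary or unsupported HTTP methods must not return 200 OK.
using System.Net;
using Xunit;

public class UnhandledHttpMethodTests
{
    private static readonly HttpClient Client = new() { BaseAddress = new Uri("https://portal.example.org") };

    [Theory]
    [InlineData("HEAD")]
    [InlineData("FOO")]
    public async Task ArbitraryHttpMethods_AreRejected(string method)
    {
        using var request = new HttpRequestMessage(new HttpMethod(method), "/api/apps");

        var response = await Client.SendAsync(request);

        // Anything but an explicit rejection (e.g. 405 Method Not Allowed or 401/404)
        // would indicate an unintentionally exposed method.
        Assert.NotEqual(HttpStatusCode.OK, response.StatusCode);
    }
}
```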

###### Load Test
Load testing should be one of the last steps of your API security auditing process. This type of test pushes the API to its limits in order to discover any functional or security issues that have not yet been revealed.

To achieve this, send a large number of randomized requests, including SQL queries, system commands, arbitrary numbers, and other non-text characters, and see if your API responds with errors, processes any of these inputs incorrectly, or crashes. This type of testing will mimic Overflow and DDoS attacks.

An API manager or gateway tool will handle or help address the API security guidelines described above (including testing).

## Migration
To run the portal, migrations are needed to load the initial data into the identity provider and the portal db.
The migration consists of an initial migration as well as delta migration files shipped with future releases. As part of a new release, a migration file (if applicable) is released and can be loaded via a delta load.
<br>
<br>

## Configurability
Portal configuration is mainly possible via the appsettings files as well as the static data migration files.
50 changes: 37 additions & 13 deletions developer/Technical Documentation/Identity & Access/10. FAQ.md

### How to create new roles

Before creating new roles, check for which level/purpose the role is needed:

1. Company Role
2. Portal Role
3. App Role
4. Technical User Role

<br>

##### Company Role(s)

To add a new company role, a couple of steps need to be followed.
Different from Portal/App/Technical User roles, no update inside the IdP is needed.

DB Table Changes:
* add the new company role inside the table company_roles
* if the new company role should be selectable for company registrations, set the role inside table company_role_registration_data to "true"; otherwise "false"
* add the description of the new company role inside table company_role_descriptions
* create a new user role collection inside user_role_collections to define the selectable user roles for the company role
* add the description of the new collection inside table user_role_collection_descriptions
* map user roles to the newly created collection via table user_role_assigned_collections
* connect the new company role with the new role collection via "company_role_assigned_role_collections"
* link new or existing agreements to the new company role via table "agreement_assigned_company_roles"

Additionally needed:
* create a migration (see the sketch after this list)
* update "version_upgrade" details [open the file](/Technical%20Documentation/Version%20Upgrade/portal-upgrade-details.md)
* update Roles&Rights Matrix
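
A sketch of such a seeding migration (EF Core) is shown below; the schema, table and column names as well as the id values are assumptions derived from the list above and may differ from the actual portal schema.

```csharp
// Sketch only: seed a new company role plus its description; Down() reverts the seed.
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddNewCompanyRole : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.InsertData(
            schema: "portal",
            table: "company_roles",
            columns: new[] { "id", "label" },
            values: new object[] { 10, "NEW_COMPANY_ROLE" });

        migrationBuilder.InsertData(
            schema: "portal",
            table: "company_role_descriptions",
            columns: new[] { "company_role_id", "language_short_name", "description" },
            values: new object[] { 10, "en", "Description of the new company role" });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DeleteData(
            schema: "portal",
            table: "company_role_descriptions",
            keyColumns: new[] { "company_role_id", "language_short_name" },
            keyValues: new object[] { 10, "en" });

        migrationBuilder.DeleteData(
            schema: "portal",
            table: "company_roles",
            keyColumn: "id",
            keyValue: 10);
    }
}
```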

<br>
<br>

##### Portal Role(s)

Portal roles can be added easily if the following steps are considered/followed.

1. Create the roles inside Keycloak - central idp; realm: CX-Central, inside the respective client
* open the client via the left side menu <strong>Clients</strong>
* select the respective client (Cl2-CX-Portal or Cl1-CX-Registration)
* Open the tab <strong>Roles</strong>
* And click "Add" on the right hand side
* Enter the respective role name (keep in mind the role naming convention)
<br>
<br>

##### App Role(s)

App roles are managed by the app provider via the portal user interface. It should be strictly forbidden to add or change any app roles in any other way. Reason: app roles are (besides being in the ownership of the app provider) impacting not only a Keycloak client and the portal db; additionally, apps have app clients registered in Keycloak, and each client needs to be enhanced with the new roles, where human errors are very likely.

<br>
<br>

##### Technical User Role(s)

Technical user roles are, similar to portal user roles, created/managed and enhanced by the platform owner.

1. Create the roles inside Keycloak - central idp; realm: CX-Central inside the client "technical_role_management"
* open the client via the left side menu <strong>Clients</strong>
* Open the tab <strong>Roles</strong>
* And click "Add" on the right hand side

### How to set up technical user authentication

Technical user/service accounts should be created as standalone clients to clearly differentiate applications from technical users.
Each OIDC client has a built-in service account which allows it to obtain an access token.
This is covered in the OAuth 2.0 specification under Client Credentials Grant. To use this feature you must set the Access Type of your client to confidential. Make sure that you have configured your client credentials.

In the tab Service Account Roles, you can configure the roles available to the service account retrieved on behalf of this client.

After saving the config, the client automatically gets a service user account created.

### Retrieve token for service account

curl --location --request POST '{Keycloak URL}/auth/realms/{realm}/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_secret={secret}' \
--data-urlencode 'grant_type=client_credentials' \
## Clearinghouse
<br>

### Interface Summary

The Gaia-X Clearinghouse provides two key services:

* trust (validation of corporate data by verifying data from legal entities, according to the Gaia-X Trust Framework)
* conformity assessment (SD documents via the compliance check)


<br>
<br>

### Architecture Overview
<br>

#### #1 Notarization Check

<br>
<img width="1000" alt="image" src="https://user-images.githubusercontent.com/94133633/210450411-03a7c623-464c-4246-bdc9-460b98952af4.png">
<br>
<br>

#### #2 Compliance Check

The compliance check is used for legal entity SDs as well as connector SDs.
Both flows are identical and displayed below:

<br>
<img width="1025" alt="image" src="https://github.com/catenax-ng/tx-portal-assets/assets/94133633/cba051a0-246f-494f-8dd9-db353904abc1">
<br>
<br>

### Authentication Flow / Details
<br>
<br>
<p align="center">
<img width="709" alt="image" src="https://github.com/catenax-ng/tx-portal-assets/assets/94133633/3d073212-45ee-47b4-8a4a-5561b3fccbcc">
</p>
<br>
<br>

### Description of the functional interface (WHAT) and the physical interfaces (HOW)
The Clearinghouse is triggered by the respective CX service (depending on the scenario, by the portal or the SD Factory) and processes the data.
The response is provided back to the portal in both cases.
Since the interface is asynchronous, a response delay of up to 60 seconds has been agreed; in special cases it can take longer.

Endpoints used by the CH for response:

* /api/administration/registration/clearinghouse/selfDescription
* /api/administration/Connectors/clearinghouse/selfDescription

<br>
<br>
