
Development Tips


A collection of tips for development.

Explore Communication between Front End and Back End

The communication between the front end and the back end's HTTP API is mainly done with the help of JSON resources. You may use the Chrome web browser and its Developer Tools to inspect it. (Hint: a Developer Tools window displays details on the particular tab that was in focus when the Developer Tools were opened.) The Network tab is pretty useful for inspecting the communication with the d:swarm back end server.

The Headers tab of the Developer Tools gives an overview of each request, e.g. which d:swarm API endpoint was requested (for instance http://127.0.0.1:8087/dmp/resources on a local installation when opening the d:swarm Data Perspective).

The Response tab of the Developer Tools contains the data received from the d:swarm back end. Copy the JSON string and either a) open an editor of your choice to format the JSON string properly (e.g. paste the content into http://jsbeautifier.org and hit "Beautify ..." to get a human-readable version of the response), or b) explore the JSON object interactively (e.g. with http://jsonviewer.stack.hu).
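If you want to capture such a response outside the browser, e.g. to store it next to your test resources, you can also request the endpoint directly. Below is a minimal sketch (not part of d:swarm itself), assuming a local installation listening on 127.0.0.1:8087 and Java 11+ for the built-in HTTP client:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchResources {

    public static void main(final String[] args) throws Exception {

        // GET the same endpoint that the Data Perspective requests
        final HttpClient client = HttpClient.newHttpClient();
        final HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8087/dmp/resources"))
                .header("Accept", "application/json")
                .build();

        final HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // raw JSON; format it with a tool of your choice afterwards
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}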

Back End Log Files

When playing around with the front end, you can open a terminal and run tail -f on one of the log files written to $[d:swarm back end project home]/logs to observe what's happening in the back end.

Integration Tests for Metadata Repository

Note: This description is for testing the Metadata Repository only, i.e., no data is written to Neo4j!

For writing integration tests, you may have a look at org.dswarm.controller.resources.job.test.ProjectRemoveMappingResourceTest.testPUTProjectWithRemovedMapping() as an example. The test simulates a user loading an already persisted project in the front end, removing a mapping (that contains filters and functions) and saving the updated project (by putting the whole project JSON to the respective API endpoint).
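The real test classes build on d:swarm's own test infrastructure, so the following is only an illustration of the request such a test simulates: a minimal, self-contained sketch using the standard JAX-RS client API. The endpoint URL, the project ID and the resource file name are assumptions, not taken from the actual test:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class PutProjectSketch {

    public static void main(final String[] args) throws Exception {

        // the project JSON with the mapping already removed (hypothetical file name)
        final String projectJson = new String(
                Files.readAllBytes(Paths.get("src/test/resources/updated_project.json")),
                StandardCharsets.UTF_8);

        final Client client = ClientBuilder.newClient();

        // PUT the whole project JSON to the projects endpoint (project ID 1 assumed)
        final Response response = client
                .target("http://127.0.0.1:8087/dmp/projects/1")
                .request(MediaType.APPLICATION_JSON)
                .put(Entity.json(projectJson));

        // a real test would assert on the status code and the returned project JSON
        System.out.println("status = " + response.getStatus());
        System.out.println(response.readEntity(String.class));
    }
}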

As a test basis, you first need to create the necessary entities, e.g., a Project with its Mappings, Functions etc. You can do this by creating a compound object, such as a project, step by step programmatically and transforming the result into a JSON string, or you can create the project with mappings etc. in the backoffice web application and then intercept the JSON that is sent to the back end and store it, in order to simulate this request in your integration test (which is described in the following).

Create JSON file step by step

Note: At the moment, you need to cope with dummy IDs when sending a compound object that contains other new objects to the back end. So it's highly recommended to create compound objects programmatically step-by-step and utilise them in the test (you may have a look at the domain model).

The following section should be followed with caution (maybe not every step leads to the intended result).

A dummy ID is a negative long value. The reason is that, e.g., when adding a Mapping in the front end, the front end has no knowledge of the ID (of this mapping) that will be generated by the back end (Metadata Repository). Therefore, the front end creates dummy IDs for all newly created entities. These will be replaced by valid IDs in the back end when processing the JSON of the request.
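For illustration only (this is not the front end's actual code): since a dummy ID is just any negative long value, one way to produce values like the ones in the example files below is to draw a random negative long:

import java.util.concurrent.ThreadLocalRandom;

public class DummyIds {

    public static void main(final String[] args) {

        // any value in [Long.MIN_VALUE, 0) is a valid dummy ID
        final long dummyId = ThreadLocalRandom.current().nextLong(Long.MIN_VALUE, 0);

        System.out.println("dummy ID = " + dummyId);
    }
}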

Go through the following steps to create a custom project JSON with dummy IDs that can be used to initialize the database.

  1. ensure that front end, back end, Neo4j and MySQL are running
  2. reset the database (not necessarily required but recommended to avoid confusion)
  3. Open front end in Chrome and activate Developer Tools (see also Explore Communication)
  4. Go to Import Perspective, import a Data Resource
  5. Go to Data Perspective and configure the Data Resource
  6. In Data Perspective, create a new Project. From here, do not click save project until told to do so!
  7. In the Mapping Perspective, do everything you need as a basis for the integration test, e.g., select Output Data Model, add Mappings with Filters etc.
  8. Click Save Project and notice the request in the Developer Tools. There should be something like http://127.0.0.1:8087/dmp/projects/1 in the list.
  9. Select the entry that shows Request Method: PUT in its Headers tab.
  10. Below, go to Request Payload and click view source. An unformatted block of JSON should be shown. Copy the whole JSON to clipboard.
  11. Open an editor of your choice to beautify the JSON string, e.g., with the help of http://jsbeautifier.org. The result may look like dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_original_IDs_and_output_data_model.json
  12. Systematically replace all non-negative, i.e. valid, IDs with negative dummy IDs. See Pitfalls when Replacing dummy IDs below! To speed up this process, you may remove all Attribute Paths from the Output Data Model that are not part of any Mapping.
  13. Save the JSON with dummy IDs as a new file and use it in an integration test. The file may look like dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_dummy_IDs.json. You can use a diff tool to compare the two files from the example to see the changes that have been made in step 12 (additionally, some "id":value pairs of JSON objects have been moved to other positions to enhance readability).
  14. you can "verify" the result of the steps above by resetting the database, starting the integration test that loads the file with dummy IDs, and pausing its execution. Now open the front end, load the project and check that all mappings are displayed. Since we did not save the data itself as well, Neo4j does not contain any data; hence no example data is displayed!

Pitfalls when Replacing dummy IDs

Caution! The same entity, e.g. the same Attribute Path, may occur several times in the JSON and needs to get the same dummy ID. The pitfall is that dummy IDs of different entities must not be equal. Have a look at the file from step 11, dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_original_IDs_and_output_data_model.json, lines 27-34:

"attribute_paths": [{
  "id": 1,
  "attributes": [{
    "name": "type",
    "uri": "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
    "id": 1
  }]
}, [...]

We see the first element of an (ordered) array of attribute paths. This element has the attribute path ID 1 and contains exactly one attribute, which has the attribute ID 1. When replacing these IDs with dummy IDs, they need to be different negative numbers. The snippet with dummy IDs may look like this (see also dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_dummy_IDs.json, lines 28-35):

"attribute_paths": [{
  "id": -4070983487343468340,
  "attributes": [{
    "name": "type",
    "uri": "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
    "id": -2323909237623843453
  }]
}, [...]

So far, different entities got different dummy IDs. To keep identical entities identical, look out for -4070983487343468340 in dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_dummy_IDs.json. You will find 4 occurrences, since this attribute path with the attribute "type" is part of both schemas (input data model, line 29; output data model, line 296) and part of a mapping as input attribute path (line 113) and output attribute path (line 126). These 4 occurrences must have equal dummy IDs. Phew. We know it's not easy and will replace the dummy IDs by UUIDs one day - maybe you want to contribute to this ticket :)
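A small helper can make steps 12 and 14 less error-prone. The following is a minimal sketch (not part of the d:swarm code base), assuming Jackson is on the classpath and that every entity stores its ID in a numeric "id" field, as in the example files above. It counts all "id" values in a given JSON file, flags any that are still non-negative (i.e. not yet replaced) and prints how often each value occurs, so you can spot-check the expected multiplicities (the attribute path above, for instance, should show up 4 times under its dummy ID):

import java.io.File;
import java.util.Map;
import java.util.TreeMap;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DummyIdCheck {

    public static void main(final String[] args) throws Exception {

        final ObjectMapper mapper = new ObjectMapper();
        final JsonNode root = mapper.readTree(new File(args[0]));

        final Map<Long, Integer> occurrences = new TreeMap<>();
        collectIds(root, occurrences);

        for (final Map.Entry<Long, Integer> entry : occurrences.entrySet()) {
            final String warning = entry.getKey() >= 0 ? "  <-- not a dummy ID!" : "";
            System.out.println(entry.getKey() + " occurs " + entry.getValue() + " time(s)" + warning);
        }
    }

    private static void collectIds(final JsonNode node, final Map<Long, Integer> occurrences) {

        if (node.isObject()) {
            final JsonNode id = node.get("id");
            if (id != null && id.isNumber()) {
                occurrences.merge(id.asLong(), 1, Integer::sum);
            }
        }

        // iterating a JsonNode visits array elements and object field values
        for (final JsonNode child : node) {
            collectIds(child, occurrences);
        }
    }
}

Run it against the file with dummy IDs, e.g. dswarm/controller/src/test/resources/project_to_remove_mapping_from_with_dummy_IDs.json, and against the original file to compare the counts.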
