Merge pull request #29 from input-output-hk/develop
Update master with recent changes from develop
alexromanov committed Jan 17, 2022
2 parents 72fefa3 + 85f40ae commit b7c00d2
Showing 9 changed files with 185 additions and 56 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -11,13 +11,13 @@ Professional Services Group Repository:
| Component | Version |
| ------------------------------------------------------------------------------------|:----------------:|
| [psg-cardano-wallet-api](https://github.com/input-output-hk/psg-cardano-wallet-api) | 0.2.4.1 |
-| [cardano-wallet](https://github.com/input-output-hk/cardano-wallet) | 2021-09-29 |
+| [cardano-wallet](https://github.com/input-output-hk/cardano-wallet) | 2021-11-11 |
| [cardano-node](https://github.com/input-output-hk/cardano-node) | 1.30.1 |
-| store-and-hash-service | 0.2.44 |
-| auth-emails-service | 0.2.44 |
-| cardano-metadata-service | 0.2.44 |
-| psg-self-service-node | 0.5.50 |
-| psg-self-service-ui | 0.5.50 |
+| store-and-hash-service | 0.2.58 |
+| auth-emails-service | 0.2.58 |
+| cardano-metadata-service | 0.2.58 |
+| psg-self-service-node | 0.5.82 |
+| psg-self-service-ui | 0.5.85 |

### PSG Services: gRPC API Examples

2 changes: 1 addition & 1 deletion docs/source/guides/new_user_guide.md
@@ -56,4 +56,4 @@ If you have 3 credits that expire on May 1, after depositing 10 ADA, you will ha
- SubmitMetadata: 10 credits
- ListTransactions: 1 credit
- AuthenticatedEmail: 15 credits
-- HashAndStore: free!
+- HashAndStore: 1 credit
111 changes: 82 additions & 29 deletions docs/source/guides/store_and_hash_service_guide.md
@@ -1,30 +1,32 @@
# The StoreAndHash Service

-The StoreAndHash service provides an API for posting data to AWS S3 storage, getting the hash of the data and URL for further download.
+The StoreAndHash service provides an API for posting data to [AWS S3](https://aws.amazon.com/s3/) or [IPFS](https://ipfs.io/) storage, getting the hash of the data and a URL for further download.

This document describes the service API and how to use it.

Working knowledge of gRPC is assumed.

-## The Service
+## The StoreAndHash Service

The rationale is to provide a complement to the SubmitMetadata functionality.

Submitting large amounts of metadata is not practical, so a standard solution is to submit a hash of a document, thereby attesting to the
existence of the data by putting its unique fingerprint in an immutable blockchain.

-This API allows the caller to upload documents to S3 (other persistence mechanisms coming soon) as a permanent store and
-returns the permanent URL to the document, the SHA-265 hash (a unique fingerprint of the document), which can then
-be attested to through the `SubmitMetadata` functionality described elsewhere.
+This API allows the caller to upload documents to permanent storage (AWS) and
+returns the permanent URL to the document and the SHA-256 hash (a unique fingerprint of the document), which can then be attested to through the `SubmitMetadata` functionality described elsewhere.

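As a minimal illustration of the fingerprint idea, the Node.js sketch below computes a file's SHA-256 hash with the built-in `crypto` module; the file name is a hypothetical placeholder, and this snippet is independent of the service API:

```
// Sketch: compute the SHA-256 fingerprint of a local file.
// "document.pdf" is a placeholder; hash whichever file you intend to attest to.
const crypto = require('crypto');
const fs = require('fs');

const fileBytes = fs.readFileSync('document.pdf');
const sha256 = crypto.createHash('sha256').update(fileBytes).digest();

console.log('SHA-256 (hex):   ', sha256.toString('hex'));
console.log('SHA-256 (base64):', sha256.toString('base64'));
```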
-### The StoreAndHash Service API
+**Supported storages:**
+* [AWS S3](#storeandhashhttp)
+* [IPFS](#storeandhashipfs)

-#### StoreAndHashHttp
+### The StoreAndHash Service API

-This is a partial listing of the service circa v0.2
+This is a partial listing of the service:
```
service StoreAndHashService {
rpc StoreAndHashHttp (stream StoreAndHashRequest) returns (stream StoreAndHashResponse);
+rpc StoreAndHashIpfs (stream StoreAndHashIpfsRequest) returns (stream StoreAndHashResponse);
}
message StoreAndHashRequest {
@@ -34,58 +36,109 @@ message StoreAndHashRequest {
}
}
-message UploadDetails {
-  string path = 1;
-  iog.psg.service.common.CredentialsMessage credentials = 2;
-  AwsCredentials aws = 3;
-}
message Chunk {
bytes part = 1;
}
+message UploadDetails {
+  string path = 1;
+  AwsCredentials aws = 2;
+  iog.psg.service.common.CredentialsMessage credentials = 3;
+}
message AwsCredentials {
string keyId = 1;
string keySecret = 2;
string bucket = 3;
string region = 4;
}
-```
-The StoreAndHashHttp method expects a stream of requests from the client: UploadDetails and one or more Chunk of data to be stored.
-First, the client must provide the `UploadDetails`, which consists of
-- path
+message StoreAndHashIpfsRequest {
+  oneof options {
+    Chunk chunk = 1;
+    IpfsUploadDetails details = 2;
+  }
+}
+message IpfsUploadDetails {
+  IpfsAddress ipfsAddress = 1;
+  iog.psg.service.common.CredentialsMessage credentials = 2;
+}
+message IpfsAddress {
+  string host = 1;
+  int32 port = 2;
+}
+```

-The path within the bucket that the file will be stored at will form part of the download URL.
+### StoreAndHashHttp

-- credentials
+The **StoreAndHashHttp** method expects a stream of requests from the client: **UploadDetails** and one or more **Chunk** of data to be stored. The result of the call is [StoreAndHashResponse](#storeandhashresponse).

-These are the clients secret key and identifiers so that they be validated and their credits debited
+***First***, the client must provide the `UploadDetails`, which consists of:

-- aws
+- **Path**: The path within the bucket where the file will be stored; it forms part of the download URL.

-The aws credentials identify and authorize the service to upload to a particular aws s3 bucket.
+- **CredentialsMessage**: The client's secret key and identifiers, used to validate the client and debit their credits.

-AWS's credentials object is expected to contain:
+- **AwsCredentials**: The AWS credentials identify and authorize the service to upload to a particular AWS S3 bucket.

+The **AwsCredentials** object is expected to contain:
* key id
* key secret
* bucket name
* region

Details on how to create a minimal IAM user suitable for use in this service are [here](create_minimal_s3_user.md)

-Each following call should to contain a chunk of the bytes read from the file.
+***Second***, each following call should contain a chunk of the bytes read from the file.

- chunk
+```
+message Chunk {
+  bytes part = 1;
+}
+```

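As an illustration of the two-phase stream (UploadDetails first, then Chunks), here is a hedged Node.js sketch built on the generated stubs used in examples/js; `details` is assumed to be an `UploadDetails` message built as described above, and the file name is a placeholder:

```
// Sketch: send UploadDetails, then stream the file to StoreAndHashHttp in chunks.
// Assumes storeAndHashSvc and StoreAndHashMessages are set up as in
// examples/js/src/store_and_hash_http_client.js; "report.pdf" is a placeholder.
const fs = require('fs');

const call = storeAndHashSvc.storeAndHashHttp();
call.write(new StoreAndHashMessages.StoreAndHashRequest().setDetails(details));

fs.createReadStream('report.pdf', { highWaterMark: 64 * 1024 })
  .on('data', buf => call.write(
    new StoreAndHashMessages.StoreAndHashRequest()
      .setChunk(new StoreAndHashMessages.Chunk().setPart(buf))))
  .on('end', () => call.end());
```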
-When the whole file is uploaded in chunks, the responses are returned.
-The call provides a stream of responses: the first containing the hash of the file and second containing the permanent URL to file served over HTTP through the PSG service
-or a client can choose to use the s3 URL provided by amazon or serve the file by any other means.

-The result will be a stream of responses:
-A hash object contains the type of hash used (SHA-256) and hashes in bytes and base64 String formats.
-- URL to file stored
+### StoreAndHashIpfs

+The **StoreAndHashIpfs** method expects a stream of requests from the client: **IpfsUploadDetails** and one or more **Chunk** of data to be stored. The result of the call is [StoreAndHashResponse](#storeandhashresponse).

+***First***, the client must provide the `IpfsUploadDetails`, which consists of:

+- **IpfsAddress**: the IP address and port of the IPFS node
+```
+message IpfsAddress {
+  string host = 1;
+  int32 port = 2;
+}
+```

+- **CredentialsMessage**: The client's secret key and identifiers, used to validate the client and debit their credits.

+***Second***, each following call should contain a chunk of the bytes read from the file.

-If any issue occurred, the `problem` field of response contains the root cause of the case.
+- chunk
+```
+message Chunk {
+  bytes part = 1;
+}
+```

+### StoreAndHashResponse

+The result of a call to either endpoint is a stream of StoreAndHashResponse messages, each containing one of the following:
+- **Hash**: the type of hash used (SHA-256) and the hash in bytes and base64 String formats.
+- **URL**: the permanent URL to the uploaded file

+**NOTE**: If any error occurs, an `AppError` response is returned with the code and error message as strings:
+```
+message AppError {
+  string code = 1;
+  string msg = 2;
+}
+```
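A sketch of consuming the response stream in Node.js follows; since the StoreAndHashResponse message itself is not listed here, the accessor names (`getProblem`, `getHash`, `getUrl`, `getBase64`) are assumptions inferred from the fields described above:

```
// Sketch: log each StoreAndHashResponse from the stream.
// Accessor names below are assumptions based on the fields described
// in this guide, not a confirmed listing of the generated API.
const call = storeAndHashSvc.storeAndHashHttp();

call.on('data', resp => {
  if (resp.getProblem()) {
    console.error(`AppError ${resp.getProblem().getCode()}: ${resp.getProblem().getMsg()}`);
  } else if (resp.getHash()) {
    console.log('SHA-256 (base64):', resp.getHash().getBase64());
  } else {
    console.log('Download URL:', resp.getUrl());
  }
});
call.on('error', err => console.error('gRPC error:', err.message));
call.on('end', () => console.log('stream closed'));
```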
2 changes: 1 addition & 1 deletion docs/source/guides/store_and_hash_service_server_guide.md
@@ -7,7 +7,7 @@ This document describes the service from a deployment point of view.
Working knowledge of gRPC is assumed.

## The Service
-The service wraps access to the AWS instance for storing files.
+The service wraps access to the AWS and IPFS instances for storing files.
It is possible to specify connection details as well as the custom path to upload the file.

### The StoreAndHash Service (server)
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -46,7 +46,7 @@ PSG services core functionality
################################

* Submit metadata to the blockchain using Metadata service gRPC API
-* Store file in AWS and get URL to file and file's hash via StoreAndHash service gRPC API
+* Store file in AWS S3 or IPFS and get URL to file and file's hash via StoreAndHash service gRPC API
* Submit metadata and attachments to the blockchain using mail clients from UI
* Submit metadata and attachments to the blockchain using the gpg command-line tool
* Verify transaction metadata info via PSG Self Serve
51 changes: 42 additions & 9 deletions examples/js/README.md
@@ -25,36 +25,69 @@ grpc_tools_node_protoc --proto_path=../../protos/cardano-metadata-service/protob
## How to execute example for Metadata service

1. Make sure that you have registered an account in [PSG Self Serve UI](https://psg.iog.services/), purchased a package and generated an API Token.
-2. Set your username and API Token in the src/metadata_service_client.js file:
+2. Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values in src/metadata_service_client.js:
```shell
const credentials = new CommonMessages.CredentialsMessage()
.setClientId("your_psg_username")
.setApiToken("your_api_token");
.setClientId("CLIENT_ID")
.setApiToken("API_TOKEN");
```
3. Execute in Terminal:
```shell
-node src/submit_metadata_client.js
+node src/metadata_service_client.js
```
-## How to execute example for StoreAndHash service
+## How to execute example for StoreAndHashHttp

1. Make sure that you have registered an account in [PSG Self Serve UI](https://psg.iog.services/), purchased a package and generated an API Token.
-2. Replace AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_S3_BUCKET and BUCKET_REGION with your S3 account values and replace demo-file-path with your desired path in src/store_and_hash_client.js:
+2. Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values in src/store_and_hash_http_client.js:
```shell
-const reqs = [
+const credentials = new CommonMessages.CredentialsMessage()
+  .setClientId("CLIENT_ID")
+  .setApiToken("API_TOKEN");
+```
+3. Replace AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_S3_BUCKET and BUCKET_REGION with your S3 account values and replace demo-file-path with your desired path in src/store_and_hash_http_client.js:
+```shell
+const storeAndHashHttpRequests = [
new StoreAndHashMessages.StoreAndHashRequest()
.setDetails(
new StoreAndHashMessages.UploadDetails()
.setAws(new StoreAndHashMessages.AwsCredentials()
.setKeyid("AWS_ACCCESS_KEY")
.setKeyid("AWS_ACCESS_KEY")
.setKeysecret("AWS_SECRET_KEY")
.setBucket("AWS_S3_BUCKET")
.setRegion("BUCKET_REGION")
)
.setCredentials(credentials)
.setPath("demo-file-path")
),
```
4. Execute in Terminal:
```shell
-node src/store_and_hash_client.js
+node src/store_and_hash_http_client.js
```
+## How to execute example for StoreAndHashIpfs
+1. Make sure that you have registered an account in [PSG Self Serve UI](https://psg.iog.services/), purchased a package and generated an API Token.
+2. Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values in src/store_and_hash_ipfs_client.js:
+```shell
+const credentials = new CommonMessages.CredentialsMessage()
+  .setClientId("CLIENT_ID")
+  .setApiToken("API_TOKEN");
+```
+3. Replace IPFS_HOST and IPFS_PORT with your chosen IPFS node IP address and port in src/store_and_hash_ipfs_client.js:
+```shell
+const storeAndHashIpfsRequests = [
+  new StoreAndHashMessages.StoreAndHashIpfsRequest()
+    .setDetails(
+      new StoreAndHashMessages.IpfsUploadDetails()
+        .setIpfsaddress(new StoreAndHashMessages.IpfsAddress()
+          .setHost("IPFS_HOST").setPort(IPFS_PORT))
+        .setCredentials(credentials)
+    ),
+```
+4. Execute in Terminal:
+```shell
+node src/store_and_hash_ipfs_client.js
+```
5 changes: 3 additions & 2 deletions examples/js/src/metadata_service_client.js
@@ -8,9 +8,10 @@ const metadataSvc = new MetadataService.MetadataServiceClient(
'psg-testnet.iog.services:2001',
grpc.credentials.createSsl());

+// Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values
const credentials = new CommonMessages.CredentialsMessage()
.setClientId("your_psg_username")
.setApiToken("your_api_token");
.setClientId("CLIENT_ID")
.setApiToken("API_TOKEN");

// List Metadata Request
loggingStreamHandler(
@@ -9,27 +9,33 @@ const storeAndHashSvc = new StoreAndHashService.StoreAndHashServiceClient(
grpc.credentials.createSsl()
);

+// Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values
+const credentials = new CommonMessages.CredentialsMessage()
+  .setClientId("CLIENT_ID")
+  .setApiToken("API_TOKEN");

+// Replace AWS_ACCESS_KEY, AWS_SECRET_KEY, AWS_S3_BUCKET and BUCKET_REGION with your S3 account values and replace demo-file-path with your desired path.
-const reqs = [
+const storeAndHashHttpRequests = [
new StoreAndHashMessages.StoreAndHashRequest()
.setDetails(
new StoreAndHashMessages.UploadDetails()
.setAws(new StoreAndHashMessages.AwsCredentials()
.setKeyid("AWS_ACCCESS_KEY")
.setKeyid("AWS_ACCESS_KEY")
.setKeysecret("AWS_SECRET_KEY")
.setBucket("AWS_S3_BUCKET")
.setRegion("BUCKET_REGION")
)
.setCredentials(credentials)
.setPath("demo-file-path")
),

new StoreAndHashMessages
.StoreAndHashRequest()
.setChunk(new StoreAndHashMessages.Chunk().setPart("Demo Data To Be Saved"))
.setChunk(new StoreAndHashMessages.Chunk().setPart("Demo Data To Be Saved At AWS S3"))
]

-const storeAndHashCall = storeAndHashSvc.storeAndHashHttp();
-loggingStreamHandler(storeAndHashCall);
-reqs.forEach(req => storeAndHashCall.write(req));
-storeAndHashCall.end();
+const storeAndHashHttpCall = storeAndHashSvc.storeAndHashHttp();
+loggingStreamHandler(storeAndHashHttpCall);
+storeAndHashHttpRequests.forEach(req => storeAndHashHttpCall.write(req));
+storeAndHashHttpCall.end();

36 changes: 36 additions & 0 deletions examples/js/src/store_and_hash_ipfs_client.js
@@ -0,0 +1,36 @@
const CommonMessages = require('./common_pb.js');
const { loggingStreamHandler } = require('./util.js');
const StoreAndHashMessages = require('./storeandhash-service_pb.js');
const StoreAndHashService = require('./storeandhash-service_grpc_pb.js');
const grpc = require('@grpc/grpc-js');

const storeAndHashSvc = new StoreAndHashService.StoreAndHashServiceClient(
'psg-testnet.iog.services:2001',
grpc.credentials.createSsl()
);

// Replace CLIENT_ID and API_TOKEN with your PSG Self Serve UI account values
const credentials = new CommonMessages.CredentialsMessage()
.setClientId("CLIENT_ID")
.setApiToken("API_TOKEN");

// Replace IPFS_HOST and IPFS_PORT with your chosen IPFS node ip address and port
const IPFS_PORT = 5001
const storeAndHashIpfsRequests = [
new StoreAndHashMessages.StoreAndHashIpfsRequest()
.setDetails(
new StoreAndHashMessages.IpfsUploadDetails()
.setIpfsaddress(new StoreAndHashMessages.IpfsAddress()
.setHost("IPFS_HOST").setPort(IPFS_PORT))
.setCredentials(credentials)
),

new StoreAndHashMessages
.StoreAndHashIpfsRequest()
.setChunk(new StoreAndHashMessages.Chunk().setPart("Demo Data To Be Saved At IPFS"))
]

const storeAndHashIpfsCall = storeAndHashSvc.storeAndHashIpfs();
loggingStreamHandler(storeAndHashIpfsCall);
storeAndHashIpfsRequests.forEach(req => storeAndHashIpfsCall.write(req));
storeAndHashIpfsCall.end();

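The example clients require util.js, which is not shown in this commit; the sketch below is only an assumption of what `loggingStreamHandler` does, namely attaching logging listeners to the gRPC stream, under the further assumption that responses are protobuf messages exposing `toObject()`:

```
// Hypothetical reconstruction of examples/js/src/util.js; the real
// implementation may differ. Shown only to make the examples self-contained.
function loggingStreamHandler(stream) {
  stream.on('data', resp => console.log('response:', JSON.stringify(resp.toObject())));
  stream.on('error', err => console.error('error:', err.message));
  stream.on('end', () => console.log('stream ended'));
}

module.exports = { loggingStreamHandler };
```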