grovolis/cloud_computing_alice_bob_messaging

Documentation

Alper Necati Akin, Ryan Halliburton, George Rovolis

The aim of the project is to implement a fair-exchange cryptographic messaging protocol between Alice and Bob via a trusted third party (TTP) using the Amazon cloud. There are three entities (Alice, Bob, and the TTP). The code was implemented in Java: the AWS Java SDK was used for operations against the cloud, and the JCA libraries for cryptographic operations. Figures can be found in the 'Figures for Readme' issue.

Background Research

We used Amazon Web Services to build our fair exchange system. AWS is capable of deploying virtual servers, setting up firewalls, controlling access policies, allocating IP addresses, and scaling infrastructure as demand increases. We undertook background research on the monetary and time costs of the different services Amazon offers in order to determine which would be best suited to our needs.

Amazon S3 buckets were used for document transfer, Amazon SQS for messaging between entities, and Amazon DynamoDB for logging.

Amazon S3: An internet-scale data storage service that can handle vast amounts of data and concurrent accesses. The data is stored in containers called buckets, and bucket access can be restricted through AWS access policies defined by the customer. The items stored in a bucket can be of any type (e.g. JPEG, HTML, Java). Security is one of the main benefits of the bucket: it supports user authentication and access control mechanisms such as access control lists, so we can selectively grant permissions to users or groups of users. SSL is available for data transfer, and data can be encrypted automatically once uploaded. Moreover, it is offered as a low-cost solution and integrates easily with other Amazon services, such as Amazon S3 notifications (an extension we would have considered had we had more time).
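
As a rough sketch, a document upload with server-side encryption might look like the following with the AWS SDK for Java 1.x (the bucket name, object key, and file are hypothetical):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class BucketUpload {
    public static void main(String[] args) {
        // Credentials come from the default provider chain (environment
        // variables, ~/.aws/credentials, or an IAM role).
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Request SSE-S3 server-side encryption so the object is
        // encrypted at rest once uploaded; transfer itself uses SSL.
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

        s3.putObject(new PutObjectRequest(
                "alice-bob-exchange",      // hypothetical bucket name
                "document-key-123",        // object key for the document
                new File("contract.pdf"))  // local file to upload
                .withMetadata(metadata));
    }
}
```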

Amazon SQS: An Amazon queue contains messages that can be processed by clients. A client can read, write, and delete messages in a queue, should its permissions allow it. AWS FIFO queues give exactly-once processing semantics, so we would not have to worry about messages being lost or duplicated.
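
A minimal sketch of queue usage with the SDK follows; the FIFO queue URL, group id, and deduplication id are hypothetical:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class QueueDemo {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        // Hypothetical FIFO queue URL; FIFO queue names end in ".fifo".
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/ttp-queue.fifo";

        // FIFO queues require a message group id; the deduplication id is
        // what provides the exactly-once semantics mentioned above.
        sqs.sendMessage(new SendMessageRequest(queueUrl, "document-key-123")
                .withMessageGroupId("exchange")
                .withMessageDeduplicationId("tx-42"));

        // Read and then delete the message, permissions allowing.
        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            System.out.println(m.getBody());
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}
```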

Amazon DynamoDB: This service provides managed NoSQL databases. NoSQL database tables are schema-less and can be used to store JSON-style documents or key-value pairs. Thanks to this architecture, NoSQL offers flexibility, low latency, and effectively limitless scaling, as it can adjust according to need. This seemed like the best option for us: we would not be locked into a schema, and that flexibility was attractive.
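
A minimal sketch of logging an exchange record to the Transactions table described later, using the SDK's document API (attribute names beyond the table name are illustrative):

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

public class LogTransaction {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        DynamoDB dynamoDB = new DynamoDB(client);

        // "Transactions" table keyed on a transaction id; being
        // schema-less, extra attributes can vary from item to item.
        Table table = dynamoDB.getTable("Transactions");
        table.putItem(new Item()
                .withPrimaryKey("TransactionId", "tx-42")
                .withString("DocumentKey", "document-key-123")
                .withString("AliceSignature", "base64-encoded-signature"));
    }
}
```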

Cryptography

The security was implemented with Java's JCA libraries. We used RSA asymmetric encryption with a 2048-bit modulus. This is the currently recommended size, since 768-bit RSA was factored in 2009 [1], and RSA is a standard for asymmetric encryption, so the libraries and documentation are well supported. For hashing we used SHA-256, as SHA-1 was shown to be insecure a number of years ago, with Google providing a proof-of-concept collision in early 2017 [2]. For symmetric encryption we chose AES with a 128-bit key; its predecessor DES is vulnerable to brute force and deprecated due to its small key size, whereas AES is the current standard for symmetric cryptography. In general we imagined our system running on devices that are not resource-constrained, so we used larger keys for security and accepted the slight extra overhead. When the clients and server "wake up" they generate their public/private key pairs and request an exchange; public keys are then shared between participants via the TTP. We were aware of the possibility of a man-in-the-middle attack here, but without some prior shared secret or an alternate channel of communication this is hard to rule out completely, so we considered the risk acceptable. One of the main benefits of the S3 bucket storage we decided to use is that it supports SSL for data transfer and automatic encryption once data is uploaded.
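
A minimal JCA sketch of the primitives just described, generating the 2048-bit RSA key pair, a SHA-256 digest, and a 128-bit AES key:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CryptoSetup {
    public static void main(String[] args) throws Exception {
        // 2048-bit RSA key pair, generated when a client "wakes up".
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(2048);
        KeyPair keyPair = rsa.generateKeyPair();

        // SHA-256 digest of the document bytes.
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] documentHash = sha256.digest(
                "document bytes".getBytes(StandardCharsets.UTF_8));

        // 128-bit AES key for symmetric encryption.
        KeyGenerator aes = KeyGenerator.getInstance("AES");
        aes.init(128);
        SecretKey sessionKey = aes.generateKey();
    }
}
```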

Access Control

In order to ensure that our fair-exchange protocol could deliver on the earlier-stated fairness guarantee, "both parties obtain all the items required or neither party does", we decided to establish access control policies for each participant's queue from within the AWS console. This is the logical choice, as access control is built into AWS: we could establish different users, roles, and group privileges, with keys that can be created or destroyed per participant or per exchange, depending on the granularity required. As stated earlier, we chose Amazon's S3 service to store the document being exchanged. An S3 bucket is flexible, scalable, and secure, and there are two ways of accessing it: programmatically via a client, or through the web interface. To ensure our system's integrity, we applied several policies over the bucket storage so that all parties would have only the minimum permissions necessary to complete the transaction. We gave full access to the trusted third party. Alice, the message sender, is able to store and read objects in the bucket but cannot delete anything. Finally, Bob, the message receiver, is able to read and delete objects from the bucket; Bob deletes a file from the bucket only once it has been downloaded to a local directory through the Java client. These permissions are given to each client through AWS IAM user policies. As a result, the permissions are tied to each user specifically, and each of our system's entities can only take the role assigned to it, e.g. Alice is always the message sender, Bob the receiver, and the TTP the middleman. To implement these restrictions we used the AWS Management Console to set up the policies and then applied each profile to the corresponding entity through Amazon's AWS Toolkit in the Eclipse IDE. Apart from the bucket access management, we applied policies to the Amazon queues as well; for our application we created three queues, one for each entity.
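
As an illustration, an inline IAM user policy like the one below could express Alice's bucket permissions (store and read, but never delete). The user name, policy name, account id, and bucket ARN are placeholders, and in our case the policies were set up in the Management Console rather than attached in code:

```java
import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
import com.amazonaws.services.identitymanagement.model.PutUserPolicyRequest;

public class BucketPermissions {
    public static void main(String[] args) {
        AmazonIdentityManagement iam =
                AmazonIdentityManagementClientBuilder.defaultClient();

        // Alice may put and get objects in the exchange bucket, but no
        // delete action is granted, so she cannot remove anything.
        String alicePolicy = "{"
                + "\"Version\": \"2012-10-17\","
                + "\"Statement\": [{"
                + "  \"Effect\": \"Allow\","
                + "  \"Action\": [\"s3:PutObject\", \"s3:GetObject\"],"
                + "  \"Resource\": \"arn:aws:s3:::alice-bob-exchange/*\""
                + "}]}";

        iam.putUserPolicy(new PutUserPolicyRequest(
                "Alice", "AliceBucketPolicy", alicePolicy));
    }
}
```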

Through Amazon's Simple Queue Service we were able to set up access policies for message processing, so for each queue created we set up specific permissions tied to each entity. Alice had full access to read, write, and delete messages in her queue, while the TTP and Bob could only write messages to it. Bob had full access to his queue, and the other two entities could only send messages to it. Finally, on the TTP queue, the TTP had full access while the other two entities could only send messages. By enforcing these policies, we ensured that no entity was able to access a message not intended for it. As in the bucket case, the permissions were tied to each entity through AWS IAM user policies and then implemented through the AWS Management Console in JSON form. See (Fig 2) for a diagrammatic overview of our access control policies for the queues.
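
A sketch of how one of these queue policies might be attached programmatically with the SDK (again, account id, region, and ARNs are placeholders; we set ours up in the Management Console):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;
import java.util.Collections;

public class QueuePolicy {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/alice-queue";

        // Allow Bob's IAM user to send messages to Alice's queue and
        // nothing else; Alice's own full access comes from her IAM
        // user policy, as described above.
        String policy = "{"
                + "\"Version\": \"2012-10-17\","
                + "\"Statement\": [{"
                + "  \"Effect\": \"Allow\","
                + "  \"Principal\": {\"AWS\": \"arn:aws:iam::123456789012:user/Bob\"},"
                + "  \"Action\": \"sqs:SendMessage\","
                + "  \"Resource\": \"arn:aws:sqs:eu-west-1:123456789012:alice-queue\""
                + "}]}";

        sqs.setQueueAttributes(new SetQueueAttributesRequest()
                .withQueueUrl(queueUrl)
                .withAttributes(Collections.singletonMap("Policy", policy)));
    }
}
```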

System Design

We found the initial design challenging, as there were many ways to implement the system. After researching the Amazon components we decided on the design shown in (Fig 3). This was because we wanted to use AWS services where possible, both due to the requirement to use AWS and because of the simplicity of the "out-of-the-box" services that come with them. The design was later transformed into its final version (Fig 4).

The team initially discussed the service structure of the system. We considered data transfer between the clients (Alice and Bob) and the TTP to be the core issue. The team decided to use SQS for all three entities (Alice, Bob, and the TTP) because SQS was deemed easier to implement. AWS FIFO queues also give exactly-once processing semantics, so we did not have to worry about messages being lost or duplicated, as we might have had we used a RESTful service. Another benefit of queues was that they allowed the protocol to be easily extended: SQS let us use self-descriptive messages to implement the protocol, so the message handling at either end was decoupled from the internal protocol workings, and in effect we could change the make-up of the messages to use a different protocol as needed, as sketched below.
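
One way such a self-descriptive message could be built is with SQS message attributes, so the receiving side can dispatch on the attributes without the body being tied to one protocol; the queue URL, attribute names, and values here are hypothetical:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class ProtocolMessage {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String ttpQueueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/ttp-queue.fifo";

        // The attributes describe what the message is (protocol step and
        // transaction), leaving the body free to carry the payload.
        sqs.sendMessage(new SendMessageRequest(ttpQueueUrl, "document-key-123")
                .withMessageGroupId("tx-42")
                .withMessageDeduplicationId("tx-42-step-1")
                .addMessageAttributesEntry("Step", new MessageAttributeValue()
                        .withDataType("String").withStringValue("DOCUMENT_OFFER"))
                .addMessageAttributesEntry("TransactionId", new MessageAttributeValue()
                        .withDataType("String").withStringValue("tx-42")));
    }
}
```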

One of the challenges with SQS was securing messages between the three entities. To prevent clients from deleting or reading messages not intended for them, the team put restrictive access control policies on the Amazon components (explained more fully in the Access Control section). Unfortunately, restricting a single queue shared between three entities was not viable, so we decided to use three different queues, one for each entity, to provide a safer messaging structure (Fig 5).

The design of the document transfer was also deliberated on for some time. Initially it was thought that the document could be sent as byte arrays in queue messages. However, ordinary queue messages can only carry up to 256 kilobytes of data. This was deemed inadequate for transferring ordinary files, so we considered extended messages for the task. SQS extended messages can transfer up to two gigabytes by using an Amazon S3 bucket in the background, so there is no direct programmatic interaction between the entities and the bucket. The first design (Fig 3) was based on SQS extended messages. Nevertheless, the team changed course and chose to use the bucket directly, without SQS extended messages (Fig 4), because allowing the clients to put and get files from the bucket was considered more scalable than sending all data through the TTP. Restrictions were applied to the bucket to prevent undesired read, write, and delete violations on documents by unauthorized entities.
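
A minimal sketch of the chosen direct-transfer pattern, with hypothetical bucket name and file paths: Alice uploads under a freshly generated key, and Bob, once the TTP releases the key, downloads and then deletes the object:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import java.io.File;
import java.util.UUID;

public class DocumentTransfer {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "alice-bob-exchange"; // hypothetical bucket name

        // Alice: store the document under a fresh key; the key, not the
        // document, is what travels through the TTP.
        String documentKey = UUID.randomUUID().toString();
        s3.putObject(bucket, documentKey, new File("contract.pdf"));

        // Bob (after the TTP releases the key): download to a local file,
        // then delete the object, which his IAM policy permits.
        s3.getObject(new GetObjectRequest(bucket, documentKey),
                new File("contract-download.pdf"));
        s3.deleteObject(bucket, documentKey);
    }
}
```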

To sum up the design (Fig 4): Alice generates a document key and stores the document in the bucket under that key. She also generates a transaction id to distinguish records and transactions, and creates a signature over the document hash. Alice sends the document key, signature, and transaction id to the TTP. The TTP verifies the signature sent by Alice; if verification fails, the transaction is terminated at this point. It then stores the document key and signature, with the transaction id, in the Transactions table in the database, and sends Alice's signature with the transaction id to Bob. Bob generates a signature over the signature sent by the TTP and sends his signature, with the transaction id, back to the TTP. After the TTP receives Bob's signature, it retrieves the document key and Alice's signature using the transaction id, and verifies Bob's signature with his public key. It then sends the document key to Bob and Bob's signature to Alice. Upon successful completion, Bob retrieves the document using the document key and Alice receives a non-repudiation receipt.

If the TTP cannot verify the signature of Bob or Alice, it terminates the transaction, sends termination messages to Bob and Alice, and deletes the document in the bucket and the records in the database. The public keys of Alice and Bob are sent to the TTP at the beginning of a transaction; if the TTP does not yet have the public keys, it suspends the transaction and waits for them before verifying the signatures.
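
The signing steps could look roughly like this with JCA. This is a self-contained sketch with freshly generated keys and a dummy hash; in the real system the keys are exchanged via the TTP and the hash is the SHA-256 digest of the document:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ExchangeSignatures {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(2048);
        KeyPair alice = rsa.generateKeyPair();
        KeyPair bob = rsa.generateKeyPair();
        byte[] documentHash = new byte[32]; // stand-in for the SHA-256 hash

        // Alice signs the document hash and sends the signature to the TTP.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(alice.getPrivate());
        signer.update(documentHash);
        byte[] aliceSig = signer.sign();

        // TTP verifies with Alice's public key; on failure it terminates
        // the transaction.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(documentHash);
        if (!verifier.verify(aliceSig)) throw new IllegalStateException("terminate");

        // Bob countersigns Alice's signature; once the TTP has verified
        // it, this becomes Alice's non-repudiation receipt.
        signer.initSign(bob.getPrivate());
        signer.update(aliceSig);
        byte[] bobSig = signer.sign();
    }
}
```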
