This repository has been archived by the owner on May 22, 2023. It is now read-only.

Muen block session #65

Closed
jklmnn opened this issue Jun 11, 2019 · 5 comments

jklmnn commented Jun 11, 2019

No description provided.

@jklmnn jklmnn self-assigned this Jun 11, 2019

jklmnn commented Jun 17, 2019

Muen uses protocol identifiers to determine the channel protocol. As the channels are unidirectional, there are two protocols: a request and a response protocol. Furthermore, the Linux module uses a distinct protocol identifier for each channel, so the test that uses two channels results in the following values:

  • request: 0x9570208dca77db19, 0x9570208dca77db20
  • response: 0x9851be3282fef0dc, 0x9851be3282fef0dd

Running multiple channels with the same protocol identifier seems to work.


jklmnn commented Jun 18, 2019

As there is currently no convention, we use the first protocol identifier for each protocol.

Also, since we can only supply one name per session but need two channels, each channel name will consist of a prefix and the supplied session name:

  • rsp:<session name> for the response channel
  • req:<session name> for the request channel
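The naming scheme above can be sketched as follows; the helper names and the example session name disk0 are hypothetical, not part of the implementation:

```python
# Illustrative sketch: derive both channel names from the single
# session name that can be supplied, as described above.

def request_channel_name(session: str) -> str:
    """Name of the client -> server (request) channel."""
    return "req:" + session

def response_channel_name(session: str) -> str:
    """Name of the server -> client (response) channel."""
    return "rsp:" + session

print(request_channel_name("disk0"))   # req:disk0
print(response_channel_name("disk0"))  # rsp:disk0
```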


jklmnn commented Jun 25, 2019

To allow session-based channel detection, each channel name will be further prefixed with an abbreviation of the session type:

  • blk:rsp:<channel name>
  • blk:req:<channel name>
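A hypothetical parser for this final naming scheme; the function and variable names are assumptions made for illustration:

```python
# Illustrative sketch: split a channel name of the form
# "blk:req:<name>" / "blk:rsp:<name>" into its three parts.

def parse_channel_name(channel: str):
    """Split 'blk:req:<name>' into (session_type, direction, name).

    Raises ValueError for names that do not follow the scheme.
    """
    parts = channel.split(":", 2)
    if len(parts) != 3 or parts[1] not in ("req", "rsp"):
        raise ValueError(f"unrecognized channel name: {channel!r}")
    session_type, direction, name = parts
    return session_type, direction, name

print(parse_channel_name("blk:rsp:disk0"))  # ('blk', 'rsp', 'disk0')
```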


jklmnn commented Jun 25, 2019

The dispatching approach fits well with #50. Since it wouldn't make sense to first implement an inferior approach that is already deprecated by a planned feature, I'll implement #50 first and continue with the dispatcher afterwards.

Explanation:
The dispatcher on Muen is called in each scheduling cycle, but the dispatcher implemented by the component should only be called once the connection state has changed.
The connection state on Muen is detected via the Is_Active state of both the request and the response channel.
Since the client activates the request channel (and probably already sends a request) during its initialization while the response channel has not yet been activated, an incoming "connection request" can be detected. If the server decides to accept this request, the response channel is activated. The same applies to the connection teardown: when the client finalizes, it deactivates the request channel. If the response channel is still active, the dispatcher can finalize the server. State detection can therefore be summarized in the following table:

                     Request Active                 Request Inactive
  Response Active    Connection established         Client disconnected
  Response Inactive  Client requested connection    Disconnected
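The table above can be expressed as a small state-decoding function. This is an illustrative Python sketch, not the actual implementation; the enum and function names are assumptions:

```python
from enum import Enum

class ConnState(Enum):
    ESTABLISHED = "connection established"
    CLIENT_DISCONNECTED = "client disconnected"
    CONNECTION_REQUESTED = "client requested connection"
    DISCONNECTED = "disconnected"

def connection_state(request_active: bool, response_active: bool) -> ConnState:
    """Derive the connection state from the Is_Active flags of both channels."""
    if request_active and response_active:
        return ConnState.ESTABLISHED
    if request_active:
        # Client activated its request channel, server has not answered yet.
        return ConnState.CONNECTION_REQUESTED
    if response_active:
        # Client deactivated its request channel while the server is active.
        return ConnState.CLIENT_DISCONNECTED
    return ConnState.DISCONNECTED
```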

On both Genode and Muen the dispatcher generates the information for the Session_Request procedure in the context of its dispatching call. That means the information is only valid while Dispatch is called. On Genode this information is stored on the stack and retrieved from it inside another function, which will most probably cause segmentation faults when those procedures are called from another context. On Muen the information would be stored in the component-global session registry; to retrieve the correct information, at least the session registry index needs to be passed to the internal dispatch call. In both cases it would be safer and simpler to use an opaque object that passes this information while also enforcing the call context.
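A minimal sketch of such an opaque object, with hypothetical names: the capability is only valid during the dispatch call, so stale session information cannot be retrieved from another context:

```python
# Illustrative sketch: wrap per-call session information in a capability
# that is invalidated when the dispatch call returns.

class DispatcherCapability:
    def __init__(self, session_index: int):
        self._session_index = session_index
        self._valid = True

    @property
    def session_index(self) -> int:
        if not self._valid:
            raise RuntimeError("capability used outside its dispatch call")
        return self._session_index

    def _invalidate(self) -> None:
        self._valid = False

def dispatch(session_index, session_request):
    """Call session_request with a capability scoped to this dispatch."""
    cap = DispatcherCapability(session_index)
    try:
        session_request(cap)
    finally:
        cap._invalidate()  # any retained reference becomes unusable
```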


jklmnn commented Jun 28, 2019

The current block channel on Muen uses combined meta data and data inside the channel. While this reduces the implementation complexity it prevents the usage of larger requests or block sizes other than 4096. It also forces the reading end of each channel to either handle the blocks in order or cache them inside the application. Furthermore it requires at least one copy operation on each block.

An alternative approach would consist of two meta data only channels and a third shared memory region for blocks. The meta data channels would be similar to the current channels used but without block data.

The shared memory region would be used to exchange block data. To prevent race conditions the client is solely responsible for allocating and freeing blocks from this region. The client stores the region to be used by the server inside the request sent over the meta data channel. Tracking the allocation and deallocation of blocks in the shared memory region is up to the client and transparent to the server. The server will determine memory to be used for each request solely by the meta data.
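An illustrative client-side allocator for the shared block region; the names and the first-fit strategy are assumptions, only the division of responsibility (client allocates and frees, server just uses the offset from the metadata) matches the description above:

```python
# Illustrative sketch: the client alone tracks which blocks of the shared
# region are in use; the server only ever uses the byte offset it receives
# in the request metadata.

BLOCK_SIZE = 4096  # assumed block size, matching the current channel

class SharedRegionAllocator:
    def __init__(self, region_blocks: int):
        self._free = set(range(region_blocks))

    def alloc(self, count: int) -> int:
        """Reserve `count` consecutive blocks; return the byte offset."""
        for start in sorted(self._free):
            run = range(start, start + count)
            if all(b in self._free for b in run):
                self._free -= set(run)
                return start * BLOCK_SIZE
        raise MemoryError("no consecutive run of blocks available")

    def free(self, offset: int, count: int) -> None:
        """Return `count` blocks starting at byte `offset` to the free set."""
        first = offset // BLOCK_SIZE
        self._free |= set(range(first, first + count))
```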

A meta data event could have the following elements:

  • None, Read, Write, Sync, Size, Count
    • None: empty, invalid event, all other fields must be 0
    • Read: read request
    • Write: write request
    • Sync: sync request
    • Size: requesting block size from server
    • Count: requesting number of blocks from server
  • Block Id/Size data/Count data
  • Number of blocks
  • Shared memory offset
  • 16 byte private client data

The None request is an explicitly empty request that indicates an unused field in the shared memory channel. For Read, Write, and Sync requests, the next fields indicate the starting block number and the number of consecutive blocks that should be handled. For Read and Write, the shared memory offset indicates the position inside the shared memory that should be used to read/write the data. The client needs to allocate enough consecutive memory for the complete request to fit. A further 16 bytes are free for the client to use.
If the event is a Size or Count request, the server will use the block id field to store the return value.
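A hypothetical encoding of such a metadata event; the field widths, ordering, and byte order are assumptions for illustration, not the actual Muen wire format:

```python
import struct
from enum import IntEnum

class Kind(IntEnum):
    NONE = 0   # empty/invalid event, all other fields must be 0
    READ = 1
    WRITE = 2
    SYNC = 3
    SIZE = 4   # ask the server for the block size
    COUNT = 5  # ask the server for the number of blocks

# Assumed fixed layout: kind, block id (or size/count return value),
# number of blocks, shared memory offset, 16 bytes of private client data.
EVENT = struct.Struct("<Q Q Q Q 16s")

def pack_event(kind, block_id=0, count=0, offset=0, priv=b""):
    """Serialize one metadata event into its fixed-size representation."""
    return EVENT.pack(kind, block_id, count, offset, priv.ljust(16, b"\0"))

req = pack_event(Kind.READ, block_id=42, count=8, offset=4096)
```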

@jklmnn jklmnn closed this as completed Aug 28, 2019