CLOUDPUBSUB

cloudpubsub is a customized MQTT client for highly reliable data transfer over unreliable networks. It backs up data to disk during prolonged disconnections or slow network speeds.

reliability is achieved by

  • backing up old data to disk when in-memory queues are full. this keeps memory usage to a minimum without dropping any packets
  • preserving publish sequence by transmitting the old messages saved on disk first
  • transmitting pending data from disk even across reboots. this keeps data loss during reboots to a minimum (only data which is in memory will be lost)

NOTE: you can configure the maximum number of messages that can be backed up to disk before cloudpubsub starts dropping packets
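
To make the backlog behaviour above concrete, here is a minimal sketch of a disk-overflow queue: a bounded in-memory queue that flushes its oldest entries to numbered files on disk and drains the disk first to preserve publish order. The names (`OverflowQueue`, `max_in_memory`, `max_on_disk`) and the drop policy are assumptions for illustration, not cloudpubsub's actual types.

```rust
use std::collections::VecDeque;
use std::fs::{self, File};
use std::io::{self, Read, Write};
use std::path::PathBuf;

/// Illustrative disk-overflow queue; not cloudpubsub's real implementation.
struct OverflowQueue {
    dir: PathBuf,
    mem: VecDeque<Vec<u8>>,
    max_in_memory: usize,
    max_on_disk: usize,
    next_write: u64, // index of the next packet file to create
    next_read: u64,  // index of the next packet file to transmit
}

impl OverflowQueue {
    fn new(dir: PathBuf, max_in_memory: usize, max_on_disk: usize) -> io::Result<Self> {
        fs::create_dir_all(&dir)?;
        // A real reboot-recovery path would rebuild next_read/next_write by
        // scanning `dir` for leftover packet files.
        Ok(OverflowQueue { dir, mem: VecDeque::new(), max_in_memory, max_on_disk, next_write: 0, next_read: 0 })
    }

    /// Enqueue a packet. When the in-memory queue is full, the oldest
    /// in-memory packet is flushed to disk; once the disk limit is also
    /// reached, packets start getting dropped (assumed policy).
    fn push(&mut self, packet: Vec<u8>) -> io::Result<()> {
        self.mem.push_back(packet);
        if self.mem.len() <= self.max_in_memory {
            return Ok(());
        }
        let oldest = self.mem.pop_front().expect("queue is non-empty");
        if (self.next_write - self.next_read) as usize >= self.max_on_disk {
            return Ok(()); // disk backlog limit reached: drop
        }
        File::create(self.dir.join(format!("{}.pkt", self.next_write)))?.write_all(&oldest)?;
        self.next_write += 1;
        Ok(())
    }

    /// Dequeue in publish order: older, disk-backed packets go out first.
    fn pop(&mut self) -> io::Result<Option<Vec<u8>>> {
        if self.next_read < self.next_write {
            let path = self.dir.join(format!("{}.pkt", self.next_read));
            let mut buf = Vec::new();
            File::open(&path)?.read_to_end(&mut buf)?;
            fs::remove_file(path)?;
            self.next_read += 1;
            return Ok(Some(buf));
        }
        Ok(self.mem.pop_front())
    }
}
```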

design considerations

Before reading the points below, understand that cloudpubsub is not meant to be a full-fledged MQTT client. It doesn't implement all the features of an MQTT client; rather, it strives to achieve (configurable) maximum reliability for data transmission using as little memory as possible.

  1. ALL PUBLISHES AND SUBSCRIBES ARE DONE ON QOS 1: This ensures that all your packets are transmitted at least once, but duplicates are possible during reconnections

  2. PUBLISHES & SUBSCRIBES HAPPEN ON SEPARATE NETWORK CONNECTIONS OVER 2 THREADS: The rationale is that it isn't really possible to do blocking reads and writes simultaneously from 2 different threads on a single network connection when TLS is involved (see the sketch after the Q&A below). I considered these questions before deciding to use separate network connections for pub & sub:

Q. Isn't it possible to clone TlsStreams?

A. As far as I've read, no. When the server requests a renegotiation, the client's TLS read() call will do a write and its TLS write() call can do a read. Since both streams from the 2 threads would share a single SSL object, outstanding data in the buffer causes races during renegotiations.

sfackler/rust-openssl#338

Q. What about setting read/write timeouts?

A. Though possible, this isn't a very scalable solution. Writes might time out when you write big payloads or when the network is slow, and we'd have to take care of retransmitting the remaining data. Reads have the same problem: partial data might be read before timing out, so we wouldn't be able to frame full packets in one go. This has to be handled by buffering partial reads, and handling partial reads/writes manually is error prone.
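
For illustration, this is roughly the buffering a timeout-based, single-connection approach forces on the reader side. The function names are hypothetical and this is not cloudpubsub code; it only sketches why partial reads have to be stitched back into full MQTT packets.

```rust
use std::io::{self, Read};
use std::net::TcpStream;
use std::time::Duration;

/// A timed-out read can return only part of an MQTT packet, so bytes must be
/// accumulated across calls until the remaining-length field says a full
/// packet has arrived.
fn read_one_packet(stream: &mut TcpStream, buf: &mut Vec<u8>) -> io::Result<Option<Vec<u8>>> {
    stream.set_read_timeout(Some(Duration::from_millis(500)))?;
    let mut chunk = [0u8; 4096];
    match stream.read(&mut chunk) {
        Ok(0) => return Err(io::Error::new(io::ErrorKind::UnexpectedEof, "connection closed")),
        Ok(n) => buf.extend_from_slice(&chunk[..n]),
        // A timeout surfaces as WouldBlock/TimedOut; the partial bytes already
        // sitting in `buf` must be kept around for the next call.
        Err(e) if e.kind() == io::ErrorKind::WouldBlock || e.kind() == io::ErrorKind::TimedOut => {}
        Err(e) => return Err(e),
    }
    // Try to frame one complete packet: fixed header byte + remaining length.
    if let Some((header_len, remaining)) = parse_remaining_length(buf) {
        let total = header_len + remaining;
        if buf.len() >= total {
            return Ok(Some(buf.drain(..total).collect()));
        }
    }
    Ok(None) // not enough bytes yet; the caller has to loop and retry
}

/// Decode MQTT's variable-length "remaining length" field. Returns
/// (fixed header length, remaining length) once enough bytes are available.
fn parse_remaining_length(buf: &[u8]) -> Option<(usize, usize)> {
    let mut value = 0usize;
    let mut multiplier = 1usize;
    for (i, &byte) in buf.iter().skip(1).take(4).enumerate() {
        value += (byte & 0x7F) as usize * multiplier;
        if byte & 0x80 == 0 {
            return Some((2 + i, value));
        }
        multiplier *= 128;
    }
    None
}
```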

Q. Maybe we could use tokio?

A. I tried using tokio, but handling automatic reconnections isn't trivial. Moreover, we shouldn't do blocking file IO in the event loop to read pending messages from disk.
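
As a rough illustration of the chosen design, the sketch below shows the two-connections / two-threads layout using plain TcpStreams and hypothetical publisher/subscriber loops; it is not cloudpubsub's actual API.

```rust
use std::net::TcpStream;
use std::thread;

// Each loop owns its own connection, so the subscriber's blocking reads never
// contend with the publisher's blocking writes, and no TLS stream ever needs
// to be shared between threads.
fn main() -> std::io::Result<()> {
    let broker = "broker.example.com:1883"; // placeholder address

    let pub_stream = TcpStream::connect(broker)?;
    let sub_stream = TcpStream::connect(broker)?;

    let publisher = thread::spawn(move || publisher_loop(pub_stream));
    let subscriber = thread::spawn(move || subscriber_loop(sub_stream));

    publisher.join().unwrap();
    subscriber.join().unwrap();
    Ok(())
}

// In cloudpubsub these would run the MQTT state machines: QoS 1 PUBLISH /
// PUBACK on the publisher connection, SUBSCRIBE plus incoming PUBLISHes on
// the subscriber connection, each entirely on its own stream.
fn publisher_loop(_stream: TcpStream) { /* blocking writes + PUBACK reads */ }
fn subscriber_loop(_stream: TcpStream) { /* blocking reads of incoming messages */ }
```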
