
Split message cache #251

Closed

Conversation

StanislawLeja
Collaborator

Feature for grouping the parts of a split message, for both UDH-based splits and parameter-based splits.

@vetss
Contributor

vetss commented Sep 11, 2017

Hello @StanislawLeja

Is adding "MessageParts", "MessagePartNumber", and the messageId of the first segment of a split message into CDRs the only target of your update? Creating such an issue first may help in discussing a proper solution design.

If yes, we need to start with a design discussion first. May I ask you to open a separate issue for this task, where we will be able to discuss the details?

My draft comments:

  1. If we want to create a split-message cache (one that can be reused for different tasks, for example for reassembling a whole message from its segments), it must be hosted not in the CDR generator but in some general project place - in the SmscManagement class (like MProcManagement, SmscPropertiesManagement, etc.).

  2. You currently use the cache for the correlation "sourceAddress+refNumber" -> "messageId". We can reuse such a cache for other tasks, for example message reassembly, if needed. Ideally we would fill the cache when a message arrives at the SMSC GW and then reuse that data when creating the CDR. This means we would have to store extra data inside the Sms class or read it from the cache at CDR generation. We can think about this in the future and keep the current behaviour for now.

  3. We currently clear the cache during CDR generation for every message. This will certainly overload processing. We need the following algorithm:

  • For most messages we can remove an entry from the cache once all of its segments have been found. This covers the vast majority of normal behaviour.
  • The remaining cases can be covered by removing entries periodically with a timer. Another solution may be the following: we keep two cache collections, A and B. We first fill data into A for a period of "removeOlderThanXSeconds / 2", then we start filling collection B for the same period. After that we can simply drop all members of collection A and start filling it again. This avoids searching a collection for old records and spending processor time on it. The get operation reads from both caches (A and B). A sketch of this two-collection approach is given after this list.
  4. I do not see any multithreading support for the cache, so we may have collisions. I also see some performance-affecting issues: you use String operations like "string" + "string" + "string" (better to use StringBuilder), and you use operations like "if (exists in cache) { put to cache }" (ideally we would have something like an atomic operation that checks whether the data is present and adds or updates it if needed; this is also covered in the sketch below).
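
Below is a minimal sketch of the two-collection idea from point 3 combined with the atomicity and StringBuilder remarks from point 4, assuming the "sourceAddress+refNumber" -> "messageId" correlation from point 2. The class and method names (SplitMessageCache, firstSegmentMessageId, rotate) are hypothetical and not the project's actual code; it illustrates the approach rather than a drop-in implementation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SplitMessageCache {

    // Two generations: new entries go into "current"; every removeOlderThanXSeconds / 2
    // the generations are rotated and the oldest one is dropped wholesale, so no
    // per-entry expiry scan is ever needed.
    private volatile ConcurrentMap<String, Long> current = new ConcurrentHashMap<>();
    private volatile ConcurrentMap<String, Long> previous = new ConcurrentHashMap<>();

    // Key built once with StringBuilder instead of repeated String concatenation.
    private static String buildKey(String sourceAddress, int refNumber) {
        return new StringBuilder(sourceAddress.length() + 12)
                .append(sourceAddress).append('|').append(refNumber).toString();
    }

    // Atomic "check and add": returns the messageId of the first segment seen for
    // this sourceAddress+refNumber, registering the supplied one only if no earlier
    // segment stored a value in either generation.
    public long firstSegmentMessageId(String sourceAddress, int refNumber, long messageId) {
        String key = buildKey(sourceAddress, refNumber);
        Long older = previous.get(key);                       // may still live in the old generation
        long candidate = older != null ? older : messageId;
        Long existing = current.putIfAbsent(key, candidate);  // atomic, no check-then-put race
        return existing != null ? existing : candidate;
    }

    // Explicit removal once all segments of a message have been seen (the common case).
    public void remove(String sourceAddress, int refNumber) {
        String key = buildKey(sourceAddress, refNumber);
        current.remove(key);
        previous.remove(key);
    }

    // Called by a timer every removeOlderThanXSeconds / 2: drops everything older than
    // one full interval in a single reference swap.
    public void rotate() {
        previous = current;
        current = new ConcurrentHashMap<>();
    }
}
```

The rotation is intentionally simplified: entries survive between removeOlderThanXSeconds / 2 and removeOlderThanXSeconds, and a production version would also need to consider lookups racing with rotate().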

@StanislawLeja
Collaborator Author

Hello @vetss

Adding "MessageParts", "MessagePartNumber", and the messageId was the only reason for this update.

Following your tips, I've modified things in the following way:

  1. I moved it to MessageUtil.java as a parseSplitMessageData method (an illustrative sketch follows after this list).
  2. OK
  3. I implemented one of the solutions you proposed (the one with collections A and B, filled and dropped periodically).
  4. I replaced the string operations with StringBuilder and added multithreading support.
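
For illustration only: the actual parseSplitMessageData in MessageUtil.java is not shown in this conversation, so the method signature and the SplitMessageData holder below are assumptions. The sketch shows how the split information (reference number, total parts, part number) can be read from a User Data Header via the standard concatenation information elements (IEI 0x00 for an 8-bit reference number, IEI 0x08 for a 16-bit one); parameter-based splitting (e.g. SMPP sar_* TLVs) would be handled separately.

```java
public final class SplitMessageData {
    public final int refNumber;
    public final int totalParts;
    public final int partNumber;

    public SplitMessageData(int refNumber, int totalParts, int partNumber) {
        this.refNumber = refNumber;
        this.totalParts = totalParts;
        this.partNumber = partNumber;
    }

    /** Returns null when the UDH carries no concatenation information element. */
    public static SplitMessageData parseSplitMessageData(byte[] udh) {
        if (udh == null || udh.length < 2) {
            return null;
        }
        int pos = 1; // udh[0] is the UDH length octet
        while (pos + 1 < udh.length) {
            int iei = udh[pos] & 0xFF;       // information element identifier
            int ieLen = udh[pos + 1] & 0xFF; // information element data length
            int dataStart = pos + 2;
            if (dataStart + ieLen > udh.length) {
                return null; // malformed header
            }
            if (iei == 0x00 && ieLen == 3) { // concatenation IE, 8-bit reference number
                return new SplitMessageData(udh[dataStart] & 0xFF,
                        udh[dataStart + 1] & 0xFF, udh[dataStart + 2] & 0xFF);
            }
            if (iei == 0x08 && ieLen == 4) { // concatenation IE, 16-bit reference number
                int ref = ((udh[dataStart] & 0xFF) << 8) | (udh[dataStart + 1] & 0xFF);
                return new SplitMessageData(ref,
                        udh[dataStart + 2] & 0xFF, udh[dataStart + 3] & 0xFF);
            }
            pos = dataStart + ieLen; // skip other information elements
        }
        return null;
    }
}
```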

@vetss vetss mentioned this pull request Oct 31, 2017
@vetss
Contributor

vetss commented Oct 31, 2017

Hello @StanislawLeja

Apologies for the delay, we are processing your PR now.
I created a new issue for it: #262.
Please check my comments there.
I am closing this PR now.

@vetss vetss closed this Oct 31, 2017
@vetss vetss self-requested a review October 31, 2017 10:18
@vetss vetss added this to the 7.4.0 milestone Oct 31, 2017