Change length in slim protocol from 6 to 9 digits #319
This issue has been raised before: #115. What are you sending?
Just a lot of stuff, such as JSON-serialized documents, and ensuring that certain elements are in there via regexes...
I hit this problem pretty frequently too. I encounter it when I'm loading my database with test data, but it doesn't take a particularly large amount of data to hit the limit. In my mind the worst part of the limit is that FitNesse doesn't handle the problem gracefully but instead throws an exception.

To load a test employee into our system, there are approximately 50 columns of data: name, address, hire date, company, etc. I found I can put approximately 50 employees into a decision table without an error. If I put in many more than that, FitNesse produces an exception. I haven't investigated whether the exception occurs before the message is actually sent to the SLIM server, or after, with the SLIM server failing to handle an invalid message correctly. Whatever the case, since FitNesse has a string size limit of 999999 characters, FitNesse should handle hitting that limit gracefully. It would also be nice if the limit were increased. Somewhere around 60-70 employees, the decision table for loading our test data blows up - a table of approximately 50-60 KB in actual text size.

We load approximately 1500 employees for performance testing, so we split the data up into decision tables of 50 employees each. It's a hassle, but still worthwhile. We certainly could load the data outside of FitNesse, but we prefer not to because we are in fact exercising our system by loading it this way - the employee data itself varies and exercises many different test scenarios - plus we're testing the load process itself for performance. The data is also the basis for other functional and performance tests downstream. And having all of the employee data in FitNesse makes it easy for developers, business analysts and QA alike to review, modify and add new test cases.

Of course, a protocol change wouldn't necessarily require the 999999 limit to be changed... it could be managed by breaking a large string into segments, sending them separately and reconstructing them on the receiving end.
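The segmentation idea above could look something like the following sketch. The class and method names are purely illustrative, assuming the same 999999-character ceiling mentioned above; this is not FitNesse code.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split an oversized payload into chunks that each
// fit under the 999999-character limit, then reassemble on the receiving
// end. Illustrative names only; not part of FitNesse or the Slim protocol.
public class Segmenter {
    static final int MAX_CHUNK = 999_999;

    // Cut the payload into MAX_CHUNK-sized pieces (last piece may be shorter).
    static List<String> split(String payload) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < payload.length(); i += MAX_CHUNK)
            chunks.add(payload.substring(i, Math.min(payload.length(), i + MAX_CHUNK)));
        return chunks;
    }

    // Reassemble the original payload by concatenating the chunks in order.
    static String join(List<String> chunks) {
        StringBuilder sb = new StringBuilder();
        for (String c : chunks) sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        String big = "x".repeat(2_500_000);
        List<String> chunks = Segmenter.split(big);
        System.out.println(chunks.size());                      // 3
        System.out.println(Segmenter.join(chunks).equals(big)); // true
    }
}
```

Each chunk could then be framed as an ordinary length-prefixed Slim string, with only the reassembly step added on each side of the connection.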
However it's dealt with, though, we'd sure appreciate the ability to handle larger table sizes. And whatever the case, size limits should be handled gracefully.
If we change the protocol, don't use a different fixed size; use a delimited field or some other variable-length construct, e.g. 1234567#
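A `#`-terminated length field like the one suggested could be parsed with a simple digit-accumulating loop. This is a sketch of the idea only; the class name and helper are hypothetical, not FitNesse code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a variable-length, '#'-terminated length field
// (e.g. "1234567#payload") instead of a fixed six-digit field.
// Illustrative only; not the current Slim protocol.
public class DelimitedLength {
    static int readLength(InputStream in) {
        try {
            int length = 0;
            int c;
            // Accumulate digits until the '#' terminator appears.
            while ((c = in.read()) != '#') {
                if (c < '0' || c > '9')  // also catches end-of-stream (-1)
                    throw new RuntimeException("bad length field");
                length = length * 10 + (c - '0');
            }
            return length;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream("1234567#payload".getBytes());
        System.out.println(readLength(in)); // prints 1234567
    }
}
```

With this construct there is no fixed upper bound on the payload size, and short lengths don't need zero padding.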
@ALuckyGuy what's the benefit of having a 1500x50 matrix in a wiki table? Nobody's gonna ever read that anyway, might as well just have a fixture read the data in from file. |
@benilovj I'm not necessarily arguing for a table that must support 1500 rows - just for a larger limit than we have right now. More importantly, FitNesse should gracefully handle the error when the limit is reached.

However, why do you think it's not beneficial to have the data in the wiki? Is having it in a separate file that the user can't readily see or edit easier to manage? How about when certain employees fail to load? These aren't 1500 duplicate records... they're 1500 carefully constructed employee records that have different data and meet different conditions, not only for the load test but for later tests as well. When certain employees fail during the load, it's convenient to have the results right there on the page and easy to access. If our fixture loaded the file some other way, we'd then have to come up with a means to manage the results too.

Don't get me wrong - FitNesse's wiki wasn't exactly designed to make maintaining such a large table convenient, and I don't fault it for that. Thankfully, when a lot of changes need to be made, it's fairly easy to edit the data in a spreadsheet, and that's how the group that came up with the data maintains it. The real hassle comes when putting it back into FitNesse. Since a table can only handle about 50 records, it's a maintenance headache to cut the data into 50-row chunks, duplicate the header for each chunk and then paste it all into FitNesse. If FitNesse didn't have the limit, it would be a piece of cake that anyone could maintain.
I am working on tests right now that verify menu structures and the icons associated with the menu entries; sending the icons as embedded HTML images should be possible with the recent changes to HTML escaping. The table would be used not only for testing but also for documentation, and having a displayable icon in the table instead of just the icon filename helps. But sending the embedded images will hit this nasty 1 MB limit sooner rather than later...
I ran into this issue recently. I have integrated FitNesse with Selenium in such a way that I can "code" the Selenium script as a FitNesse scenario table and then invoke it from a decision table. However, several instructions are able to take a screenshot and return it as an image embedded in the HTML, so I hit the limit even with small tables. (I could resort to using files, but it would be a bigger hassle.)

To my surprise, the code I have read so far doesn't enforce a maximum of 6 characters (actually, I'd say it enforces a minimum). The only place I found that assumes a maximum length of 6 characters is the getLengthToRead method in SlimCommandRunningClient.java. So I modified it to keep reading digits until the ':' character is found, and everything seems to work OK now. However, I'm not familiar enough with the protocol, so I might have committed heresy. Have I? Or should I submit a patch?
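The change described - reading digits until the ':' terminator instead of assuming exactly six of them - could look roughly like this. Note this is a sketch of the idea, not the actual code in `SlimCommandRunningClient.java`; the signature and class name here are illustrative.

```java
// Sketch of the described fix: instead of reading a fixed six-digit
// length, accumulate digits until the ':' terminator. Existing prefixes
// like "000123:" still parse, and longer lengths become legal.
// Illustrative only; not FitNesse's actual getLengthToRead.
public class LengthUntilColon {
    static int getLengthToRead(String message) {
        int length = 0;
        int i = 0;
        while (i < message.length() && message.charAt(i) != ':') {
            char c = message.charAt(i++);
            if (c < '0' || c > '9')
                throw new RuntimeException("bad length prefix: " + message);
            length = length * 10 + (c - '0');
        }
        return length;
    }

    public static void main(String[] args) {
        System.out.println(getLengthToRead("000123:hello"));   // 123
        System.out.println(getLengthToRead("123456789:data")); // 123456789
    }
}
```

The appeal of this variant is that it is backward compatible on the reading side: six-digit, zero-padded prefixes produced by existing servers parse exactly as before.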
@4ld4z4b4l +1: I would be interested in that fix/enhancement... |
Should have been fixed by #504. |
I would like to change the 6-digit length encoding to 9 digits, since I find the 1 MB limit very limiting. Maybe it would be enough to just change the first byte-count length for a whole table to use a longer length encoding than the individual Slim strings, but that would then be a bad inconsistency...
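On the encoding side, widening the prefix from 6 to 9 digits is essentially a one-character format change. The sketch below assumes the Slim convention of a zero-padded length followed by ':' and the payload; the helper names are illustrative, not FitNesse code.

```java
// Sketch of the Slim-style length prefix ("NNNNNN:payload"),
// widened from 6 digits (max 999999) to 9 digits (max 999999999).
// Helper names are illustrative; not FitNesse's actual encoder.
public class LengthPrefix {
    // Current 6-digit encoding: caps payloads at 999999 characters.
    static String encode6(String payload) {
        return String.format("%06d:%s", payload.length(), payload);
    }

    // Proposed 9-digit encoding: raises the cap to 999999999 characters.
    static String encode9(String payload) {
        return String.format("%09d:%s", payload.length(), payload);
    }

    public static void main(String[] args) {
        System.out.println(encode6("hello")); // 000005:hello
        System.out.println(encode9("hello")); // 000000005:hello
    }
}
```

The catch, as noted above, is that every prefix in the protocol would have to change together: widening only the outer table-level count while individual strings stay at 6 digits would leave the encoding inconsistent.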
I am creating several tables automatically and am forced to split them up artificially in order to not hit the 1MB limit for the returned test results.