
entry2str_internal_ext: array boundary wrote: bufsize=3459 wrote=3836 #1527

Closed
389-ds-bot opened this issue Sep 12, 2020 · 16 comments
Labels: closed: not a bug, Migration flag - Issue


Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/48196

  • Created at 2015-06-10 21:28:13 by caprizmo
  • Closed as Invalid
  • Assigned to nobody

Hi there,

Can anybody tell me how to fix this error? I get it while enabling replication from master1 to master2.


Comment from rmeggins (@richm) at 2015-06-10 22:10:01

Please provide the exact version of 389 you are using - rpm -q 389-ds-base - on both master1 and master2.


Comment from caprizmo at 2015-06-10 23:02:56

Master 1 : 389-ds-base-1.2.9.14-1.el6.x86_64
Master 2 : 389-ds-base-1.2.11.15-50.el6_6.x86_64
Replica 3 : 389-ds-base-1.2.9.14-1.el6.x86_64
Replica 4 : 389-ds-base-1.2.11.15-50.el6_6.x86_64


Comment from rmeggins (@richm) at 2015-06-11 00:03:57

Are you getting the error on the 1.2.9.x systems or on the 1.2.11.x systems? If the former, 1.2.9 is extremely old and we cannot support it; you'll have to upgrade.


Comment from caprizmo at 2015-06-11 00:16:40

Master 1 (389-ds-base-1.2.9.14-1.el6.x86_64) works fine! While trying to enable replication (sync) with Master 2 (389-ds-base-1.2.11.15-50.el6_6.x86_64), it shows that error.

How can we get this fixed?


Comment from rmeggins (@richm) at 2015-06-11 00:20:59

Your statement is slightly ambiguous - just to confirm - you are saying that when you try to enable replication from the 1.2.9 server to the 1.2.11 server, you see the error message in the 1.2.11 errors log?


Comment from caprizmo at 2015-06-11 00:23:53

Yes, you're right. And I see the same thing on Replica 4 (389-ds-base-1.2.11.15-50.el6_6.x86_64) as well.


Comment from caprizmo at 2015-06-11 00:35:38

The data is already there in master2; that's fine.
The error only comes when I try to enable (keep in sync) the replication from master1 to master2.

Do you intend to find out the cause?


Comment from rmeggins (@richm) at 2015-06-11 00:57:30

Replying to [comment:8 caprizmo]:

> The data is already there in master2; that's fine.
> The error only comes when I try to enable (keep in sync) the replication from master1 to master2.

Hmm, that's bad.

> Do you intend to find out the cause?

This issue will be prioritized, then investigated in the order indicated by its priority.

If it is determined that this issue is a result of a bug in 1.2.9, then we are not going to issue a fix for 1.2.9.

In other words, the best way for you to resolve your issue may be to upgrade from 1.2.9 to 1.2.11, if that is going to be faster for you than waiting for a fix to 1.2.11.

AFAIK, no one else has ever reported this issue, or any issue that caused "entry2str_internal_ext: array boundary wrote: bufsize=3459 wrote=3836".
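
For context on what the message itself describes: the wording suggests an estimate-then-serialize pattern, where the server first computes an expected buffer size for the string form of an entry, writes the entry into a buffer of that size, and then warns when it has written more bytes (here 3836) than it budgeted for (3459). The sketch below is only a hypothetical illustration of that general pattern in C, not the actual 389-ds-base entry2str_internal_ext() code; the function name and structure here are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of an estimate/serialize/verify pattern.  NOT the
 * real 389-ds-base code: estimate how large the string form of an entry
 * should be, allocate a buffer of that size, serialize into it, then warn
 * if more bytes were written than were budgeted for. */
static char *
entry_to_str_sketch(const char **attr_lines, size_t nlines)
{
    /* 1. Estimate the buffer size: each line plus a trailing newline. */
    size_t bufsize = 0;
    for (size_t i = 0; i < nlines; i++) {
        bufsize += strlen(attr_lines[i]) + 1;
    }

    char *buf = malloc(bufsize + 1);
    if (buf == NULL) {
        return NULL;
    }

    /* 2. Serialize the entry into the buffer. */
    char *p = buf;
    for (size_t i = 0; i < nlines; i++) {
        size_t len = strlen(attr_lines[i]);
        memcpy(p, attr_lines[i], len);
        p += len;
        *p++ = '\n';
    }
    *p = '\0';

    /* 3. Sanity check: if the serializer wrote more than the estimate,
     *    the two views of the entry disagree, which is roughly what a
     *    "bufsize=3459 wrote=3836" style message would indicate. */
    size_t wrote = (size_t)(p - buf);
    if (wrote > bufsize) {
        fprintf(stderr,
                "entry_to_str_sketch: array boundary wrote: bufsize=%zu wrote=%zu\n",
                bufsize, wrote);
    }
    return buf;
}

int main(void)
{
    const char *lines[] = { "dn: cn=test,dc=example,dc=com", "cn: test" };
    char *s = entry_to_str_sketch(lines, 2);
    if (s != NULL) {
        printf("%s", s);
        free(s);
    }
    return 0;
}
```

This only illustrates the kind of check that could produce such a message; the actual cause in this thread was never pinned down.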


Comment from caprizmo at 2015-06-11 01:07:52

OK, thank you for working on it; I will expect your findings soon.
Also, the error is appearing on master2, which is already at 1.2.11, so do you mean I should upgrade master1 (which is working fine!) to master2's version (where the error is!)?


Comment from rmeggins (@richm) at 2015-06-11 01:34:15

Replying to [comment:10 caprizmo]:

> OK, thank you for working on it; I will expect your findings soon.
> Also, the error is appearing on master2, which is already at 1.2.11, so do you mean I should upgrade master1 (which is working fine!) to master2's version (where the error is!)?

Yes, if for no other reason than that 1.2.9 is very, very old, is missing a lot of bug fixes for severe issues, and is almost impossible to support.


Comment from caprizmo at 2015-06-11 12:34:25

Then how come I see the error in 1.2.11, which is newer than 1.2.9?
Can you also advise why I see the error in 1.2.11, when, 1.2.9 being the older version, I would expect to see it there instead? And how soon can I expect a fix, if one is to be released at all?

Further, this may be a symptom of running a very old version, but what about the cause that leads the server to crash every time I enable replication?

Thank you.


Comment from rmeggins (@richm) at 2015-06-11 19:57:48

Replying to [comment:12 caprizmo]:

> Then how come I see the error in 1.2.11, which is newer than 1.2.9?

I think (but have done no investigation) that 1.2.9 is sending incorrect replication metadata to 1.2.11, and I think that 1.2.9 will ignore or otherwise work correctly when it receives incorrect replication metadata. This is based on the fact that we have never seen this problem with 1.2.11 replication as a supplier. So, I believe that if you are doing 1.2.11 -> 1.2.11 replication, you will not see this problem. This is all pure speculation.

> Can you also advise why I see the error in 1.2.11, when, 1.2.9 being the older version, I would expect to see it there instead?

See above.

> And how soon can I expect a fix, if one is to be released at all?

I have no idea.

> Further, this may be a symptom of running a very old version, but what about the cause that leads the server to crash every time I enable replication?

This is the first time you have mentioned a crash. We'll need to get a stack trace - http://www.port389.org/docs/389ds/FAQ/faq.html#debugging-crashes

> Thank you.


Comment from nhosoi (@nhosoi) at 2015-07-08 03:02:47

Hello caprizmo,

Has there been any progress in your investigation/debugging?

Is there any chance of upgrading 1.2.9 to 1.2.11 in your MMR topology?

Do you have any stack traces or valgrind output for the crash?

Thanks.


Comment from nhosoi (@nhosoi) at 2015-07-09 23:11:20

Per the 389-ds-base triage meeting, this issue is most likely fixed in a newer version of 1.2.11.

Since 1.2.9 is out of its support phase, we recommend upgrading the old servers to newer ones.

Closing this ticket for now.

Please feel free to reopen it if you run into the same problem with the newer version of 389-ds-base.


Comment from caprizmo at 2017-02-11 22:54:47

Metadata Update from @caprizmo:

  • Issue set to the milestone: N/A
