
Fix the JIT buffer relocation failure at the corner case #11266

Merged (1 commit) on May 23, 2023

Conversation

LoongT4o (Contributor)

Avoids missing possible candidates due to the large address range of the free segment. E.g.,

48000000-49400000 r-xs 08000000 00:0f 39322841               segment1
7ffff2ec8000-7ffff2f49000 rw-p 00000000 00:00 0              segment2
7ffff6fae000-7ffff735c000 r-xp 00200000 08:02 11538515       /usr/local/sbin/php-fpm

The original code misses the candidate range [7ffff2ec**** - 7ffff2ec8000].

Fixes issue #11265.

@LoongT4o force-pushed the jit-buffer-relocation-opt branch 2 times, most recently from eaa5f9a to fb58591 on May 18, 2023
dstogov (Member) left a comment:


Thanks for looking into this. A few minimal changes are requested.

Comment on lines 72 to 74
if (start - pre_seg_end >= requested_size + huge_page_size) {
last_candidate = start - requested_size - huge_page_size;
last_candidate = ZEND_MM_ALIGNED_SIZE_EX(last_candidate, huge_page_size);
dstogov (Member):

This code may make a single huge page "hole" between the .text segment and the SHM.
Can you check the aligned addresses to avoid this?

LoongT4o (Contributor, author):

Thank you for your comment.
The extra huge-page-sized space reserved here gives the JIT buffer sufficient room in case the address is not aligned. Since the virtual memory space is large enough, this single huge page "hole" was previously ignored. To avoid it, an additional check ("if") may be required.

dstogov (Member):

> In order to avoid this, maybe additional judgment ("if") is required to resolve it.

Yeah. An additional `if`, or a check of the aligned addresses in the first place.
I think it's better to merge this at once.

LoongT4o (Contributor, author), May 22, 2023:

Done.

#if defined(__linux__)
FILE *f;
uintptr_t last_candidate_end = 0;
uintptr_t pre_seg_end = 0;
dstogov (Member):

Please rename this to "prev_seg_end".

LoongT4o (Contributor, author):

Thanks, done.

@LoongT4o force-pushed the jit-buffer-relocation-opt branch 2 times, most recently from 0b541c0 to e58ed8b on May 23, 2023
dstogov (Member) commented on May 23, 2023:

I've found an easier fix for the same problem. Do you see any problems with the following patch?

diff --git a/ext/opcache/shared_alloc_mmap.c b/ext/opcache/shared_alloc_mmap.c
index 1414ef9614..52e56ce53a 100644
--- a/ext/opcache/shared_alloc_mmap.c
+++ b/ext/opcache/shared_alloc_mmap.c
@@ -68,7 +68,10 @@ static void *find_prefered_mmap_base(size_t requested_size)
 		if ((uintptr_t)execute_ex >= start) {
 			/* the current segment lays before PHP .text segment or PHP .text segment itself */
 			if (last_free_addr + requested_size <= start) {
-				last_candidate = last_free_addr;
+				last_candidate = ZEND_MM_ALIGNED_SIZE_EX(start - requested_size, huge_page_size);
+				if (last_candidate + requested_size > start) {
+					last_candidate -= huge_page_size;
+				}
 			}
 			if ((uintptr_t)execute_ex < end) {
 				/* the current segment is PHP .text segment itself */
@@ -117,7 +120,10 @@ static void *find_prefered_mmap_base(size_t requested_size)
 					if ((uintptr_t)execute_ex >= e_start) {
 						/* the current segment lays before PHP .text segment or PHP .text segment itself */
 						if (last_free_addr + requested_size <= e_start) {
-							last_candidate = last_free_addr;
+							last_candidate = ZEND_MM_ALIGNED_SIZE_EX(e_start - requested_size, huge_page_size);
+							if (last_candidate + requested_size > e_start) {
+								last_candidate -= huge_page_size;
+							}
 						}
 						if ((uintptr_t)execute_ex < e_end) {
 							/* the current segment is PHP .text segment itself */

LoongT4o (Contributor, author):
> I've found an easier fix for the same problem. Do you see any problems with the following patch? [patch quoted above]
What a great implementation! I tested it and there were no problems with 64b packing.

Avoid missing possible candidates due to the large address range of the free segment.
E.g.,

48000000-49400000 r-xs 08000000 00:0f 39322841               segment1
7ffff2ec8000-7ffff2f49000 rw-p 00000000 00:00 0              segment2
7ffff6fae000-7ffff735c000 r-xp 00200000 08:02 11538515       /usr/local/sbin/php-fpm

The original code misses the candidate range [7ffff2ec** - 7ffff2ec8000].

Fixes issue php#11265.

Signed-off-by: Long, Tao <tao.long@intel.com>
Signed-off-by: Dmitry Stogov <dmitrystogov@gmail.com>
@dstogov dstogov merged commit a2af8ac into php:master May 23, 2023
4 of 5 checks passed
arnaud-lb pushed a commit to arnaud-lb/php-src that referenced this pull request Jul 3, 2024