Respect the max open files set by the Blaze client in the server.

As it turns out, the JVM has a feature to raise its own soft limit on the
number of open file descriptors.  This is controlled via the MaxFDLimit
option, which is enabled by default.
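
One way to observe the option's default is to dump HotSpot's final flag
values; this is a minimal sketch, assuming a HotSpot-based java binary that
supports -XX:+PrintFlagsFinal:

  java -XX:+PrintFlagsFinal -version | grep MaxFDLimit
  # On such a JVM this prints a line like:  bool MaxFDLimit = true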

The problem is that, on macOS, MaxFDLimit caps the raised open files count
at OPEN_MAX (per the setrlimit manpage), which is 10240.  This limit is
much smaller than what the system truly allows per process.
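
The gap is easy to see from a shell on macOS (a sketch;
kern.maxfilesperproc is a macOS-specific sysctl):

  getconf OPEN_MAX                          # the ceiling the JVM honors; typically 10240
  /usr/sbin/sysctl -n kern.maxfilesperproc  # the real per-process kernel limit, usually far larger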

In an earlier commit, I added logic to the Bazel client to raise its own
resource limits at startup time to the real limits of the system.  I
obviously expected those new limits to propagate to the server... but
that's not the case because of the above.
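
The intent of that client-side logic can be sketched in shell, using the
same macOS special case the test below uses (this mirrors the approach,
not the actual blaze.cc code):

  # Raise the soft nfiles limit to the effective hard limit...
  hard="$(ulimit -H -n)"
  if [[ "$(uname -s)" == Darwin && "${hard}" == unlimited ]]; then
    # macOS rejects an unlimited RLIMIT_NOFILE, so cap at the kernel's limit.
    hard="$(/usr/sbin/sysctl -n kern.maxfilesperproc)"
  fi
  ulimit -S -n "${hard}"
  # ... and any child process, such as the server, inherits it:
  bash -c 'ulimit -S -n'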

Fix this by disabling the MaxFDLimit option in the server's JVM and by
adding a test to ensure that the high limit computed by the client
propagates to the actions run by Bazel.

RELNOTES: None.
PiperOrigin-RevId: 230716686
jmmv authored and Copybara-Service committed Jan 24, 2019
1 parent f157053 commit 30dd8715847e4c797cd28da13e14eab7721b518d
Showing with 41 additions and 0 deletions.
  1. +11 −0 src/main/cpp/startup_options.cc
  2. +30 −0 src/test/shell/integration/execution_phase_tests.sh
@@ -538,6 +538,17 @@ blaze_exit_code::ExitCode StartupOptions::AddJVMArguments(
    const string &server_javabase, std::vector<string> *result,
    const vector<string> &user_options, string *error) const {
  AddJVMLoggingArguments(result);

  // Disable the JVM's own unlimiting of file descriptors.  We do this
  // ourselves in blaze.cc so we want our setting to propagate to the JVM.
  //
  // The reason to do this is that the JVM's unlimiting is suboptimal on
  // macOS.  Under that platform, the JVM limits the open file descriptors
  // to the OPEN_MAX constant... which is much lower than the per-process
  // kernel-allowed limit of kern.maxfilesperproc (which is what we set
  // ourselves to).
  result->push_back("-XX:-MaxFDLimit");

  return AddJVMMemoryArguments(server_javabase, result, user_options, error);
}
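
As a manual spot-check (hypothetical; assumes a running Bazel server on a
Unix-like host), the flag should now appear on the server's JVM command
line:

  ps ax -o command | grep -- '-XX:-MaxFDLimit'
  # (the grep process itself may also match; ignore that line)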

@@ -277,5 +277,35 @@ EOF
  expect_log "WARNING: .*: foo warning"
}

function test_max_open_file_descriptors() {
  echo "nfiles: hard $(ulimit -H -n), soft $(ulimit -S -n)"

  local exp_nfiles="$(ulimit -H -n)"
  if [[ "$(uname -s)" == Darwin && "${exp_nfiles}" == unlimited ]]; then
    exp_nfiles="$(/usr/sbin/sysctl -n kern.maxfilesperproc)"
  elif "${is_windows}"; then
    # We do not implement the resource unlimiting feature on Windows at
    # the moment... so just expect the soft limit to remain unchanged.
    exp_nfiles="$(ulimit -S -n)"
  fi
  echo "Will expect soft nfiles to be ${exp_nfiles}"

  mkdir -p "pkg" || fail "Could not create directory"
  cat > pkg/BUILD <<'EOF' || fail "Could not create test file"
genrule(
    name = "nfiles",
    outs = ["nfiles-soft"],
    cmd = "mkdir -p pkg && ulimit -S -n >$(location nfiles-soft)",
)
EOF
  bazel build //pkg:nfiles >& "${TEST_log}" || fail "Expected success"
  local soft="$(cat bazel-genfiles/pkg/nfiles-soft)"

  # Make sure that the soft limit was raised to the expected hard value.
  # Our code doesn't touch the hard limit (even in the "unlimited" case
  # handled above) and that's OK: if we were able to set the soft limit
  # to a high value, the hard limit must already be the same or higher.
  assert_equals "${exp_nfiles}" "${soft}"
}

run_suite "Integration tests of ${PRODUCT_NAME} using the execution phase."
