
Fixes to make Travis pass. #2

Merged
merged 7 commits into master from subtree on Oct 26, 2016
Conversation

robertnishihara
Collaborator

No description provided.

@robertnishihara robertnishihara changed the title Switch hiredis to a git subtree. Fixes to make Travis pass. Oct 26, 2016
@pcmoritz pcmoritz merged this pull request into master Oct 26, 2016
@pcmoritz pcmoritz deleted the subtree branch October 26, 2016 05:30
richardliaw referenced this pull request in richardliaw/ray Oct 3, 2017
* checkpointing

* small changes

* signatures

* more sigs

* removed yaml renaming

* final
osalpekar pushed a commit to osalpekar/ray that referenced this pull request Mar 16, 2018
11rohans pushed a commit to 11rohans/ray that referenced this pull request Apr 2, 2018
tulip3659 added a commit to tulip3659/ray that referenced this pull request Jan 3, 2019
Signed-off-by: tulip3659 <tulip3659@yahoo.com>
romilbhardwaj added a commit to romilbhardwaj/ray that referenced this pull request Apr 8, 2019
This is the 1st commit message:

Rounding error fixes, removing cpu addition in cython and test fixes.

This is the commit message #2:

Plumbing and support for dynamic custom resources.

This is the commit message #3:

Update cluster utils to use EntryType.

This is the commit message #4:

Update node_manager to use new zero resource == deletion semantics.
eisber pushed a commit to eisber/ray that referenced this pull request Feb 27, 2020
rkooo567 referenced this pull request in rkooo567/ray Mar 24, 2020
[Hosted Dashboard] End to end flow
bentzinir pushed a commit to bentzinir/ray that referenced this pull request Jul 29, 2020
zcin added a commit that referenced this pull request Aug 5, 2022
Stefan-1313 pushed a commit to Stefan-1313/ray_mod that referenced this pull request Aug 18, 2022
We encountered a SIGSEGV when running the Python test `python/ray/tests/test_failure_2.py::test_list_named_actors_timeout`. The stack is:

```
#0  0x00007fffed30f393 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) ()
   from /lib64/libstdc++.so.6
#1  0x00007fffee707649 in ray::RayLog::GetLoggerName() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#2  0x00007fffee70aa90 in ray::SpdLogMessage::Flush() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#3  0x00007fffee70af28 in ray::RayLog::~RayLog() () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#4  0x00007fffee2b570d in ray::asio::testing::(anonymous namespace)::DelayManager::Init() [clone .constprop.0] ()
   from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#5  0x00007fffedd0d95a in _GLOBAL__sub_I_asio_chaos.cc () from /home/admin/dev/Arc/merge/ray/python/ray/_raylet.so
#6  0x00007ffff7fe282a in call_init.part () from /lib64/ld-linux-x86-64.so.2
#7  0x00007ffff7fe2931 in _dl_init () from /lib64/ld-linux-x86-64.so.2
#8  0x00007ffff7fe674c in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
#9  0x00007ffff7b82e79 in _dl_catch_exception () from /lib64/libc.so.6
#10 0x00007ffff7fe5ffe in _dl_open () from /lib64/ld-linux-x86-64.so.2
#11 0x00007ffff7d5f39c in dlopen_doit () from /lib64/libdl.so.2
#12 0x00007ffff7b82e79 in _dl_catch_exception () from /lib64/libc.so.6
#13 0x00007ffff7b82f13 in _dl_catch_error () from /lib64/libc.so.6
#14 0x00007ffff7d5fb09 in _dlerror_run () from /lib64/libdl.so.2
#15 0x00007ffff7d5f42a in dlopen@@GLIBC_2.2.5 () from /lib64/libdl.so.2
#16 0x00007fffef04d330 in py_dl_open (self=<optimized out>, args=<optimized out>)
    at /tmp/python-build.20220507135524.257789/Python-3.7.11/Modules/_ctypes/callproc.c:1369
```

The root cause is that when loading `_raylet.so`, `static DelayManager _delay_manager` is initialized and `RAY_LOG(ERROR) << "RAY_testing_asio_delay_us is set to " << delay_env;` is executed. However, the static variables declared in `logging.cc` are not initialized yet (in this case, `std::string RayLog::logger_name_ = "ray_log_sink"`).

It's better not to rely on the initialization order of static variables in different compilation units, because C++ leaves that order unspecified. I propose changing every `RAY_LOG` in `DelayManager::Init()` to `std::cerr`.

The crash happens in Ant's internal codebase. Not sure why this test case passes in the community version though.

BTW, I've tried different approaches:

1. Using a static local variable in `get_delay_us` and removing the global variable. This doesn't work because `init()` needs to access the variable as well.
2. Defining the global variable as a `std::unique_ptr<DelayManager>` and initializing it in `get_delay_us`. This works, but it requires a lock to be thread-safe.

Signed-off-by: Stefan van der Kleij <s.vanderkleij@viroteq.com>
ericl pushed a commit that referenced this pull request Feb 23, 2023
scottsun94 pushed a commit to scottsun94/ray that referenced this pull request Mar 28, 2023
cassidylaidlaw pushed a commit to cassidylaidlaw/ray that referenced this pull request Mar 28, 2023
elliottower pushed a commit to elliottower/ray that referenced this pull request Apr 22, 2023
Signed-off-by: elliottower <elliot@elliottower.com>
pcmoritz pushed a commit that referenced this pull request Apr 22, 2023
Why are these changes needed?

Right now the theory is as follows.

The pubsub io service is created and run inside the GcsServer. That means if the pubsub io service is accessed after the GcsServer is GC'ed, it will segfault.
Right now, upon teardown, when we call rpc::DrainAndResetExecutor, this recreates the Executor thread pool.
Upon teardown, if DrainAndResetExecutor -> GcsServer's internal pubsub posts a new SendReply to the newly created thread pool -> GcsServer.reset -> pubsub io service GC'ed -> SendReply invoked from the newly created thread pool, it will segfault.
NOTE: the segfault is from the pubsub service, as can be seen in the failure:

```
#2 0x7f92034d9129 in ray::rpc::ServerCallImpl<ray::rpc::InternalPubSubGcsServiceHandler, ray::rpc::GcsSubscriberPollRequest, ray::rpc::GcsSubscriberPollReply>::HandleRequestImpl()::'lambda'(ray::Status, std::__1::function<void ()>, std::__1::function<void ()>)::operator()(ray::Status, std::__1::function<void ()>, std::__1::function<void ()>) const::'lambda'()::operator()() const /proc/self/cwd/bazel-out/k8-opt/bin/_virtual_includes/grpc_common_lib/ray/rpc/server_call.h:212:48
```
As a fix, I only drain the thread pool, and then reset it after all operations are fully cleaned up (the reset is only needed from tests). I think there's no need to reset for regular process termination like raylet, GCS, and core workers.

Related issue number

Closes #34344

Signed-off-by: SangBin Cho <rkooo567@gmail.com>
ProjectsByJackHe pushed a commit to ProjectsByJackHe/ray that referenced this pull request May 4, 2023
Signed-off-by: Jack He <jackhe2345@gmail.com>
@bveeramani bveeramani mentioned this pull request Jul 25, 2023
khluu added a commit to khluu/ray that referenced this pull request Jan 24, 2024
khluu added a commit to khluu/ray that referenced this pull request Jan 24, 2024
Signed-off-by: khluu <khluu000@gmail.com>
khluu added a commit to khluu/ray that referenced this pull request Jan 24, 2024
Signed-off-by: khluu <khluu000@gmail.com>
khluu added a commit to khluu/ray that referenced this pull request Jan 24, 2024
Signed-off-by: khluu <khluu000@gmail.com>