
feat(cli): allow printing selected DDS messages to the console #1004

Closed
mikavilpas wants to merge 4 commits

Conversation

@mikavilpas (Contributor)

The motivations for this change are:

  • easy debugging of DDS messages for yazi and plugin developers
  • allow external applications to monitor yazi events.
ya-sub-static-demo.mov

In Neovim specifically, there is a limitation that the stdout and stderr streams cannot be monitored while displaying an embedded terminal application. A second yazi instance could theoretically be started, but the UI currently does not work when there is no screen to draw on.

@sxyazi (Owner) commented May 4, 2024

Ideally, I don't want to add a new ya sub / ya sub-static, because they overlap with the functionality of yazi --remote-events. That would mean maintaining an additional set of code with identical functionality, which I'd prefer to avoid if possible.

Can you describe the issues you encountered when reading the output of yazi --remote-events in Neovim, e.g. what did the code look like? I'll try it later to see if I can make it work.

@mikavilpas (Contributor, Author)

The error I get is the following:

   Info  18:42:06 notify.info stderr: "\27[?1004l\27[?2004l\27[?1049l\27[0 q\27[?25hError: Device not configured (os error 6)\n"
   Info  18:42:06 notify.info stdout: nil

I realize it's not much to work with. Here's a code snippet that causes that error for me:

    local events_job = vim.system({ 'yazi', "--remote-events='hover'" }, {
      text = true,
      stderr = function(_, data)
        vim.notify('stderr: ' .. vim.inspect(data))
      end,
      stdout = function(_, data)
        vim.notify('stdout: ' .. vim.inspect(data))
      end,
    })

The error does not occur if I start lazygit instead (just as a test).


/// Allows for initialization without forcing the reading of command line
/// arguments.
pub fn init_with(local_events: HashSet<String>, remote_events: HashSet<String>) -> Boot {
Owner:

This initialization process is unnecessary: reading local_events and remote_events is only needed when sending messages, which only happens in the yazi program; the ya CLI doesn't need them.

Contributor (Author):

Leaving out the initialization of BOOT seems to cause a panic at runtime:

Details

$ RUST_BACKTRACE=1 cargo run --bin ya -- sub-static hi,hey,hover,cd
    Finished dev [unoptimized + debuginfo] target(s) in 0.35s
     Running `target/debug/ya sub-static hi,hey,hover,cd`
thread 'tokio-runtime-worker' panicked at /Users/mikavilpas/git/yazi/yazi-shared/src/ro_cell.rs:50:9:
assertion failed: self.is_initialized()
stack backtrace:
   0: _rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::panicking::panic
   3: <yazi_shared::ro_cell::RoCell<T> as core::ops::deref::Deref>::deref
             at ./yazi-shared/src/ro_cell.rs:50:3
   4: yazi_dds::state::State::load::{{closure}}
             at ./yazi-dds/src/state.rs:81:43
   5: yazi_dds::state::State::load_or_create::{{closure}}
             at ./yazi-dds/src/state.rs:48:18
   6: yazi_dds::client::Client::connect::{{closure}}
             at ./yazi-dds/src/client.rs:161:35
   7: yazi_dds::client::Client::echo_events_to_stdout::{{closure}}
             at ./yazi-dds/src/client.rs:73:61
   8: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
   9: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
  10: tokio::runtime::task::core::Core<T,S>::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:13
  11: tokio::runtime::task::harness::poll_future::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
  12: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/core/src/panic/unwind_safe.rs:272:9
  13: std::panicking::try::do_call
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panicking.rs:552:40
  14: ___rust_try
  15: std::panicking::try
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panicking.rs:516:19
  16: std::panic::catch_unwind
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panic.rs:142:14
  17: tokio::runtime::task::harness::poll_future
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
  18: tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
  19: tokio::runtime::task::harness::Harness<T,S>::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
  20: tokio::runtime::task::raw::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:271:5
  21: tokio::runtime::task::raw::RawTask::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
  22: tokio::runtime::task::LocalNotified<S>::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:427:9
  23: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:576:13
  24: tokio::runtime::coop::with_budget
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:107:5
  25: tokio::runtime::coop::budget
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/coop.rs:73:5
  26: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:575:9
  27: tokio::runtime::scheduler::multi_thread::worker::Context::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:526:24
  28: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:491:21
  29: tokio::runtime::context::scoped::Scoped<T>::set
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/scoped.rs:40:9
  30: tokio::runtime::context::set_scheduler::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:26
  31: std::thread::local::LocalKey<T>::try_with
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/thread/local.rs:270:16
  32: std::thread::local::LocalKey<T>::with
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/thread/local.rs:246:9
  33: tokio::runtime::context::set_scheduler
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context.rs:176:9
  34: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:486:9
  35: tokio::runtime::context::runtime::enter_runtime
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/context/runtime.rs:65:16
  36: tokio::runtime::scheduler::multi_thread::worker::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:478:5
  37: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/scheduler/multi_thread/worker.rs:447:45
  38: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/task.rs:42:21
  39: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:328:17
  40: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/loom/std/unsafe_cell.rs:16:9
  41: tokio::runtime::task::core::Core<T,S>::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/core.rs:317:13
  42: tokio::runtime::task::harness::poll_future::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:485:19
  43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/core/src/panic/unwind_safe.rs:272:9
  44: std::panicking::try::do_call
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panicking.rs:552:40
  45: ___rust_try
  46: std::panicking::try
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panicking.rs:516:19
  47: std::panic::catch_unwind
             at /private/tmp/rust-20240215-5649-scngkd/rustc-1.76.0-src/library/std/src/panic.rs:142:14
  48: tokio::runtime::task::harness::poll_future
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:473:18
  49: tokio::runtime::task::harness::Harness<T,S>::poll_inner
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:208:27
  50: tokio::runtime::task::harness::Harness<T,S>::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/harness.rs:153:15
  51: tokio::runtime::task::raw::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:271:5
  52: tokio::runtime::task::raw::RawTask::poll
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/raw.rs:201:18
  53: tokio::runtime::task::UnownedTask<S>::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/task/mod.rs:464:9
  54: tokio::runtime::blocking::pool::Task::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:159:9
  55: tokio::runtime::blocking::pool::Inner::run
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:513:17
  56: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
             at /Users/mikavilpas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.37.0/src/runtime/blocking/pool.rs:471:13
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Conceptually, I think the DDS doesn't necessarily have to depend on yazi_boot::Boot at all, since in this case we're using yazi-cli, and in yazi_boot::Boot there are many things that are specific to yazi-fm. Ideally I think Boot could be split, and then this forced initialization could be dropped entirely.

What do you think?

Owner:

This is because the server-creation logic is entered: a Yazi process can be both a client and a server, and when no other running server is found, Client::connect() creates a new one.

The CLI should not go through this creation process; it should only act as a client connecting to an existing server process. So I suggest referring to Client::shot() and implementing a separate connection routine for the CLI instead of calling Client::connect().
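To illustrate the "pure client" idea in isolation (a hypothetical sketch, not yazi's actual Client::shot() implementation — the function name, socket path, and error text below are all illustrative): a CLI-side connection would simply connect to the existing socket and fail immediately when no server is listening, rather than falling back to creating one.

```rust
use std::io;
use std::os::unix::net::UnixStream;
use std::path::Path;

/// Connect to an already-running server's Unix socket as a pure client.
/// Unlike a combined client/server path, this never creates a socket of
/// its own: if nothing is listening, it fails immediately.
/// Hypothetical sketch; names and messages are illustrative.
fn connect_client_only(socket: &Path) -> io::Result<UnixStream> {
    UnixStream::connect(socket).map_err(|e| {
        io::Error::new(
            e.kind(),
            format!("no running server at {}: {e}", socket.display()),
        )
    })
}

fn main() {
    match connect_client_only(Path::new("/tmp/nonexistent.sock")) {
        Ok(_) => println!("connected"),
        Err(e) => println!("failed: {e}"),
    }
}
```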

Contributor (Author):

No server is now started. Boot is no longer used.

Owner:

Please remove init_with() since it's no longer used.

@mikavilpas (Contributor, Author)

Just rebased onto main and resolved all conflicts. No functional changes were made.

/// Subscribe to messages from remote instance(s).
Sub(CommandSub),
/// Subscribe to messages from all remote instance(s).
SubStatic(CommandSubStatic),
Owner:

Why do we need to categorize them into "sub" and "sub static"?

Contributor (Author):

I'm not sure it's required 🤔. The idea was to be able to filter only the messages from a specific client, mirroring the pub and pub-static options.

Owner:

This does not apply to static messages: a static message may come from the client you want to filter on, but it carries no client ID (it has a severity instead), so all static messages from that specific client would be lost.

If you want to receive and filter all messages from a specific client (not just non-static messages), we would need to change the protocol itself; that case was previously covered by yazi --local-events and therefore not considered.

Contributor (Author):

Support for listening to a specific peer has now been dropped.

Owner:

It should be renamed to Sub if you want to listen for all messages rather than just static messages.

(Review thread on yazi-dds/src/client.rs — outdated, resolved)
- don't start a new server. Instead, crash if the server is not running.
- remove support for listening for a specific peer. The protocol doesn't
  support it and it cannot be made to work.
- if the existing yazi closes, this cannot be detected, and the
  connection is never reestablished. Maybe this can be made to work in a
  future commit.
@mikavilpas (Contributor, Author)

The suggestions have now all been addressed. However, some odd issues now remain that I would like to see fixed:

  • if no server is running, the command fails (I actually don't mind this myself)

  • when starting, some old events are reported every time on startup:

    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-dds/src/pump.rs"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-dds/src/pubsub.rs"}
    hover,0,200,{"tab":0,"url":"/var/folders/23/y7gf7hz129l56_83w_g49jl80000gn/T"}
    hover,0,200,{"tab":0,"url":"/var/folders/23/y7gf7hz129l56_83w_g49jl80000gn/T/yazi"}
    cd,0,100,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-cli/src"}
    hover,0,200,{"tab":0,"url":"/var/folders/23/y7gf7hz129l56_83w_g49jl80000gn/T/.ts-nodecDOP5F"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi.nvim/.nvmrc"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-dds/src/payload.rs"}
    hover,0,200,{"tab":2,"url":"/var/folders/23/y7gf7hz129l56_83w_g49jl80000gn/T/651E3D9C-9D65-4176-AC6E-BEF001DC35BF"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-cli/src/main.rs"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-dds/src/payload.rs"}
    hover,0,200,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-dds/src/pump.rs"}
    hey,0,1715359754665986,{"peers":{"1715359756825029":{"abilities":["hover","cd","hey","hi"]},"1715359754665986":{"abilities":["dds-cd"]}}}

    These seem to be from very old sessions and I'm not sure why they are still being reported.

  • when yazi closes and reopens, no events are ever reported again. I think this makes sense, since the handshake has not been done with the new yazi instance. However, I would like to be able to reestablish the connection. Do you have advice on how this could be done?
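As context for readers, the subscription output shown above is one message per line in the shape kind,receiver,sender,body (the field meanings are inferred from the sample output, not from yazi's documentation). A minimal sketch of splitting such a line in Rust:

```rust
/// One DDS message line as printed to stdout, e.g.
/// `hover,0,200,{"tab":0,"url":"/tmp"}`.
/// Field names are inferred from the sample output and may not match
/// yazi's internal terminology.
#[derive(Debug)]
struct DdsLine<'a> {
    kind: &'a str,     // event kind, e.g. "hover" or "cd"
    receiver: &'a str, // second field in the sample output
    sender: &'a str,   // third field in the sample output
    body: &'a str,     // JSON payload, left unparsed here
}

fn parse_dds_line(line: &str) -> Option<DdsLine<'_>> {
    // Split into at most 4 parts so commas inside the JSON body survive.
    let mut parts = line.splitn(4, ',');
    Some(DdsLine {
        kind: parts.next()?,
        receiver: parts.next()?,
        sender: parts.next()?,
        body: parts.next()?,
    })
}

fn main() {
    let line = r#"cd,0,100,{"tab":0,"url":"/Users/mikavilpas/git/yazi/yazi-cli/src"}"#;
    let msg = parse_dds_line(line).unwrap();
    println!("{} -> {}", msg.kind, msg.body);
}
```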

@@ -9,6 +9,7 @@ homepage = "https://yazi-rs.github.io"
repository = "https://github.com/sxyazi/yazi"

[dependencies]
yazi-boot = { path = "../yazi-boot", version = "0.2.5" }
Owner:

Also remove this; it's no longer used.

@sxyazi (Owner) commented May 10, 2024

if no server is running, the command fails (I actually don't mind this myself)

Yeah this is the expected behavior.

when starting, some old events are reported every time on startup:

Try rm ~/.local/state/yazi/.dds and see if it still happens. This may be related to dirty data from earlier testing.

when yazi closes and reopens, no events are ever reported again. I think this makes sense as for the new yazi instance, the handshake has not been done. However, I would like to be able to reestablish the connection. Do you have advice how this could be done?

When next_line() returns None, it means the server has stopped. At that point you can call Stream::connect() to try to re-establish the connection (in case another server has started), but you need to add a time::sleep() between attempts to avoid the CPU consumption caused by a fast failure loop.
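The retry pattern described above can be sketched generically. This is a synchronous, std-only illustration of the idea, not yazi's actual code (which would use tokio's async Stream::connect and tokio::time::sleep):

```rust
use std::thread;
use std::time::Duration;

/// Keep calling `connect` until it succeeds, sleeping between failed
/// attempts so a fast failure loop doesn't spin the CPU.
/// Generic sketch of the pattern; not yazi's implementation.
fn reconnect_with_delay<T, E>(
    mut connect: impl FnMut() -> Result<T, E>,
    delay: Duration,
) -> T {
    loop {
        match connect() {
            Ok(conn) => return conn,
            Err(_) => thread::sleep(delay),
        }
    }
}

fn main() {
    // Simulate a server that comes back after two failed attempts.
    let mut attempts = 0;
    let conn = reconnect_with_delay(
        || {
            attempts += 1;
            if attempts < 3 { Err("not up yet") } else { Ok("connected") }
        },
        Duration::from_millis(10),
    );
    println!("{conn} after {attempts} attempts");
}
```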

If the connection cannot be made when starting, the client will fail and
not reconnect. However, if the connection is lost after the client has
started, it will forever attempt to reconnect with a 1s delay.

Also clean up the implementation by removing previous mistakes
@mikavilpas (Contributor, Author)

When next_line() returns None, it means the server has stopped. At that point you can call Stream::connect() to try to re-establish the connection (in case another server has started), but you need to add a time::sleep() between attempts to avoid the CPU consumption caused by a fast failure loop.

Thanks for the instructions! I was able to implement reconnecting, and it's much better now.

However, there is some odd behaviour when starting up: about 2000 empty messages are received.

As a proof of concept, I added a hacky counter to display the number of the current message. Here's a very short demo:

yazi-messages.mov

I'm not sure what this might mean. Could you try this out and see if you get the same behaviour?

@sxyazi (Owner) commented May 20, 2024

Why is this PR closed?

@mikavilpas (Contributor, Author)

There are two reasons:

  1. it's been open for a long time, and I'm starting to forget the little details
     and ideas
  2. I'm no longer convinced it's a good idea. In my use case with Neovim, this
     would enable real-time event reading, and thus a few minor features. But what
     I have in mind for the Neovim integration (which I wrote about in #989,
     "Increase possibilities for yazi<->embedding editor integration") would
     require a deeper level of integration, such as RPC.

I could maybe see such an integration layer being implemented in Lua, perhaps
also as a yazi plugin, but right now I consider this "Lua route" blocked by the
following (although all of these are totally solvable issues!):

  • lack of typing for yazi apis (difficult to write lua bindings without them)
  • lack of testing facilities in yazi plugins (hard to write features and
    maintain them)
  • lack of versioning for yazi apis (upcoming breaking changes in yazi could
    break plugins without a clear way to detect this)
  • uncertainty about what your vision for yazi is regarding these things (hard to
    justify the effort)

As you explained in
#1057 (comment), I think in
general yazi is still taking shape. It's probably best to wait and simply
reimplement it later, or do something completely new.
