
Issue between explainable_ros and llama_ros #4

Open · kyle-redyeti opened this issue Jun 8, 2024 · 13 comments
@kyle-redyeti commented Jun 8, 2024

I was super excited to find this repo! I seem to be having a bit of trouble running it, though; it looks like an issue with launching llama_cpp.

I am running it on a Jetson Xavier in a ROS Humble Docker container. The only oddity I saw while building was two warnings about the method used to install Python packages, but I THINK it all built correctly (this was for the explainable_ros build). I have been having issues running llama_ros (https://github.com/mgonzs13/llama_ros) by itself as well, so it may just be a problem with the newest version of that project.

This was the only thing in the log, and it matches what was printed to the screen:

1717876327.2159364 [INFO] [launch]: All log files can be found below /root/.ros/log/2024-06-08-19-52-07-179907-5c48896c5719-4909
1717876327.2197232 [INFO] [launch]: Default logging verbosity is set to INFO
1717876328.8688183 [ERROR] [launch]: Caught exception in launch (see debug for traceback): create_llama_launch() got an unexpected keyword argument 'stop'

root@5c48896c5719:~/ros2_ws# ros2 launch explicability_bringup explicability_ros.launch.py
[INFO] [launch]: All log files can be found below /root/.ros/log/2024-06-08-19-52-07-179907-5c48896c5719-4909
[INFO] [launch]: Default logging verbosity is set to INFO
[ERROR] [launch]: Caught exception in launch (see debug for traceback): create_llama_launch() got an unexpected keyword argument 'stop'

@mgonzs13 (Collaborator) commented Jun 9, 2024

Hey @kyle-redyeti, I have updated create_llama_launch for the new versions of llama_ros. By the way, are you using CUDA inside the Docker container on the Jetson Xavier to run llama_ros?
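In case it helps, the calling pattern looks roughly like this; treat it as a sketch, since the parameters accepted by create_llama_launch change between llama_ros versions (the import path follows the llama_ros README, and the model values are placeholders):

# Sketch of a launch file wiring in llama_ros via create_llama_launch.
# Newer llama_ros versions removed the 'stop' keyword, which is what
# triggers the "unexpected keyword argument 'stop'" error above.
from launch import LaunchDescription
from llama_bringup.utils import create_llama_launch

def generate_launch_description():
    return LaunchDescription([
        create_llama_launch(
            n_ctx=2048,                                    # context window size
            n_gpu_layers=0,                                # CPU-only example
            model_repo="TheBloke/Marcoroni-7B-v3-GGUF",    # placeholder model
            model_filename="marcoroni-7b-v3.Q4_K_M.gguf",  # placeholder file
            # stop="### Instruction:\n",  # only accepted by old versions
        )
    ])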

@kyle-redyeti (Author) commented Jun 9, 2024 via email

@kyle-redyeti (Author) commented Jun 10, 2024

It looks like I am getting a different error now. It is an error I have seen before, related to starting the explicability_node:
[explicability_node-1] def generate_response(self, goal: GenerateResponse.Goal, feedback_cb: Callable = None) -> Tuple[GenerateResponse.Result | GoalStatus]:
[explicability_node-1] TypeError: unsupported operand type(s) for |: 'Metaclass_GenerateResponse_Result' and 'Metaclass_GoalStatus'
[ERROR] [explicability_node-1]: process has died [pid 3536, exit code 1, cmd '/root/ros2_ws/install/explicability_ros/lib/explicability_ros/explicability_node --ros-args -r __node:=explicability_node'].

It looks like the llama_node starts up though...
Steps to reproduce:
$ docker run -it --rm --network=host --runtime=nvidia --gpus=all dustynv/ros:humble-desktop-l4t-r35.4.1
$ mkdir -p ~/ros2_ws/src
$ cd ~/ros2_ws/src
$ git clone --recurse-submodules https://github.com/mgonzs13/llama_ros.git
$ pip3 install -r llama_ros/requirements.txt
$ git clone --recurse-submodules https://github.com/Dsobh/explainable_ros.git
$ cd ~/ros2_ws
$ colcon build
$ source install/setup.bash
$ ros2 launch explicability_bringup explicability_ros.launch.py

I appreciate any help you can provide!

Thanks Again!

Kyle

@mgonzs13 (Collaborator)

@kyle-redyeti, which Python version are you using?

@kyle-redyeti (Author) commented Jun 11, 2024 via email

@mgonzs13 (Collaborator) commented Jun 11, 2024

I have just removed that Python 3.10 typing syntax. You can try it again if you like. If the error persists, you can edit the code yourself to remove any remaining 3.10-style type hints.
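For reference, the TypeError comes from the PEP 604 union syntax (X | Y) in the annotation, which only works at runtime on Python 3.10 and newer. A pre-3.10-compatible form of the signature from the traceback would use typing.Union instead (import paths assumed to be the standard ROS 2 ones):

# Pre-3.10-compatible rewrite of the annotation from the traceback:
# typing.Union replaces the "X | Y" syntax, which older interpreters
# reject when the annotation is evaluated at runtime.
from typing import Callable, Tuple, Union
from action_msgs.msg import GoalStatus          # standard ROS 2 interface
from llama_msgs.action import GenerateResponse  # shipped with llama_ros

def generate_response(
    self, goal: GenerateResponse.Goal, feedback_cb: Callable = None
) -> Tuple[Union[GenerateResponse.Result, GoalStatus]]:
    ...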

@kyle-redyeti (Author) commented Jun 11, 2024 via email

@kyle-redyeti (Author)

@mgonzs13 I ended up getting this code running. I did have to do a little cleanup for the "logs" to actually be piped into the LangChain prompt, and I added an ingest service so I could feed it other data rather than ROS 2 topic data. It was pretty exciting to see it working. The problem I am trying to solve now is that it waits for a complete response before sending anything back. I am hoping there is a way to get the LangChain prompt to send the information back incrementally, similar to the feedback from the llama_msgs.action GenerateResponse action; I want to start processing the output as soon as it starts putting tokens together for the response. I have not used LangChain much at all, and I am not sure if I am just not finding how this is done.

Also, llama_ros uses an action/goal interface while this project uses a service; is there a reason for the difference?

@mgonzs13 (Collaborator) commented Jul 14, 2024

@kyle-redyeti, you can use the stream function from LangChain. Here is an example of how to use it with llama_ros.
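A minimal sketch of that pattern, assuming the llama_ros LangChain wrapper is importable as llama_ros.langchain.LlamaROS (check the repo for the exact import path and constructor arguments):

# Token-by-token streaming through LangChain's .stream() interface.
import rclpy
from langchain_core.prompts import PromptTemplate
from llama_ros.langchain import LlamaROS  # assumed import path

rclpy.init()
llm = LlamaROS()
chain = PromptTemplate.from_template("Explain these logs:\n{logs}") | llm

# .stream() yields chunks as they are generated, instead of blocking
# until the full response is ready the way .invoke() does.
for chunk in chain.stream({"logs": "navigation goal accepted; obstacle detected"}):
    print(chunk, end="", flush=True)

rclpy.shutdown()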

> Also, llama_ros uses an action/goal interface while this project uses a service; is there a reason for the difference?

We implemented a service instead of an action because a service is faster to implement; we may change it to an action in the future.

@kyle-redyeti (Author) commented Jul 14, 2024 via email

@kyle-redyeti (Author)

I was able to test this, and it does not seem to return one token at a time as I was hoping. I think I will need to create a new callback based on langchain_core.callbacks.BaseCallbackHandler so I can override on_llm_new_token; then I should be able to get a token at a time. I may still need to use stream rather than invoke to get llama_ros to provide output the way I want to consume it. Does that sound correct?
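Roughly what I have in mind (a sketch; on_llm_new_token is part of langchain_core's BaseCallbackHandler interface, and the chain object here is hypothetical):

# Streaming callback: on_llm_new_token fires once per generated token,
# so each token can be processed as it arrives instead of waiting for
# the complete response.
from langchain_core.callbacks import BaseCallbackHandler

class TokenHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)  # or publish to a ROS 2 topic

# Usage: pass the handler through the run config, e.g.
#   chain.invoke(inputs, config={"callbacks": [TokenHandler()]})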

@mgonzs13 (Collaborator)

The stream function from LangChain should return each token as it is generated. Have you updated llama_ros?

@kyle-redyeti (Author) commented Jul 16, 2024 via email
