
Sensors not sending data to the hive #1543

Closed
SnakeSK opened this issue May 13, 2024 · 14 comments

Comments

@SnakeSK

SnakeSK commented May 13, 2024


Before you post your issue, make sure it has not been answered yet and provide the ⚠️ BASIC SUPPORT INFORMATION (as requested below) if you come to the conclusion it is a new issue.

⚠️ Basic support information (commands are expected to run as root)

We happily take the time to improve T-Pot and take care of things, but we need you to take the time to create an issue that provides us with all the information we need.

  • What OS are you running T-Pot on? Rocky 9.4
  • What T-Pot version are you currently using (only T-Pot 24.04.x is currently supported)? 24.04
  • What architecture are you running on (i.e. hardware, cloud, VM, etc.)? VM
  • How long has your installation been running? Fresh install with 9 sensors
  • Did you install upgrades, packages or use the update script? No
  • Did you modify any scripts or configs? If yes, please attach the changes. No
  • How much free disk space is available (df -h)? 150GB for each VM
  • What is the current container status (dps)? Running
  • On Linux: What is the status of the T-Pot service (systemctl status tpot)? Running

Fresh install of 24.04 on VMs. I tried reinstalling as well, but the problem seems to be that the sensors are not sending data to the hive: I see traffic between the subnets, but nothing is visualized in the ELK stack. If I trigger something on the HIVE (which also runs honeypots), I see data straight away. At first I only opened port 64294, but eventually I opened everything (the VMs can reach each other). Nothing is being collected on the hive from the remote subnets.
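
For anyone debugging a similar setup, a quick way to confirm the hive's ingest port is actually reachable from a sensor (a minimal sketch; 192.168.85.236 is the hive IP that shows up later in this thread, replace with yours):

# check TCP reachability of the hive's logstash ingest port from a sensor
nc -zv 192.168.85.236 64294

# or inspect the TLS endpoint (and the certificate it presents) directly
curl -vk https://192.168.85.236:64294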

@t3chn0m4g3
Member

t3chn0m4g3 commented May 13, 2024

Based on the info provided I cannot reproduce the issue.
Did the deploy script run fine?
Can you see the modified .env on Hive and Sensors?
What about the logstash logs on the Sensors and the Hive?
Is the routing actually working? Does "see" actually mean ports are reachable?
Are the Sensors actually running in Sensor mode and set up in the .env as such?
What do the tpotinit logs tell you?
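
A minimal way to run these checks on a sensor (a sketch assuming the default ~/tpotce checkout; the docker logs invocation is the same one used later in this thread, and the tpotinit container name is an assumption, verify with dps / docker ps):

# confirm the sensor mode and hive address in the .env
grep -E "TPOT_TYPE|TPOT_HIVE" ~/tpotce/.env

# follow the logstash logs for transport/certificate errors
sudo docker logs --tail 50 --follow --timestamps logstash

# check the tpotinit logs (container name assumed)
sudo docker logs tpotinit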

@SnakeSK
Author

SnakeSK commented May 13, 2024

How can I check the .env?
I am not that familiar with logstash/docker; can you provide a path where I can check the logs? The sensor itself registers e.g. SSH fine, it's just not being forwarded to the HIVE.
Yes, the routing is working and the ports are reachable; in the meantime I opened all TCP ports between these two hosts.
Well, I installed the HIVE as standard and the sensors as sensors (I think there were 3 options: hive / standard / mobile).
Tpotinit reports everything fine (folder tpotce/data), and everything is running fine and healthy.

@t3chn0m4g3
Member

You can find the ReadMe here.

@SnakeSK
Author

SnakeSK commented May 13, 2024

On the sensors the .env file holds the correct IP, and TPOT_TYPE=SENSOR. The Docker logstash logs show a certificate mismatch:

Could not fetch URL {:url=>"https://192.168.85.236:64294", :method=>:post, :message=>"Certificate for <192.168.85.236> doesn't match any of the subject alternative names: [192.168.85.10]", :class=>Manticore::UnknownException, :will_retry=>false}

I assume this is because the IP changed. Is there any way to regenerate the certificate? Thank you.
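
For anyone hitting the same error, one way to see which subject alternative names the hive's certificate actually carries (a sketch; the -ext option requires OpenSSL 1.1.1+):

# print the SANs of the certificate presented on the hive's ingest port
echo | openssl s_client -connect 192.168.85.236:64294 2>/dev/null | openssl x509 -noout -ext subjectAltName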

@t3chn0m4g3
Member

t3chn0m4g3 commented May 13, 2024

Stop T-Pot, delete data/uuid and data/nginx/cert/*, and start T-Pot again. tpotinit should re-create the cert.
This also means you need to re-deploy the sensors, since the certificate will no longer match.
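
As a shell sketch (assuming the default ~/tpotce checkout and the systemd service from the support checklist above):

sudo systemctl stop tpot
sudo rm ~/tpotce/data/uuid
sudo rm ~/tpotce/data/nginx/cert/*
# tpotinit re-creates the certificate on the next start
sudo systemctl start tpot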

@SnakeSK
Author

SnakeSK commented May 13, 2024

I did indeed need to redeploy, but the NATed sensors still detect an incorrect certificate (since they are NATed). Is there any way to disable SSL verification in logstash for just those two sensors? The routed ones are working now :)

Thank you

@t3chn0m4g3
Member

There is no option to disable SSL certificate checking; Elastic simply does not offer it.
In that mixed scenario there is no out-of-the-box solution available. At best you manually replace the certificate on the HIVE with one from a CA, e.g. Let's Encrypt, with a proper FQDN, adjust the http_output.conf according to the logstash documentation, and mount the changed file as a volume on the sensors. The FQDN needs to resolve on the sensors as well; you could achieve this with a hosts file entry if no split DNS is available (see the sketch below).
Another solution would require running logstash in host mode and using a tunneling solution to achieve this goal.
Or create your own self-signed certificate with a FQDN instead of an IP (which is the default).
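
A sketch of the plumbing the first option involves on a sensor (hive.example.com and 203.0.113.10 are placeholders; the in-container path for http_output.conf is an assumption, verify it against the logstash image T-Pot ships):

# make the hive's FQDN resolve on the sensor if no split DNS is available
echo "203.0.113.10 hive.example.com" | sudo tee -a /etc/hosts

# then, in the sensor's compose file, mount the adjusted config over the shipped one:
#   logstash:
#     volumes:
#       - ./http_output.conf:/etc/logstash/conf.d/http_output.conf:ro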

@t3chn0m4g3
Member

Just saw that Elastic introduced an ssl_verification_mode setting. You can give that a try by adjusting the http_output.conf on the sensors in question and mounting it as a volume on those sensors.
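
A minimal sketch of what that could look like in the http output block of http_output.conf (only ssl_verification_mode is the point here; the url and the remaining settings must stay as in the shipped config):

output {
  http {
    url => "https://192.168.85.236:64294"    # hive ingest endpoint, as in the error above
    http_method => "post"
    ssl_verification_mode => "none"          # disables certificate/hostname validation; only for trusted links
  }
}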

@t3chn0m4g3
Member

You can give this a try as well: replace the IPs / FQDNs, etc., and then redeploy the sensors.

openssl req \
    -nodes \
    -x509 \
    -sha512 \
    -newkey rsa:8192 \
    -keyout "nginx.key" \
    -out "nginx.crt" \
    -days 3650 \
    -subj '/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd' \
    -addext "subjectAltName = IP:192.168.1.1, IP:192.168.2.2, DNS:my.domain.name"

This will basically add more IPs to the SANs; I do not know, though, how strictly logstash will check the cert.
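
To verify which SANs actually ended up in the generated certificate (OpenSSL 1.1.1+):

openssl x509 -in nginx.crt -noout -ext subjectAltName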

@SnakeSK
Author

SnakeSK commented May 13, 2024

Hello, after further testing there seems to be a check on the HIVE side: if the connection is NATed, there is no response. We were, however, able to bypass the certificate check. In the meantime we implemented T-Pots in all internal networks; the external networks will have to do without them. Just for clarification, they are more like adjacent networks than remote networks, so we can do some tunneling between them.

You have been incredibly helpful and now all the sensors are reporting data to the HIVE. Thank you :)

@t3chn0m4g3
Member

Great to hear and thanks for the feedback.

@devArnold

devArnold commented May 17, 2024

Hello @t3chn0m4g3 and @SnakeSK ,

I encountered a similar problem to the one described above, but I managed to resolve it by obtaining a valid and trusted SSL certificate for the HIVE. (I used a domain instead of a public IP address for TPOT_HIVE_IP in .env.)

The HIVE, situated on the same LAN as the sensors, could receive data from local sensors without any issues. However, it failed to receive data from sensors out on the Internet. This issue became evident when inspecting the Logstash logs on the sensor using the command sudo docker logs --tail 50 --follow --timestamps logstash. The log entries indicated a failure in SSL certificate validation, preventing successful data transmission.

Sample log below after creating and installing a self-signed certificate on the HIVE with its public IP address as the common and alternate names:
2024-05-15T09:38:38.258186438Z [ERROR] 2024-05-15 09:38:38.257 [[http_output]>worker4] http - Could not fetch URL {:url=>"https://[HIVE_PUBLIC_IP_ADDRESS]:64294", :method=>:post, :message=>"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target", :class=>Manticore::ClientProtocolException, :will_retry=>true}

The solution involved replacing the existing SSL certificate (nginx.crt and nginx.key) in the ~/tpotce/data/nginx/cert directory on the HIVE with a valid and trusted certificate obtained from a certificate authority. I then re-installed T-Pot on the sensor (since I had made changes) and ran the deploy script to get it to work. Deploying other freshly installed sensors on the Internet resulted in proper log transmission after a valid certificate was added.
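
A sketch of that swap on the HIVE (hive.example.com is a placeholder FQDN; paths assume a standard certbot layout and the default ~/tpotce checkout):

sudo systemctl stop tpot
sudo cp /etc/letsencrypt/live/hive.example.com/fullchain.pem ~/tpotce/data/nginx/cert/nginx.crt
sudo cp /etc/letsencrypt/live/hive.example.com/privkey.pem ~/tpotce/data/nginx/cert/nginx.key
sudo systemctl start tpot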

What did not work for me (but could work for somebody else):

  • Using a self-signed certificate with the HIVE's public IP address.
  • Installing the self-signed certificate as trusted on the sensor in both the Ubuntu certificate store and the Logstash Docker container

@t3chn0m4g3
Member

Thanks for the info @devArnold
I will keep the issue open and update the documentation soon.

t3chn0m4g3 added a commit that referenced this issue May 22, 2024
@t3chn0m4g3
Member

@SnakeSK @devArnold
Updated the ReadMe with 9957a13
Please let me know if that solves things for your setups as well.

t3chn0m4g3 added a commit that referenced this issue May 22, 2024
t3chn0m4g3 added this to the 24.04.1 milestone Jun 4, 2024