
Thoughts on blocking metadata APIs? #791

Open
Plazmaz opened this Issue Jun 26, 2018 · 7 comments

@Plazmaz

commented Jun 26, 2018

Hi, I've noticed that many tutorials and example configs for cowrie don't acknowledge the risks posed by internal metadata APIs on providers like AWS or Google Cloud. Would it be worthwhile to block access to these endpoints (or serve dummy data) from within the honeypot? Many people won't think to block those addresses, yet they can leak data or even hand over control of the Amazon account hosting the honeypot.

A decent list of base endpoints for various APIs:
https://gist.github.com/BuffaloWill/fa96693af67e3a3dd3fb

To give some additional context: with only a bare-minimum setup on AWS, attackers can wget these URLs and gain access to details of the user's hosting account, often including IAM credentials (accessible via `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access`).

@fe7ch
Member

commented Jun 26, 2018

I think this is not a problem with the honeypot itself, but with some setups. Users must therefore take appropriate precautions when installing the honeypot in the cloud.

That said, the problem and its possible solutions could be described in the honeypot's documentation.

@Plazmaz
Author

commented Jun 26, 2018

@fe7ch Maybe, but it might be worth explicitly cautioning users about this in the setup guide, particularly since there's one for DigitalOcean, which also has a metadata API.

EDIT: Ah yep, saw your edit. I agree.

@micheloosterhof
Member

commented Jul 8, 2018

This is a good idea.
Any ideas on how we can distinguish internal endpoints from the wider internet?

@Plazmaz
Author

commented Jul 8, 2018

@micheloosterhof Probably, yeah: 169.254.0.0/16 and fe80::/10 are both considered link-local ranges, although I'm not sure whether it would be safe to block all of them:
https://en.wikipedia.org/wiki/Link-local_address
EDIT: Since this doesn't cover everything, it might be worth trying to connect from an external source instead, although I'm not sure exactly how that would work in this scenario.
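For what it's worth, Python's stdlib `ipaddress` module already classifies link-local addresses, so the check itself is cheap. A minimal sketch (illustrative only, not cowrie code):

```python
import ipaddress

def is_link_local(addr: str) -> bool:
    """True for 169.254.0.0/16 (IPv4) and fe80::/10 (IPv6) addresses."""
    return ipaddress.ip_address(addr).is_link_local

print(is_link_local("169.254.169.254"))  # metadata endpoint -> True
print(is_link_local("fe80::1"))          # IPv6 link-local   -> True
print(is_link_local("8.8.8.8"))          # public address    -> False
```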

@Plazmaz
Author

commented Jul 8, 2018

An alternative option would be having a predefined blacklist (AWS, DigitalOcean, Google Cloud, Azure, etc.) and advising users to extend it with any other internal or provider-supplied resources that expose sensitive data. That way there's at least some baseline protection.
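A blacklist like that could be as simple as a list of networks checked before the honeypot performs any outbound connection. A hedged sketch (the entries below are examples drawn from the linked gist, not an official default list):

```python
import ipaddress

# Example default blacklist; real deployments would extend this.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("169.254.169.254/32"),  # AWS / GCP / DigitalOcean / Azure metadata
    ipaddress.ip_network("100.100.100.200/32"),  # Alibaba Cloud metadata
    ipaddress.ip_network("fd00:ec2::254/128"),   # AWS IPv6 metadata
]

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside any blacklisted network."""
    ip = ipaddress.ip_address(addr)
    # Mixed IPv4/IPv6 containment checks simply return False, so this is safe.
    return any(ip in net for net in BLOCKED_NETWORKS)

print(is_blocked("169.254.169.254"))  # -> True
print(is_blocked("93.184.216.34"))    # -> False
```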

@micheloosterhof
Member

commented Jul 8, 2018

I'm perfectly fine with blocking 169.254/16 and 127/8. I don't think IPv6 works at the moment.

@Plazmaz
Author

commented Jul 8, 2018

@micheloosterhof might be worth blocking all reserved ranges:
https://en.wikipedia.org/wiki/Reserved_IP_addresses

10.0.0.0/8
100.64.0.0/10
127.0.0.0/8
169.254.0.0/16
172.16.0.0/12
192.0.0.0/24
192.0.2.0/24
192.88.99.0/24
192.168.0.0/16
198.18.0.0/15
198.51.100.0/24
203.0.113.0/24
224.0.0.0/4
240.0.0.0/4
255.255.255.255/32
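The same check extends naturally to a range list like this. A sketch using the stdlib `ipaddress` module over these reserved ranges:

```python
import ipaddress

RESERVED_RANGES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8", "169.254.0.0/16",
    "172.16.0.0/12", "192.0.0.0/24", "192.0.2.0/24", "192.88.99.0/24",
    "192.168.0.0/16", "198.18.0.0/15", "198.51.100.0/24", "203.0.113.0/24",
    "224.0.0.0/4", "240.0.0.0/4", "255.255.255.255/32",
)]

def in_reserved_range(addr: str) -> bool:
    """Return True if an IPv4 address falls in any reserved range above."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RESERVED_RANGES)

print(in_reserved_range("169.254.169.254"))  # metadata endpoint -> True
print(in_reserved_range("10.1.2.3"))         # RFC 1918 private  -> True
print(in_reserved_range("8.8.8.8"))          # public DNS        -> False
```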