FR - API Endpoints to submit arpnip and macsuck results #893
Comments
Had "a big think" all about ARP/MAC submission API endpoints. Like all things, it's more tricky than it first seems, but I believe I have a way forward. I'll need to refactor the existing arp/mac code a bit, then I can work on the API.

The tricky bit is that Netdisco does a lot of sanity checking and cleanup of the data it gets from the device, for example to filter VLANs, and most importantly to make sure that nodes appear on the switch they are actually on (basically discarding remote nodes, but it's more complex than that). I'd like to keep all this logic, so that you just submit the MAC table and don't worry about uplinks etc., and Netdisco still applies all its filtering, plus takes into account all the config options we have, like allowing "bleed" to upstream switches, ignoring specific VLANs or ports, and so on.

Basically ... it's not safe just to dump a MAC/ARP table into the database; it'd screw up the secret sauce of Netdisco.
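As a rough sketch of what "just submit the MAC table" could look like from a client's side (the endpoint path, payload fields, and helper function here are hypothetical illustrations, not a committed API design):

```python
import json

def build_macsuck_payload(device_ip, entries):
    """Build a hypothetical submission payload for one device's MAC table.

    entries: list of (mac, port, vlan) tuples as read from the device.
    The field names here are illustrative only; the actual API shape was
    still being designed in this thread.
    """
    return {
        "device": device_ip,
        "nodes": [
            {"mac": mac, "port": port, "vlan": vlan}
            for mac, port, vlan in entries
        ],
    }

payload = build_macsuck_payload(
    "192.0.2.1",
    [("00:11:22:33:44:55", "GigabitEthernet1/0/1", 100)],
)

# A client would POST this as JSON to some submission endpoint and let
# Netdisco apply its own VLAN/port filtering, bleed handling, and uplink
# logic server-side -- the client never decides what reaches the database.
print(json.dumps(payload, indent=2))
```

The point of the shape above is that the client sends the raw table and nothing else; all the "secret sauce" (discarding remote nodes, honouring config options) stays on the server.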
On 11/2/22 07:51, Oliver Gorwits wrote:
> basically ... not safe just to dump a mac/arp table into the database,
> it'd screw up the secret-sauce of netdisco
Ollie,
Don't give away all our secrets!
…-m
Netdisco is written in Perl. We're safe.
Excellent, many thanks! No ETA, but I plan to extend ntcsuck to support all the main Cisco OS variants and make it a viable option for SNMPv3-only networks with many switches and VLANs. We'll see how that goes :)
It would be great if we could submit arpnip and macsuck results via the API. Use cases:
I think this could open up new classes of devices with a significantly lower barrier to entry than we had with, e.g., the Worker Plugin-based SSH collector.