Allow parallel read/write over the same netmiko session #407

Merged · 7 commits merged into ktbyers:develop on Mar 14, 2017
Conversation

@mirceaulinic (Contributor) commented Mar 13, 2017

Ref: napalm-automation/napalm-ios#120

With these changes, the library can handle parallel read/write operations independently over the same SSH/Telnet session. The limitation is that, with a high number of concurrent commands, a NetMikoTimeoutException is raised, since there is no mechanism to uniquely identify each request: parallel requests are simply queued and served sequentially whenever the channel becomes ready again.
Commands executed over a different SSH session are not affected by this, as they are handled by a separate netmiko instance.

End users and developers should not see any change, nor need to modify their code.

I have tested this on a couple of Cisco 2811s using the napalm-ios driver and it seems to work fine.

I have moved the logic from write_channel and read_channel into _write_channel and _read_channel helpers; the public methods call the corresponding helpers and unlock the channel before returning. This way, any exception is still propagated and the channel is unlocked before the method returns. @ktbyers please let me know if this hierarchical change is too much, or whether we should simply move everything from read_channel and write_channel directly under the try/finally block?
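A minimal sketch of that structure, assuming a threading.Lock stored on the connection object; the helper names and the remote_conn attribute are illustrative rather than the exact identifiers in the diff:

```python
import threading

class BaseConnection:
    """Sketch of the relevant slice of the base connection class."""

    def __init__(self):
        # One lock per connection object guards the shared channel.
        self._session_locker = threading.Lock()

    def _lock_netmiko_session(self):
        # Block until the channel is free (timeout handling omitted here).
        self._session_locker.acquire()

    def _unlock_netmiko_session(self):
        if self._session_locker.locked():
            self._session_locker.release()

    def _write_channel(self, out_data):
        # Low-level write; assumes the caller already holds the lock.
        self.remote_conn.sendall(out_data)

    def write_channel(self, out_data):
        # Public entry point: lock, delegate, always unlock.
        self._lock_netmiko_session()
        try:
            self._write_channel(out_data)
        finally:
            # Runs even if _write_channel raises, so the exception still
            # propagates and the channel is never left locked.
            self._unlock_netmiko_session()
```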

@ktbyers (Owner) commented Mar 13, 2017

@mirceaulinic We will need to lock the direct read here as well:

https://github.com/mirceaulinic/netmiko/blob/5c2003ba03a9396eb08b8d5d424449fd0c7118f1/netmiko/base_connection.py#L264

Also, I think we should make the time.sleep longer than 10 ms. Screen scraping happens much more on the 3-10 second time frame, so it is pointless to check every 10 ms. At a minimum this should be 50 ms.

I also think the timeout for acquiring the lock will need to be longer than 8 seconds. We are currently using timeout, which is set to 8 seconds. We need to figure out how to handle this, as I am currently using timeout mostly as a read-operation timeout.

NAPALM also sets this timeout (at least napalm-ios sets it to 60 seconds), which is not a very good choice given how I am using it. So we need to work out what is logical here (i.e. given how the value is used in netmiko, napalm-ios, and this locking code).
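A paraphrase of the acquisition logic under discussion (not the exact diff); it shows how the 8-second read timeout and the 10 ms poll both leak into lock acquisition:

```python
import time
from netmiko.ssh_exception import NetMikoTimeoutException

def _lock_netmiko_session(self, start=None):
    # Paraphrase of the behaviour under discussion: the read timeout
    # doubles as the lock-acquisition timeout, conflating two concerns.
    if start is None:
        start = time.time()
    while not self._session_locker.acquire(False):
        if time.time() - start > self.timeout:  # 8 s, meant for reads
            raise NetMikoTimeoutException("The netmiko channel is not available!")
        time.sleep(0.01)  # 10 ms poll, tight given multi-second screen scrapes
    return True
```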

@mirceaulinic (Contributor, Author) commented:
> I also think the timeout for acquiring the lock will need to be longer than 8 seconds. We are currently using timeout, which is set to 8 seconds. We need to figure out how to handle this, as I am currently using timeout mostly as a read-operation timeout.

Yes, I have also thought about that. Perhaps we could introduce a separate timeout value, e.g. waittime, with a higher default? In NAPALM we could then either pass this as an optional argument or reuse the same value as timeout. Would you consider this a suitable option?

@ktbyers (Owner) commented Mar 13, 2017

How about we add a new argument to __init__:

```python
self.session_timeout = 60
```

Then map the lock timeout to this.

I also think (ultimately) we should map the NAPALM argument to this (and not the current timeout).
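A minimal sketch of that shape, assuming the new argument lands next to the existing timeout in BaseConnection.__init__ (the reduced signature is illustrative):

```python
import threading

class BaseConnection:
    def __init__(self, timeout=8, session_timeout=60):
        # 'timeout' keeps its existing role as the read-operation timeout;
        # 'session_timeout' independently bounds how long a caller waits
        # to acquire the shared channel lock.
        self.timeout = timeout
        self.session_timeout = session_timeout
        self._session_locker = threading.Lock()
```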

@mirceaulinic (Contributor, Author) commented:
Hi @ktbyers - I have introduced the new session_timeout argument, defaulting to 60 seconds.
I have also increased the polling interval from 10 ms to 100 ms and added some more docstrings to the newly introduced methods. Please let me know what you think!
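Assuming the new argument is exposed through ConnectHandler like the other connection parameters, end users who want to tune it would write something like:

```python
from netmiko import ConnectHandler

conn = ConnectHandler(
    device_type="cisco_ios",
    host="192.0.2.1",      # placeholder lab address
    username="admin",
    password="secret",
    session_timeout=60,    # new: max wait for the shared channel lock
)
print(conn.send_command("show version"))
conn.disconnect()
```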

@ktbyers merged commit d85d90e into ktbyers:develop on Mar 14, 2017
@mirceaulinic deleted the parallel-lock branch on March 29, 2017