…that. This fixes the HTTP 417 error (on the git client) and the "fatal: protocol error: bad line length character: #$@$" error on the server.
The old subprocess wrapper cached subprocess output to disk and was a bit slow. This one caches nothing and streams the output directly to the HTTP response. The new subprocess wrapper is also some 70% faster at scraping and feeding subprocess output. One caveat: the new output-streaming method relies on the new WSGI "1.1" (PEP 3333) behavior and now must run on a PEP 3333-compatible server ONLY.
…cement available. The new subprocessio module takes us in a direction separate from monkeypatching subprocess.Popen. We will use a subprocess wrapper iterator instead.
WSGI (PEP 3333) allows an iterator with a .close() method to be passed to the server. This object is designed to wrap subprocess communication and present the output as an iterator.
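As an illustration, a minimal sketch of such a close-able iterator. The class name, chunk size, and overall shape here are hypothetical, not the actual subprocessio API; the point is only that a WSGI server can iterate the object to stream chunks and will call .close() when the response ends.

```python
import subprocess

class SubprocessOutputIterator:
    """Hypothetical sketch: wraps a child process and yields its stdout
    in chunks. A WSGI server iterates this object to stream the response
    and calls .close() when the client disconnects or the body is done."""

    def __init__(self, cmd, chunk_size=65536):
        self.proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        self.chunk_size = chunk_size

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self.proc.stdout.read(self.chunk_size)
        if not chunk:  # EOF: the child closed its stdout
            raise StopIteration
        return chunk

    def close(self):
        # Invoked by the WSGI server at the end of the response;
        # release the pipe and reap the child so it does not linger.
        self.proc.stdout.close()
        self.proc.wait()
```

A WSGI app would simply `return SubprocessOutputIterator([...])` from its callable; no buffering to disk happens along the way.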
…import statements for new structure.
Renamed path_prefix -> content_path and repo_url_marker -> uri_marker. This is to make the server more compatible with wrapping applications.
The git_http_backend module is NOT tied to CherryPy's WSGI server. It can run against any WSGI 1.1-compatible server (WSGI 1.0 if you don't work with git packs larger than 1 MB). The "default" means that when we run git_http_backend.py on the command line - the simplest deployment scenario - it invokes a local copy of CherryPy's WSGIServer and runs against that.
…ncoding and WSGI 1.1 servers.
WSGIREF has proven not to support chunked transfer coding. The git client sends chunked requests for packs over 1 MB. I have almost no choice but to switch to something else.
…mory-based objects. Previously tempfile.TemporaryFile() was used and was passed to subprocess.Popen as the stdout outlet. That involved extra file operations for temp files in cases when the output is relatively small (hundreds of bytes). After the introduction of subprocess.PopenIO.communicateIO, we can now reasonably easily manage the output in memory from start to finish.
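A minimal sketch of the in-memory approach described above, assuming the output is known to be small. The helper name and chunk size are illustrative, not part of the project's API:

```python
import io
import subprocess

def run_to_memory(cmd, chunk_size=4096):
    """Hypothetical helper: capture a child process's stdout in an
    in-memory buffer instead of a temporary file. Suitable when the
    output is expected to be small (e.g. hundreds of bytes)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out = io.BytesIO()
    # Drain the pipe in chunks until EOF so the child never blocks.
    for chunk in iter(lambda: proc.stdout.read(chunk_size), b""):
        out.write(chunk)
    proc.stdout.close()
    proc.wait()
    out.seek(0)
    return out
```

No temp file is ever created, which avoids the extra filesystem round-trip for tiny outputs.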
…en-scraping and rewiring the server code to rely on it. Adding a git submodule for subprocessio.py (and its companion subprocess.py for IronPython). These modules are optimized for memory-efficient screen-scraping of the underlying process output. The normal subprocess.py module either stores the entire output in memory (not good for large git console outputs) or forces you to choose file-based storage from the start and pass that as the stdout option to Popen. The modules added here have a bufsize option that sets a threshold defining the point past which memory-based output storage gets persisted to file. This way you can process multi-gigabyte git outputs without caring about blocking the PIPE, with the same exact command - communicateIO. Per my tests, this replacement pure-Python threaded subprocess module is very close in speed to the native Win32-ABI-based subprocess implementation. Don't be shy about using it. The performance on IronPython in some cases exceeds that of cPython's native subprocess.py.
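The memory-until-threshold idea can be sketched like this. This is not the subprocessio implementation; the class, method names, and default threshold are all hypothetical, shown only to illustrate the spill-to-disk technique:

```python
import io
import tempfile

class SpillBuffer:
    """Hypothetical sketch of threshold-based output storage: writes
    accumulate in memory, and once they cross the threshold the buffer
    transparently spills to an anonymous temporary file."""

    def __init__(self, threshold=65536):
        self.threshold = threshold
        self.buf = io.BytesIO()
        self.spilled = False

    def write(self, data):
        self.buf.write(data)
        if not self.spilled and self.buf.tell() > self.threshold:
            # Crossed the threshold: move everything written so far
            # into a temp file and continue writing there.
            f = tempfile.TemporaryFile()
            f.write(self.buf.getvalue())
            self.buf = f
            self.spilled = True

    def getvalue(self):
        # Read back all accumulated bytes regardless of backing store.
        pos = self.buf.tell()
        self.buf.seek(0)
        data = self.buf.read()
        self.buf.seek(pos)
        return data
```

Small outputs never touch the filesystem, while huge ones never exhaust memory, which is the trade-off the commit message describes.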
…ion. Renamed 2 classes to fall in line with the naming convention. - Commented out gzip support. IronPython does not have gzip, and WSGI (PEP 333) prescribes that the app itself not mess with compression and let the server deal with it. I don't want to touch that at all, because the size improvement from gzip is immaterial: git bundles are already well-packed. - Pruned the types and quantity of temp file objects created during spawning of a subprocess. Specifically, trying to avoid creating IO-likes for stderr and stdin when not needed. This is dangerous in the case of stderr, as a large output will block, but we will go on this way and deal with problems when they show up. - The class lister was listing the classes in alphabetical order, not order of appearance. Got tired of that; renamed 2 related classes to start with the name of the superclass.
…g up module invocation for IronPython compatibility
…ile, to use a nested subclass system.
…ility, slightly for performance.
…o different from process.wait() so we are simplifying.
…able to control the flow of data from the subprocess. It's not hooked up to anything yet.
…ush. Old clients need access to the real info/refs file, which is now recreated after every push of a pack.
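Regenerating info/refs after a pack push is what `git update-server-info` does for dumb-protocol clients. A hedged sketch of such a post-push step (the helper name is hypothetical; only the git command itself is standard):

```python
import subprocess

def refresh_dumb_protocol_files(repo_path):
    """Hypothetical post-push hook: after receiving a pack, regenerate
    info/refs (and objects/info/packs) so dumb-protocol clients see the
    current refs. `git update-server-info` is the standard command."""
    subprocess.check_call(["git", "update-server-info"], cwd=repo_path)
```

Calling this after every pack push keeps old clients, which fetch info/refs as a plain file, in sync with the repository state.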