
Python 2.7 Windows Support #81

Closed
jhall6226 opened this issue Apr 12, 2017 · 4 comments
Comments

@jhall6226

Hello,

I stumbled across this project while looking for an alternative to pyodbc for an ETL application that I've built using pandas and sqlalchemy. Because of existing design decisions and my organization's internal IT structure, I'm somewhat stuck using Windows Server and MS SQL Server for hosting the app and the data I'm processing. The current application is built in Python 2.7, and everything works great except for very slow insert times with pyodbc for large datasets (10,000 to 400,000 rows) into MSSQL. I have adapted a "fast_load" workflow with ceODBC, but it doesn't seem to be actively developed, and some of my data sets have NVARCHAR(MAX) fields that throw an error when the values are too large (this doesn't happen with pyodbc).

Long story short, I'd like to replace them both with one module that is fast and can handle the large strings. Am I in the right place?

I saw that Windows support was recently released for Python 3.5+. Is there any chance of getting Python 2.7 support as well? I'd be willing to help implement the necessary changes to downgrade, but I thought I'd first ask what the general thoughts on the level of difficulty are.

@MathMagique
Member

Hi! Thanks for coming here :-). To your questions: Yes, turbodbc can deal with strings, even large ones. There are a few caveats, though.

  • The first one is that turbodbc will allocate buffers that are sufficiently large to handle any supported value in a given column. For a VARCHAR(4000) that would be 4000 bytes, for a TEXT column that would be 2^31 bytes. This approach requires lots of memory for unrestricted columns. I have proposed a solution at High memory usage with MSSQL ntext #76 which I plan to implement soonish.
  • Currently, turbodbc cannot retrieve result sets with columns that contain VARCHAR(MAX). Actually, I heard about this type from your issue :-). The thing is that ODBC drivers report VARCHAR(MAX) as a VARCHAR type with length 0 (see this reference). Turbodbc happily uses this 0 as the actual length of the string and allocates buffers with room for nothing but the null termination character. I would fix that in the same go as High memory usage with MSSQL ntext #76, basically by letting users pass turbodbc a maximum size that is reserved for such "unlimited" columns.

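To make the zero-length quirk concrete, here is a minimal Python sketch of how a fetch buffer could be sized once the proposed fix for #76 is in place. This is not turbodbc's actual code; the function name and the default cap of 200,000 characters are assumptions for illustration only:

```python
def fetch_buffer_size(reported_column_size, unlimited_cap=200000):
    """Size (in characters) of the fetch buffer for one string column.

    ODBC drivers report VARCHAR(MAX) / NVARCHAR(MAX) columns with a
    column size of 0. Without special handling, that 0 would be taken
    at face value and the buffer would only hold the null terminator.
    """
    if reported_column_size == 0:
        # "Unlimited" column: fall back to a user-configurable cap.
        reported_column_size = unlimited_cap
    return reported_column_size + 1  # room for the null terminator
```

With this scheme, a VARCHAR(4000) column gets a 4001-character buffer, while a VARCHAR(MAX) column (reported as 0) falls back to the configured cap instead of a useless one-character buffer.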
For inserting strings, turbodbc also uses a buffered approach, but the buffer size is dynamically adjusted to what you actually want to insert using an exponential strategy. So if you insert about 6000 characters, a buffer of approximately 6-8 kB will be allocated.
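The exponential strategy can be illustrated with a short sketch. This is an illustration of doubling, not turbodbc's actual implementation; the 1 kB starting size is an assumption:

```python
def grown_buffer_size(current_size, required_size):
    """Double the buffer until the payload fits."""
    size = max(current_size, 1)
    while size < required_size:
        size *= 2
    return size
```

Starting from a hypothetical 1 kB buffer, a roughly 6000-character insert would grow the buffer to 8 kB (1024 → 2048 → 4096 → 8192), matching the "approximately 6-8 kB" figure above.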

Regarding Python 2.7 support, there is nothing intrinsic to turbodbc that forbids the use of Python 2.7; I continuously test it on Linux and OSX. The reason I do not yet offer Python 2.7 support on Windows is that compiling turbodbc is more difficult there, apparently because Python 2.7 was built with an earlier compiler than the one I require. Once this problem is solved (I don't know exactly how at the moment, as I am no Windows expert), I can release additional binary wheels. Changes should be limited to the build support files such as appveyor.yml, setup.py, and probably some CMakeLists.txt files.

I hope this has not scared you too much... :-)

@jhall6226
Author

Thanks for the quick response. It sounds like the string issue is more in-depth than I had thought. Based on the other issue you linked (76), I'm not sure my use case fits with your goals for the package. The size of some of my NVARCHAR fields goes up to ~120,000 characters, and I'd like to preserve the content from the raw source if possible. I realize that storing large strings in this way isn't very efficient (might be better off storing what are essentially documents as binary and reading from them as needed). I'll try to rethink the uses of the information on my end, but I'll keep an eye on this for the implementation of 76 and see if it is a better fit at that point.

@MathMagique
Member

That sounds like a plan. I'll implement #76 nevertheless, and if a limit of 200,000 characters is okay for you, you can give turbodbc a spin in the future and see what happens.
But you are right, turbodbc's main benefit is not the handling of large strings. Still, there's no reason why it shouldn't handle them at all ;-).

@MathMagique
Member

@jhall6226 Small breakdown of things I did in a branch:

  • The CMake-based build with MSVC (the Microsoft Visual C++ compiler) 14 and Python 2.7 compiles, and all tests pass.
  • Using python setup.py bdist_wheel uses MSVC 9, which leads to build errors.
  • After wading through source code and the internet, I could not identify a way to force python setup.py bdist_wheel to use a different version of MSVC for Python 2.7.

Doing further research on building Python extensions on Windows, there are statements from sources (1, 2) that know better than I do that extensions for Python 2.7 are supposed to be built with the same compiler Python 2.7 itself was built with. That would be MSVC 9, also known as Visual Studio 2008. Visual Studio 2008 has no C++11 support at all and thus cannot be used to build turbodbc.

Building Python 2.7 extensions with anything other than MSVC 9 leads to a clash of C standard library (runtime) versions, and this may lead to Bad Things (TM).
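As an aside, one can check which MSVC release a given CPython build used by parsing the compiler marker in sys.version. A minimal sketch; the mapping table and function name are mine and cover only the versions relevant to this thread:

```python
import re
import sys

# Internal MSC version numbers for the Visual Studio releases discussed
# here (not an exhaustive table).
MSC_TO_VS = {
    1500: "Visual Studio 2008 (MSVC 9)",   # the compiler CPython 2.7 was built with
    1900: "Visual Studio 2015 (MSVC 14)",  # the compiler the working CMake build used
}

def build_compiler():
    """Return the Visual Studio release this interpreter was built with,
    or None on non-MSVC builds (e.g. Linux/OSX)."""
    match = re.search(r"MSC v\.(\d+)", sys.version)
    if match is None:
        return None
    return MSC_TO_VS.get(int(match.group(1)), "MSC v." + match.group(1))
```

On a Windows Python 2.7, sys.version contains "MSC v.1500", which is exactly the Visual Studio 2008 constraint described above.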

There are two hearts beating in my chest. One heart says: It is impossible to support Python 2.7 on Windows. Turbodbc relies on C++11 and pybind11, and pybind11 relies on C++11 even more. Without a compiler that knows C++11, there is no compiling turbodbc.

The other heart says: Well, the cmake build demonstrates that MSVC 14, turbodbc, and Python 2.7 work fine together, so what's the problem?

After some struggling with myself, I have to (grudgingly) close this story. I do not know how to make the setup.py build use MSVC 14 without patching setuptools in a way that is considered a Bad Idea (TM). Even if the cmake build appears to work just fine for my set of integration tests, I cannot ignore the problems that will eventually surface. I am sorry, the problem just seems too fundamental.
