Python 2.7 Windows Support #81
Hi! Thanks for coming here :-). To your questions: Yes, turbodbc can deal with strings, even large ones. There are a few caveats, though.
For inserting strings, turbodbc also uses a buffered approach, but the buffer size is dynamically adjusted to what you actually want to insert, using an exponential growth strategy. So if you insert strings of about 6,000 characters, a buffer of approximately 6-8 kB will be allocated.

Regarding Python 2.7 support: there is nothing intrinsic to turbodbc that forbids the use of Python 2.7; I continuously test it on Linux and OSX. The reason I do not yet offer Python 2.7 support on Windows is that compiling the extension is more difficult for Python 2.7, apparently because Python 2.7 was built with an earlier compiler than I require. Once this problem is solved (I don't know exactly how at the moment, as I am no Windows expert), I can release additional binary wheels. Changes should be limited to the build support files.

I hope this has not scared you too much... :-)
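As a rough illustration of the exponential strategy described above (this is not turbodbc's actual code; the function name, starting size, and doubling factor are made-up assumptions), a buffer is doubled until it covers the longest value in the batch:

```python
def next_buffer_size(required_chars, start=1024):
    """Grow a hypothetical buffer by doubling until it fits the input.

    Sketch of the exponential-growth idea; the starting size of 1024
    and the factor of 2 are illustrative assumptions, not turbodbc's
    real parameters.
    """
    size = start
    while size < required_chars:
        size *= 2
    return size

# A ~6,000-character string lands in an 8 kB buffer, in line with
# the "approximately 6-8 kB" figure mentioned above.
print(next_buffer_size(6000))  # -> 8192
```

Doubling keeps the number of reallocations logarithmic in the target size, at the cost of at most 2x over-allocation.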
Thanks for the quick response. It sounds like the string issue is more involved than I had thought. Based on the other issue you linked (#76), I'm not sure my use case fits your goals for the package. The size of some of my NVARCHAR fields goes up to ~120,000 characters, and I'd like to preserve the content from the raw source if possible. I realize that storing large strings this way isn't very efficient (I might be better off storing what are essentially documents as binary and reading from them as needed). I'll try to rethink the uses of the information on my end, but I'll keep an eye on this issue for the implementation of #76 and see if it is a better fit at that point.
That sounds like a plan. I'll implement #76 nevertheless, and if setting a limit of …
@jhall6226 Small breakdown of things I did in a branch:
Doing further research on building Python extensions on Windows, there are statements from sources (1, 2) who know this better than I do that extensions for Python 2.7 are supposed to be built with the compiler Python 2.7 itself was built with. That would be MSVC 9, also known as Visual Studio 2008. Visual Studio 2008 has no support for C++11 at all, and thus cannot be used to build turbodbc. Building Python 2.7 extensions with anything other than MSVC 9 can lead to a clash of C standard library versions, and this may lead to Bad Things (TM).

There are two hearts beating in my chest. One heart says: it is impossible to support Python 2.7 on Windows. Turbodbc relies on C++11, and pybind11 relies on C++11 even more. Without a compiler that knows C++11, there is no compiling turbodbc. The other heart says: well, the CMake build shows that MSVC 14, turbodbc, and Python 2.7 work fine together, so what's the problem?

After some struggling with myself, I have to (grudgingly) close this story. I do not know how to make the …
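As an aside on checking the compiler mismatch: one way to see which compiler a given CPython interpreter was built with (and hence which MSVC an extension would need to match on Windows) is the standard library's `platform` module. This is a diagnostic sketch, not part of turbodbc:

```python
import platform
import sys

# platform.python_compiler() reports the compiler the running interpreter
# was built with. On a Windows Python 2.7 this is typically something like
# "MSC v.1500 ..." (MSVC 9 / Visual Studio 2008); on Linux it reports the
# GCC version instead.
print(sys.version_info[:2], platform.python_compiler())
```

An extension built with a different MSVC than the one reported here may link against a different C runtime, which is exactly the standard-library clash described above.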
Hello,
I stumbled across this while looking for an alternative to pyodbc for an ETL application that I've built using pandas and SQLAlchemy. Because of existing design decisions and my organization's internal IT structure, I'm somewhat stuck using Windows Server and MS SQL Server for hosting the app and the data that I'm processing. The current application is built in Python 2.7, and everything works great except for very slow insert times with pyodbc for large (10,000 to 400,000 row) datasets into MSSQL. I have adapted a "fast_load" workstream with ceODBC, but it doesn't seem to be actively developed, and some of my data sets have NVARCHAR(max) fields that throw an error when the values are too large (this doesn't happen with pyodbc).
Long story short, I'd like to replace them both with one module that is fast and can handle the large strings. Am I in the right place?
I saw that Windows support was recently released for Python 3.5+. Is there any chance of getting Python 2.7 support as well? I'd be willing to help implement the necessary changes to downgrade, but I thought I'd first ask what the general thoughts on the level of difficulty are.