
pyodbc with iodbc #444

Closed
PhilDeDakar opened this issue Aug 18, 2018 · 37 comments

@PhilDeDakar

Hi,

Configuration: Ubuntu 18.04, Python 2.7

I'm trying to compile pyodbc against iODBC. In setup.py I added:

settings['extra_compile_args'].append('-I/usr/include/iodbc')

after

settings['extra_compile_args'].append('-Wno-write-strings')

and

settings['extra_link_args'].append('-L/usr/lib/x86_64-linux-gnu/')

after

if ldflags:
    settings['extra_link_args'].extend(ldflags.split())

Result of: python setup.py build
src/pyodbcmodule.cpp:1096:15: error: ‘SQL_CONVERT_GUID’ was not declared in this scope
MAKECONST(SQL_CONVERT_GUID),
^
src/pyodbcmodule.cpp:859:28: note: in definition of macro ‘MAKECONST’
#define MAKECONST(v) { #v, v }
^
src/pyodbcmodule.cpp:1096:15: note: suggested alternative: ‘SQL_CONVERT_BIT’
MAKECONST(SQL_CONVERT_GUID),
^
src/pyodbcmodule.cpp:859:28: note: in definition of macro ‘MAKECONST’
#define MAKECONST(v) { #v, v }
^
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

I commented out the line

MAKECONST(SQL_CONVERT_GUID),

in src/pyodbcmodule.cpp, and the line

{ SQL_CONVERT_GUID, GI_UINTEGER },

in src/connection.cpp.

Compilation then succeeds (with some warnings).

Python test file t01.py:

import pyodbc

cs = "DRIVER={HFSQL};Server Name=127.0.0.1;Server Port=4900;Database=test_db;UID=admin;PWD=a99"
#cs = "DSN=testdb;UID=admin;PWD=a99"
db = pyodbc.connect(cs)

db.close()

Both connection strings "cs" work with iodbctest.

Execution: python t01.py
Traceback (most recent call last):
File "t01.py", line 1, in <module>
import pyodbc
ImportError: /home/phil/.local/lib/python2.7/site-packages/pyodbc.so: undefined symbol: SQLFetchScroll

Result of: grep -r 'SQLFetchScroll'
cursor.cpp: "Skips the next count records by calling SQLFetchScroll with SQL_FETCH_NEXT.\n"
cursor.cpp: // SQLFetchScroll(SQL_FETCH_RELATIVE, count), but it requires scrollable cursors which are often slower. I would
cursor.cpp: ret = SQLFetchScroll(cursor->hstmt, SQL_FETCH_NEXT, 0);
cursor.cpp: return RaiseErrorFromHandle(cursor->cnxn, "SQLFetchScroll", cursor->cnxn->hdbc, cursor->hstmt);

Is it possible for you to help me?

@gordthompson
Collaborator

Perhaps @TallTed may be able to help.

@timhaynesopenlink

timhaynesopenlink commented Aug 20, 2018

Hi,

Pleasantly easy to fix: you haven't linked against -liodbc. There would have been an error at the end of python2.7 setup.py build saying as much (unless you have a libodbc.so from somewhere else).

This patch is sufficient, albeit inelegant:

diff --git a/setup.py b/setup.py
index bd9dad7..11ce2e1 100755
--- a/setup.py
+++ b/setup.py
@@ -121,7 +121,7 @@ def main():
 def get_compiler_settings(version_str):

     settings = {
-        'extra_compile_args' : [],
+        'extra_compile_args' : ["-I/usr/include/iodbc/"],
         'extra_link_args': [],
         'libraries': [],
         'include_dirs': [],
@@ -205,7 +205,7 @@ def get_compiler_settings(version_str):
 #            settings['define_macros'].append(('SQL_WCHART_CONVERT', '1'))

         # What is the proper way to detect iODBC, MyODBC, unixODBC, etc.?
-        settings['libraries'].append('odbc')
+        settings['libraries'].append('iodbc')

     return settings

diff --git a/src/connection.cpp b/src/connection.cpp
index b92023d..37bd9c1 100644
--- a/src/connection.cpp
+++ b/src/connection.cpp
@@ -621,7 +621,7 @@ static const GetInfoType aInfoTypes[] = {
     { SQL_CONVERT_INTERVAL_YEAR_MONTH, GI_UINTEGER },
     { SQL_CONVERT_WLONGVARCHAR, GI_UINTEGER },
     { SQL_CONVERT_WVARCHAR, GI_UINTEGER },
-    { SQL_CONVERT_GUID, GI_UINTEGER },
+    // { SQL_CONVERT_GUID, GI_UINTEGER },

     { SQL_ACCESSIBLE_PROCEDURES, GI_YESNO },
     { SQL_ACCESSIBLE_TABLES, GI_YESNO },
diff --git a/src/pyodbcmodule.cpp b/src/pyodbcmodule.cpp
index 8395425..b967f71 100644
--- a/src/pyodbcmodule.cpp
+++ b/src/pyodbcmodule.cpp
@@ -1093,7 +1093,7 @@ static const ConstantDef aConstants[] = {
     MAKECONST(SQL_CONVERT_DECIMAL),
     MAKECONST(SQL_CONVERT_DOUBLE),
     MAKECONST(SQL_CONVERT_FLOAT),
-    MAKECONST(SQL_CONVERT_GUID),
+    // MAKECONST(SQL_CONVERT_GUID),
     MAKECONST(SQL_CONVERT_INTEGER),
     MAKECONST(SQL_CONVERT_INTERVAL_DAY_TIME),
     MAKECONST(SQL_CONVERT_INTERVAL_YEAR_MONTH),

For the record, this was on a brand new Ubuntu 18.04 VM set up for the purpose; theoretically you shouldn't need more than

sudo apt install python2.7-dev libiodbc2-dev build-essential git

and the above patch to get to this stage.

A more elegant fix would attempt to use iodbc-config from within setup.py.
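A sketch of that more elegant approach, shelling out to iodbc-config and folding its flags into the settings dict that pyodbc's setup.py builds. The helper names here are my own illustration, not pyodbc code:

```python
import subprocess

def parse_flags(flags):
    """Sort -I/-L/-l compiler flags into the lists that pyodbc's
    setup.py keeps in its `settings` dict."""
    settings = {'include_dirs': [], 'library_dirs': [], 'libraries': []}
    for flag in flags:
        if flag.startswith('-I'):
            settings['include_dirs'].append(flag[2:])
        elif flag.startswith('-L'):
            settings['library_dirs'].append(flag[2:])
        elif flag.startswith('-l'):
            settings['libraries'].append(flag[2:])
    return settings

def iodbc_settings():
    """Ask iodbc-config for its flags; returns None if iODBC isn't
    installed, so the caller can fall back to unixODBC."""
    try:
        cflags = subprocess.check_output(['iodbc-config', '--cflags'])
        libs = subprocess.check_output(['iodbc-config', '--libs'])
    except (OSError, subprocess.CalledProcessError):
        return None
    return parse_flags((cflags + b' ' + libs).decode().split())
```

With the iodbc-config output reported later in this thread (-I/usr/include/iodbc, -L/usr/lib/x86_64-linux-gnu -liodbc -liodbcinst), this would produce the same include dirs, library dirs, and libraries as the manual patch above.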

HTH

~Tim

@mkleehammer
Owner

Thanks Tim.

There are a couple of issues to work out:

  1. How do we pick a default library, unixODBC or iODBC?
  2. How to handle macOS where prebuilt binaries are common.

As you know (and your example above shows), it isn't possible to provide a build that works for both. We could make the code work using an #ifdef for features not supported by iODBC.

I used to have pyodbc detect whether iODBC was installed and use it where possible. This led to non-stop problems and questions, so I removed it when Apple (seemed to) back off of iODBC. (Apparently they still ship libraries, just not header files. That could just be because their software uses it.)

My proposal for (1) is to default to unixODBC everywhere and have a flag for those that want a custom build. Is that useful to anyone? It would mean you couldn't just use pip install, though. That seems like a real show stopper for a ton of people. Is there a way to pass flags in pip that I missed?

I'm hoping the flag above would also work for macOS and binary wheels. It's not possible to build a single binary that works with both unixODBC and iODBC, so I have to pick one if I'm going to provide binaries. And on macOS, I don't think everyone has Xcode configured. The binaries seem pretty popular. Would the same flag work here also? If you want a macOS iODBC build, you have to download the code and build it yourself, adding a flag to setup.cfg or the command line.

It seems like a can of worms, but I get that iODBC users want a solution.

Is there a better approach?

@timhaynesopenlink

timhaynesopenlink commented Aug 21, 2018

Hi,

High Sierra here says:

zsh, psammite 10:50am tim/ % pkgutil --file-info /usr/lib/libiodbc.dylib
volume: /
path: /usr/lib/libiodbc.dylib

pkgid: com.apple.pkg.Core
pkg-version: 10.13.2.1.1.1512157600
install-time: 1515012284
uid: 0
gid: 0
mode: 755

So it sounds to me like iODBC is the default system-wide reliable option on macOS, while Linux distros tend to default to unixODBC. C'est la vie.

I'm not an expert on what's possible in setup.py, but how about:
a) implementing conditional logic within setup.py: if iodbc-config exists, use it for the includes and libs; else check for unixODBC similarly; else fall back to some default assumptions;
b) optionally adding a command-line argument to force the issue (e.g. in case both DMs are installed);
c) building your binaries with unixODBC for Linux and iODBC for macOS, which should be easier with the above two points?

I think this addresses both linux and macOS, both vanilla default installations and cluttered dev environments. Are people who want custom local builds really going to worry about using pip?

HTH

@gordthompson
Collaborator

@timhaynesopenlink re: "build your binaries with unixODBC for linux and iODBC for macOS"

That makes sense to me if my understanding is correct that macOS ships with the iODBC binaries (much like how the Windows OS ships with its own ODBC driver manager) and therefore the macOS wheels built for iODBC will allow mere mortals to have pip install pyodbc "just work".

@TallTed

TallTed commented Aug 21, 2018

@gordthompson - Yes, macOS has shipped with iODBC dylibs since 10.3.0, which you can verify on Apple's OpenSource listings.

@mkleehammer
Owner

I actually knew the libraries were still there, but the real problem is that the ODBC drivers are not interchangeable between unixODBC and iODBC. Even if libraries are available, if everyone wants to use unixODBC drivers installed with brew, I'd be fielding a lot of questions.

Maybe I should register pyodbc-iodbc as a different pypi project or something.

@gordthompson
Collaborator

gordthompson commented Aug 21, 2018

@mkleehammer re: "Maybe I should register pyodbc-iodbc as a different pypi project or something."

I had the same idea myself, but then I thought "Why would someone want to override (replace?) iODBC with unixODBC if iODBC is already there on macOS?". I suppose one possibility might be that pyodbc has gained a reputation of being tied to unixODBC (which it [sort of] is, at least currently), causing Mac users to assume that they need to jump through some hoops for unixODBC. Perhaps if it "just works" with an iODBC build for macOS then those brew/unixODBC questions will go away (eventually).

In other words, is it really that "everyone wants to use unixODBC", or is it that Mac users believe (reinforced by the Internet Infinite Memory Effect) that they have to?

@v-chojas
Contributor

One of the biggest differences between iODBC and unixODBC is that the former is by default UTF-32 while the latter is UTF-16 for its Unicode APIs, and likewise any ODBC drivers used with them also need to have a compatible ABI. For example, ODBC Driver for SQL Server is UTF-16 only.

@TallTed

TallTed commented Aug 22, 2018

Today, iODBC supports most Unicode encodings -- UCS-2, UTF-8, and UTF-32 (a/k/a UCS-4). UTF-16 support is expected in Q3 2018.

On macOS, native GUI ODBC interaction is only available with iODBC. Native GUI apps and ODBC drivers that want that native Mac interface -- such as Microsoft Excel, and ODBC drivers from many vendors -- are therefore linked to iODBC (and typically, but not always, to the iODBC Frameworks rather than the dylibs).

It is somewhat odd that Microsoft chose to tightly couple their ODBC Driver for SQL Server for macOS to UnixODBC, given that Microsoft Excel and Microsoft Query are bound to iODBC, and that Microsoft is aware that macOS ships with iODBC, but different teams there do not always communicate well with each other. At the same time, it is worth noting that multiple ODBC driver vendors (including my own employer, OpenLink Software) offer ODBC Drivers for SQL Server that are bound to iODBC.

For a more detailed Driver Manager comparison, you can check out this Gsheet.

@gordthompson
Collaborator

@v-chojas Can "ODBC Driver x for SQL Server" work with pyodbc/iODBC, perhaps by using setencoding as is sometimes required for other drivers with pyodbc/unixODBC?

@v-chojas
Contributor

v-chojas commented Aug 22, 2018

@gordthompson No. The driver is UTF-16 only, and iODBC is UTF-32 by default. pyODBC settings can't change that since they only control how it interacts with the DM; the problem here is an incompatible DM-driver interface.

unixODBC is chosen because it is extremely close in compatibility to Windows' DM. ODBC Driver for SQL Server itself calls back into the DM for some functionality (e.g. BCP).

@gordthompson
Collaborator

gordthompson commented Aug 23, 2018

Okay, so it sounds like building macOS wheels with iODBC won't provide a "just works" solution for msodbcsql. At least not until iODBC adds UTF-16 support and macOS starts shipping those binaries as part of the base install (or perhaps pushes them out in an update to existing macOS users).

For users who just want basic connectivity to SQL Server, is FreeTDS a viable option? In other words, is installing FreeTDS_ODBC with iODBC and SSL support (the latter for Azure SQL) on macOS less of a hassle than installing unixODBC to make msodbcsql happy?
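For concreteness, registering FreeTDS with a driver manager is just two ini entries; both iODBC and unixODBC read the same basic layout. A minimal sketch (the library path, DSN name, and server below are illustrative, not from this thread):

```ini
; odbcinst.ini -- register the FreeTDS ODBC driver
; (library path varies by platform and package)
[FreeTDS]
Description = FreeTDS driver for SQL Server
Driver      = /usr/local/lib/libtdsodbc.so

; odbc.ini -- a DSN that uses it (hypothetical server)
[sqlserver_test]
Driver      = FreeTDS
Server      = sqlserver.example.com
Port        = 1433
TDS_Version = 7.4
```

With that in place, a connection string like "DSN=sqlserver_test;UID=...;PWD=..." should work through either driver manager, subject to the encoding caveats discussed above.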

@PhilDeDakar
Author

@timhaynesopenlink
Thanks for your help.
I made the changes you proposed and ran the test (python t01.py).
I get the error message: Erreur de segmentation (core dumped), i.e. a segmentation fault.
No error message in /var/log.
-> iodbctest "DRIVER={HFSQL};Server Name=127.0.0.1;Server Port=4900;Database=test_db;UID=admin;PWD=a99" : test ok
-> iodbctest "DSN=mytest;UID=admin;PWD=a99" : test ok
-> iodbctest "DRIVER=/home/phil/odbc20/wd200hfo64.so;Server Name=127.0.0.1;Server Port=4900;Database=test_db;UID=admin;PWD=a99" : test ok
-> iodbcadm-gtk DSN=mytest : test ok
I tried v20, v22 and v23 of my HFSQL ODBC driver and get the same message.

iodbc-config --cflags -> -I/usr/include/iodbc
iodbc-config --libs -> -L/usr/lib/x86_64-linux-gnu -liodbc -liodbcinst

@mkleehammer
Owner

mkleehammer commented Sep 3, 2018

It is not really possible to build a single pyodbc that works with both:

  • The driver managers are not ABI compatible, so you must either build against only one or build against both and dynamically choose one at runtime which is far too complicated.
  • The drivers have the same problem, so drivers are always built against unixODBC or iODBC.
  • Which driver manager a user wants is generally determined by the driver they want to use, not the OS. On macOS, they may want to use the prebuilt MySQL driver, which seems to be built only against unixODBC.

I think the best solution might be to use different namespaces to make this decision more visible to users rather than hiding it underneath a single namespace like "pyodbc".

What about this design? @gordthompson @v-chojas @TallTed

The pyodbc namespace stays, but it only contains constants and other things common to all implementations. Three other subpackages are created which must be used to get the connect function:

from pyodbc import SQL_BIT
from pyodbc.ms import connect
from pyodbc.unixodbc import connect
from pyodbc.iodbc import connect

Each subpackage works by attempting to import a different binary. For example, the unixodbc package file might look like this:

try:
    import _pyodbc_unixodbc
    connect = _pyodbc_unixodbc.connect
except ImportError:
    raise ImportError("pyodbc.unixodbc is not installed")

To keep things simple, on Windows we'll bundle the _pyodbc_ms extension in the pyodbc installer.

c:\> pip install pyodbc

On other operating systems, we simply require that you pip install "pyodbc-unixodbc" and "pyodbc-iodbc". On macOS, we'd provide binary wheels for both in addition to the source.

$ pip install pyodbc pyodbc-unixodbc

Note that these are global packages and don't try to install under the pyodbc package. The last time I worked with packages that did tricky things, they always caused headaches for exe builders, etc.

@gordthompson
Collaborator

Sounds reasonable to me. I hope that it won't cause too many headaches for existing users. For example, if the current "recipe" is

import pyodbc
cnxn = pyodbc.connect("DSN=foo")

would the new approach require

import pyodbc
from pyodbc.unixodbc import connect
cnxn = connect("DSN=foo")

and if so, might it be possible to do something like

from pyodbc.unixodbc import connect
pyodbc.connect = connect

so that

cnxn = pyodbc.connect("DSN=foo")

would still work?

Also, a minor suggestion: Perhaps pyodbc.windows or pyodbc.win instead of pyodbc.ms. I'm thinking that some people may associate "ms" with "MS SQL Server" and think that it's some kind of special setup they need to use with msodbcsql regardless of platform. "windows" or "win" has a closer association with the platform itself. Of course, if Windows users can still simply do pip install pyodbc and have it continue to "just work" as before then the name of the subpackage may be invisible to them.

@keitherskine
Collaborator

For the sake of backward-compatibility, I would certainly appreciate being able to continue using:

import pyodbc
cnxn = pyodbc.connect("DSN=foo")

from both Windows and Linux. If I have to change the pyodbc installation process for this, fair enough, but I've got a LOT of code that calls pyodbc.connect() and I really don't want to have to change it all.

@TallTed

TallTed commented Sep 3, 2018

@mkleehammer - The comments from @keitherskine and @gordthompson make sense; especially about using win or windows or maybe mswin or even mdac instead of ms, as akin to iodbc and unixodbc, and about maintaining backward compatibility as much as possible. That said, much of this is beyond my ken; still, please do tag me when/if you tag the others below.

@timhaynesopenlink, @pkleef, @openlink, @OpenLinkSoftware - Please chime in.

@v-chojas
Contributor

v-chojas commented Sep 4, 2018

@mkleehammer I think we need to consider this from the perspective of the common platforms:

Windows: there is only one DM (odbc32.dll), it comes with the OS, and basically all ODBC applications and drivers use it. Thus it makes no sense to distinguish between different DMs and the additional subpackaging/importing is only creating unnecessary complexity and confusion.

Linux and macOS: both iODBC and unixODBC can be encountered. A quick web search suggests unixODBC is somewhat more common (~240k results for unixODBC vs. ~60k for iODBC). It thus makes sense to distinguish between the two, but at what level? Is it expected that an application (i.e. Python) use drivers from both DMs at once?

The fact that unixODBC has been the default DM for pyODBC on non-Windows platforms, and the desire to keep the following unchanged across them...

import pyodbc
conn = pyodbc.connect(...)

...suggests to me the best approach be

  • Nothing changes for pyODBC on Windows. It continues to use the regular DM, and the same name.
  • On non-Windows, pyodbc remains defaulted to unixODBC like before. pyodbc_iodbc (or perhaps pyiodbc?) is the name for using iODBC.

This benefits

  • Existing code (and documentation!) that does import pyodbc will continue to work as before across all platforms.
  • We can still distinguish between unixODBC and iODBC where necessary
  • Applications can use both at once if needed:
import pyodbc
import pyiodbc
conn1 = pyodbc.connect(" ... ")
conn2 = pyiodbc.connect(" ... ")

It is similar to a situation with the msodbcsql driver --- version 13.x package name was just msodbcsql, and since a lot of code using it was dependent on the driver's name ODBC Driver 13 for SQL Server, when the name changed to ODBC Driver 17 for SQL Server, the package also received the new name msodbcsql17 and allowed users to install and use both concurrently.

@mkleehammer
Owner

I suspected moving the connect function would not be popular, but there is something else to think about. I've been wondering whether pyodbc 5 should provide a factory object instead anyway. There are so many configuration options for connections now, particularly encodings, that I thought it might be useful to provide an object where you can set these options and have them applied to all new connections.

Obviously you can do this yourself now, which I do by writing a db module in most of my projects with functions like connect, fetchrow, etc., which all go through a common connection function that applies the correct settings.

However, I did think it could be a good place to provide some "shortcut" functions for common configurations like SQL Server on Windows, PostgreSQL using UTF-8, etc. It could also be useful to have some test functions: you set up the initial connection string, then one or more functions attempt to figure out the correct encodings for you and test things like whether numeric structures work and whether SQLDescribeParam is supported.

Today I do a lot of this under the covers and store it in the CnxnInfo object. (I store them by the hash of the connection string in case there is a password in it.) Using the factory concept would make this more explicit and give us a place to store more and add helpful utilities before connecting.

Obviously it is something we can do in addition to the connect function, so it isn't something we have to discard if it is really that important to keep the connect in one place.
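A rough sketch of how such a factory might look. This is purely illustrative: the class and function names are hypothetical, and a connect callable is injected so the sketch doesn't depend on any driver manager:

```python
class ConnectionFactory(object):
    """Stores connection options (encodings, autocommit, ...) once and
    applies them to every connection it creates."""

    def __init__(self, connect_fn, **defaults):
        # connect_fn is injected (e.g. pyodbc.connect) so this sketch
        # stays independent of any particular driver manager build.
        self.connect_fn = connect_fn
        self.defaults = defaults

    def connect(self, connection_string, **overrides):
        # Per-call keywords win over the stored defaults.
        kwargs = dict(self.defaults)
        kwargs.update(overrides)
        return self.connect_fn(connection_string, **kwargs)


def make_postgres_factory(connect_fn):
    # One possible "shortcut" configuration of the kind mentioned above.
    return ConnectionFactory(connect_fn, encoding='utf-8', autocommit=False)
```

The "test functions" idea would then hang off the same object: the factory holds the connection string and options, and probe methods could refine them before the first real connection.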

Thoughts from everyone?

One other thing: I'd seriously consider changing the function interface in one other way. I'd really prefer that the connection-string keywords pyodbc shouldn't be looking at were separated from the pyodbc keywords like autocommit. I assume that is unpopular too?

@keitherskine
Collaborator

One of pyodbc's great strengths is its ability to connect to most RDBMSs out there. However, I'm guessing that most applications use just one RDBMS at any one time. Hence, it would be convenient to be able to set a "default" RDBMS when pyodbc is installed so that no extra Python code is required when creating a connection.

For example, in an ideal world, it would be good to be able to install pyodbc like this:

pip install pyodbc --default-database mysql --default-driver-manager unixodbc

and thus make a connection in the usual way:

import pyodbc
cnxn = pyodbc.connect('dsn=my_dsn')
# here, the encoding on the connection would be automatically
# set up for mysql, and unixodbc would be used as the driver manager

If a different RDBMS is needed, they could be specified in the code, e.g.

cnxn = pyodbc.postgres.connect('dsn=my_dsn')
cnxn = pyodbc.connect('dsn=my_dsn', rdbms='postgres')

On the question of separating pyodbc-specific parameters from odbc-specific parameters on connect(), I'm guessing this is a reference to the way turbodbc does it, e.g.:

from turbodbc import connect, make_options
options = make_options(autocommit=True)
connect(dsn='my_dsn', turbodbc_options=options)

https://turbodbc.readthedocs.io/en/latest/pages/advanced_usage.html#advanced-usage

On the one hand, I'm kind of ok with this, but then again, they're all just parameters so does it really matter if one parameter is for pyodbc and one parameter is for odbc? Those turbodbc options are essentially just another dictionary object.

@timhaynesopenlink

@keitherskine

cnxn = pyodbc.connect('dsn=my_dsn', rdbms='postgres')

Please, no. ODBC has DSNs already for a reason.

@keitherskine
Collaborator

@timhaynesopenlink Well, if pyodbc can set the encoding based on the provided DSN, all the better, but I'm guessing that wouldn't be easy. If not, then how does pyodbc know what the target RDBMS is, in order to set the encoding correctly? If that can be figured out under the hood by pyodbc, great!

Maybe an "rdbms" parameter is too clunky, but I'm guessing something is needed.

@PhilDeDakar
Author

Hello,
I tested the drivers() method with this code:

import pyodbc
lstdrv = pyodbc.drivers()
print lstdrv

and I got the same error: Erreur de segmentation (segmentation fault).
To fix the error I made the following change in pyodbcmodule.cpp:

    SQLUSMALLINT nDirection = SQL_FETCH_FIRST;
    SQLCHAR      szDriverDesc[256];
    SQLSMALLINT driver_ret;
    SQLCHAR      attr[256];
    SQLSMALLINT attr_ret;

    for (;;)
    {
        Py_BEGIN_ALLOW_THREADS
        /* The original call passed NULL attribute buffers, which iODBC
           appears to dereference (crashing); pass real buffers instead: */
        /* ret = SQLDrivers(henv, nDirection, szDriverDesc, _countof(szDriverDesc), &cbDriverDesc, 0, 0, &cbAttrs); */
          ret = SQLDrivers(henv, nDirection, szDriverDesc, _countof(szDriverDesc), &driver_ret, attr, sizeof(attr), &attr_ret);
        Py_END_ALLOW_THREADS

@v-chojas
Contributor

v-chojas commented Sep 6, 2018

That looks like a bug in iODBC:

https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqldrivers-function

If DriverAttributes is NULL, AttributesLengthPtr will still return the total number of bytes (excluding the null-termination character for character data) available to return in the buffer pointed to by DriverAttributes.

Also, regarding the "ODBC Driver Manager Comparison" linked above --- unixODBC does support UTF-8 encoding for the ANSI API --- it is how msodbcsql can be used through it.

@mkleehammer
Owner

Hmmm... Even if it is a bug, it would be easy to work around with a comment indicating why we did it even though the specification doesn't require it. (Otherwise I might remove it later ;) )

@v-chojas
Contributor

v-chojas commented Sep 6, 2018

iODBC is on GitHub so I suggest it be reported there first.

@PhilDeDakar
Author

Hello,
With iODBC, the connect() method gives the message: Erreur de segmentation (segmentation fault).
The error comes from cnxninfo.cpp, function GetColumnSize:

    if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, sqltype)) &&
        SQL_SUCCEEDED(SQLFetch(hstmt)) &&
        SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))

If I modify it to:

    SQLLEN cbdata;  /* must be a valid pointer; iODBC dereferences it even though the spec allows NULL */
    if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, sqltype)) &&
        SQL_SUCCEEDED(SQLFetch(hstmt)) &&
        SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), &cbdata)))

the code works.

@PhilDeDakar
Author

Now I have this new error:

phil@ubuntu-desktop:~/test-python$ python
Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import pyodbc
db=pyodbc.connect("DSN=mytest;UID=admin;PWD=a99")
print db
<pyodbc.Connection object at 0x7f997f5b1c60>
cur=db.cursor()
cur.tables()
*** stack smashing detected ***: terminated
Abandon (core dumped)
phil@ubuntu-desktop:~/test-python$

@v-chojas
Contributor

v-chojas commented Sep 6, 2018

Appears to be more iODBC bugs... the SQLGetData one is similar to the SQLDrivers one above:

https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlgetdata-function

StrLen_or_IndPtr
[Output] Pointer to the buffer in which to return the length or indicator value. If this is a null pointer, no length or indicator value is returned.

The cur.tables() one might be similar, and I wouldn't be surprised if the same is true of the other catalog functions --- and workarounds might be basically impossible, since a null means a very specific thing to those functions.

https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/arguments-in-catalog-functions

@mkleehammer
Owner

I made a 444-iodbc branch to test building separate pyodbc and pyiodbc projects but I didn't get very far.

  • Since changing the interface wasn't very popular, I've created two subdirectories, each with its own setup.py file.
  • I updated the pgtests file to accept a --library argument to be able to test both. If it works, I'll do the rest with the others (except SS, maybe? we know it is either on Windows or unixODBC).

First, does anyone know how to properly get the iODBC header files? Their macOS docs are old and I couldn't find the installer they mentioned. (Are we sure this is worth the trouble?) It builds against the unixODBC headers, but that won't work since some data sizes differ.

Second, for testing (I don't want to have to pay to license something for this), is this the right driver?

Thanks. Let me know what you all think of this approach. Basically it is as you all suggested: pyodbc stays the same and is either Windows or unixODBC, pyiodbc is a new project that is always iODBC and is not supported on Windows.

@TallTed

TallTed commented Sep 12, 2018

@mkleehammer --

You can download the latest iODBC 3.52.12 SDK, which includes header files, Mac-native administrators (32-bit and 64-bit), test apps, Frameworks, and dylibs, here or here.

We can provide a temporary license to whichever of our drivers you want to test with at no cost; just write to OpenLink Technical Support and reference this thread.

For most flexible testing, we'd recommend a Lite Edition Release 7 ODBC Driver for Virtuoso, MySQL, PostgreSQL, Oracle, Sybase, or Microsoft SQL Server. These drivers are all fat binaries (currently including both 32-bit and 64-bit for Intel) with no external dependencies.

(Express Edition drivers such as the one you linked to are built fat, but depend on the local Java, and have limitations due to the way Java is built for macOS. Express Edition Release 7 requires Java 8 or later, and can only be used by 64-bit ODBC apps. Express Edition Release 6 requires Java 6 or earlier, and can only be used by 32-bit ODBC apps.)

@TallTed

TallTed commented Sep 12, 2018

@PhilDeDakar @v-chojas --

I don't understand the code well enough to accurately relay your reported issues (1) (2). We would appreciate it if you could report them directly, ideally via GitHub issues, but you can raise them on a public iODBC mailing list (for bugs or macOS) or via confidential OpenLink Support Case(s) if you prefer.

xeji added a commit to xeji/nixpkgs that referenced this issue Sep 15, 2018
Build with unixODBC instead of libiodbc, see discussion in
mkleehammer/pyodbc#444
xeji added a commit to NixOS/nixpkgs that referenced this issue Sep 15, 2018
Build with unixODBC instead of libiodbc, see discussion in
mkleehammer/pyodbc#444

(cherry picked from commit 13c500a)
eadwu pushed a commit to eadwu/nixpkgs that referenced this issue Sep 15, 2018
Build with unixODBC instead of libiodbc, see discussion in
mkleehammer/pyodbc#444
@ghost

ghost commented Nov 21, 2018

Any further progress/info? cheers

@afreepenguin

Anything on this?

@v-chojas
Contributor

@afreepenguin Is there a specific reason you need to use iODBC instead of unixODBC?

@afreepenguin

@afreepenguin Is there a specific reason you need to use iODBC instead of unixODBC?

Figured it out: using unixODBC, I just had to link the dylib file in a weird way.
