Import from Subversion 2.0.63; reworked versioning

commit c3f6b46d8a4f73722439b4755ef5df42adff81d9 (0 parents)
@mkleehammer authored
Showing with 11,386 additions and 0 deletions.
  1. +9 −0 .gitignore
  2. +13 −0 MANIFEST.in
  3. +143 −0 README.txt
  4. +165 −0 setup.py
  5. +58 −0 src/buffer.cpp
  6. +55 −0 src/buffer.h
  7. +751 −0 src/connection.cpp
  8. +55 −0 src/connection.h
  9. +2,046 −0 src/cursor.cpp
  10. +113 −0 src/cursor.h
  11. +300 −0 src/errors.cpp
  12. +52 −0 src/errors.h
  13. +573 −0 src/getdata.cpp
  14. +9 −0 src/getdata.h
  15. +746 −0 src/params.cpp
  16. +11 −0 src/params.h
  17. +136 −0 src/pyodbc.h
  18. +100 −0 src/pyodbc.rc
  19. +813 −0 src/pyodbcmodule.cpp
  20. +62 −0 src/pyodbcmodule.h
  21. +14 −0 src/resource.h
  22. +343 −0 src/row.cpp
  23. +34 −0 src/row.h
  24. +50 −0 src/wrapper.h
  25. +648 −0 tests/accesstests.py
  26. +850 −0 tests/dbapi20.py
  27. +43 −0 tests/dbapitests.py
  28. BIN  tests/empty.accdb
  29. BIN  tests/empty.mdb
  30. +422 −0 tests/pgtests.py
  31. +972 −0 tests/sqlservertests.py
  32. +99 −0 tests/testutils.py
  33. +1,166 −0 web/docs.html
  34. +234 −0 web/index.html
  35. +48 −0 web/license.html
  36. +131 −0 web/styles.css
  37. +122 −0 web/tutorial.html
9 .gitignore
@@ -0,0 +1,9 @@
+setup.cfg
+MANIFEST
+build
+dist
+*.pdb
+*.pyc
+*.pyo
+tmp
+web/*.cmd
13 MANIFEST.in
@@ -0,0 +1,13 @@
+include src\*.h
+include src\*.cpp
+include tests\*
+include README.txt
+prune setup.cfg
+
+include web\*
+prune web\*.cmd
+
+# For some reason, I keep getting setup.PY. Probably
+# because I use PATHEXT on Windows.
+prune setup.PY
+include setup.py
143 README.txt
@@ -0,0 +1,143 @@
+
+Overview
+========
+
+This project is a Python database module for ODBC that implements the Python DB API 2.0
+specification.
+
+ homepage: http://sourceforge.net/projects/pyodbc
+ source: http://github.com/mkleehammer/pyodbc
+
+This module requires:
+
+ * Python 2.4 or greater
+ * ODBC 3.0 or greater
+
+On Windows, the easiest way to install is to use the Windows installer program available at
+http://sourceforge.net/projects/pyodbc.
+
+Source can be obtained from the GitHub repository listed above.
+
+To build from source, either check the source out of version control or download a source
+archive and run:
+
+ python setup.py build install
+
+Module Specific Behavior
+========================
+
+General
+-------
+
+* The pyodbc.connect function accepts a single parameter: the ODBC connection string. This
+ string is not read or modified by pyodbc, so consult the ODBC documentation or your ODBC
+ driver's documentation for details. The general format is:
+
+ cnxn = pyodbc.connect('DSN=mydsn;UID=userid;PWD=pwd')
+
+* Connection caching in the ODBC driver manager is automatically enabled.
+
+* Autocommit is not supported. Always call cnxn.commit(), since the DB API specification
+ requires an implicit rollback when a connection is closed with uncommitted changes (see
+ the sketch after this list).
+
+* When a connection is closed, all cursors created from the connection are closed.
+
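For illustration, a minimal sketch of the connect-and-commit pattern described in the list
above (it assumes a DSN named mydsn and a table tmp with columns a and b exist):

    cnxn = pyodbc.connect('DSN=mydsn;UID=userid;PWD=pwd')
    cursor = cnxn.cursor()
    cursor.execute("insert into tmp(a, b) values (?, ?)", 1, 2)
    cnxn.commit()   # required: closing without committing rolls the insert back
    cnxn.close()    # also closes any cursors created from the connection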
+
+Data Types
+----------
+
+* Dates, times, and timestamps use the Python datetime module's date, time, and datetime
+ classes. These classes can be passed directly as parameters and will be returned when
+ querying date/time columns.
+
+* Binary data is passed and returned in Python buffer objects.
+
+* Decimal and numeric columns are passed and returned using the Python 2.4 decimal.Decimal
+ class (see the sketch after this list).
+
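A minimal sketch of the type mapping described in the list above (the columns d and amount
are hypothetical):

    import datetime, decimal
    cursor.execute("insert into tmp(d, amount) values (?, ?)",
                   datetime.date(2008, 9, 1), decimal.Decimal('19.95'))
    row = cursor.execute("select d, amount from tmp").fetchone()
    # row.d comes back as a datetime.date and row.amount as a decimal.Decimal;
    # binary columns take and return Python buffer objects in the same way.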
+
+Convenience Methods
+-------------------
+
+* Cursors are iterable and return Row objects.
+
+ cursor.execute("select a,b from tmp")
+ for row in cursor:
+ print row
+
+
+* The DB API PEP does not specify the return type for Cursor.execute, so pyodbc tries to be
+ maximally convenient:
+
+ 1) If a SELECT is executed, the Cursor itself is returned to allow code like the following:
+
+ for row in cursor.execute("select a,b from tmp"):
+ print row
+
+ 2) If an UPDATE, INSERT, or DELETE statement is issued, the number of rows affected is
+ returned:
+
+ count = cursor.execute("delete from tmp where a in (1,2,3)")
+
+ 3) Otherwise (CREATE TABLE, etc.), None is returned.
+
+
+* An execute method has been added to the Connection class. It creates a Cursor and returns
+ whatever Cursor.execute returns. This allows for the following:
+
+ for row in cnxn.execute("select a,b from tmp"):
+ print row
+
+ or
+
+ rows = cnxn.execute("select * from tmp where a in (1,2,3)").fetchall()
+
+ Since each call creates a new Cursor, only use this when executing a single statement.
+
+
+* Both Cursor.execute and Connection.execute allow parameters to be passed as additional
+ parameters following the query.
+
+ cnxn.execute("select a,b from tmp where a=? or a=?", 1, 2)
+
+ The specification is not entirely clear, but most other drivers require parameters to be
+ passed in a sequence. To ensure compatibility, pyodbc will also accept this format:
+
+ cnxn.execute("select a,b from tmp where a=? or a=?", (1, 2))
+
+
+* Row objects are derived from tuple to match the API specification, but they also support
+ accessing columns by name.
+
+ for row in cnxn.execute("select A,b from tmp"):
+ print row.a, row.b
+
+
+* The following are not supported or are ignored: nextset, setinputsizes, setoutputsize.
+
+
+* Values in Row objects can be replaced, either by name or index. Sometimes it is convenient
+ to "preprocess" values.
+
+ row = cursor.execute("select a,b from tmp").fetchone()
+
+ row.a = calc(row.a)
+ row[1] = calc(row.b)
+
+
+Goals / Design
+==============
+
+* This module should not require any 3rd party modules other than ODBC.
+
+* Only built-in data types should be used where possible.
+
+ a) Reduces the number of libraries to learn.
+
+ b) Reduces the number of modules and libraries to install.
+
+ c) A standard is usually introduced eventually. For example, many earlier database drivers
+ used the mxDate classes. Now that Python 2.3 provides built-in date/time classes,
+ using those third-party modules is more complicated than using the built-ins.
+
+* It should adhere to the DB API specification, but be maximally convenient where possible.
+ The most common usages should be optimized for convenience and speed.
165 setup.py
@@ -0,0 +1,165 @@
+#!/usr/bin/python
+
+import sys, os, re
+from distutils.core import setup, Command
+from distutils.extension import Extension
+from distutils.errors import *
+from os.path import exists, abspath, dirname, join, isdir
+
+OFFICIAL_BUILD = 9999
+
+def main():
+
+ version_str, version = get_version()
+
+ files = [ 'pyodbcmodule.cpp', 'cursor.cpp', 'row.cpp', 'connection.cpp', 'buffer.cpp', 'params.cpp', 'errors.cpp', 'getdata.cpp' ]
+ files = [ join('src', f) for f in files ]
+ libraries = []
+
+ extra_compile_args = None
+ extra_link_args = None
+
+ if os.name == 'nt':
+ # Windows native
+ files.append(join('src', 'pyodbc.rc'))
+ libraries.append('odbc32')
+ extra_compile_args = [ '/W4' ]
+
+ # Add debugging symbols
+ extra_compile_args = [ '/W4', '/Zi', '/Od' ]
+ extra_link_args = [ '/DEBUG' ]
+
+ elif os.environ.get("OS", '').lower().startswith('windows'):
+ # Windows Cygwin (posix on windows)
+ # OS name not windows, but still on Windows
+ libraries.append('odbc32')
+
+ elif sys.platform == 'darwin':
+ # OS/X now ships with iODBC.
+ libraries.append('iodbc')
+
+ else:
+ # Other posix-like: Linux, Solaris, etc.
+ # What is the proper way to detect iODBC, MyODBC, unixODBC, etc.?
+ libraries.append('odbc')
+
+ if exists('MANIFEST'):
+ os.remove('MANIFEST')
+
+ setup (name = "pyodbc",
+ version = version_str,
+ description = "DB API Module for ODBC",
+
+ long_description = ('A Python DB API 2 module for ODBC. This project provides an up-to-date, '
+ 'convenient interface to ODBC using native data types like datetime and decimal.'),
+
+ maintainer = "Michael Kleehammer",
+ maintainer_email = "michael@kleehammer.com",
+
+ ext_modules = [ Extension('pyodbc', files,
+ libraries=libraries,
+ define_macros = [ ('PYODBC_%s' % name, value) for name,value in zip(['MAJOR', 'MINOR', 'MICRO', 'BUILD'], version) ],
+ extra_compile_args=extra_compile_args,
+ extra_link_args=extra_link_args
+ ) ],
+
+ classifiers = [ 'Development Status :: 5 - Production/Stable',
+ 'Intended Audience :: Developers',
+ 'Intended Audience :: System Administrators',
+ 'License :: OSI Approved :: MIT License',
+ 'Operating System :: Microsoft :: Windows',
+ 'Operating System :: POSIX',
+ 'Programming Language :: Python',
+ 'Topic :: Database',
+ ],
+
+ url = 'http://pyodbc.sourceforge.net',
+ download_url = 'http://github.com/pyodbc/pyodbc/tree/master')
+
+
+def get_version():
+ """
+ Returns the version of the product as (description, [major,minor,micro,beta]).
+
+ If the release is official, `beta` will be 9999 (OFFICIAL_BUILD).
+
+ 1. If in a git repository, use the latest tag (git describe).
+ 2. If in an unzipped source directory (from setup.py sdist),
+ read the version from the PKG-INFO file.
+ 3. Use 2.1.0.0 and complain a lot.
+ """
+ # My goal is to (1) provide accurate tags for official releases but (2) not have to manage tags for every test
+ # release.
+ #
+ # Official versions are tagged using 3 numbers: major, minor, micro. A build of a tagged version should produce
+ # the version using just these pieces, such as 2.1.4.
+ #
+ # Unofficial versions are "working towards" the next version. So the next unofficial build after 2.1.4 would be a
+ # beta for 2.1.5. Using 'git describe' we can find out how many changes have been made after 2.1.4 and we'll use
+ # this count as the beta id (beta1, beta2, etc.)
+ #
+ # Since the 4 numbers are put into the Windows DLL, we want to make sure the beta versions sort *after* the
+ # official, so we set the final build number to 9999, but we don't show it.
+
+ name = None # branch/feature name. Should be None for official builds.
+ numbers = None # The 4 integers that make up the version.
+
+ # If this is a source release the version will have already been assigned and be in the PKG-INFO file.
+
+ name, numbers = _get_version_pkginfo()
+
+ # If not a source release, we should be in a git repository. Look for the latest tag.
+
+ if not numbers:
+ name, numbers = _get_version_git()
+
+ if not numbers:
+ print 'WARNING: Unable to determine version. Using 2.1.0.0'
+ name, numbers = '2.1.0-unsupported', [2,1,0,0]
+
+ return name, numbers
+
+
+def _get_version_pkginfo():
+ filename = join(dirname(abspath(__file__)), 'PKG-INFO')
+ if exists(filename):
+ re_ver = re.compile(r'^Version: \s+ (\d+)\.(\d+)\.(\d+) (?: -beta(\d+))?', re.VERBOSE)
+ for line in open(filename):
+ match = re_ver.search(line)
+ if match:
+ name = line.split(':', 1)[1].strip()
+ numbers = [ int(n or 0) for n in match.groups() ]
+ return name, numbers
+
+ return None, None
+
+
+def _get_version_git():
+ n, result = getoutput('git describe --tags')
+ if n:
+ print 'WARNING: git describe failed with: %s %s' % (n, result)
+ return None, None
+
+ match = re.match(r'(\d+).(\d+).(\d+) (?: -(\d+)-g[0-9a-z]+)?', result, re.VERBOSE)
+ if not match:
+ return None, None
+
+ numbers = [ int(n or OFFICIAL_BUILD) for n in match.groups() ]
+ if numbers[-1] == OFFICIAL_BUILD:
+ name = '%s.%s.%s' % tuple(numbers[:3])
+ if numbers[-1] != OFFICIAL_BUILD:
+ # This is a beta of the next micro release, so increment the micro number to reflect this.
+ numbers[-2] += 1
+ name = '%s.%s.%s-beta%s' % tuple(numbers)
+ return name, numbers
+
+
+
+def getoutput(cmd):
+ pipe = os.popen(cmd, 'r')
+ text = pipe.read().rstrip('\n')
+ status = pipe.close() or 0
+ return status, text
+
+if __name__ == '__main__':
+ main()
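To illustrate the versioning scheme implemented by get_version above, two hypothetical
"git describe --tags" outputs and the values they would produce (not taken from an actual
release):

    '2.1.4'             ->  name '2.1.4',        numbers [2, 1, 4, 9999]   # tagged, official
    '2.1.4-6-gabc1234'  ->  name '2.1.5-beta6',  numbers [2, 1, 5, 6]      # 6 commits past the tag
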
58 src/buffer.cpp
@@ -0,0 +1,58 @@
+
+// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+// documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+// permit persons to whom the Software is furnished to do so.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
+// OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+#include "pyodbc.h"
+#include "buffer.h"
+#include "pyodbcmodule.h"
+
+Py_ssize_t
+PyBuffer_GetMemory(PyObject* buffer, const char** pp)
+{
+ PyBufferProcs* procs = buffer->ob_type->tp_as_buffer;
+
+ if (!procs || !PyType_HasFeature(buffer->ob_type, Py_TPFLAGS_HAVE_GETCHARBUFFER))
+ {
+ // Can't access the memory directly because the buffer object doesn't support it.
+ return -1;
+ }
+
+ if (procs->bf_getsegcount(buffer, 0) != 1)
+ {
+ // Can't access the memory directly because there is more than one segment.
+ return -1;
+ }
+
+#if PY_VERSION_HEX >= 0x02050000
+ char* pT = 0;
+#else
+ const char* pT = 0;
+#endif
+ Py_ssize_t cb = procs->bf_getcharbuffer(buffer, 0, &pT);
+
+ if (pp)
+ *pp = pT;
+
+ return cb;
+}
+
+Py_ssize_t
+PyBuffer_Size(PyObject* self)
+{
+ if (!PyBuffer_Check(self))
+ {
+ PyErr_SetString(PyExc_TypeError, "Not a buffer!");
+ return 0;
+ }
+
+ Py_ssize_t total_len = 0;
+ self->ob_type->tp_as_buffer->bf_getsegcount(self, &total_len);
+ return total_len;
+}
55 src/buffer.h
@@ -0,0 +1,55 @@
+
+// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+// documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+// permit persons to whom the Software is furnished to do so.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
+// OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+#ifndef _BUFFER_H
+#define _BUFFER_H
+
+// If the buffer object has a single, accessible segment, returns the length of the buffer. If 'pp' is not NULL, the
+// address of the segment is also returned. If there is more than one segment or if it cannot be accessed, -1 is
+// returned and 'pp' is not modified.
+Py_ssize_t
+PyBuffer_GetMemory(PyObject* buffer, const char** pp);
+
+// Returns the size of a Python buffer.
+//
+// If an error occurs, zero is returned, but zero is a valid buffer size (I guess), so use PyErr_Occurred to determine
+// if it represents a failure.
+Py_ssize_t
+PyBuffer_Size(PyObject* self);
+
+
+class BufferSegmentIterator
+{
+ PyObject* pBuffer;
+ Py_ssize_t iSegment;
+ Py_ssize_t cSegments;
+
+public:
+ BufferSegmentIterator(PyObject* _pBuffer)
+ {
+ pBuffer = _pBuffer;
+ PyBufferProcs* procs = pBuffer->ob_type->tp_as_buffer;
+ iSegment = 0;
+ cSegments = procs->bf_getsegcount(pBuffer, 0);
+ }
+
+ bool Next(byte*& pb, SQLLEN &cb)
+ {
+ if (iSegment >= cSegments)
+ return false;
+
+ PyBufferProcs* procs = pBuffer->ob_type->tp_as_buffer;
+ cb = procs->bf_getreadbuffer(pBuffer, iSegment++, (void**)&pb);
+ return true;
+ }
+};
+
+#endif
751 src/connection.cpp
@@ -0,0 +1,751 @@
+
+// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+// documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+// permit persons to whom the Software is furnished to do so.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
+// OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+#include "pyodbc.h"
+#include "connection.h"
+#include "cursor.h"
+#include "pyodbcmodule.h"
+#include "errors.h"
+
+static char connection_doc[] =
+ "Connection objects manage connections to the database.\n"
+ "\n"
+ "Each manages a single ODBC HDBC.";
+
+static Connection*
+Connection_Validate(PyObject* self)
+{
+ Connection* cnxn;
+
+ if (self == 0 || !Connection_Check(self))
+ {
+ PyErr_SetString(PyExc_TypeError, "Connection object required");
+ return 0;
+ }
+
+ cnxn = (Connection*)self;
+
+ if (cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ PyErr_SetString(ProgrammingError, "Attempt to use a closed connection.");
+ return 0;
+ }
+
+ return cnxn;
+}
+
+static bool Connect(PyObject* pConnectString, HDBC hdbc, bool fAnsi)
+{
+ // This should have been checked by the global connect function.
+ I(PyString_Check(pConnectString) || PyUnicode_Check(pConnectString));
+
+ const int cchMax = 600;
+
+ if (PySequence_Length(pConnectString) >= cchMax)
+ {
+ PyErr_SetString(PyExc_TypeError, "connection string too long");
+ return false;
+ }
+
+ // The driver manager determines if the app is a Unicode app based on whether we call SQLDriverConnectA or
+ // SQLDriverConnectW. Some drivers, notably Microsoft Access/Jet, change their behavior based on this, so we try
+ // the Unicode version first. (The Access driver only supports Unicode text, but SQLDescribeCol returns SQL_CHAR
+ // instead of SQL_WCHAR if we connect with the ANSI version. Obviously this causes lots of errors since we believe
+ // what it tells us (SQL_CHAR).)
+
+ // Python supports only UCS-2 and UCS-4, so we shouldn't need to worry about receiving surrogate pairs. However,
+ // Windows does use UTF-16, so it is possible something would be misinterpreted as one. We may need to examine
+ // this more.
+
+ SQLRETURN ret;
+
+ if (!fAnsi)
+ {
+ SQLWCHAR szConnectW[cchMax];
+ if (PyUnicode_Check(pConnectString))
+ {
+ Py_UNICODE* p = PyUnicode_AS_UNICODE(pConnectString);
+ for (int i = 0, c = PyUnicode_GET_SIZE(pConnectString); i <= c; i++)
+ szConnectW[i] = (wchar_t)p[i];
+ }
+ else
+ {
+ const char* p = PyString_AS_STRING(pConnectString);
+ for (int i = 0, c = PyString_GET_SIZE(pConnectString); i <= c; i++)
+ szConnectW[i] = (wchar_t)p[i];
+ }
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLDriverConnectW(hdbc, 0, szConnectW, SQL_NTS, 0, 0, 0, SQL_DRIVER_NOPROMPT);
+ Py_END_ALLOW_THREADS
+ if (SQL_SUCCEEDED(ret))
+ return true;
+
+ // The Unicode function failed. If the error is that the driver doesn't have a Unicode version (IM001), continue
+ // to the ANSI version.
+
+ PyObject* error = GetErrorFromHandle("SQLDriverConnectW", hdbc, SQL_NULL_HANDLE);
+ if (!HasSqlState(error, "IM001"))
+ {
+ PyErr_SetObject(PyObject_Type(error), error);
+ return false;
+ }
+ Py_XDECREF(error);
+ }
+
+ SQLCHAR szConnect[cchMax];
+ if (PyUnicode_Check(pConnectString))
+ {
+ Py_UNICODE* p = PyUnicode_AS_UNICODE(pConnectString);
+ for (int i = 0, c = PyUnicode_GET_SIZE(pConnectString); i <= c; i++)
+ {
+ if (p[i] > 0xFF)
+ {
+ PyErr_SetString(PyExc_TypeError, "A Unicode connection string was supplied but the driver does "
+ "not have a Unicode connect function");
+ return false;
+ }
+ szConnect[i] = (char)p[i];
+ }
+ }
+ else
+ {
+ const char* p = PyString_AS_STRING(pConnectString);
+ memcpy(szConnect, p, PyString_GET_SIZE(pConnectString) + 1);
+ }
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLDriverConnect(hdbc, 0, szConnect, SQL_NTS, 0, 0, 0, SQL_DRIVER_NOPROMPT);
+ Py_END_ALLOW_THREADS
+ if (SQL_SUCCEEDED(ret))
+ return true;
+
+ RaiseErrorFromHandle("SQLDriverConnect", hdbc, SQL_NULL_HANDLE);
+
+ return false;
+}
+
+
+PyObject* Connection_New(PyObject* pConnectString, bool fAutoCommit, bool fAnsi)
+{
+ // pConnectString
+ // A string or unicode object. (This must be checked by the caller.)
+ //
+ // fAnsi
+ // If true, do not attempt a Unicode connection.
+
+ //
+ // Allocate HDBC and connect
+ //
+
+ HDBC hdbc = SQL_NULL_HANDLE;
+ if (!SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc)))
+ return RaiseErrorFromHandle("SQLAllocHandle", SQL_NULL_HANDLE, SQL_NULL_HANDLE);
+
+ if (!Connect(pConnectString, hdbc, fAnsi))
+ {
+ // Connect has already set an exception.
+ SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
+ return 0;
+ }
+
+ //
+ // Connected, so allocate the Connection object.
+ //
+
+ // Set all variables to something valid, so we don't crash in dealloc if this function fails.
+
+ Connection* cnxn = PyObject_NEW(Connection, &ConnectionType);
+
+ if (cnxn == 0)
+ {
+ SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
+ return 0;
+ }
+
+ cnxn->hdbc = hdbc;
+ cnxn->searchescape = 0;
+ cnxn->odbc_major = 3;
+ cnxn->odbc_minor = 50;
+ cnxn->nAutoCommit = fAutoCommit ? SQL_AUTOCOMMIT_ON : SQL_AUTOCOMMIT_OFF;
+ cnxn->supports_describeparam = false;
+ cnxn->datetime_precision = 19; // default: "yyyy-mm-dd hh:mm:ss"
+
+ //
+ // Initialize autocommit mode.
+ //
+
+ // The DB API says we have to default to manual-commit, but ODBC defaults to auto-commit. We also provide a
+ // keyword parameter that allows the user to override the DB API and force us to start in auto-commit (in which
+ // case we don't have to do anything).
+
+ if (fAutoCommit == false && !SQL_SUCCEEDED(SQLSetConnectAttr(cnxn->hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)cnxn->nAutoCommit, SQL_IS_UINTEGER)))
+ {
+ RaiseErrorFromHandle("SQLSetConnnectAttr(SQL_ATTR_AUTOCOMMIT)", cnxn->hdbc, SQL_NULL_HANDLE);
+ Py_DECREF(cnxn);
+ return 0;
+ }
+
+#ifdef TRACE_ALL
+ printf("cnxn.new cnxn=%p hdbc=%d\n", cnxn, cnxn->hdbc);
+#endif
+
+ //
+ // Gather connection-level information we'll need later.
+ //
+
+ // FUTURE: Measure performance here. Consider caching by connection string if necessary.
+
+ char szVer[20];
+ SQLSMALLINT cch = 0;
+ if (SQL_SUCCEEDED(SQLGetInfo(cnxn->hdbc, SQL_DRIVER_ODBC_VER, szVer, _countof(szVer), &cch)))
+ {
+ char* dot = strchr(szVer, '.');
+ if (dot)
+ {
+ *dot = '\0';
+ cnxn->odbc_major=(char)atoi(szVer);
+ cnxn->odbc_minor=(char)atoi(dot + 1);
+ }
+ }
+
+ char szYN[2];
+ if (SQL_SUCCEEDED(SQLGetInfo(cnxn->hdbc, SQL_DESCRIBE_PARAMETER, szYN, _countof(szYN), &cch)))
+ {
+ cnxn->supports_describeparam = szYN[0] == 'Y';
+ }
+
+ // What is the datetime precision? This unfortunately requires a cursor (HSTMT).
+
+ HSTMT hstmt = 0;
+ if (SQL_SUCCEEDED(SQLAllocHandle(SQL_HANDLE_STMT, cnxn->hdbc, &hstmt)))
+ {
+ if (SQL_SUCCEEDED(SQLGetTypeInfo(hstmt, SQL_TYPE_TIMESTAMP)) && SQL_SUCCEEDED(SQLFetch(hstmt)))
+ {
+ SQLINTEGER columnsize;
+ if (SQL_SUCCEEDED(SQLGetData(hstmt, 3, SQL_INTEGER, &columnsize, sizeof(columnsize), 0)))
+ {
+ cnxn->datetime_precision = columnsize;
+ }
+ }
+
+ SQLFreeStmt(hstmt, SQL_CLOSE);
+ }
+
+ return reinterpret_cast<PyObject*>(cnxn);
+}
+
+
+static int
+Connection_clear(Connection* cnxn)
+{
+ // Internal method for closing the connection. (Not called close so it isn't confused with the external close
+ // method.)
+
+ if (cnxn->hdbc != SQL_NULL_HANDLE)
+ {
+ // REVIEW: Release threads? (But make sure you zero out hdbc *first*!)
+
+#ifdef TRACE_ALL
+ printf("cnxn.clear cnxn=%p hdbc=%d\n", cnxn, cnxn->hdbc);
+#endif
+
+ if (cnxn->nAutoCommit == SQL_AUTOCOMMIT_OFF)
+ SQLEndTran(SQL_HANDLE_DBC, cnxn->hdbc, SQL_ROLLBACK);
+ SQLDisconnect(cnxn->hdbc);
+ SQLFreeHandle(SQL_HANDLE_DBC, cnxn->hdbc);
+ cnxn->hdbc = SQL_NULL_HANDLE;
+ }
+
+ Py_XDECREF(cnxn->searchescape);
+ cnxn->searchescape = 0;
+
+ return 0;
+}
+
+static void
+Connection_dealloc(PyObject* self)
+{
+ Connection* cnxn = (Connection*)self;
+ Connection_clear(cnxn);
+ PyObject_Del(self);
+}
+
+static char close_doc[] =
+ "Close the connection now (rather than whenever __del__ is called).\n"
+ "\n"
+ "The connection will be unusable from this point forward and a ProgrammingError\n"
+ "will be raised if any operation is attempted with the connection. The same\n"
+ "applies to all cursor objects trying to use the connection.\n"
+ "\n"
+ "Note that closing a connection without committing the changes first will cause\n"
+ "an implicit rollback to be performed.";
+
+static PyObject*
+Connection_close(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return 0;
+
+ Connection_clear(cnxn);
+
+ Py_RETURN_NONE;
+}
+
+static PyObject*
+Connection_cursor(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return 0;
+
+ return (PyObject*)Cursor_New(cnxn);
+}
+
+static PyObject*
+Connection_execute(PyObject* self, PyObject* args)
+{
+ PyObject* result = 0;
+
+ Cursor* cursor;
+ Connection* cnxn = Connection_Validate(self);
+
+ if (!cnxn)
+ return 0;
+
+ cursor = Cursor_New(cnxn);
+ if (!cursor)
+ return 0;
+
+ result = Cursor_execute((PyObject*)cursor, args);
+
+ Py_DECREF((PyObject*)cursor);
+
+ return result;
+}
+
+enum
+{
+ GI_YESNO,
+ GI_STRING,
+ GI_UINTEGER,
+ GI_USMALLINT,
+};
+
+struct GetInfoType
+{
+ SQLUSMALLINT infotype;
+ int datatype; // GI_XXX
+};
+
+static const GetInfoType aInfoTypes[] = {
+ { SQL_ACCESSIBLE_PROCEDURES, GI_YESNO },
+ { SQL_ACCESSIBLE_TABLES, GI_YESNO },
+ { SQL_ACTIVE_ENVIRONMENTS, GI_USMALLINT },
+ { SQL_AGGREGATE_FUNCTIONS, GI_UINTEGER },
+ { SQL_ALTER_DOMAIN, GI_UINTEGER },
+ { SQL_ALTER_TABLE, GI_UINTEGER },
+ { SQL_ASYNC_MODE, GI_UINTEGER },
+ { SQL_BATCH_ROW_COUNT, GI_UINTEGER },
+ { SQL_BATCH_SUPPORT, GI_UINTEGER },
+ { SQL_BOOKMARK_PERSISTENCE, GI_UINTEGER },
+ { SQL_CATALOG_LOCATION, GI_USMALLINT },
+ { SQL_CATALOG_NAME, GI_YESNO },
+ { SQL_CATALOG_NAME_SEPARATOR, GI_STRING },
+ { SQL_CATALOG_TERM, GI_STRING },
+ { SQL_CATALOG_USAGE, GI_UINTEGER },
+ { SQL_COLLATION_SEQ, GI_STRING },
+ { SQL_COLUMN_ALIAS, GI_YESNO },
+ { SQL_CONCAT_NULL_BEHAVIOR, GI_USMALLINT },
+ { SQL_CONVERT_FUNCTIONS, GI_UINTEGER },
+ { SQL_CONVERT_VARCHAR, GI_UINTEGER },
+ { SQL_CORRELATION_NAME, GI_USMALLINT },
+ { SQL_CREATE_ASSERTION, GI_UINTEGER },
+ { SQL_CREATE_CHARACTER_SET, GI_UINTEGER },
+ { SQL_CREATE_COLLATION, GI_UINTEGER },
+ { SQL_CREATE_DOMAIN, GI_UINTEGER },
+ { SQL_CREATE_SCHEMA, GI_UINTEGER },
+ { SQL_CREATE_TABLE, GI_UINTEGER },
+ { SQL_CREATE_TRANSLATION, GI_UINTEGER },
+ { SQL_CREATE_VIEW, GI_UINTEGER },
+ { SQL_CURSOR_COMMIT_BEHAVIOR, GI_USMALLINT },
+ { SQL_CURSOR_ROLLBACK_BEHAVIOR, GI_USMALLINT },
+ { SQL_DATABASE_NAME, GI_STRING },
+ { SQL_DATA_SOURCE_NAME, GI_STRING },
+ { SQL_DATA_SOURCE_READ_ONLY, GI_YESNO },
+ { SQL_DATETIME_LITERALS, GI_UINTEGER },
+ { SQL_DBMS_NAME, GI_STRING },
+ { SQL_DBMS_VER, GI_STRING },
+ { SQL_DDL_INDEX, GI_UINTEGER },
+ { SQL_DEFAULT_TXN_ISOLATION, GI_UINTEGER },
+ { SQL_DESCRIBE_PARAMETER, GI_YESNO },
+ { SQL_DM_VER, GI_STRING },
+ { SQL_DRIVER_NAME, GI_STRING },
+ { SQL_DRIVER_ODBC_VER, GI_STRING },
+ { SQL_DRIVER_VER, GI_STRING },
+ { SQL_DROP_ASSERTION, GI_UINTEGER },
+ { SQL_DROP_CHARACTER_SET, GI_UINTEGER },
+ { SQL_DROP_COLLATION, GI_UINTEGER },
+ { SQL_DROP_DOMAIN, GI_UINTEGER },
+ { SQL_DROP_SCHEMA, GI_UINTEGER },
+ { SQL_DROP_TABLE, GI_UINTEGER },
+ { SQL_DROP_TRANSLATION, GI_UINTEGER },
+ { SQL_DROP_VIEW, GI_UINTEGER },
+ { SQL_DYNAMIC_CURSOR_ATTRIBUTES1, GI_UINTEGER },
+ { SQL_DYNAMIC_CURSOR_ATTRIBUTES2, GI_UINTEGER },
+ { SQL_EXPRESSIONS_IN_ORDERBY, GI_YESNO },
+ { SQL_FILE_USAGE, GI_USMALLINT },
+ { SQL_FORWARD_ONLY_CURSOR_ATTRIBUTES1, GI_UINTEGER },
+ { SQL_FORWARD_ONLY_CURSOR_ATTRIBUTES2, GI_UINTEGER },
+ { SQL_GETDATA_EXTENSIONS, GI_UINTEGER },
+ { SQL_GROUP_BY, GI_USMALLINT },
+ { SQL_IDENTIFIER_CASE, GI_USMALLINT },
+ { SQL_IDENTIFIER_QUOTE_CHAR, GI_STRING },
+ { SQL_INDEX_KEYWORDS, GI_UINTEGER },
+ { SQL_INFO_SCHEMA_VIEWS, GI_UINTEGER },
+ { SQL_INSERT_STATEMENT, GI_UINTEGER },
+ { SQL_INTEGRITY, GI_YESNO },
+ { SQL_KEYSET_CURSOR_ATTRIBUTES1, GI_UINTEGER },
+ { SQL_KEYSET_CURSOR_ATTRIBUTES2, GI_UINTEGER },
+ { SQL_KEYWORDS, GI_STRING },
+ { SQL_LIKE_ESCAPE_CLAUSE, GI_YESNO },
+ { SQL_MAX_ASYNC_CONCURRENT_STATEMENTS, GI_UINTEGER },
+ { SQL_MAX_BINARY_LITERAL_LEN, GI_UINTEGER },
+ { SQL_MAX_CATALOG_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_CHAR_LITERAL_LEN, GI_UINTEGER },
+ { SQL_MAX_COLUMNS_IN_GROUP_BY, GI_USMALLINT },
+ { SQL_MAX_COLUMNS_IN_INDEX, GI_USMALLINT },
+ { SQL_MAX_COLUMNS_IN_ORDER_BY, GI_USMALLINT },
+ { SQL_MAX_COLUMNS_IN_SELECT, GI_USMALLINT },
+ { SQL_MAX_COLUMNS_IN_TABLE, GI_USMALLINT },
+ { SQL_MAX_COLUMN_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_CONCURRENT_ACTIVITIES, GI_USMALLINT },
+ { SQL_MAX_CURSOR_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_DRIVER_CONNECTIONS, GI_USMALLINT },
+ { SQL_MAX_IDENTIFIER_LEN, GI_USMALLINT },
+ { SQL_MAX_INDEX_SIZE, GI_UINTEGER },
+ { SQL_MAX_PROCEDURE_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_ROW_SIZE, GI_UINTEGER },
+ { SQL_MAX_ROW_SIZE_INCLUDES_LONG, GI_YESNO },
+ { SQL_MAX_SCHEMA_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_STATEMENT_LEN, GI_UINTEGER },
+ { SQL_MAX_TABLES_IN_SELECT, GI_USMALLINT },
+ { SQL_MAX_TABLE_NAME_LEN, GI_USMALLINT },
+ { SQL_MAX_USER_NAME_LEN, GI_USMALLINT },
+ { SQL_MULTIPLE_ACTIVE_TXN, GI_YESNO },
+ { SQL_MULT_RESULT_SETS, GI_YESNO },
+ { SQL_NEED_LONG_DATA_LEN, GI_YESNO },
+ { SQL_NON_NULLABLE_COLUMNS, GI_USMALLINT },
+ { SQL_NULL_COLLATION, GI_USMALLINT },
+ { SQL_NUMERIC_FUNCTIONS, GI_UINTEGER },
+ { SQL_ODBC_INTERFACE_CONFORMANCE, GI_UINTEGER },
+ { SQL_ODBC_VER, GI_STRING },
+ { SQL_OJ_CAPABILITIES, GI_UINTEGER },
+ { SQL_ORDER_BY_COLUMNS_IN_SELECT, GI_YESNO },
+ { SQL_PARAM_ARRAY_ROW_COUNTS, GI_UINTEGER },
+ { SQL_PARAM_ARRAY_SELECTS, GI_UINTEGER },
+ { SQL_PROCEDURES, GI_YESNO },
+ { SQL_PROCEDURE_TERM, GI_STRING },
+ { SQL_QUOTED_IDENTIFIER_CASE, GI_USMALLINT },
+ { SQL_ROW_UPDATES, GI_YESNO },
+ { SQL_SCHEMA_TERM, GI_STRING },
+ { SQL_SCHEMA_USAGE, GI_UINTEGER },
+ { SQL_SCROLL_OPTIONS, GI_UINTEGER },
+ { SQL_SEARCH_PATTERN_ESCAPE, GI_STRING },
+ { SQL_SERVER_NAME, GI_STRING },
+ { SQL_SPECIAL_CHARACTERS, GI_STRING },
+ { SQL_SQL92_DATETIME_FUNCTIONS, GI_UINTEGER },
+ { SQL_SQL92_FOREIGN_KEY_DELETE_RULE, GI_UINTEGER },
+ { SQL_SQL92_FOREIGN_KEY_UPDATE_RULE, GI_UINTEGER },
+ { SQL_SQL92_GRANT, GI_UINTEGER },
+ { SQL_SQL92_NUMERIC_VALUE_FUNCTIONS, GI_UINTEGER },
+ { SQL_SQL92_PREDICATES, GI_UINTEGER },
+ { SQL_SQL92_RELATIONAL_JOIN_OPERATORS, GI_UINTEGER },
+ { SQL_SQL92_REVOKE, GI_UINTEGER },
+ { SQL_SQL92_ROW_VALUE_CONSTRUCTOR, GI_UINTEGER },
+ { SQL_SQL92_STRING_FUNCTIONS, GI_UINTEGER },
+ { SQL_SQL92_VALUE_EXPRESSIONS, GI_UINTEGER },
+ { SQL_SQL_CONFORMANCE, GI_UINTEGER },
+ { SQL_STANDARD_CLI_CONFORMANCE, GI_UINTEGER },
+ { SQL_STATIC_CURSOR_ATTRIBUTES1, GI_UINTEGER },
+ { SQL_STATIC_CURSOR_ATTRIBUTES2, GI_UINTEGER },
+ { SQL_STRING_FUNCTIONS, GI_UINTEGER },
+ { SQL_SUBQUERIES, GI_UINTEGER },
+ { SQL_SYSTEM_FUNCTIONS, GI_UINTEGER },
+ { SQL_TABLE_TERM, GI_STRING },
+ { SQL_TIMEDATE_ADD_INTERVALS, GI_UINTEGER },
+ { SQL_TIMEDATE_DIFF_INTERVALS, GI_UINTEGER },
+ { SQL_TIMEDATE_FUNCTIONS, GI_UINTEGER },
+ { SQL_TXN_CAPABLE, GI_USMALLINT },
+ { SQL_TXN_ISOLATION_OPTION, GI_UINTEGER },
+ { SQL_UNION, GI_UINTEGER },
+ { SQL_USER_NAME, GI_STRING },
+ { SQL_XOPEN_CLI_YEAR, GI_STRING },
+};
+
+static PyObject*
+Connection_getinfo(PyObject* self, PyObject* args)
+{
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return 0;
+
+ SQLUSMALLINT infotype;
+ if (!PyArg_ParseTuple(args, "l", &infotype))
+ return 0;
+
+ unsigned int i = 0;
+ for (; i < _countof(aInfoTypes); i++)
+ {
+ if (aInfoTypes[i].infotype == infotype)
+ break;
+ }
+
+ if (i == _countof(aInfoTypes))
+ return RaiseErrorV(0, ProgrammingError, "Invalid getinfo value: %d", infotype);
+
+ char szBuffer[0x1000];
+ SQLSMALLINT cch = 0;
+
+ if (!SQL_SUCCEEDED(SQLGetInfo(cnxn->hdbc, infotype, szBuffer, sizeof(szBuffer), &cch)))
+ {
+ RaiseErrorFromHandle("SQLGetInfo", cnxn->hdbc, SQL_NULL_HANDLE);
+ return 0;
+ }
+
+ PyObject* result = 0;
+
+ switch (aInfoTypes[i].datatype)
+ {
+ case GI_YESNO:
+ result = (szBuffer[0] == 'Y') ? Py_True : Py_False;
+ Py_INCREF(result);
+ break;
+
+ case GI_STRING:
+ result = PyString_FromStringAndSize(szBuffer, (Py_ssize_t)cch);
+ break;
+
+ case GI_UINTEGER:
+ {
+ SQLUINTEGER n = *(SQLUINTEGER*)szBuffer; // Does this work on PPC or do we need a union?
+ if (n <= (SQLUINTEGER)PyInt_GetMax())
+ result = PyInt_FromLong((long)n);
+ else
+ result = PyLong_FromUnsignedLong(n);
+ break;
+ }
+
+ case GI_USMALLINT:
+ result = PyInt_FromLong(*(SQLUSMALLINT*)szBuffer);
+ break;
+ }
+
+ return result;
+}
+
+
+static PyObject*
+Connection_endtrans(PyObject* self, PyObject* args, SQLSMALLINT type)
+{
+ UNUSED(args);
+
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return 0;
+
+#ifdef TRACE_ALL
+ printf("%s: cnxn=%p hdbc=%d\n", (type == SQL_COMMIT) ? "commit" : "rollback", cnxn, cnxn->hdbc);
+#endif
+
+ if (!SQL_SUCCEEDED(SQLEndTran(SQL_HANDLE_DBC, cnxn->hdbc, type)))
+ {
+ RaiseErrorFromHandle("SQLEndTran", cnxn->hdbc, SQL_NULL_HANDLE);
+ return 0;
+ }
+
+ Py_RETURN_NONE;
+}
+
+static PyObject*
+Connection_commit(PyObject* self, PyObject* args)
+{
+ return Connection_endtrans(self, args, SQL_COMMIT);
+}
+
+static PyObject*
+Connection_rollback(PyObject* self, PyObject* args)
+{
+ return Connection_endtrans(self, args, SQL_ROLLBACK);
+}
+
+static char cursor_doc[] =
+ "Return a new Cursor Object using the connection.";
+
+static char execute_doc[] =
+ "execute(sql, [params]) --> None | Cursor | count\n" \
+ "\n" \
+ "Creates a new Cursor object, calls its execute method, and returns its return\n" \
+ "value. See Cursor.execute for a description of the parameter formats and\n" \
+ "return values.\n" \
+ "\n" \
+ "This is a convenience method that is not part of the DB API. Since a new\n" \
+ "Cursor is allocated by each call, this should not be used if more than one SQL\n" \
+ "statement needs to be executed.";
+
+static char commit_doc[] =
+ "Commit any pending transaction to the database.";
+
+static char rollback_doc[] =
+ "Causes the the database to roll back to the start of any pending transaction.";
+
+static char getinfo_doc[] =
+ "getinfo(type) --> str | int | bool\n"
+ "\n"
+ "Calls SQLGetInfo, passing `type`, and returns the result formatted as a Python object.";
+
+
+PyObject*
+Connection_getautocommit(PyObject* self, void* closure)
+{
+ UNUSED(closure);
+
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return 0;
+
+ PyObject* result = (cnxn->nAutoCommit == SQL_AUTOCOMMIT_ON) ? Py_True : Py_False;
+ Py_INCREF(result);
+ return result;
+}
+
+static int
+Connection_setautocommit(PyObject* self, PyObject* value, void* closure)
+{
+ UNUSED(closure);
+
+ Connection* cnxn = Connection_Validate(self);
+ if (!cnxn)
+ return -1;
+
+ if (value == 0)
+ {
+ PyErr_SetString(PyExc_TypeError, "Cannot delete the autocommit attribute.");
+ return -1;
+ }
+
+ int nAutoCommit = PyObject_IsTrue(value) ? SQL_AUTOCOMMIT_ON : SQL_AUTOCOMMIT_OFF;
+ if (!SQL_SUCCEEDED(SQLSetConnectAttr(cnxn->hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)nAutoCommit, SQL_IS_UINTEGER)))
+ {
+ RaiseErrorFromHandle("SQLSetConnectAttr", cnxn->hdbc, SQL_NULL_HANDLE);
+ return -1;
+ }
+
+ cnxn->nAutoCommit = nAutoCommit;
+
+ return 0;
+}
+
+
+PyObject*
+Connection_getsearchescape(Connection* self, void* closure)
+{
+ UNUSED(closure);
+
+ if (!self->searchescape)
+ {
+ char sz[8] = { 0 };
+ SQLSMALLINT cch = 0;
+
+ if (!SQL_SUCCEEDED(SQLGetInfo(self->hdbc, SQL_SEARCH_PATTERN_ESCAPE, &sz, _countof(sz), &cch)))
+ return RaiseErrorFromHandle("SQLGetInfo", self->hdbc, SQL_NULL_HANDLE);
+
+ self->searchescape = PyString_FromStringAndSize(sz, (Py_ssize_t)cch);
+ }
+
+ Py_INCREF(self->searchescape);
+ return self->searchescape;
+}
+
+static struct PyMethodDef Connection_methods[] =
+{
+ { "cursor", (PyCFunction)Connection_cursor, METH_NOARGS, cursor_doc },
+ { "close", (PyCFunction)Connection_close, METH_NOARGS, close_doc },
+ { "execute", (PyCFunction)Connection_execute, METH_VARARGS, execute_doc },
+ { "commit", (PyCFunction)Connection_commit, METH_NOARGS, commit_doc },
+ { "rollback", (PyCFunction)Connection_rollback, METH_NOARGS, rollback_doc },
+ { "getinfo", (PyCFunction)Connection_getinfo, METH_VARARGS, getinfo_doc },
+ { 0, 0, 0, 0 }
+};
+
+static PyGetSetDef Connection_getseters[] = {
+ { "searchescape", (getter)Connection_getsearchescape, 0,
+ "The ODBC search pattern escape character, as returned by\n"
+ "SQLGetInfo(SQL_SEARCH_PATTERN_ESCAPE). These are driver specific.", 0 },
+ { "autocommit", Connection_getautocommit, Connection_setautocommit,
+ "Returns True if the connection is in autocommit mode; False otherwise.", 0 },
+ { 0 }
+};
+
+PyTypeObject ConnectionType =
+{
+ PyObject_HEAD_INIT(0)
+ 0, // ob_size
+ "pyodbc.Connection", // tp_name
+ sizeof(Connection), // tp_basicsize
+ 0, // tp_itemsize
+ (destructor)Connection_dealloc, // destructor tp_dealloc
+ 0, // tp_print
+ 0, // tp_getattr
+ 0, // tp_setattr
+ 0, // tp_compare
+ 0, // tp_repr
+ 0, // tp_as_number
+ 0, // tp_as_sequence
+ 0, // tp_as_mapping
+ 0, // tp_hash
+ 0, // tp_call
+ 0, // tp_str
+ 0, // tp_getattro
+ 0, // tp_setattro
+ 0, // tp_as_buffer
+ Py_TPFLAGS_DEFAULT, // tp_flags
+ connection_doc, // tp_doc
+ 0, // tp_traverse
+ 0, // tp_clear
+ 0, // tp_richcompare
+ 0, // tp_weaklistoffset
+ 0, // tp_iter
+ 0, // tp_iternext
+ Connection_methods, // tp_methods
+ 0, // tp_members
+ Connection_getseters, // tp_getset
+ 0, // tp_base
+ 0, // tp_dict
+ 0, // tp_descr_get
+ 0, // tp_descr_set
+ 0, // tp_dictoffset
+ 0, // tp_init
+ 0, // tp_alloc
+ 0, // tp_new
+ 0, // tp_free
+ 0, // tp_is_gc
+ 0, // tp_bases
+ 0, // tp_mro
+ 0, // tp_cache
+ 0, // tp_subclasses
+ 0, // tp_weaklist
+};
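Taken together, the Connection methods and attributes defined in this file surface in Python
roughly as in the sketch below (the values shown are illustrative, and it assumes the SQL_*
info constants such as SQL_DBMS_NAME are exposed as pyodbc module attributes):

    cnxn = pyodbc.connect('DSN=mydsn;UID=userid;PWD=pwd')
    print cnxn.autocommit                      # False: DB API manual-commit is the default
    print cnxn.getinfo(pyodbc.SQL_DBMS_NAME)   # driver-reported DBMS name, returned as a string
    print cnxn.searchescape                    # driver-specific search pattern escape character
    cnxn.rollback()                            # roll back any pending transaction
    cnxn.close()                               # implicit rollback if changes were not committed
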
55 src/connection.h
@@ -0,0 +1,55 @@
+
+/*
+ * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+ * documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+ * WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
+ * OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef CONNECTION_H
+#define CONNECTION_H
+
+struct Cursor;
+
+extern PyTypeObject ConnectionType;
+
+struct Connection
+{
+ PyObject_HEAD
+
+ // Set to SQL_NULL_HANDLE when the connection is closed.
+ HDBC hdbc;
+
+ // Will be SQL_AUTOCOMMIT_ON or SQL_AUTOCOMMIT_OFF.
+ int nAutoCommit;
+
+ // The ODBC version the driver supports, from SQLGetInfo(DRIVER_ODBC_VER). This is set after connecting.
+ char odbc_major;
+ char odbc_minor;
+
+ // The escape character from SQLGetInfo. This is not initialized until requested, so this may be zero!
+ PyObject* searchescape;
+
+ // Will be true if SQLDescribeParam is supported. If false, we'll have to guess but the user will not be able
+ // to insert NULLs into binary columns.
+ bool supports_describeparam;
+
+ // The column size of datetime columns, obtained from SQLGetInfo(), used to determine the datetime precision.
+ int datetime_precision;
+};
+
+#define Connection_Check(op) PyObject_TypeCheck(op, &ConnectionType)
+#define Connection_CheckExact(op) ((op)->ob_type == &ConnectionType)
+
+/*
+ * Used by the module's connect function to create new connection objects. If unable to connect to the database, an
+ * exception is set and zero is returned.
+ */
+PyObject* Connection_New(PyObject* pConnectString, bool fAutoCommit, bool fAnsi);
+
+#endif
2,046 src/cursor.cpp
@@ -0,0 +1,2046 @@
+
+// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+// documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+// permit persons to whom the Software is furnished to do so.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+// WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
+// OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+// Note: This project has gone from C++ (when it was ported from pypgdb) to C, back to C++ (where it will stay). If
+// you are making modifications, feel free to move variable declarations from the top of functions to where they are
+// actually used.
+
+#include "pyodbc.h"
+#include "cursor.h"
+#include "pyodbcmodule.h"
+#include "connection.h"
+#include "row.h"
+#include "buffer.h"
+#include "params.h"
+#include "errors.h"
+#include "getdata.h"
+
+enum
+{
+ CURSOR_REQUIRE_CNXN = 0x00000001,
+ CURSOR_REQUIRE_OPEN = 0x00000003, // includes _CNXN
+ CURSOR_REQUIRE_RESULTS = 0x00000007, // includes _OPEN
+ CURSOR_RAISE_ERROR = 0x00000010,
+};
+
+inline bool
+StatementIsValid(Cursor* cursor)
+{
+ return cursor->cnxn != 0 && ((Connection*)cursor->cnxn)->hdbc != SQL_NULL_HANDLE && cursor->hstmt != SQL_NULL_HANDLE;
+}
+
+extern PyTypeObject CursorType;
+
+inline bool
+Cursor_Check(PyObject* o)
+{
+ return o != 0 && o->ob_type == &CursorType;
+}
+
+
+Cursor* Cursor_Validate(PyObject* obj, DWORD flags)
+{
+ // Validates that a PyObject is a Cursor (like Cursor_Check) and optionally some other requirements controlled by
+ // `flags`. If valid and all requirements (from the flags) are met, the cursor is returned, cast to Cursor*.
+ // Otherwise zero is returned.
+ //
+ // Designed to be used at the top of methods to convert the PyObject pointer and perform necessary checks.
+ //
+ // Valid flags are from the CURSOR_ enum above. Note that unless CURSOR_RAISE_ERROR is supplied, an exception
+ // will not be set. (When deallocating, we really don't want an exception.)
+
+ Connection* cnxn = 0;
+ Cursor* cursor = 0;
+
+ if (!Cursor_Check(obj))
+ {
+ if (flags & CURSOR_RAISE_ERROR)
+ PyErr_SetString(ProgrammingError, "Invalid cursor object.");
+ return 0;
+ }
+
+ cursor = (Cursor*)obj;
+ cnxn = (Connection*)cursor->cnxn;
+
+ if (cnxn == 0)
+ {
+ if (flags & CURSOR_RAISE_ERROR)
+ PyErr_SetString(ProgrammingError, "Attempt to use a closed cursor.");
+ return 0;
+ }
+
+ if (IsSet(flags, CURSOR_REQUIRE_OPEN))
+ {
+ if (cursor->hstmt == SQL_NULL_HANDLE)
+ {
+ if (flags & CURSOR_RAISE_ERROR)
+ PyErr_SetString(ProgrammingError, "Attempt to use a closed cursor.");
+ return 0;
+ }
+
+ if (cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ if (flags & CURSOR_RAISE_ERROR)
+ PyErr_SetString(ProgrammingError, "The cursor's connection has been closed.");
+ return 0;
+ }
+ }
+
+ if (IsSet(flags, CURSOR_REQUIRE_RESULTS) && cursor->colinfos == 0)
+ {
+ if (flags & CURSOR_RAISE_ERROR)
+ PyErr_SetString(ProgrammingError, "No results. Previous SQL was not a query.");
+ return 0;
+ }
+
+ return cursor;
+}
+
+
+inline bool IsNumericType(SQLSMALLINT sqltype)
+{
+ switch (sqltype)
+ {
+ case SQL_DECIMAL:
+ case SQL_NUMERIC:
+ case SQL_REAL:
+ case SQL_FLOAT:
+ case SQL_DOUBLE:
+ case SQL_SMALLINT:
+ case SQL_INTEGER:
+ case SQL_TINYINT:
+ case SQL_BIGINT:
+ return true;
+ }
+
+ return false;
+}
+
+
+PyObject*
+PythonTypeFromSqlType(const SQLCHAR* name, SQLSMALLINT type)
+{
+ // Returns a type object ('int', 'str', etc.) for the given ODBC C type. This is used to populate
+ // Cursor.description with the type of Python object that will be returned for each column.
+ //
+ // name
+ // The name of the column, only used to create error messages.
+ //
+ // type
+ // The ODBC C type (SQL_C_CHAR, etc.) of the column.
+ //
+ // The returned object does not have its reference count incremented!
+
+ PyObject* pytype = 0;
+
+ switch (type)
+ {
+ case SQL_CHAR:
+ case SQL_VARCHAR:
+ case SQL_LONGVARCHAR:
+ case SQL_GUID:
+ pytype = (PyObject*)&PyString_Type;
+ break;
+
+ case SQL_DECIMAL:
+ case SQL_NUMERIC:
+ pytype = (PyObject*)decimal_type;
+ break;
+
+ case SQL_REAL:
+ case SQL_FLOAT:
+ case SQL_DOUBLE:
+ pytype = (PyObject*)&PyFloat_Type;
+ break;
+
+ case SQL_SMALLINT:
+ case SQL_INTEGER:
+ case SQL_TINYINT:
+ pytype = (PyObject*)&PyInt_Type;
+ break;
+
+ case SQL_TYPE_DATE:
+ pytype = (PyObject*)PyDateTimeAPI->DateType;
+ break;
+
+ case SQL_TYPE_TIME:
+ pytype = (PyObject*)PyDateTimeAPI->TimeType;
+ break;
+
+ case SQL_TYPE_TIMESTAMP:
+ pytype = (PyObject*)PyDateTimeAPI->DateTimeType;
+ break;
+
+ case SQL_BIGINT:
+ pytype = (PyObject*)&PyLong_Type;
+ break;
+
+ case SQL_BIT:
+ pytype = (PyObject*)&PyBool_Type;
+ break;
+
+ case SQL_BINARY:
+ case SQL_VARBINARY:
+ case SQL_LONGVARBINARY:
+ pytype = (PyObject*)&PyBuffer_Type;
+ break;
+
+
+ case SQL_WCHAR:
+ case SQL_WVARCHAR:
+ case SQL_WLONGVARCHAR:
+ pytype = (PyObject*)&PyUnicode_Type;
+ break;
+
+ default:
+ return RaiseErrorV(0, 0, "ODBC data type %d is not supported. Cannot read column %s.", type, (const char*)name);
+ }
+
+ Py_INCREF(pytype);
+ return pytype;
+}
+
+
+static bool
+create_name_map(Cursor* cur, SQLSMALLINT field_count, bool lower)
+{
+ // Called after an execute to construct the map shared by rows.
+
+ bool success = false;
+ PyObject *desc = 0, *colmap = 0, *colinfo = 0, *type = 0, *index = 0, *nullable_obj=0;
+ SQLRETURN ret;
+
+ I(cur->hstmt != SQL_NULL_HANDLE && cur->colinfos != 0);
+
+ // These are the values we expect after free_results. If this function fails, we do not modify any members, so
+ // they should be set to something Cursor_close can deal with.
+ I(cur->description == Py_None);
+ I(cur->map_name_to_index == 0);
+
+ if (cur->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ return false;
+ }
+
+ desc = PyTuple_New((Py_ssize_t)field_count);
+ colmap = PyDict_New();
+ if (!desc || !colmap)
+ goto done;
+
+ for (int i = 0; i < field_count; i++)
+ {
+ SQLCHAR name[300];
+ SQLSMALLINT nDataType;
+ SQLULEN nColSize;
+ SQLSMALLINT cDecimalDigits;
+ SQLSMALLINT nullable;
+
+ SQLWCHAR name2[300];
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLDescribeCol(cur->hstmt, (SQLUSMALLINT)(i + 1), name, _countof(name), 0, &nDataType, &nColSize, &cDecimalDigits, &nullable);
+ ret = SQLDescribeColW(cur->hstmt, (SQLUSMALLINT)(i + 1), name2, _countof(name), 0, &nDataType, &nColSize, &cDecimalDigits, &nullable);
+ Py_END_ALLOW_THREADS
+
+ if (cur->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ goto done;
+ }
+
+ if (!SQL_SUCCEEDED(ret))
+ {
+ RaiseErrorFromHandle("SQLDescribeCol", cur->cnxn->hdbc, cur->hstmt);
+ goto done;
+ }
+
+#ifdef TRACE_ALL
+ printf("Col %d: type=%d colsize=%d\n", (i+1), (int)nDataType, (int)nColSize);
+#endif
+ if (lower)
+ _strlwr((char*)name);
+
+ type = PythonTypeFromSqlType(name, nDataType);
+ if (!type)
+ goto done;
+
+ switch (nullable)
+ {
+ case SQL_NO_NULLS:
+ nullable_obj = Py_False;
+ break;
+ case SQL_NULLABLE:
+ nullable_obj = Py_True;
+ break;
+ case SQL_NULLABLE_UNKNOWN:
+ default:
+ nullable_obj = Py_None;
+ break;
+ }
+
+ // The Oracle ODBC driver has a bug (I call it) that it returns a data size of 0 when a numeric value is
+ // retrieved from a UNION: http://support.microsoft.com/?scid=kb%3Ben-us%3B236786&x=13&y=6
+ //
+ // Unfortunately, I don't have a test system for this yet, so I'm *trying* something. (Not a good sign.) If
+ // the size is zero and it appears to be a numeric type, we'll try to come up with our own length using any
+ // other data we can get.
+
+ if (nColSize == 0 && IsNumericType(nDataType))
+ {
+ // I'm not sure how best to pick a size here; use the decimal digits plus a little extra room.
+ if (cDecimalDigits != 0)
+ {
+ nColSize = cDecimalDigits + 3;
+ }
+ else
+ {
+ // I'm not sure if this is a good idea, but ...
+ nColSize = 42;
+ }
+ }
+
+ colinfo = Py_BuildValue("(sOOiOOO)",
+ (char*)name,
+ type, // type_code
+ Py_None, // display size
+ (int)nColSize, // internal_size
+ Py_None, // precision
+ Py_None, // scale
+ nullable_obj); // null_ok
+ if (!colinfo)
+ goto done;
+
+
+ nullable_obj = 0;
+
+ index = PyInt_FromLong(i);
+ if (!index)
+ goto done;
+
+ PyDict_SetItemString(colmap, (const char*)name, index);
+ Py_DECREF(index); // SetItemString increments
+ index = 0;
+
+ PyTuple_SET_ITEM(desc, i, colinfo);
+ colinfo = 0; // reference stolen by SET_ITEM
+ }
+
+ Py_XDECREF(cur->description);
+ cur->description = desc;
+ desc = 0;
+ cur->map_name_to_index = colmap;
+ colmap = 0;
+
+ success = true;
+
+ done:
+ Py_XDECREF(nullable_obj);
+ Py_XDECREF(desc);
+ Py_XDECREF(colmap);
+ Py_XDECREF(index);
+ Py_XDECREF(colinfo);
+
+ return success;
+}
+
+enum free_results_type
+{
+ FREE_STATEMENT,
+ KEEP_STATEMENT
+};
+
+static bool
+free_results(Cursor* self, free_results_type free_statement)
+{
+ // Internal function called any time we need to free the memory associated with query results. It is safe to call
+ // this even when a query has not been executed.
+
+ // If we ran out of memory, it is possible that we have a cursor but colinfos is zero. However, we should be
+ // deleting this object, so the cursor will be freed when the HSTMT is destroyed.
+
+ if (self->colinfos)
+ {
+ free(self->colinfos);
+ self->colinfos = 0;
+ }
+
+ if (StatementIsValid(self))
+ {
+ if (free_statement == FREE_STATEMENT)
+ {
+ SQLRETURN ret;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLFreeStmt(self->hstmt, SQL_CLOSE);
+ Py_END_ALLOW_THREADS;
+ }
+ else
+ {
+ SQLRETURN ret;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLFreeStmt(self->hstmt, SQL_UNBIND);
+ ret = SQLFreeStmt(self->hstmt, SQL_RESET_PARAMS);
+ Py_END_ALLOW_THREADS;
+
+ }
+
+ if (self->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ return false;
+ }
+ }
+
+ if (self->description != Py_None)
+ {
+ Py_DECREF(self->description);
+ self->description = Py_None;
+ Py_INCREF(Py_None);
+ }
+
+ if (self->map_name_to_index)
+ {
+ Py_DECREF(self->map_name_to_index);
+ self->map_name_to_index = 0;
+ }
+
+ self->rowcount = -1;
+
+ return true;
+}
+
+
+static void
+closeimpl(Cursor* cur)
+{
+ // An internal function for the shared 'closing' code used by Cursor_close and Cursor_dealloc.
+ //
+ // This method releases the GIL lock while closing, so verify the HDBC still exists if you use it.
+
+ free_results(cur, FREE_STATEMENT);
+
+ FreeParameterInfo(cur);
+ FreeParameterData(cur);
+
+ if (StatementIsValid(cur))
+ {
+ HSTMT hstmt = cur->hstmt;
+ cur->hstmt = SQL_NULL_HANDLE;
+ Py_BEGIN_ALLOW_THREADS
+ SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
+ Py_END_ALLOW_THREADS
+ }
+
+
+ Py_XDECREF(cur->pPreparedSQL);
+ Py_XDECREF(cur->description);
+ Py_XDECREF(cur->map_name_to_index);
+ Py_XDECREF(cur->cnxn);
+
+ cur->pPreparedSQL = 0;
+ cur->description = 0;
+ cur->map_name_to_index = 0;
+ cur->cnxn = 0;
+}
+
+static char close_doc[] =
+ "Close the cursor now (rather than whenever __del__ is called). The cursor will\n"
+ "be unusable from this point forward; a ProgrammingError exception will be\n"
+ "raised if any operation is attempted with the cursor.";
+
+static PyObject*
+Cursor_close(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ closeimpl(cursor);
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static void
+Cursor_dealloc(Cursor* cursor)
+{
+ if (Cursor_Validate((PyObject*)cursor, CURSOR_REQUIRE_CNXN))
+ {
+ closeimpl(cursor);
+ }
+
+ PyObject_Del(cursor);
+}
+
+
+
+bool
+InitColumnInfo(Cursor* cursor, SQLUSMALLINT iCol, ColumnInfo* pinfo)
+{
+ // Initializes ColumnInfo from result set metadata.
+
+ SQLRETURN ret;
+
+ // REVIEW: This line fails on OS/X with the FileMaker driver : http://www.filemaker.com/support/updaters/xdbc_odbc_mac.html
+ //
+ // I suspect the problem is that it doesn't allow NULLs in some of the parameters, so I'm going to supply them all
+ // to see what happens.
+
+ SQLCHAR ColumnName[200];
+ SQLSMALLINT BufferLength = _countof(ColumnName);
+ SQLSMALLINT NameLength = 0;
+ SQLSMALLINT DataType = 0;
+ SQLULEN ColumnSize = 0;
+ SQLSMALLINT DecimalDigits = 0;
+ SQLSMALLINT Nullable = 0;
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLDescribeCol(cursor->hstmt, iCol,
+ ColumnName,
+ BufferLength,
+ &NameLength,
+ &DataType,
+ &ColumnSize,
+ &DecimalDigits,
+ &Nullable);
+ Py_END_ALLOW_THREADS
+
+ pinfo->sql_type = DataType;
+ pinfo->column_size = ColumnSize;
+
+ if (cursor->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ return false;
+ }
+
+ if (!SQL_SUCCEEDED(ret))
+ {
+ RaiseErrorFromHandle("SQLDescribeCol", cursor->cnxn->hdbc, cursor->hstmt);
+ return false;
+ }
+
+ // If it is an integer type, determine if it is signed or unsigned. The buffer size is the same but we'll need to
+ // know when we convert to a Python integer.
+
+ switch (pinfo->sql_type)
+ {
+ case SQL_TINYINT:
+ case SQL_SMALLINT:
+ case SQL_INTEGER:
+ case SQL_BIGINT:
+ {
+ SQLLEN f;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLColAttribute(cursor->hstmt, iCol, SQL_DESC_UNSIGNED, 0, 0, 0, &f);
+ Py_END_ALLOW_THREADS
+
+ if (cursor->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ return false;
+ }
+
+ if (!SQL_SUCCEEDED(ret))
+ {
+ RaiseErrorFromHandle("SQLColAttribute", cursor->cnxn->hdbc, cursor->hstmt);
+ return false;
+ }
+ pinfo->is_unsigned = (f == SQL_TRUE);
+ break;
+ }
+
+ default:
+ pinfo->is_unsigned = false;
+ }
+
+ return true;
+}
+
+
+static bool
+PrepareResults(Cursor* cur, int cCols)
+{
+ // Called after a SELECT has been executed to perform pre-fetch work.
+ //
+ // Allocates the ColumnInfo structures describing the returned data.
+
+ int i;
+ I(cur->colinfos == 0);
+
+ cur->colinfos = (ColumnInfo*)malloc(sizeof(ColumnInfo) * cCols);
+ if (cur->colinfos == 0)
+ {
+ PyErr_NoMemory();
+ return false;
+ }
+
+ for (i = 0; i < cCols; i++)
+ {
+ if (!InitColumnInfo(cur, (SQLSMALLINT)(i + 1), &cur->colinfos[i]))
+ {
+ free(cur->colinfos);
+ cur->colinfos = 0;
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static PyObject*
+execute(Cursor* cur, PyObject* pSql, PyObject* params, bool skip_first)
+{
+ // Internal function to execute SQL, called by .execute and .executemany.
+ //
+ // pSql
+ // A PyString, PyUnicode, or derived object containing the SQL.
+ //
+ // params
+ // Pointer to an optional sequence of parameters, and possibly the SQL statement (see skip_first):
+ // (SQL, param1, param2) or (param1, param2).
+ //
+ // skip_first
+ // If true, the first element in `params` is ignored. (It will be the SQL statement and `params` will be the
+ // entire tuple passed to Cursor.execute.) Otherwise all of the params are used. (This case occurs when called
+ // from Cursor.executemany, in which case the sequences do not contain the SQL statement.) Ignored if params is
+ // zero.
+
+ // Normalize the parameter variables.
+
+ int params_offset = skip_first ? 1 : 0;
+ Py_ssize_t cParams = params == 0 ? 0 : PySequence_Length(params) - params_offset;
+
+ SQLRETURN ret = 0;
+
+ free_results(cur, FREE_STATEMENT);
+
+ const char* szLastFunction = "";
+
+ if (cParams > 0)
+ {
+ // There are parameters, so we'll need to prepare the SQL statement and bind the parameters. (We need to
+ // prepare the statement because we can't bind a NULL (None) object without knowing the target datatype. There
+ // is no one data type that always maps to the others (no, not even varchar)).
+
+ if (!PrepareAndBind(cur, pSql, params, skip_first))
+ return 0;
+
+ szLastFunction = "SQLExecute";
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLExecute(cur->hstmt);
+ Py_END_ALLOW_THREADS
+ }
+ else
+ {
+ // REVIEW: Why don't we always prepare? It is highly unlikely that a user would need to execute the same SQL
+ // repeatedly if it did not have parameters, so we are not losing performance, but it would simplify the code.
+
+ Py_XDECREF(cur->pPreparedSQL);
+ cur->pPreparedSQL = 0;
+
+ szLastFunction = "SQLExecDirect";
+ if (PyString_Check(pSql))
+ {
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLExecDirect(cur->hstmt, (SQLCHAR*)PyString_AS_STRING(pSql), SQL_NTS);
+ Py_END_ALLOW_THREADS
+ }
+ else
+ {
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLExecDirectW(cur->hstmt, (SQLWCHAR*)PyUnicode_AsUnicode(pSql), SQL_NTS);
+ Py_END_ALLOW_THREADS
+ }
+ }
+
+ if (cur->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+
+ FreeParameterData(cur);
+
+ return RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ }
+
+ if (!SQL_SUCCEEDED(ret) && ret != SQL_NEED_DATA && ret != SQL_NO_DATA)
+ {
+ // We could try dropping through the while and if below, but if there is an error, we need to raise it before
+ // FreeParameterData calls more ODBC functions.
+ return RaiseErrorFromHandle("SQLExecDirectW", cur->cnxn->hdbc, cur->hstmt);
+ }
+
+ while (ret == SQL_NEED_DATA)
+ {
+ // We have bound a `buffer` object using SQL_DATA_AT_EXEC, so ODBC is asking us for the data now. We gave the
+ // buffer pointer to ODBC in SQLBindParameter -- SQLParamData below gives the pointer back to us.
+
+ szLastFunction = "SQLParamData";
+ PyObject* pParam;
+ ret = SQLParamData(cur->hstmt, (SQLPOINTER*)&pParam);
+
+ if (ret == SQL_NEED_DATA)
+ {
+ szLastFunction = "SQLPutData";
+ if (PyBuffer_Check(pParam))
+ {
+ // Buffers can have multiple segments, so we might need multiple writes. Looping through buffers isn't
+ // difficult, but we've wrapped it up in an iterator object to keep this loop simple.
+
+ BufferSegmentIterator it(pParam);
+ byte* pb;
+ SQLLEN cb;
+ while (it.Next(pb, cb))
+ SQLPutData(cur->hstmt, pb, cb);
+ }
+ else if (PyUnicode_Check(pParam))
+ {
+ // REVIEW: This will fail if PyUnicode != wchar_t
+ Py_UNICODE* p = PyUnicode_AS_UNICODE(pParam);
+ SQLLEN offset = 0;
+ SQLLEN cb = (SQLLEN)PyUnicode_GET_SIZE(pParam);
+ while (offset < cb)
+ {
+ SQLLEN remaining = min(MAX_VARCHAR_BUFFER, cb - offset);
+ SQLPutData(cur->hstmt, &p[offset], remaining * 2);
+ offset += remaining;
+ }
+ }
+ else if (PyString_Check(pParam))
+ {
+ const char* p = PyString_AS_STRING(pParam);
+ SQLLEN offset = 0;
+ SQLLEN cb = (SQLLEN)PyString_GET_SIZE(pParam);
+ while (offset < cb)
+ {
+ SQLLEN remaining = min(MAX_VARCHAR_BUFFER, cb - offset);
+ SQLPutData(cur->hstmt, (SQLPOINTER)&p[offset], remaining);
+ offset += remaining;
+ }
+ }
+ }
+ }
+
+ FreeParameterData(cur);
+
+ if (ret == SQL_NO_DATA)
+ {
+ // Example: A delete statement that did not delete anything.
+ cur->rowcount = 0;
+ return PyInt_FromLong(cur->rowcount);
+ }
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle(szLastFunction, cur->cnxn->hdbc, cur->hstmt);
+
+ SQLLEN cRows = -1;
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLRowCount(cur->hstmt, &cRows);
+ Py_END_ALLOW_THREADS
+
+ cur->rowcount = (int)cRows;
+
+#ifdef TRACE_ALL
+ printf("SQLRowCount: %d\n", cRows);
+#endif
+
+ SQLSMALLINT cCols = 0;
+ if (!SQL_SUCCEEDED(SQLNumResultCols(cur->hstmt, &cCols)))
+ {
+ // Note: The SQL Server driver sometimes returns HY007 here if multiple statements (separated by ;) were
+ // submitted. This is not documented, but I've seen it with multiple successful inserts.
+
+ return RaiseErrorFromHandle("SQLNumResultCols", cur->cnxn->hdbc, cur->hstmt);
+ }
+
+#ifdef TRACE_ALL
+ printf("SQLNumResultCols: %d\n", cCols);
+#endif
+
+ if (cur->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ return RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ }
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLRowCount", cur->cnxn->hdbc, cur->hstmt);
+
+ if (cCols != 0)
+ {
+ // A result set was created.
+
+ if (!PrepareResults(cur, cCols))
+ return 0;
+
+ if (!create_name_map(cur, cCols, lowercase()))
+ return 0;
+
+ // Return the cursor so the results can be iterated over directly.
+ Py_INCREF(cur);
+ return (PyObject*)cur;
+ }
+
+ return PyInt_FromLong(cur->rowcount);
+}
+
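+// Returns true if `p` should be treated as a sequence of parameter values.  Strings, unicode
+// objects, and buffers support the sequence protocol but are bound as single values, so they are
+// excluded here.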
+inline bool
+IsSequence(PyObject* p)
+{
+ return PySequence_Check(p) && !PyString_Check(p) && !PyBuffer_Check(p) && !PyUnicode_Check(p);
+}
+
+static char execute_doc[] =
+ "C.execute(sql, [params]) --> None | Cursor | count\n"
+ "\n"
+ "Prepare and execute a database query or command.\n"
+ "\n"
+ "Parameters may be provided as a sequence (as specified by the DB API) or\n"
+ "simply passed in one after another (non-standard):\n"
+ "\n"
+ " cursor.execute(sql, (param1, param2))\n"
+ "\n"
+ " or\n"
+ "\n"
+ " cursor.execute(sql, param1, param2)\n"
+ "\n"
+ "The return value for this method is not specified in the API, so any use is\n"
+ "non-standard. For convenience, the type depends on the operation performed.\n"
+ "A select statement will return `self` so the results can be iterated\n"
+ "conveniently:\n"
+ "\n"
+ " for row in cursor.execute('select * from tmp'):\n"
+ " print row.customer_id\n"
+ "\n"
+ "An update or delete statement will return the number of records affected as an\n"
+ "integer:\n"
+ "\n"
+ " count = cursor.execute('delete from tmp')\n"
+ "\n"
+ "If any other statement will return None.";
+
+PyObject*
+Cursor_execute(PyObject* self, PyObject* args)
+{
+ Py_ssize_t cParams = PyTuple_Size(args) - 1;
+
+ bool skip_first = false;
+ PyObject *pSql, *params = 0, *result = 0;
+
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ if (cParams < 0)
+ {
+ PyErr_SetString(PyExc_TypeError, "execute() takes at least 1 argument (0 given)");
+ goto done;
+ }
+
+ pSql = PyTuple_GET_ITEM(args, 0);
+
+ if (!PyString_Check(pSql) && !PyUnicode_Check(pSql))
+ {
+ PyErr_SetString(PyExc_TypeError, "The first argument to execute must be a string or unicode query.");
+ goto done;
+ }
+
+ // Figure out if there were parameters and how they were passed. Our optional parameter passing complicates this slightly.
+
+ if (cParams == 1 && IsSequence(PyTuple_GET_ITEM(args, 1)))
+ {
+ // There is a single argument and it is a sequence, so we must treat it as a sequence of parameters. (This is
+ // the normal Cursor.execute behavior.)
+
+ params = PyTuple_GET_ITEM(args, 1);
+ skip_first = false;
+ }
+ else if (cParams > 0)
+ {
+ params = args;
+ skip_first = true;
+ }
+
+ // Execute.
+
+ result = execute(cursor, pSql, params, skip_first);
+
+ done:
+
+ return result;
+}
+
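+// Implements Cursor.executemany(sql, seq_of_parameters): the same SQL is executed once for each
+// element of the sequence.  A typical call (a sketch; the table name is illustrative) looks like:
+//
+//     cursor.executemany("insert into tmp(a, b) values (?, ?)", [(1, 'x'), (2, 'y')])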
+static PyObject*
+Cursor_executemany(PyObject* self, PyObject* args)
+{
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ cursor->rowcount = -1;
+
+ PyObject *pSql, *param_seq;
+ if (!PyArg_ParseTuple(args, "OO", &pSql, &param_seq))
+ return 0;
+
+ if (!PyString_Check(pSql) && !PyUnicode_Check(pSql))
+ {
+ PyErr_SetString(PyExc_TypeError, "The first argument to execute must be a string or unicode query.");
+ return 0;
+ }
+
+ if (!IsSequence(param_seq))
+ {
+ PyErr_SetString(ProgrammingError, "The second parameter to executemany must be a sequence.");
+ return 0;
+ }
+
+ Py_ssize_t c = PySequence_Size(param_seq);
+
+ if (c == 0)
+ {
+ PyErr_SetString(ProgrammingError, "The second parameter to executemany must not be empty.");
+ return 0;
+ }
+
+ for (Py_ssize_t i = 0; i < c; i++)
+ {
+ PyObject* params = PySequence_GetItem(param_seq, i);
+ PyObject* result = execute(cursor, pSql, params, false);
+ bool success = result != 0;
+ Py_XDECREF(result);
+ Py_DECREF(params);
+ if (!success)
+ {
+ cursor->rowcount = -1;
+ return 0;
+ }
+ }
+
+ cursor->rowcount = -1;
+ Py_RETURN_NONE;
+}
+
+
+static PyObject*
+Cursor_fetch(Cursor* cur)
+{
+ // Internal function to fetch a single row and construct a Row object from it. Used by all of the fetching
+ // functions.
+ //
+ // Returns a Row object if successful. If there are no more rows, zero is returned. If an error occurs, an
+ // exception is set and zero is returned. (To differentiate between the last two, use PyErr_Occurred.)
+
+ SQLRETURN ret = 0;
+ int field_count, i;
+ PyObject** apValues;
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLFetch(cur->hstmt);
+ Py_END_ALLOW_THREADS
+
+ if (cur->cnxn->hdbc == SQL_NULL_HANDLE)
+ {
+ // The connection was closed by another thread in the ALLOW_THREADS block above.
+ return RaiseErrorV(0, ProgrammingError, "The cursor's connection was closed.");
+ }
+
+ if (ret == SQL_NO_DATA)
+ return 0;
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLFetch", cur->cnxn->hdbc, cur->hstmt);
+
+ field_count = PyTuple_GET_SIZE(cur->description);
+
+ apValues = (PyObject**)malloc(sizeof(PyObject*) * field_count);
+
+ if (apValues == 0)
+ return PyErr_NoMemory();
+
+ for (i = 0; i < field_count; i++)
+ {
+ PyObject* value = GetData(cur, i);
+
+ if (!value)
+ {
+ FreeRowValues(i, apValues);
+ return 0;
+ }
+
+ apValues[i] = value;
+ }
+
+ return (PyObject*)Row_New(cur->description, cur->map_name_to_index, field_count, apValues);
+}
+
+
+static PyObject*
+Cursor_fetchlist(Cursor* cur, Py_ssize_t max)
+{
+ // max
+ // The maximum number of rows to fetch. If -1, fetch all rows.
+ //
+ // Returns a list of Rows. If there are no rows, an empty list is returned.
+
+ PyObject* results;
+ PyObject* row;
+
+ results = PyList_New(0);
+ if (!results)
+ return 0;
+
+ while (max == -1 || max > 0)
+ {
+ row = Cursor_fetch(cur);
+
+ if (!row)
+ {
+ if (PyErr_Occurred())
+ {
+ Py_DECREF(results);
+ return 0;
+ }
+ break;
+ }
+
+ PyList_Append(results, row);
+ Py_DECREF(row);
+
+ if (max != -1)
+ max--;
+ }
+
+ return results;
+}
+
+static PyObject*
+Cursor_iter(PyObject* self)
+{
+ Py_INCREF(self);
+ return self;
+}
+
+
+static PyObject*
+Cursor_iternext(PyObject* self)
+{
+ // Implements the iterator protocol for cursors. Fetches the next row. Returns zero without setting an exception
+ // when there are no rows.
+
+ PyObject* result;
+
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_RESULTS | CURSOR_RAISE_ERROR);
+
+ if (!cursor)
+ return 0;
+
+ result = Cursor_fetch(cursor);
+
+ return result;
+}
+
+static PyObject*
+Cursor_fetchone(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+
+ PyObject* row;
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_RESULTS | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ row = Cursor_fetch(cursor);
+
+ if (!row)
+ {
+ if (PyErr_Occurred())
+ return 0;
+ Py_RETURN_NONE;
+ }
+
+ return row;
+}
+
+static PyObject*
+Cursor_fetchall(PyObject* self, PyObject* args)
+{
+ UNUSED(args);
+
+ PyObject* result;
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_RESULTS | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ result = Cursor_fetchlist(cursor, -1);
+
+ return result;
+}
+
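+// Implements Cursor.fetchmany([size]).  Fetches up to `size` rows, defaulting to cursor.arraysize,
+// and returns them as a list; an empty list means no rows remain.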
+static PyObject*
+Cursor_fetchmany(PyObject* self, PyObject* args)
+{
+ long rows;
+ PyObject* result;
+
+ Cursor* cursor = Cursor_Validate(self, CURSOR_REQUIRE_RESULTS | CURSOR_RAISE_ERROR);
+ if (!cursor)
+ return 0;
+
+ rows = cursor->arraysize;
+ if (!PyArg_ParseTuple(args, "|l", &rows))
+ return 0;
+
+ result = Cursor_fetchlist(cursor, rows);
+
+ return result;
+}
+
+static char tables_doc[] =
+ "C.tables(table=None, catalog=None, schema=None, tableType=None) --> self\n"
+ "\n"
+ "Executes SQLTables and creates a results set of tables defined in the data\n"
+ "source.\n"
+ "\n"
+ "The table, catalog, and schema interpret the '_' and '%' characters as\n"
+ "wildcards. The escape character is driver specific, so use\n"
+ "`Connection.searchescape`.\n"
+ "\n"
+ "Each row fetched has the following columns:\n"
+ " 0) table_cat: The catalog name.\n"
+ " 1) table_schem: The schema name.\n"
+ " 2) table_name: The table name.\n"
+ " 3) table_type: One of 'TABLE', 'VIEW', SYSTEM TABLE', 'GLOBAL TEMPORARY'\n"
+ " 'LOCAL TEMPORARY', 'ALIAS', 'SYNONYM', or a data source-specific type name.";
+
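+// Typical usage from Python (a sketch):
+//
+//     for row in cursor.tables(tableType='TABLE'):
+//         print row.table_name
+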
+char* Cursor_tables_kwnames[] = { "table", "catalog", "schema", "tableType", 0 };
+
+static PyObject*
+Cursor_tables(PyObject* self, PyObject* args, PyObject* kwargs)
+{
+ const char* szCatalog = 0;
+ const char* szSchema = 0;
+ const char* szTableName = 0;
+ const char* szTableType = 0;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|ssss", Cursor_tables_kwnames, &szTableName, &szCatalog, &szSchema, &szTableType))
+ return 0;
+
+    Cursor* cur = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+    if (!cur)
+        return 0;
+
+ if (!free_results(cur, FREE_STATEMENT))
+ return 0;
+
+ SQLRETURN ret = 0;
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLTables(cur->hstmt, (SQLCHAR*)szCatalog, SQL_NTS, (SQLCHAR*)szSchema, SQL_NTS,
+ (SQLCHAR*)szTableName, SQL_NTS, (SQLCHAR*)szTableType, SQL_NTS);
+ Py_END_ALLOW_THREADS
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLTables", cur->cnxn->hdbc, cur->hstmt);
+
+ SQLSMALLINT cCols;
+ if (!SQL_SUCCEEDED(SQLNumResultCols(cur->hstmt, &cCols)))
+ return RaiseErrorFromHandle("SQLNumResultCols", cur->cnxn->hdbc, cur->hstmt);
+
+ if (!PrepareResults(cur, cCols))
+ return 0;
+
+ if (!create_name_map(cur, cCols, true))
+ return 0;
+
+ // Return the cursor so the results can be iterated over directly.
+ Py_INCREF(cur);
+ return (PyObject*)cur;
+}
+
+
+static char columns_doc[] =
+ "C.columns(table=None, catalog=None, schema=None, column=None)\n\n"
+ "Creates a results set of column names in specified tables by executing the ODBC SQLColumns function.\n"
+ "Each row fetched has the following columns:\n"
+ " 0) table_cat\n"
+ " 1) table_schem\n"
+ " 2) table_name\n"
+ " 3) column_name\n"
+ " 4) data_type\n"
+ " 5) type_name\n"
+ " 6) column_size\n"
+ " 7) buffer_length\n"
+ " 8) decimal_digits\n"
+ " 9) num_prec_radix\n"
+ " 10) nullable\n"
+ " 11) remarks\n"
+ " 12) column_def\n"
+ " 13) sql_data_type\n"
+ " 14) sql_datetime_sub\n"
+ " 15) char_octet_length\n"
+ " 16) ordinal_position\n"
+ " 17) is_nullable";
+
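+// Typical usage from Python (a sketch; the table name is illustrative):
+//
+//     for row in cursor.columns(table='tmp'):
+//         print row.column_name, row.type_name
+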
+char* Cursor_column_kwnames[] = { "table", "catalog", "schema", "column", 0 };
+
+static PyObject*
+Cursor_columns(PyObject* self, PyObject* args, PyObject* kwargs)
+{
+ const char* szCatalog = 0;
+ const char* szSchema = 0;
+ const char* szTable = 0;
+ const char* szColumn = 0;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|ssss", Cursor_column_kwnames, &szTable, &szCatalog, &szSchema, &szColumn))
+ return 0;
+
+    Cursor* cur = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+    if (!cur)
+        return 0;
+
+ if (!free_results(cur, FREE_STATEMENT))
+ return 0;
+
+ SQLRETURN ret = 0;
+
+ Py_BEGIN_ALLOW_THREADS
+ ret = SQLColumns(cur->hstmt, (SQLCHAR*)szCatalog, SQL_NTS, (SQLCHAR*)szSchema, SQL_NTS, (SQLCHAR*)szTable, SQL_NTS, (SQLCHAR*)szColumn, SQL_NTS);
+ Py_END_ALLOW_THREADS
+
+ if (!SQL_SUCCEEDED(ret))
+ return RaiseErrorFromHandle("SQLColumns", cur->cnxn->hdbc, cur->hstmt);
+
+ SQLSMALLINT cCols;
+ if (!SQL_SUCCEEDED(SQLNumResultCols(cur->hstmt, &cCols)))
+ return RaiseErrorFromHandle("SQLNumResultCols", cur->cnxn->hdbc, cur->hstmt);
+
+ if (!PrepareResults(cur, cCols))
+ return 0;
+
+ if (!create_name_map(cur, cCols, true))
+ return 0;
+
+ // Return the cursor so the results can be iterated over directly.
+ Py_INCREF(cur);
+ return (PyObject*)cur;
+}
+
+
+static char statistics_doc[] =
+ "C.statistics(catalog=None, schema=None, unique=False, quick=True) --> self\n\n"
+ "Creates a results set of statistics about a single table and the indexes associated with \n"
+ "the table by executing SQLStatistics.\n"
+ "unique\n"
+ " If True, only unique indexes are retured. Otherwise all indexes are returned.\n"
+ "quick\n"
+ " If True, CARDINALITY and PAGES are returned only if they are readily available\n"
+ " from the server\n"
+ "\n"
+ "Each row fetched has the following columns:\n\n"
+ " 0) table_cat\n"
+ " 1) table_schem\n"
+ " 2) table_name\n"
+ " 3) non_unique\n"
+ " 4) index_qualifier\n"
+ " 5) index_name\n"
+ " 6) type\n"
+ " 7) ordinal_position\n"
+ " 8) column_name\n"
+ " 9) asc_or_desc\n"
+ " 10) cardinality\n"
+ " 11) pages\n"
+ " 12) filter_condition";
+
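+// Typical usage from Python (a sketch; the table name is illustrative):
+//
+//     for row in cursor.statistics('tmp', unique=True):
+//         print row.index_name, row.column_name
+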
+char* Cursor_statistics_kwnames[] = { "table", "catalog", "schema", "unique", "quick", 0 };
+
+static PyObject*
+Cursor_statistics(PyObject* self, PyObject* args, PyObject* kwargs)
+{
+ const char* szCatalog = 0;
+ const char* szSchema = 0;
+ const char* szTable = 0;
+ PyObject* pUnique = Py_False;
+ PyObject* pQuick = Py_True;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s|ssOO", Cursor_statistics_kwnames, &szTable, &szCatalog, &szSchema,
+ &pUnique, &pQuick))
+ return 0;
+
+    Cursor* cur = Cursor_Validate(self, CURSOR_REQUIRE_OPEN | CURSOR_RAISE_ERROR);
+    if (!cur)
+        return 0;
+
+ if (!free_results(cur, FREE_STATEMENT))
+ return 0;