Clunky but serviceable way to set byte order.
Addresses issue #98. There really hasn't been a document that just lays
out in detail how PL/Java's data coercions are chosen and especially how
the raw-chunk-based versions of SQLInput/SQLOutput work. Discovered,
while documenting that, the reason Point was showing bogus values (noted
in pull request #59 comments) ... byte order ... leading to a
last-minute rework of SQLInputFromChunk and SQLOutputToChunk allowing
byte order to be selected.

(It has to be selectable, not just fixed, in case PL/Java gets updated
at a site that has a bunch of data stored under the old byte order.
A switch allows dumping with the old order, then reloading with the
new. In fact, by choosing a different byte order for each conversion
direction, it is possible ... carefully ... to update in place.)

A flag is passed to SQLInputFromChunk_create and
SQLOutputToChunk_create to indicate whether the type is
a 'java-based scalar' (otherwise, it is a non-composite mirror),
and this allows byte order to be specified separately for scalars
and mirrors: historically, it was always big-endian regardless of
architecture. Presumably that has never mattered much for java-based
scalars (because PostgreSQL itself doesn't have any independent way
of seeing into them, so no inconsistency was visible), but for a
mirror type it is definitely broken if the hardware is not big-endian,
with PostgreSQL and Java not seeing the values the same way.
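
An illustrative sketch (not part of this commit; the class name is
invented) of why a fixed big-endian byte order shows bogus values for a
mirror type on little-endian hardware:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo
{
    public static void main(String[] args)
    {
        // PostgreSQL on little-endian hardware stores an int4 value of 1 natively.
        ByteBuffer bb = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        bb.putInt(1);
        bb.flip();

        // A reader fixed at big-endian sees a bogus value ...
        System.out.println(bb.order(ByteOrder.BIG_ENDIAN).getInt());    // 16777216

        // ... while the native (little-endian) order recovers what was stored.
        bb.rewind();
        System.out.println(bb.order(ByteOrder.LITTLE_ENDIAN).getInt()); // 1
    }
}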

Therefore, this change leaves the scalar default (for 1.5.0 at least)
at big-endian, in case some sites may have used Java-based scalars in
the past and have tables that contain them. Also, the current UDT
implementation gives every scalar type a binary send/receive/COPY
format that can't (yet) be decoupled from its internal stored form, and
the PostgreSQL docs specify big-endian for those binary transfer formats,
so it would be premature to change the scalar byte-order default before
it is also possible to have a separate transfer format.

For mirror types, on the other hand, the default is here immediately
changed to native ... it is less likely that sites were heavily
using mirror UDTs if they were seeing bogus values, and this will
make them work right by default.

Java properties are used (set with -D in the pljava.vmoptions GUC):
org.postgresql.pljava.udt.byteorder  to set everything the same,

org.postgresql.pljava.udt.byteorder.scalar
org.postgresql.pljava.udt.byteorder.mirror  to set them separately.

The allowable values for each property are big_endian, little_endian,
and native.
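
For example (illustrative only, not from this commit), a postgresql.conf
entry giving everything the same, native order might read:

pljava.vmoptions = '-Dorg.postgresql.pljava.udt.byteorder=native'

(The same -D option could instead be appended to whatever pljava.vmoptions
value a site already uses.)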

Even finer-grained control is available with:
org.postgresql.pljava.udt.byteorder.scalar.p2j
org.postgresql.pljava.udt.byteorder.scalar.j2p
org.postgresql.pljava.udt.byteorder.mirror.p2j
org.postgresql.pljava.udt.byteorder.mirror.j2p

for a very special purpose, should someone wish to attempt, within a
session, an UPDATE in which a column with values written in one order
gets rewritten in another: a fiddly and somewhat unnerving process that
turns out to actually work, and now has a full new page documenting it.
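
An illustrative sketch of that scenario (not from this commit; the table,
column, and function names are invented): as the diff below shows,
SQLInputFromChunk consults the .p2j property and SQLOutputToChunk the .j2p
property, so a site whose mirror-type data was stored under the old
big-endian default could start its sessions with, for example,

pljava.vmoptions = '-Dorg.postgresql.pljava.udt.byteorder.mirror.p2j=big_endian -Dorg.postgresql.pljava.udt.byteorder.mirror.j2p=native'

and then rewrite a column in place by routing each value through Java,
perhaps with a PL/Java function that simply returns its argument:

UPDATE points_table SET pt = samepoint(pt);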
jcflack committed Mar 19, 2016
1 parent da80844 commit 460b25d
Showing 15 changed files with 921 additions and 13 deletions.
7 changes: 4 additions & 3 deletions pljava-so/src/main/c/SQLInputFromChunk.c
@@ -17,12 +17,13 @@ static jclass s_SQLInputFromChunk_class;
static jmethodID s_SQLInputFromChunk_init;
static jmethodID s_SQLInputFromChunk_close;

-jobject SQLInputFromChunk_create(void* data, size_t sz)
+jobject SQLInputFromChunk_create(void* data, size_t sz, bool isJavaBasedScalar)
{
jobject dbb;
dbb = JNI_newDirectByteBuffer(data, sz);
return
-JNI_newObject(s_SQLInputFromChunk_class, s_SQLInputFromChunk_init, dbb);
+JNI_newObject(s_SQLInputFromChunk_class, s_SQLInputFromChunk_init, dbb,
+isJavaBasedScalar ? JNI_TRUE : JNI_FALSE);
}

void SQLInputFromChunk_close(jobject stream)
@@ -37,6 +38,6 @@ void SQLInputFromChunk_initialize(void)
{
s_SQLInputFromChunk_class = JNI_newGlobalRef(PgObject_getJavaClass("org/postgresql/pljava/jdbc/SQLInputFromChunk"));
s_SQLInputFromChunk_init = PgObject_getJavaMethod(s_SQLInputFromChunk_class,
"<init>", "(Ljava/nio/ByteBuffer;)V");
"<init>", "(Ljava/nio/ByteBuffer;Z)V");
s_SQLInputFromChunk_close = PgObject_getJavaMethod(s_SQLInputFromChunk_class, "close", "()V");
}
6 changes: 3 additions & 3 deletions pljava-so/src/main/c/SQLOutputToChunk.c
@@ -16,7 +16,7 @@ static jmethodID s_SQLOutputToChunk_init;
static jmethodID s_SQLOutputToChunk_close;
static jmethodID s_Buffer_position;

-jobject SQLOutputToChunk_create(StringInfo data)
+jobject SQLOutputToChunk_create(StringInfo data, bool isJavaBasedScalar)
{
jobject dbb;
Ptr2Long p2l;
@@ -26,7 +26,7 @@ jobject SQLOutputToChunk_create(StringInfo data)
if ( 0 < data->len )
JNI_callObjectMethodLocked(dbb, s_Buffer_position, data->len);
return JNI_newObject(s_SQLOutputToChunk_class, s_SQLOutputToChunk_init,
-p2l.longVal, dbb);
+p2l.longVal, dbb, isJavaBasedScalar ? JNI_TRUE : JNI_FALSE);
}

void SQLOutputToChunk_close(jobject stream)
@@ -57,7 +57,7 @@ void SQLOutputToChunk_initialize(void)
s_SQLOutputToChunk_class = JNI_newGlobalRef(PgObject_getJavaClass("org/postgresql/pljava/jdbc/SQLOutputToChunk"));
PgObject_registerNatives2(s_SQLOutputToChunk_class, methods);
s_SQLOutputToChunk_init = PgObject_getJavaMethod(s_SQLOutputToChunk_class,
"<init>", "(JLjava/nio/ByteBuffer;)V");
"<init>", "(JLjava/nio/ByteBuffer;Z)V");
s_SQLOutputToChunk_close = PgObject_getJavaMethod(s_SQLOutputToChunk_class, "close", "()V");

Buffer_class = PgObject_getJavaClass("java/nio/Buffer");
7 changes: 5 additions & 2 deletions pljava-so/src/main/c/type/UDT.c
@@ -30,6 +30,7 @@ static jobject coerceScalarDatum(UDT self, Datum arg)
jobject result;
int32 dataLen = Type_getLength((Type)self);
jclass javaClass = Type_getJavaClass((Type)self);
bool isJavaBasedScalar = 0 != self->toString;

if(dataLen == -2)
{
@@ -73,7 +74,8 @@ static jobject coerceScalarDatum(UDT self, Datum arg)
}
result = JNI_newObject(javaClass, self->init);

-inputStream = SQLInputFromChunk_create(data, dataLen);
+inputStream = SQLInputFromChunk_create(data, dataLen,
+isJavaBasedScalar);
JNI_callVoidMethod(result, self->readSQL, inputStream, self->sqlTypeName);
SQLInputFromChunk_close(inputStream);
}
@@ -93,6 +95,7 @@ static Datum coerceScalarObject(UDT self, jobject value)
{
Datum result;
int32 dataLen = Type_getLength((Type)self);
bool isJavaBasedScalar = 0 != self->toString;
if(dataLen == -2)
{
jstring jstr = (jstring)JNI_callObjectMethod(value, self->toString);
@@ -119,7 +122,7 @@ static Datum coerceScalarObject(UDT self, jobject value)
else
enlargeStringInfo(&buffer, dataLen);

-outputStream = SQLOutputToChunk_create(&buffer);
+outputStream = SQLOutputToChunk_create(&buffer, isJavaBasedScalar);
JNI_callVoidMethod(value, self->writeSQL, outputStream);
SQLOutputToChunk_close(outputStream);

3 changes: 2 additions & 1 deletion pljava-so/src/main/include/pljava/SQLInputFromChunk.h
@@ -23,7 +23,8 @@ extern "C" {
*
***********************************************************************/

-jobject SQLInputFromChunk_create(void* data, size_t dataSize);
+jobject SQLInputFromChunk_create(void* data, size_t dataSize,
+bool isJavaBasedScalar);
void SQLInputFromChunk_close(jobject input);

#ifdef __cplusplus
2 changes: 1 addition & 1 deletion pljava-so/src/main/include/pljava/SQLOutputToChunk.h
@@ -24,7 +24,7 @@ extern "C" {
***********************************************************************/
#include <lib/stringinfo.h>

-jobject SQLOutputToChunk_create(StringInfo buffer);
+jobject SQLOutputToChunk_create(StringInfo buffer, bool isJavaBasedScalar);
void SQLOutputToChunk_close(jobject output);

#ifdef __cplusplus
@@ -146,9 +146,14 @@ else if(perm instanceof PropertyPermission)
if(perm.getActions().indexOf("write") >= 0)
{
// We never allow this to be changed.
// As for UDT byteorder, the classes that use it only check
// once so it would be misleading to allow runtime changes;
// use pljava.vmoptions to provide an initial value.
//
String propName = perm.getName();
-if(propName.equals("java.home"))
+if ( propName.equals("java.home") || propName.matches(
+"org\\.postgresql\\.pljava\\.udt\\.byteorder(?:\\..*)?")
+)
throw new SecurityException();
}
}
@@ -73,6 +73,36 @@ public static String hello(
*/
setPropertyIfNull( "sqlj.defaultconnection", "jdbc:default:connection");

/*
* Set the org.postgresql.pljava.udt.byteorder.{scalar,mirror}.{p2j,j2p}
* properties. For shorthand, defaults can be given in shorter property
* keys org.postgresql.pljava.udt.byteorder.{scalar,mirror} or even just
* org.postgresql.pljava.udt.byteorder for an overall default. These
* shorter keys are then removed from the system properties.
*/
String orderKey = "org.postgresql.pljava.udt.byteorder";
String orderAll = System.getProperty(orderKey);
String orderScalar = System.getProperty(orderKey + ".scalar");
String orderMirror = System.getProperty(orderKey + ".mirror");

if ( null == orderScalar )
orderScalar = null != orderAll ? orderAll : "big_endian";
if ( null == orderMirror )
orderMirror = null != orderAll ? orderAll : "native";

setPropertyIfNull(orderKey + ".scalar.p2j", orderScalar);
setPropertyIfNull(orderKey + ".scalar.j2p", orderScalar);

setPropertyIfNull(orderKey + ".mirror.p2j", orderMirror);
setPropertyIfNull(orderKey + ".mirror.j2p", orderMirror);

System.clearProperty(orderKey);
System.clearProperty(orderKey + ".scalar");
System.clearProperty(orderKey + ".mirror");

/*
* Construct the strings announcing the versions in use.
*/
String jreName = System.getProperty( "java.runtime.name");
String jreVer = System.getProperty( "java.runtime.version");

@@ -21,6 +21,7 @@
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.sql.Array;
@@ -59,9 +60,44 @@ public class SQLInputFromChunk implements SQLInput
/* get rid of this once no longer supporting back to Java 6 */
private static final Charset UTF8 = Charset.forName("UTF-8");

-public SQLInputFromChunk(ByteBuffer bb)
+private static ByteOrder scalarOrder;
+private static ByteOrder mirrorOrder;
+
+public SQLInputFromChunk(ByteBuffer bb, boolean isJavaBasedScalar)
+throws SQLException
{
m_bb = bb;
if ( isJavaBasedScalar )
{
if ( null == scalarOrder )
scalarOrder = getOrder(true);
m_bb.order(scalarOrder);
}
else
{
if ( null == mirrorOrder )
mirrorOrder = getOrder(false);
m_bb.order(mirrorOrder);
}
}

private ByteOrder getOrder(boolean isJavaBasedScalar) throws SQLException
{
ByteOrder result;
String key = "org.postgresql.pljava.udt.byteorder."
+ ( isJavaBasedScalar ? "scalar" : "mirror" ) + ".p2j";
String val = System.getProperty(key);
if ( "big_endian".equals(val) )
result = ByteOrder.BIG_ENDIAN;
else if ( "little_endian".equals(val) )
result = ByteOrder.LITTLE_ENDIAN;
else if ( "native".equals(val) )
result = ByteOrder.nativeOrder();
else
throw new SQLNonTransientException(
"System property " + key +
" must be big_endian, little_endian, or native", "F0000");
return result;
}

@Override
@@ -21,6 +21,7 @@
import java.math.BigDecimal;
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
@@ -69,10 +70,46 @@ public class SQLOutputToChunk implements SQLOutput
private long m_handle;
private ByteBuffer m_bb;

-public SQLOutputToChunk(long handle, ByteBuffer bb)
+private static ByteOrder scalarOrder;
+private static ByteOrder mirrorOrder;
+
+public SQLOutputToChunk(long handle, ByteBuffer bb,
+boolean isJavaBasedScalar)
+throws SQLException
{
m_handle = handle;
m_bb = bb;
if ( isJavaBasedScalar )
{
if ( null == scalarOrder )
scalarOrder = getOrder(true);
m_bb.order(scalarOrder);
}
else
{
if ( null == mirrorOrder )
mirrorOrder = getOrder(false);
m_bb.order(mirrorOrder);
}
}

private ByteOrder getOrder(boolean isJavaBasedScalar) throws SQLException
{
ByteOrder result;
String key = "org.postgresql.pljava.udt.byteorder."
+ ( isJavaBasedScalar ? "scalar" : "mirror" ) + ".j2p";
String val = System.getProperty(key);
if ( "big_endian".equals(val) )
result = ByteOrder.BIG_ENDIAN;
else if ( "little_endian".equals(val) )
result = ByteOrder.LITTLE_ENDIAN;
else if ( "native".equals(val) )
result = ByteOrder.nativeOrder();
else
throw new SQLNonTransientException(
"System property " + key +
" must be big_endian, little_endian, or native", "F0000");
return result;
}

@Override
@@ -457,7 +494,10 @@ private void ensureCapacity(int c) throws SQLException
{
if(m_handle == 0)
throw new SQLException("Stream is closed");
ByteBuffer oldbb = m_bb;
m_bb = _ensureCapacity(m_handle, m_bb, m_bb.position(), c);
if ( m_bb != oldbb )
m_bb.order(oldbb.order());
}
}

