PGtokenizer is ignoring last token if last token is null or blank #1881

Closed
chaluvadi286 opened this issue Aug 31, 2020 · 3 comments · Fixed by #1882
chaluvadi286 commented Aug 31, 2020

Describe the issue
PGtokenizer ignores the last token if it is blank or NULL. For example, for the value (1,2EC1830300027,1,,) it should return 5 tokens, but it returns only 4. Sample code:
public static void main(String[] args) throws Exception {
    String objectStr = "(1,2EC1830300027,1,,)";
    PGtokenizer token = new PGtokenizer(PGtokenizer.removePara(objectStr), ',');
    for (int i = 0; i < token.getSize(); i++) {
        System.out.println(i + " = " + token.getToken(i));
    }
}

Output:
0 = 1
1 = 2EC1830300027
2 = 1
3 =

Driver Version?
postgresql-42.2.14.jar
Java Version?
1.8
OS Version?
Windows 10
PostgreSQL Version?
12
To Reproduce
Steps to reproduce the behaviour:
public static void main(String[] args) throws Exception {
    String objectStr = "(1,2EC1830300027,1,,)";
    PGtokenizer token = new PGtokenizer(PGtokenizer.removePara(objectStr), ',');
    for (int i = 0; i < token.getSize(); i++) {
        System.out.println(i + " = " + token.getToken(i));
    }
}

Expected behaviour
0 = 1
1 = 2EC1830300027
2 = 1
3 =
4 =

chaluvadi286 changed the title from "PGtokenizer is ignoring last token it is null or blank" to "PGtokenizer is ignoring last token if last token is null or blank" on Aug 31, 2020
@davecramer (Member) commented
Thanks for the report.

davecramer self-assigned this Aug 31, 2020
@davecramer (Member) commented
How is this a problem in the code? I just tried the same code with StringTokenizer, and it only returns 3 tokens.
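
For reference, the StringTokenizer behaviour described above can be reproduced with a minimal standalone snippet (the class name is only illustrative):

import java.util.StringTokenizer;

public class TokenizerComparison {
    public static void main(String[] args) {
        // Value after PGtokenizer.removePara("(1,2EC1830300027,1,,)")
        String inner = "1,2EC1830300027,1,,";

        // StringTokenizer collapses adjacent delimiters and drops empty tokens,
        // so only the three non-empty fields are reported.
        StringTokenizer st = new StringTokenizer(inner, ",");
        System.out.println("count = " + st.countTokens()); // prints: count = 3
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken());
        }
    }
}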

@chaluvadi286 (Author) commented
To give some background: I am trying to map the statement result onto a Java object generically, so that the mapping does not need to change even if fields are added to, modified in, or removed from the underlying type object.

The sample code I am using is pasted below. Since PGtokenizer ignores the trailing blank/NULL token, I get an ArrayIndexOutOfBoundsException.

public TenorObject readSQL(PGobject pgObject) throws Exception {
    m_dcfTenor = new TenorObject();
    Field[] flds = TenorObject.getPublicFields();

    PGtokenizer token = new PGtokenizer(PGtokenizer.removePara(pgObject.getValue()), ',');

    for (int i = 0; i < flds.length; i++) {
        if (flds[i].getModifiers() != Modifier.PRIVATE) { // skip private fields
            try {
                if (flds[i].getType().getName().equals("FDate")) {
                    flds[i].set(m_dcfTenor,
                            FormatDate.string2FDate(token.getToken(i).replace("\"", ""), "yyyy-MM-dd HH:mm:ss"));
                } else if (flds[i].getType().getName().equals("java.lang.String")) {
                    flds[i].set(m_dcfTenor, token.getToken(i));
                } else if (flds[i].getType().isInstance(new BigDecimal(0.0))) {
                    if (token.getToken(i) != null && !token.getToken(i).equals("")) {
                        flds[i].set(m_dcfTenor, new BigDecimal(token.getToken(i)));
                    } else {
                        flds[i].set(m_dcfTenor, null);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    return m_dcfTenor;
}

For comparison, StringTokenizer skips empty values even in the middle of the string, whereas PGtokenizer treats those as values.
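
As an aside, one possible interim workaround for the reflection-based mapping above, assuming the composite value contains no quoted fields with embedded commas, is to split the unparenthesized string with a negative limit so that trailing empty fields are preserved. A minimal sketch (class name is illustrative):

public class SplitComparison {
    public static void main(String[] args) {
        // Value after PGtokenizer.removePara("(1,2EC1830300027,1,,)")
        String inner = "1,2EC1830300027,1,,";

        // String.split with a negative limit keeps trailing empty strings,
        // so all five fields of the composite value are represented.
        System.out.println(inner.split(",", -1).length); // 5

        // With the default limit (0), trailing empty strings are discarded.
        System.out.println(inner.split(",").length); // 3
    }
}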

davecramer added a commit to davecramer/pgjdbc that referenced this issue Sep 3, 2020
davecramer added a commit that referenced this issue Sep 10, 2020
* fix: PgTokenizer was ignoring last empty token fixes #1881
davecramer added a commit to davecramer/pgjdbc that referenced this issue Oct 1, 2020
* fix: PgTokenizer was ignoring last empty token fixes pgjdbc#1881
davecramer added a commit to davecramer/pgjdbc that referenced this issue Oct 2, 2020
* fix: PgTokenizer was ignoring last empty token fixes pgjdbc#1881
davecramer added a commit that referenced this issue Oct 6, 2020
* fix: avoid removal type annotations on "this" so the source archive is buildable

"this" type annotations are Java 8+, so we no longer need to remove them.

* fix: PgTokenizer was ignoring last empty token (#1882)

* fix: PgTokenizer was ignoring last empty token fixes #1881

* fix: handle smallserial correctly fixes #1897 (#1899)

* feat: add smallserial metadata (#899)

PostgreSQL 9.2 adds a SMALLSERIAL data type, this reports the correct metadata information when a column is a smallserial (int2 with sequence), similar to how a serial or bigserial data types are reported.

* fix:remove osgi from karaf fixes Issue #1891 (#1902)

* Change default of gssEncMode to ALLOW. PostgreSQL can deal with PREFER but there are cloud providers that did not implement the protocol properly. Using PREFER seems to cause more problems than it solves

Co-authored-by: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>
Co-authored-by: Jorge Solorzano <jorsol@gmail.com>
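
Based on the expected behaviour reported above, a minimal check against a driver build that includes the fix from #1882 might look like the following (class name is illustrative):

import org.postgresql.util.PGtokenizer;

public class PGtokenizerCheck {
    public static void main(String[] args) {
        PGtokenizer token = new PGtokenizer(
                PGtokenizer.removePara("(1,2EC1830300027,1,,)"), ',');

        // With the fix, the trailing empty field is no longer dropped.
        if (token.getSize() != 5) {
            throw new AssertionError("expected 5 tokens, got " + token.getSize());
        }
        for (int i = 0; i < token.getSize(); i++) {
            System.out.println(i + " = " + token.getToken(i));
        }
    }
}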