Added clob-to-str to sql.utils #202
Conversation
+1
Is this really specific to what is conventionally known as a "CLOB" à la DB2 and Oracle, or is this reusable with things like PostgreSQL TEXT types? Also, if it's a CLOB type, then it might be seriously huge; is an "all-in-memory-at-once" approach really a good idea? I'm also concerned by the absence of a test. Generalizing this (if there is a comparable need for other types) might make it easier to test, but I just don't know a lot about the use-case here. In general, I'm basically looking for documentation/explanation of the problem being solved.
I know that the korma.incubator project hasn't been used very much, but it seems reasonable to put it there. Just an option to keep in mind.
@MerelyAPseudonym I don't want to move my food around on my plate; I want to understand the purpose of the commit.
@bitemyapp I agree, JDBC drivers for PostgreSQL (8.x, 9.x) contain a lot of bugs. For example, I had to make some workarounds in my Java projects to prevent problems with encoded "bytea" data. So it is possible to make a universal method, but it will be a bit more complex than this one. I use the clob-to-str method in a tiny project with H2. It helps to read big text descriptions from the DB. Unfortunately I can't use the VARCHAR type (the field value may be a little bit longer), so I thought that clob-to-str could be useful for others. Maybe I'm wrong.
"I use clob-to-str method in a tiny project with H2. It helps to read big text descriptions from DB." This is the problem I'm struggling with here: why would you ever universally read self-described "big" text descriptions from the DB into memory? That's a way to quickly eat up all your memory and hammer your poor server. I realize you might have relatively simple use-cases, but I don't feel comfortable endorsing the loading of arbitrarily large data that is intended for streaming in JDBC in a library like Korma, however flawed Korma may already be. Is there a safer approach/alternative that would make more sense as a utility-fn in Korma?
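For reference, the streaming-friendly alternative hinted at here can be sketched at the JDBC layer that Korma sits on. This is a minimal sketch in plain Java, using the JDK's `javax.sql.rowset.serial.SerialClob` as a stand-in for a driver-supplied `Clob`; the class and method names (`ClobStreaming`, `countChars`) are mine, not from the thread.

```java
import java.io.Reader;
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobStreaming {
    // Process a CLOB in fixed-size chunks via its character stream,
    // so the full value is never materialized in memory at once.
    static long countChars(Clob clob) throws Exception {
        long total = 0;
        char[] buf = new char[8192];
        try (Reader reader = clob.getCharacterStream()) {
            int n;
            while ((n = reader.read(buf)) != -1) {
                total += n; // a real consumer would hand each chunk downstream
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the Clob a driver would hand back from a result set.
        Clob clob = new SerialClob("hello clob".toCharArray());
        System.out.println(countChars(clob)); // 10
    }
}
```

The design point is that `Clob.getCharacterStream()` gives a `Reader`, so a per-chunk callback (or a bounded read) can be exposed instead of a whole-value `String`.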
I'll try to explain my use-case:
Of course, I can keep this functionality only in my own project (if it is not useful for others).
@vbauer my problem here is two-fold
There's no leverage or substantial saved effort being offered here beyond enabling somebody to avoid managing their own utility functions. I'm still open to explanations of why this is valuable, but I'm not seeing it at present. I would be interested in more convenient, streaming-based interfaces to these data types, enabled optionally in the entity spec, if somebody has an idea as to what they would look like.
Here's how I arrived at suggesting the inclusion of clob-to-str. A bit of background: I come from years of programming against the LAMP stack and Ruby on Rails. I am relatively new to Clojure and the Java libraries that underpin it. While working on a project using the Clojure Luminus framework backed onto H2, I needed a way to store more than 255 characters in a database field. Easily enough, I found the H2 equivalent of a MySQL TEXT column: CLOB.

Writing rows to the database was straightforward, but I encountered difficulties reading the data back out of the CLOB column. I'd expected the result of my query to give me the string that I'd saved; instead I was given a JdbcClob. From there, it was an arduous journey looking up the JdbcClob API. I might even add that if it weren't for encountering the "The object is already closed" problem, I might not have found @vbauer's clob-to-str at all.

I share @bitemyapp's concerns about endorsing the loading of arbitrarily large data that is intended for streaming. I wholeheartedly agree that at some point, one should be expected to achieve a basic understanding of JDBC. But at the same time, for someone starting out in Clojure, setting up and maintaining one's own utility functions just to read a slightly longer string from the database might be a bit overwhelming. So instead of:
I hope it could be more like this:
Hope this helps.
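To make the "slightly longer string" case above concrete, here is a minimal sketch of what a clob-to-str-style helper does underneath, in plain Java against the JDBC `Clob` interface. `SerialClob` from the JDK stands in for the `JdbcClob` a driver would return; the names `ClobToStr` and `clobToStr` are mine, not the library's.

```java
import java.io.Reader;
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobToStr {
    // Read an entire CLOB into a String. Fine for modestly sized values
    // like long text descriptions; dangerous for genuinely huge ones,
    // since the whole value ends up in memory at once.
    static String clobToStr(Clob clob) throws Exception {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[4096];
        try (Reader r = clob.getCharacterStream()) {
            int n;
            while ((n = r.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a driver-supplied Clob read from a result set.
        Clob clob = new SerialClob("a big text description".toCharArray());
        System.out.println(clobToStr(clob));
    }
}
```

Reading while the connection (and thus the Clob) is still open is what avoids the "The object is already closed" error mentioned above: the conversion has to happen before the result set is discarded.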
Responding to an earlier question from @bitemyapp:
I've just discovered that Korma returns a JdbcClob for these columns. I imagine it'll be a similar thing with PostgreSQL TEXT?
No, it's not similar with PostgreSQL TEXT. It works fine in our setup (9.2 db server version, postgresql "9.1-901.jdbc4" driver, org.clojure/java.jdbc "0.3.0-alpha5" and korma "0.3.0-RC6") without any tweaking, which is very nice. And I think the default should be as described in this issue. Yes, there might be huge CLOBs and huge result sets, but the cases where one does not want to read the data as described here are the exception. If the use case is putting gigabytes of data into a relational database text field and streaming it, then a relational database is not the right tool for persistent storage. It might happen occasionally, but it's not the default case in my opinion; and if it happens, the programmer will have so much headache anyway that Korma's stance on the matter is a minor issue.

However, Korma's preference is a major issue when you run up against VARCHAR2 limits. With Oracle this doesn't take very much: 4001 characters is already too much. There are many reasonable applications where you need more than 4000 characters but will never be storing gigabytes. It's not a "string" only because Oracle is Oracle, but in reality it is very seldom a "large binary object".
@lokori What I meant to say is that PostgreSQL
Ah, I misunderstood. Nevertheless we seem to agree that treating clobs as strings should be the default behaviour from the application programmer's point of view :)
This would be useful when dealing with JdbcClob fields.
Example:
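The original example was not captured here, so what follows is a minimal sketch of the situation being described, in plain Java against the JDBC `Clob` interface. `SerialClob` from the JDK stands in for the driver's `JdbcClob`; the one-shot `getSubString` call is an alternative to looping over the character stream.

```java
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class JdbcClobExample {
    public static void main(String[] args) throws Exception {
        // SerialClob stands in for the JdbcClob a driver such as H2 returns
        // for a CLOB column in a result set.
        Clob clob = new SerialClob("more than 255 characters of text".toCharArray());

        // getSubString uses 1-based positions; this pulls the whole
        // value back out as an ordinary String.
        String s = clob.getSubString(1, (int) clob.length());
        System.out.println(s);
    }
}
```

As in the rest of the thread, this conversion must happen while the Clob is still valid (i.e. before its result set/connection is closed), otherwise the driver raises the "already closed" error discussed above.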