Oracle support #244

Open
jqnatividad opened this Issue Jun 3, 2015 · 4 comments

jqnatividad commented Jun 3, 2015

Could pgloader use a similar approach to MySQL and MSSQL, i.e. a connection string?

A lot of enterprise systems still run on Oracle and this would help migrations to PostgreSQL.
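
For illustration, pgloader's existing MySQL support is driven by a connection string in a load command; an Oracle source would presumably mirror that form. A minimal sketch (only mysql:// is supported today; the oracle:// URI is hypothetical):

    LOAD DATABASE
         FROM mysql://user:pass@host/source_db
         INTO postgresql://user:pass@host/target_db;

    -- A hypothetical Oracle equivalent might read:
    -- LOAD DATABASE
    --      FROM oracle://user:pass@host:1521/source_db
    --      INTO postgresql://user:pass@host/target_db;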

dimitri (Owner) commented Jun 4, 2015

It's definitely on the TODO list, but it's not something I am willing to work on in my free time. Care enough to sponsor the work?

dimitri added the WishList label Jun 5, 2015

cdhouch commented Jan 17, 2016

We're migrating Oracle to PostgreSQL right now. I'm using ora2pg to dump out the Oracle databases per table, then using a shell script to convert the .sql files with inline COPY data into .csv files that pgloader can load into Postgres. Some of our tables are 7+ billion rows and would not load as regular .sql files; pgloader makes short work of them, though. If pgloader could read the data directly out of the .sql files, it would cut out a step that is really just rearranging the data to shoehorn it into a format pgloader will accept. In any case, if someone wants me to write up the process we're using currently, I could do that.
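
For reference, loading one of the converted per-table .csv files might use a pgloader command file along these lines. A minimal sketch; the path, table name, columns, and CSV options are assumptions, not the actual script described above:

    LOAD CSV
         FROM '/data/ora2pg/orders.csv'
              HAVING FIELDS (order_id, customer_id, amount, created_at)
         INTO postgresql://user@localhost/target?orders
              TARGET COLUMNS (order_id, customer_id, amount, created_at)
         WITH truncate,
              fields optionally enclosed by '"',
              fields terminated by ',';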

dimitri (Owner) commented Jan 17, 2016

I think it would be even better if pgloader could connect to Oracle directly, like it does for MySQL or MS SQL. What is needed is:

  • an Oracle driver for Common Lisp, like https://github.com/archimag/cl-oracle,
  • the introspection queries to get the list of table structures, indexes, and foreign keys, plus other objects if we add support for more,
  • the casting rules when switching from Oracle to PostgreSQL (number to numeric and such; see the sketch after this list).
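
For comparison, here is the shape of the CAST rules pgloader's MySQL source uses, which an Oracle source would presumably need an equivalent of. The MySQL form below is real pgloader syntax; the Oracle rules in the trailing comment are hypothetical:

    LOAD DATABASE
         FROM mysql://user@localhost/source
         INTO postgresql://user@localhost/target

    CAST type datetime to timestamptz drop default drop not null,
         type tinyint to boolean using tinyint-to-boolean;

    -- Hypothetical Oracle rules might look like:
    --   CAST type number to numeric,
    --        type varchar2 to text;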

Also, I would need a test database available for me to develop against, and some sponsoring for the time spent working on the feature.

It sounds way better to have pgloader connect to Oracle directly rather than tweak other tools' output so that we can use an existing supported format here... anyway, thanks for the details of your use case, I'm quite happy to see pgloader used on 7+ billion row files ;-)

Also, pgloader supports COPY files already, so if you just cut the inline COPY parts of the .sql files into per-table files, you should be good to go.
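
For instance, once the inline COPY data is cut into a per-table file, a command file along these lines should load it. A sketch; the path and table name are made up, but LOAD COPY is a format pgloader supports today:

    LOAD COPY
         FROM '/data/ora2pg/orders.copy'
         INTO postgresql://user@localhost/target?orders
         WITH truncate;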

dimitri (Owner) commented Mar 27, 2016

In case anyone is interested in direct Oracle support and wants to sponsor it: http://pgloader.io/pgloader-moral-license.html
