Slow performance on selects with joins #171
Comments
When you join several foreign tables, the tables are pulled into PostgreSQL and the join is performed there. You can define a foreign table on a join by using a query in parentheses for the `table` option.
Then the join is performed on the Oracle side, and the query might be more efficient.
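A minimal sketch of that approach, assuming hypothetical Oracle tables `emp` and `dept` and a foreign server named `oradb` (none of these names are from this thread):

```sql
-- Hypothetical example: push a two-table join to Oracle by passing a
-- subquery in parentheses as the "table" option of oracle_fdw.
CREATE FOREIGN TABLE emp_dept (
    empno  integer,
    ename  varchar(10),
    dname  varchar(14)
) SERVER oradb OPTIONS (
    table '(SELECT e.empno, e.ename, d.dname
            FROM emp e JOIN dept d ON e.deptno = d.deptno)'
);
```

Because the join is inside the Oracle-side subquery, PostgreSQL only fetches the already-joined rows instead of pulling both tables and joining them locally.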
Nope. I tried it just now, and the second query runs in 15 seconds. Good!
Hi laurenz, how can I join tables using the options you indicate in the comment I referred to in this answer?
@fernandorb10 you can also try to disable hash and merge joins; sometimes I have found that this can "help" the PG planner to send the full query to the remote server. Not in this case with oracle_fdw, but for example:
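A hedged sketch of that idea using standard PostgreSQL planner settings (the table names below are assumptions, not from this thread):

```sql
-- Disable hash and merge joins for the session so the planner prefers
-- a nested loop, which some FDWs can push down to the remote server.
SET enable_hashjoin = off;
SET enable_mergejoin = off;

-- Inspect the plan to see whether the join is now executed remotely.
EXPLAIN
SELECT *
FROM remote_a a
JOIN remote_b b ON a.id = b.a_id;
```

These settings only affect the current session and can be reverted with `RESET enable_hashjoin; RESET enable_mergejoin;`.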
At least one JOIN has been pushed down :-D I have found that with some FDWs the entire statement is executed remotely. Of course, if the original plan doesn't use a hash or merge join, this is useless. Note that it doesn't mean the query will be faster, but you could try. Otherwise the only viable solution for you is what @laurenz mentioned: create a foreign table that calls your query, for example:
then use that foreign table in your query instead of the join.
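A minimal sketch of that create-then-use pattern, assuming hypothetical Oracle tables `orders` and `customers` and a foreign server named `oradb` (all names are illustrative, not from this thread):

```sql
-- Define a foreign table whose "table" option is an Oracle subquery,
-- so the join runs entirely on the Oracle side (oracle_fdw syntax).
CREATE FOREIGN TABLE order_customer (
    order_id  integer,
    cust_name varchar(100),
    amount    numeric
) SERVER oradb OPTIONS (
    table '(SELECT o.order_id, c.cust_name, o.amount
            FROM orders o JOIN customers c ON o.cust_id = c.cust_id)'
);

-- Query the pre-joined foreign table instead of joining two
-- foreign tables locally in PostgreSQL:
SELECT *
FROM order_customer
WHERE amount > 1000;
```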
@fernandorb10, please start a new issue for your question. You provide too little information for a meaningful answer. What is unclear about my comment? If you need it spelled out for your particular use case, please show the table definitions and your query.
Hi, I hope you can give me some advice on my problem.
I use oracle_fdw 1.5.0, PostgreSQL 9.6.3, and Oracle 11.2.0.4.0 (client and server).
I built PostgreSQL and oracle_fdw from source on SUSE Linux 13.
For running queries I use pgAdmin III 1.22.1.
I run a simple query:
It works well: around 12 seconds for about 70,000 records.
I add another table:
This one runs very, very slowly: it has been going for over 10 minutes now and is still running...
The result is also only about 75,000 records.