
suggest to add pandas.to_sql() #74

@josecw

Description

Is it possible to have functionality similar to glueContext.write_dynamic_frame.from_jdbc_conf(), as below?
```python
datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=datasource0,
    catalog_connection="test_red",
    connection_options={
        "preactions": "truncate table target_table;",
        "dbtable": "target_table",
        "database": "redshiftdb",
    },
    redshift_tmp_dir="s3://s3path",
    transformation_ctx="datasink4",
)
```

Currently the way we do it is:

  1. Get the SQL from an S3 file and pass it into pandas.read_sql_athena()
  2. Use SQLAlchemy to execute the preactions SQL (in our case, a delete before load)
  3. Use SQLAlchemy and pandas.to_sql() to append the dataframe into the Aurora table
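For reference, the last two steps can be sketched with plain pandas and SQLAlchemy. This is a minimal stand-in, not our production code: SQLite replaces the Aurora connection, a hand-built DataFrame replaces the pandas.read_sql_athena() result, and all table/column names are made up.

```python
# Minimal sketch of the workaround above. SQLite stands in for Aurora,
# and an in-memory DataFrame stands in for the Athena query result.
import pandas as pd
from sqlalchemy import create_engine, text

# Step 1 (simulated): in the real flow the SQL text comes from an S3 file
# and is passed to pandas.read_sql_athena(); here we just build a DataFrame.
df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})

engine = create_engine("sqlite:///:memory:")  # placeholder for the Aurora connection URL
df.head(0).to_sql("target_table", engine, index=False)  # create an empty target table

# Step 2: preactions via SQLAlchemy -- delete before load.
with engine.begin() as conn:
    conn.execute(text("DELETE FROM target_table"))

# Step 3: append the DataFrame into the target table.
df.to_sql("target_table", engine, if_exists="append", index=False)

loaded = pd.read_sql("SELECT * FROM target_table", engine)
```

A built-in equivalent would collapse steps 2 and 3 into one call that accepts a preactions statement alongside the target table, much like from_jdbc_conf() does.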

Metadata

Labels

bug (Something isn't working), enhancement (New feature or request)
