
Migrate Directly from Hive to AWS Glue | No tables created #6

Closed
addictedanshul opened this issue Nov 14, 2017 · 5 comments

@addictedanshul

hive_databases_S3.txt
hive_tables_S3.txt

I am trying to migrate directly from Hive to AWS Glue.
I created a Glue job with the Hive connection, tested the connection (it connected successfully), and followed all the steps without any failures.
But in the end I can't see any tables in the AWS Glue Data Catalog.
There are no error logs for the job, and the normal logs report the run status as succeeded.

I also tried "Migrate from Hive to AWS Glue using Amazon S3 Objects".
That too succeeded, but no tables were created in the Glue catalog.
I can see the metastore exported from Hive in the S3 buckets (files attached).

I am now thinking of running this code from my local Eclipse to debug it.
Could you please tell me how to debug it from local Eclipse or in the Glue console?

@dichenli

Thank you for using AWS Glue.

If the tables are missing, did the script create any database in your Glue Data Catalog? You can find the list of databases on the Glue console.

Another way to check whether the tables are showing up is to use the AWS CLI to call the Glue service directly. For example, you could try the following shell commands:

$ aws glue get-database --name myHiveMetastore_default
$ aws glue get-table --database-name myHiveMetastore_default --name myHiveMetastore_customers

For instructions on installing or updating the AWS CLI, see http://docs.aws.amazon.com/cli/latest/userguide/installing.html

The script hive_metastore_migration.py is completely open-source Spark code, so you can test and debug it directly using Spark local mode. The script import_into_datacatalog.py, however, imports a Glue-specific library, so it has to run as a Glue job. You can still modify that script to add print statements and inspect the job output.
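As a minimal sketch of that print-statement approach (the helper name below is hypothetical, not part of the script), one could drop something like this into import_into_datacatalog.py and call it on each DataFrame, so the Glue job log shows how much data reached each stage:

```python
def debug_count(label, df):
    """Print the row count of a Spark DataFrame to the job's output log."""
    try:
        print("[debug] %s: %d rows" % (label, df.count()))
    except Exception as exc:  # don't let a debug print kill the job
        print("[debug] %s: count failed: %s" % (label, exc))
```

For example, calling `debug_count("databases", databases_df)` right after the databases DataFrame is loaded would show whether the job actually read any records before trying to import them.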

If you have further questions, you can also use the AWS Glue Forum: https://forums.aws.amazon.com/forum.jspa?forumID=262.

@addictedanshul (author)

Thanks, dichenli, for the prompt response.
Firstly, yes, the database is also missing. In my previous post I attached the database and table files generated by the second approach (exporting the metastore to S3).
Neither of these is visible in the Glue catalog after running the direct Hive-to-Glue migration.

I ran hive_metastore_migration.py and it works fine: it created the files in S3 using the second approach (attached in my previous post).
It is import_into_datacatalog.py that is not working, and the problem is that there is no error: the job succeeds with no error logs, but the tables and databases don't appear. I have not edited a single character in the script.
I will now try adding exhaustive print statements.

@dichenli

dichenli commented Nov 15, 2017

I tried running the migration job from S3 to Glue with the databases and tables files you provided. It succeeded, and the databases and tables show up correctly in my Glue console. Could you check whether your job configuration matches mine?

Script Path: s3://someBucket/import_into_datacatalog.py
Temporary directory: s3://someBucket
Required connections: empty (for migration from S3)
Python library path: s3://someBucket/hive_metastore_migration.py
Job parameters:
-m: from-s3
-D: s3://someBucket/output_from_previous_job/databases
-T: s3://someBucket/output_from_previous_job/tables
-P: s3://someBucket/output_from_previous_job/partitions

Note that -m, -D, -T, and -P are short names for --mode, --database-input-path, --table-input-path, and --partition-input-path; you can use either form.

I manually created the s3://someBucket/output_from_previous_job/databases, tables, and partitions folders, and uploaded the databases.txt and tables.txt files to their respective folders. The partitions folder is empty.
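To make that layout explicit, here is a small sketch (the helper function is illustrative, not part of the repository; the bucket and prefix are the placeholder names from the configuration above) that builds the three input paths the from-s3 mode expects:

```python
def from_s3_input_paths(bucket, prefix):
    """Build the -D/-T/-P input paths for the from-s3 migration mode.

    Each path must point to an existing S3 folder; the partitions
    folder may be empty, but it still has to exist.
    """
    return {
        "--database-input-path": "s3://%s/%s/databases" % (bucket, prefix),
        "--table-input-path": "s3://%s/%s/tables" % (bucket, prefix),
        "--partition-input-path": "s3://%s/%s/partitions" % (bucket, prefix),
    }

paths = from_s3_input_paths("someBucket", "output_from_previous_job")
```

Passing these three values (plus `--mode from-s3`) as job parameters reproduces the configuration shown above.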

@dichenli

Hi Anshul,
I'll close this issue. If you are still blocked on the same problem, please feel free to reopen it. If you have any new questions, you may open a new issue or post on the AWS Glue forum: https://forums.aws.amazon.com/forum.jspa?forumID=262. Thank you!

@kjudahlookout

Hello Anshul and Dichen,
I am having the same issue when I use the scripts to migrate directly from the Hive metastore to the Glue Data Catalog. The job runs successfully in Glue, but I don't see anything migrated to the Data Catalog. Were you able to resolve this issue? Any help would be much appreciated.

Thanks,
Kshitij
