Migrate Directly from Hive to AWS Glue | No tables created #6
Comments
Thank you for using AWS Glue. If the tables are missing, did the script create any database in your Glue Data Catalog? You can find the list of databases on the Glue console. Another way to check whether the tables are showing up is to use the AWS CLI to call the Glue service directly. For example, you could try the following shell command:

$ aws glue get-database --name myHiveMetastore_default

For instructions to install or update the AWS CLI, see http://docs.aws.amazon.com/cli/latest/userguide/installing.html

The script hive_metastore_migration.py is completely open-source Spark code, so you can test and debug it directly using Spark local mode. The script import_into_datacatalog.py, however, includes a Glue-specific library, so you need to run it from a Glue job. You can still modify that script to add print statements and inspect the job output.

If you have further questions, you can also use the AWS Glue Forum: https://forums.aws.amazon.com/forum.jspa?forumID=262.
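The CLI check above can be extended to list everything the migration created. This is a sketch; the myHiveMetastore_ prefix follows the example command above, and your catalog may use a different prefix. These calls require configured AWS credentials:

```shell
# List every database currently in the Glue Data Catalog:
aws glue get-databases

# Inspect one migrated database (prefix + original Hive database name,
# matching the example above):
aws glue get-database --name myHiveMetastore_default

# List the tables the migration created inside that database:
aws glue get-tables --database-name myHiveMetastore_default
```

If `get-databases` returns an empty list, the job finished without writing anything to the catalog, which narrows the problem to the import step rather than the export.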
Thanks dichenli for the prompt response. I ran hive_metastore_migration.py and it is working fine, because it created files in S3 using the second approach (attached in the previous post).
I tried to run the migration job from S3 to Glue with the databases and tables files you provided. It succeeded, and the databases and tables correctly show up in my Glue console. Could you check whether your job configuration is the same as mine?

Script Path: s3://someBucket/import_into_datacatalog.py

Note that -m, -D, -T and -P are short names for --mode, --database-input-path, --table-input-path and --partition-input-path; you can use either form.

I manually created the s3://someBucket/output_from_previous_job/databases, tables and partitions folders, and uploaded the databases.txt and tables.txt files to the respective folders. The partitions folder is empty.
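The short/long flag equivalence described above can be sketched with Python's argparse. This is illustrative only: the option names and S3 paths come from the comment above, while the `from-s3` mode value, help strings, and `required` settings are assumptions, and the real import_into_datacatalog.py argument handling may differ:

```python
import argparse

# Sketch of how a script could accept both the short (-m, -D, -T, -P)
# and long (--mode, ...) spellings mentioned in the comment above.
parser = argparse.ArgumentParser(
    description="Import a Hive metastore dump into the Glue Data Catalog")
parser.add_argument("-m", "--mode", required=True,
                    help="migration mode (value assumed here)")
parser.add_argument("-D", "--database-input-path", required=True,
                    help="S3 path of the exported databases")
parser.add_argument("-T", "--table-input-path", required=True,
                    help="S3 path of the exported tables")
parser.add_argument("-P", "--partition-input-path", required=True,
                    help="S3 path of the exported partitions")

# Short and long spellings parse to the same destination attribute:
args = parser.parse_args([
    "-m", "from-s3",
    "--database-input-path", "s3://someBucket/output_from_previous_job/databases",
    "-T", "s3://someBucket/output_from_previous_job/tables",
    "-P", "s3://someBucket/output_from_previous_job/partitions",
])
print(args.mode)                  # from-s3
print(args.database_input_path)  # the --database-input-path value
```

Because argparse maps `--database-input-path` and `-D` to the same `database_input_path` attribute, mixing the two spellings in a job configuration is harmless.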
Hi Anshul,
Hello Anshul and Dichen, Thanks,
hive_databases_S3.txt
hive_tables_S3.txt
I am trying to migrate directly from Hive to AWS Glue.
I created a proper Glue job with a Hive connection.
I tested the connection, and it connected successfully.
Basically, I followed all the steps and everything succeeded.
But in the end I can't see any tables in the AWS Glue catalog.
There are no error logs for the job, and the normal logs show the run status as succeeded.
I even tried "Migrate from Hive to AWS Glue using Amazon S3 Objects".
That too was successful, but no tables were created in the Glue catalog.
I could find the metastore exported from Hive in the S3 buckets (files attached).
Now I am thinking of running this code from my local Eclipse to debug it.
Can you please tell me how to debug from local Eclipse or in the Glue console?