
VSCode disconnects after credentials refresh. #59

Open
harish-kamath opened this issue Apr 21, 2024 · 6 comments

@harish-kamath

Thank you for the great library! It works fine when I SSH in directly; no issues there. However, when I connect with VS Code, it works fine for the most part, until I see this log line:

[sagemaker-ssh-helper][sm-setup-ssh][start-ssh] 2024-04-21 07:28:33 INFO [CredentialRefresher] Next credential rotation will be in 29.997197441433332 minutes

Then the machine fails within a minute or two of this message, and SageMaker reports the error InternalServerError: We encountered an internal error. Please try again.

Oddly, this happens regardless of the instance type, amount of memory, etc. It also doesn't always happen the first time that message appears, so I'm not sure whether it's exactly that issue or something else. Either way, the machine fails with an internal server error only when I connect with VS Code, after some amount of time connected.

@ivan-khvostishkov
Contributor

Hi, @harish-kamath, very nice that you like this library! I have a few questions:
1/ When you say InternalServerError: We encountered an internal error. Please try again - in which log file exactly do you see this message? Is it in CloudWatch?
2/ Do you know which process generates this message, e.g., is there any prefix in front of this line?
3/ Which SageMaker component are you connecting to, e.g., SageMaker Training, Studio, or Inference?

@harish-kamath
Author

@ivan-khvostishkov

Upon further digging, I'm not sure it is actually (only) the credentials.

I noticed that the SSM agent first pulls AWS credentials from environment variables, so I tried including an explicit AWS access key and secret key in my training job. The logs still show the credentials being refreshed, but the machine no longer crashes ~1 minute after a refresh. However, it now just crashes at other times (even when nothing is running, so there's no chance it's a resource issue).
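For reference, the workaround described above could be sketched roughly like this. The helper name is illustrative, and it is an assumption (not confirmed behavior) that passing static credentials this way sidesteps the SSM credential rotation; the SageMaker Python SDK does accept an `environment` dict on its estimators.

```python
import os

# Illustrative helper: collect explicit, static AWS credentials from the
# local environment so they can be forwarded to the training job via the
# estimator's `environment` parameter.
def build_static_credentials_env(env=None):
    env = os.environ if env is None else env
    keys = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")
    # Only include keys that are actually set locally.
    return {k: env[k] for k in keys if k in env}

# Would then be passed as, e.g.:
#   PyTorch(..., environment=build_static_credentials_env())
```

Note that baking long-lived credentials into a training job is generally discouraged from a security standpoint; this is only a debugging aid.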

1. This error message is on the SageMaker console, under Training Jobs. Here's an example: (screenshot)

And here is the last CloudWatch log: (screenshot)

(Note that in this case I did not run sm-wait stop, but that doesn't actually matter for this error; it occurs either way.)

2. There's no prefix or process, unfortunately. Since it just crashes the machine and there's no persistent storage, I'm not sure how I can debug after a crash either.

3. SageMaker Training Jobs.

@harish-kamath
Author

On the bright side, it no longer always crashes after 30 minutes of being connected. However, it is still crashing within an hour.

@harish-kamath
Author

Never mind, just got another crash in <30 minutes.

I'm pretty sure it is still this package, because connecting over plain SSH is still fine and never causes a crash.

@ivan-khvostishkov
Contributor

Hi, @harish-kamath, apologies for the delay; this indeed seems very strange. I will have to investigate further, since it has never crashed like that in short-running tests. In the meantime, could you raise a support case from the AWS Console? Please add a link to this issue and mention my name:

https://docs.aws.amazon.com/awssupport/latest/user/case-management.html

@ivan-khvostishkov
Contributor

@harish-kamath I am using the following manual test to try to reproduce the issue. Without a connection from VS Code, the job successfully stops after 3 hours without any "Internal Server Error". I have a few more asks and questions:

1/ What instance types did you try?

2/ Could you please run the test on ml.g4dn.xlarge:

```python
import logging
import os

import sagemaker
from sagemaker.pytorch import PyTorch
from sagemaker_ssh_helper.wrapper import SSHEstimatorWrapper


def test_train_placeholder_manual():
    bucket = sagemaker.Session().default_bucket()
    checkpoints_prefix = f"s3://{bucket}/checkpoints/"
    estimator = PyTorch(
        entry_point=os.path.basename('source_dir/training_placeholder/train_placeholder.py'),
        source_dir='source_dir/training_placeholder/',
        dependencies=[SSHEstimatorWrapper.dependency_dir()],  # <--NEW--
        # (alternatively, add sagemaker_ssh_helper into requirements.txt
        # inside the source dir)
        base_job_name='ssh-training-manual',
        framework_version='1.9.1',
        py_version='py38',
        instance_count=1,
        instance_type='ml.g4dn.xlarge',
        max_run=60 * 60 * 3,
        keep_alive_period_in_seconds=1800,
        container_log_level=logging.INFO,
        checkpoint_s3_uri=checkpoints_prefix,
    )
    ssh_wrapper = SSHEstimatorWrapper.create(estimator, connection_wait_time_seconds=0)  # <--NEW--
    estimator.fit(wait=False)
    ssh_wrapper.print_ssh_info()
    ssh_wrapper.wait_training_job()
```

| Status | Start time | End time | Description |
| --- | --- | --- | --- |
| Starting | 5/11/2024, 10:41:50 AM | 5/11/2024, 10:42:32 AM | Preparing the instances for training |
| Downloading | 5/11/2024, 10:42:32 AM | 5/11/2024, 10:46:08 AM | Downloading the training image |
| Training | 5/11/2024, 10:46:08 AM | 5/11/2024, 1:45:42 PM | Training image download completed. Training in progress. |
| Stopping | 5/11/2024, 1:45:42 PM | 5/11/2024, 1:45:43 PM | Stopping the training job |
| Uploading | 5/11/2024, 1:45:43 PM | 5/11/2024, 1:45:55 PM | Uploading generated training model |
| MaxRuntimeExceeded | 5/11/2024, 1:45:55 PM | 5/11/2024, 1:45:55 PM | Resource released due to keep alive period expiry |

3/ Don't connect with VS Code and wait until the credentials refresh automatically once or twice. The job should not crash. Then try to connect with VS Code.

So far it seems to me that the issue has nothing to do with the credential refresh, because credentials are refreshed automatically all the time and this is expected.

4/ How likely is it that VS Code runs some heavy process inside the container and the instance is running out of RAM?

Could you please check the utilization on the job page in the AWS Console? A successful run looks like this:

(screenshot of the utilization graphs for a successful run)

Make a note of the exact time the credentials were refreshed and the time you connected with VS Code.

I hope the above steps help you localize and isolate the issue down to some process that VS Code starts inside the container.
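The same utilization check could also be done programmatically. A minimal sketch, assuming SageMaker training metrics are published in the "/aws/sagemaker/TrainingJobs" CloudWatch namespace with a "Host" dimension of the form "<job-name>/algo-1" (verify these names against your account); the function only builds the request parameters, which would then be passed to boto3:

```python
from datetime import datetime, timedelta, timezone

# Build a CloudWatch get_metric_statistics request for the per-host
# memory utilization of a training job over the last few hours.
def memory_utilization_query(job_name, hours_back=3):
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    return {
        "Namespace": "/aws/sagemaker/TrainingJobs",
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "Host", "Value": f"{job_name}/algo-1"}],
        "StartTime": start,
        "EndTime": end,
        "Period": 60,            # one datapoint per minute
        "Statistics": ["Maximum"],
    }

# Usage (requires boto3 and AWS credentials):
#   cloudwatch = boto3.client("cloudwatch")
#   stats = cloudwatch.get_metric_statistics(**memory_utilization_query("ssh-training-manual-..."))
```

Correlating the Maximum datapoints with the CredentialRefresher timestamps and the VS Code connection time would show whether the crashes line up with a memory spike.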
