Handling transient errors when backing up to Azure URL #50
There is currently no retry mechanism in DatabaseBackup, but it is a very good idea. I would like to check some things. Could you send me the output file from a failed backup job? |
I'm not sure what you mean by an output file. If you mean the file specified in @output_file_name for the job step, that gets overwritten with every execution. I can update the job to append if that would be helpful. I could then send you the relevant section when the next failure occurs.
|
Unfortunately I don't see any good way to implement this. If I was going to do this, it should only be for certain errors, where the retry makes sense. The backup command returns two errors. The first error is the interesting one, and the second error is a generic backup error. The problem is that you can only get the second error in T-SQL. |
Yes, this is the output file I mean. Please set it to append or use the SQL Server Agent tokens for date and time.
|
> If I was going to do this, it should only be for certain errors, where the retry makes sense.

That makes sense.

> The backup command returns two errors. The first error is the interesting one, and the second error is a generic backup error. The problem is that you can only get the second error in T-SQL.

That's sad. I've run into similar issues in other areas before, where T-SQL hides the useful error information.

I checked the CommandLog, and the backup shows an ErrorNumber of 3013, but a NULL ErrorMessage.
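For anyone checking their own instance for this symptom, a query along these lines will surface it. This is a sketch that assumes logging is enabled (@LogToTable = 'Y') so that Ola's dbo.CommandLog table is populated; column names are from the standard CommandLog definition.

```sql
-- Sketch: list recent backup commands that failed, per dbo.CommandLog.
-- ErrorMessage is often NULL for 3013; the real cause is only in the
-- job step's output file, as discussed in this thread.
SELECT ID,
       DatabaseName,
       CommandType,
       StartTime,
       EndTime,
       ErrorNumber,
       ErrorMessage
FROM   dbo.CommandLog
WHERE  CommandType IN ('BACKUP_DATABASE', 'BACKUP_LOG')
       AND ErrorNumber <> 0
ORDER BY StartTime DESC;
```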
|
I set the jobs (one FULL, one LOG) to append to their output files. I'll let you know the next time I get an error. They are random: sometimes on consecutive days, sometimes not for weeks.
> Yes, this is the output file I mean. Please set it to append or use the SQL Server Agent tokens for date and time.
>
> $(ESCAPE_SQUOTE(SQLLOGDIR))\DatabaseBackup_FULL_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM)).txt
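To apply that token-based file name to an existing job step, something like the following should work. The job name is a placeholder for whatever your backup job is called; sp_update_jobstep and the @output_file_name parameter are standard msdb procedures.

```sql
-- Sketch: point an existing backup job step at a per-run output file using
-- SQL Server Agent tokens, so each execution writes its own log instead of
-- overwriting the previous one. Job name below is a placeholder.
EXEC msdb.dbo.sp_update_jobstep
     @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
     @step_id = 1,
     @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\DatabaseBackup_FULL_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM)).txt';
```

Alternatively, keeping a fixed file name and setting the step's append flag (@flags = 2) achieves the same goal of preserving earlier runs.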
|
Yes, this is the same issue. I am using the error variable, but that only gives me the last error for the logging, and no error message. However, I will still get both errors in the output file. Here is an article about this issue: |
Just to chime in here, we are experiencing the same issue and are in the process of dealing with MS support. If any error files are needed (or anything else) I will be more than willing to accommodate. |
We are looking at this issue now. Will update once we have a solution. |
I have confirmation from a Microsoft Support Engineer that the next CU for 2014 will be shipping with some retry logic for the BackupToURL functionality. |
We currently experience the same random errors you are seeing, and we are backing up exclusively to blob storage from an Azure D15 VM running Microsoft SQL Server 2016 (SP2-CU2) 13.0.5153.0 with 800-plus databases. Retry logic built into the BackupToURL functionality would be extremely beneficial, and I would welcome the addition in all SQL versions. As a workaround, you could create a job that looks at the CommandLog and generates your own syntax to retry the backup. |
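A minimal sketch of that workaround, assuming Ola's CommandLog table is populated and that the stored Command column can be replayed as-is. This is illustrative only; the one-hour window and the 3013 filter are assumptions, and re-running a BACKUP TO URL command can fail if the target blob already exists, so test carefully before scheduling it.

```sql
-- Sketch: find the most recent failed full backup in dbo.CommandLog and
-- re-run the stored command. Only retries the generic 3013 failure from
-- the last hour; both choices are assumptions for this example.
DECLARE @Command nvarchar(max);

SELECT TOP (1) @Command = Command
FROM   dbo.CommandLog
WHERE  CommandType = 'BACKUP_DATABASE'
       AND ErrorNumber = 3013
       AND StartTime > DATEADD(HOUR, -1, SYSDATETIME())
ORDER BY StartTime DESC;

IF @Command IS NOT NULL
    EXEC sp_executesql @Command;
```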
I'm also experiencing what seems to be the same problem (on 2012 SP4). |
I have had confirmation that the fix for this has been delayed until the next CU, and it affects 2012, 2014, 2016, 2017 and all relevant SPs.
> I'm also experiencing what seems to be the same problem (on 2012 SP4). Having found this thread, I'm experimenting over the weekend with using the built-in retry functionality in SQL job steps, as I run DatabaseBackup from a job anyway.
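The built-in job step retry mentioned above can be configured with standard msdb procedures; the attempt count and interval here are illustrative, and the job name is a placeholder.

```sql
-- Sketch: use SQL Server Agent's built-in retry on the job step that runs
-- DatabaseBackup. Retries the step up to 3 times, waiting 5 minutes between
-- attempts (values are illustrative).
EXEC msdb.dbo.sp_update_jobstep
     @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
     @step_id = 1,
     @retry_attempts = 3,
     @retry_interval = 5;  -- minutes between attempts
```

One caveat: this re-runs the entire step, so every database in the step is backed up again, not just the one that failed.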
|
@GeorgePalacios: Nice! Last I heard, they were not going to provide this fix for 2012, so I'm glad to hear this. |
Microsoft has now released a fix for SQL Server 2014, SQL Server 2016, and SQL Server 2017. |
@olahallengren I found this topic and am experiencing very similar behavior on SQL Server 2017 EE, CU17. We back up dozens of DBs to Azure blob storage, ranging from gigabytes to 5 TB, using your solution. It works great most of the time, but I get the occasional 3013. It typically works on the next attempt, but occasionally multi-attempt failures do occur. I have set the transfer and block size, I back up to 64 files, and the failures are usually not on the largest of the DBs. I do not see any details in the log; error messages are NULL in the CommandLog table. Any help or guidance to achieve more stability would be great. Thanks for your time. |
@durilai, what would you like to have? A retry parameter? |
I occasionally see failed backups in one of my instances in an Azure VM that uses the backup to Azure BLOB storage feature. The errors generally look like this in the ERRORLOG:
Working with Microsoft Premier Support, they had me add some trace flags that cause me to get BackupToUrl log files. The one that matches with the error above (timezone differences) has the following entries:
After consulting with their internal groups, the Support Engineer replied to me today with the following (the bolding was highlighting in the email from Microsoft):
The DatabaseBackup stored procedure does not seem to have any mechanism to retry on error. While I would think that Microsoft should have built this into the BACKUP command code when they added the backup to URL feature, I'm not optimistic about that happening any time soon.
I found this: https://support.microsoft.com/en-us/help/4023679/fix-timeout-when-you-back-up-a-large-database-to-url-in-sql-server-201, which claims the issue was fixed in SQL Server 2012 SP3 CU10, but we are running 2012 SP4 and the problem obviously still exists. We've had the problem when backing up a 16 MB ldf, so their assertion that this happens only on "large" databases doesn't seem right. Or they fixed a different problem from the one I'm seeing. Regardless, I'm hoping that DatabaseBackup can be enhanced to retry on the appropriate errors.
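Until such retry logic exists in DatabaseBackup itself, one option is to wrap the call in a TRY/CATCH loop. This is a sketch, not the procedure's own mechanism: the parameter values, URL, and credential name are placeholders, and because T-SQL only surfaces the generic 3013 (as discussed above), this retries on any failure rather than only on transient ones.

```sql
-- Sketch: retry wrapper around a DatabaseBackup call. All parameter values
-- are illustrative placeholders for this example.
DECLARE @Attempt int = 1, @MaxAttempts int = 3;

WHILE @Attempt <= @MaxAttempts
BEGIN
    BEGIN TRY
        EXEC dbo.DatabaseBackup
             @Databases  = 'USER_DATABASES',
             @URL        = 'https://myaccount.blob.core.windows.net/backups',
             @Credential = 'MyAzureCredential',
             @BackupType = 'FULL',
             @LogToTable = 'Y';
        BREAK;  -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF @Attempt = @MaxAttempts
            THROW;                   -- give up and surface the error
        WAITFOR DELAY '00:05:00';    -- back off before the next attempt
        SET @Attempt += 1;
    END CATCH;
END;
```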