<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Exploring Azure, DevOps and Software Development</title>
<subtitle>Welcome to Ricky's Blog</subtitle>
<link href="https://clouddev.blog/atom.xml" rel="self"/>
<link href="https://clouddev.blog/"/>
<updated>2023-07-16T09:20:37.501Z</updated>
<id>https://clouddev.blog/</id>
<author>
<name>Ricky Gummadi</name>
</author>
<generator uri="https://hexo.io/">Hexo</generator>
<entry>
<title>Extracting GZip &amp; Tar Files Natively in .NET Without External Libraries</title>
<link href="https://clouddev.blog/Azure/Function-Apps/NET/extracting-gzip-tar-files-natively-in-net-without-external-libraries/"/>
<id>https://clouddev.blog/Azure/Function-Apps/NET/extracting-gzip-tar-files-natively-in-net-without-external-libraries/</id>
<published>2023-06-24T12:00:00.000Z</published>
<updated>2023-07-16T09:20:37.501Z</updated>
<content type="html"><![CDATA[<h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Imagine being in a scenario where a file of type .tar.gz lands in your Azure Blob Storage container. This file, when uncompressed, yields a collection of individual files. The arrival of this file triggers an Azure function, which springs into action, decompressing the contents and transferring them into a different container.</p><p>In this context, a team may instinctively reach out for a robust library like SharpZipLib. However, what if there is a mandate to accomplish this without external dependencies? This becomes a reality with .NET 7.</p><p>In .NET 7, native support for Tar files has been introduced, and GZip is catered to via <code>System.IO.Compression</code>. This means we can decompress a .tar.gz file natively in .NET 7, bypassing any need for external libraries.</p><p>This post will walk you through this process, providing a practical example using .NET 7 to show how this can be achieved.</p><h2 id="NET-7-Native-TAR-Support"><a href="#NET-7-Native-TAR-Support" class="headerlink" title=".NET 7: Native TAR Support"></a>.NET 7: Native TAR Support</h2><p>As of .NET 7, the <code>System.Formats.Tar</code> namespace was introduced to deal with TAR files, adding to the toolkit of .NET developers:</p><ul><li><code>System.Formats.Tar.TarFile</code> to pack a directory into a TAR file or extract a TAR file to a directory</li><li><code>System.Formats.Tar.TarReader</code> to read a TAR file</li><li><code>System.Formats.Tar.TarWriter</code> to write a TAR file</li></ul><p>These new capabilities significantly simplify the process of working with TAR files in .NET. 
Let’s dive in and have a look at a code sample that demonstrates how to extract a .tar.gz file natively in .NET 7.</p><span id="more"></span><h2 id="A-Simple-Example-In-NET-7"><a href="#A-Simple-Example-In-NET-7" class="headerlink" title="A Simple Example In .NET 7"></a>A Simple Example In .NET 7</h2><p>Below is a simple console app demonstrating how to extract the contents of a .tar.gz file to a directory natively in .NET 7.</p><figure class="highlight csharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span 
class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">using</span> System;</span><br><span class="line"><span class="keyword">using</span> System.IO;</span><br><span class="line"><span class="keyword">using</span> System.IO.Compression;</span><br><span class="line"><span class="keyword">using</span> System.Formats.Tar;</span><br><span class="line"></span><br><span class="line"><span class="keyword">class</span> <span class="title">Program</span></span><br><span class="line">{</span><br><span class="line"> <span class="function"><span class="keyword">static</span> <span class="keyword">void</span> <span class="title">Main</span>(<span class="params"><span class="built_in">string</span>[] args</span>)</span></span><br><span class="line"> {</span><br><span class="line"> <span class="built_in">string</span> sourceTarGzFilePath = <span class="string">@"C:\_Temp\test.tar.gz"</span>;</span><br><span class="line"> <span class="built_in">string</span> targetDirectory = <span class="string">@"C:\_Temp\ExtractedFiles\"</span>;</span><br><span class="line"></span><br><span class="line"> <span class="built_in">string</span> tarFilePath = Path.ChangeExtension(sourceTarGzFilePath, <span class="string">".tar"</span>);</span><br><span class="line"></span><br><span class="line"> Directory.CreateDirectory(targetDirectory);</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Decompress the .gz file</span></span><br><span class="line"> <span class="keyword">using</span> (FileStream originalFileStream = File.OpenRead(sourceTarGzFilePath))</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">using</span> (FileStream decompressedFileStream = 
File.Create(tarFilePath))</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">using</span> (GZipStream decompressionStream = <span class="keyword">new</span> GZipStream(originalFileStream, CompressionMode.Decompress))</span><br><span class="line"> {</span><br><span class="line"> decompressionStream.CopyTo(decompressedFileStream);</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Extract the .tar file</span></span><br><span class="line"> <span class="keyword">using</span> (FileStream tarStream = File.OpenRead(tarFilePath))</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">using</span> (TarReader tarReader = <span class="keyword">new</span> TarReader(tarStream))</span><br><span class="line"> {</span><br><span class="line"> TarEntry entry;</span><br><span class="line"> <span class="keyword">while</span> ((entry = tarReader.GetNextEntry()) != <span class="literal">null</span>)</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">if</span> (entry.EntryType <span class="keyword">is</span> TarEntryType.SymbolicLink <span class="keyword">or</span> TarEntryType.HardLink <span class="keyword">or</span> TarEntryType.GlobalExtendedAttributes)</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">continue</span>;</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> Console.WriteLine(<span class="string">$"Extracting <span class="subst">{entry.Name}</span>"</span>);</span><br><span class="line"> entry.ExtractToFile(Path.Combine(targetDirectory, entry.Name), <span class="literal">true</span>);</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span 
class="comment">// Delete the temporary .tar file</span></span><br><span class="line"> File.Delete(tarFilePath);</span><br><span class="line"></span><br><span class="line"> Console.WriteLine(<span class="string">"Extraction Completed"</span>);</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>You can also find this on <a href="https://gist.github.com/Ricky-G/5562922ca29ab8f8a349dc07917d65af">GitHub Gist</a>.</p><h2 id="Wrapping-Up"><a href="#Wrapping-Up" class="headerlink" title="Wrapping Up"></a>Wrapping Up</h2><p>The introduction of System.Formats.Tar in .NET 7 marks a significant milestone for developers dealing with .tar.gz files. It provides us with the ability to decompress these file types natively, without relying on external libraries. This functionality is a game-changer as it reduces complexity, minimizes external dependencies, and enhances the versatility of .NET applications.</p><p>The new namespace <code>System.Formats.Tar</code>, along with the established <code>System.IO.Compression</code>, effectively handles TAR and GZip files. This considerably simplifies the process, making the .NET environment more self-contained and versatile.</p><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li>Thumbnail image was taken from the <a href="https://github.com/dotnet/brand">DotNet brand repo</a></li><li>Main image was also taken from the <a href="https://github.com/dotnet/brand">DotNet brand repo</a></li></ul>]]></content>
<summary type="html"><h2 id="Introduction"><a href="#Introduction" class="headerlink" title="Introduction"></a>Introduction</h2><p>Imagine being in a scenario where a file of type .tar.gz lands in your Azure Blob Storage container. This file, when uncompressed, yields a collection of individual files. The arrival of this file triggers an Azure function, which springs into action, decompressing the contents and transferring them into a different container.</p>
<p>In this context, a team may instinctively reach out for a robust library like SharpZipLib. However, what if there is a mandate to accomplish this without external dependencies? This becomes a reality with .NET 7.</p>
<p>In .NET 7, native support for Tar files has been introduced, and GZip is catered to via <code>System.IO.Compression</code>. This means we can decompress a .tar.gz file natively in .NET 7, bypassing any need for external libraries.</p>
<p>This post will walk you through this process, providing a practical example using .NET 7 to show how this can be achieved.</p>
<h2 id="NET-7-Native-TAR-Support"><a href="#NET-7-Native-TAR-Support" class="headerlink" title=".NET 7: Native TAR Support"></a>.NET 7: Native TAR Support</h2><p>As of .NET 7, the <code>System.Formats.Tar</code> namespace was introduced to deal with TAR files, adding to the toolkit of .NET developers:</p>
<ul>
<li><code>System.Formats.Tar.TarFile</code> to pack a directory into a TAR file or extract a TAR file to a directory</li>
<li><code>System.Formats.Tar.TarReader</code> to read a TAR file</li>
<li><code>System.Formats.Tar.TarWriter</code> to write a TAR file</li>
</ul>
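<p>To give a feel for the simplest of these APIs, here is a minimal, self-contained sketch (not from the original post; the temp paths and the round-trip scaffolding are purely illustrative). It packs a small directory into a .tar with <code>TarFile.CreateFromDirectory</code> and then unpacks it again with <code>TarFile.ExtractToDirectory</code>:</p>

```csharp
using System;
using System.IO;
using System.Formats.Tar;

class TarRoundTrip
{
    static void Main()
    {
        // Illustrative temp paths -- any writable location works.
        string workDir = Path.Combine(Path.GetTempPath(), "tar-demo");
        string sourceDir = Path.Combine(workDir, "source");
        string extractDir = Path.Combine(workDir, "extracted");
        string tarPath = Path.Combine(workDir, "sample.tar");
        Directory.CreateDirectory(sourceDir);
        Directory.CreateDirectory(extractDir);
        File.Delete(tarPath); // no-op if it does not exist yet

        // Create a small file, pack it into a .tar, then extract it again.
        File.WriteAllText(Path.Combine(sourceDir, "hello.txt"), "hello tar");
        TarFile.CreateFromDirectory(sourceDir, tarPath, includeBaseDirectory: false);
        TarFile.ExtractToDirectory(tarPath, extractDir, overwriteFiles: true);

        Console.WriteLine(File.ReadAllText(Path.Combine(extractDir, "hello.txt")));
    }
}
```

<p>Where you need per-entry control (filtering entry types, streaming from blob storage), the lower-level <code>TarReader</code>/<code>TarWriter</code> pair is the tool instead.</p>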
<p>These new capabilities significantly simplify the process of working with TAR files in .NET. Let’s dive in and have a look at a code sample that demonstrates how to extract a .tar.gz file natively in .NET 7.</p></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/categories/Azure/Function-Apps/"/>
<category term=".NET" scheme="https://clouddev.blog/categories/Azure/Function-Apps/NET/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term=".NET" scheme="https://clouddev.blog/tags/NET/"/>
<category term="Azure Blob Storage" scheme="https://clouddev.blog/tags/Azure-Blob-Storage/"/>
<category term="GZip" scheme="https://clouddev.blog/tags/GZip/"/>
<category term="Tar" scheme="https://clouddev.blog/tags/Tar/"/>
</entry>
<entry>
<title>Unzipping and Shuffling GBs of Data Using Azure Functions</title>
<link href="https://clouddev.blog/Azure/Function-Apps/unzipping-and-shuffling-gbs-of-data-using-azure-functions/"/>
<id>https://clouddev.blog/Azure/Function-Apps/unzipping-and-shuffling-gbs-of-data-using-azure-functions/</id>
<published>2023-05-18T12:00:00.000Z</published>
<updated>2023-05-21T22:22:01.497Z</updated>
<content type="html"><![CDATA[<p>Consider this situation: you have a zip file stored in an Azure Blob Storage container (or any other location for that matter). This isn’t just any zip file; it’s large, containing gigabytes of data. It could be big data sets for your machine learning projects, log files, media files, or backups. The specific content isn’t the focus - the size is.</p><p>The task? We need to unzip this massive file(s) and relocate its contents to a different Azure Blob storage container. This task might seem daunting, especially considering the size of the file and the potential number of files that might be housed within it.</p><p>Why do we need to do this? The use cases are numerous. Handling large data sets, moving data for analysis, making backups more accessible - these are just a few examples. The key here is that we’re looking for a scalable and reliable solution to handle this task efficiently.</p><p><strong>Azure Data Factory is arguably a better fit for this sort of task, but in this blog post, we will specifically demonstrate how to establish this process using Azure Functions</strong>. In particular, we will try to achieve this within the constraints of the Consumption plan tier, where the maximum memory is capped at 1.5GB, with the supporting roles of Azure CLI and PowerShell in our setup.</p><h2 id="Setting-Up-Our-Azure-Environment"><a href="#Setting-Up-Our-Azure-Environment" class="headerlink" title="Setting Up Our Azure Environment"></a>Setting Up Our Azure Environment</h2><p>Before we dive into scripting and code, we need to set the stage - that means setting up our Azure environment. We’re going to create a storage account with two containers, one for our Zipped files and the other for Unzipped files.</p><p>To create this setup, we’ll be using the Azure CLI. Why? 
Because it’s efficient and lets us script out the whole process if we need to do it again in the future.</p><ol><li><p>Install Azure CLI: If you haven’t already installed Azure CLI on your local machine, <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli">you can get it from here</a>.</p></li><li><p>Login to Azure: Open your terminal and type the following command to log in to your Azure account. You’ll be prompted to enter your credentials.</p> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az login </span><br></pre></td></tr></table></figure></li><li><p>Create a Resource Group: We’ll need a Resource Group to keep our resources organized. We’ll call this rg-function-app-unzip-test and create it in the eastus location (you can of course choose whichever region you like).</p> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az group create --name rg-function-app-unzip-test --location eastus </span><br></pre></td></tr></table></figure><span id="more"></span></li><li><p>Create a Storage Account: Next, we’ll create a storage account within our Resource Group. 
We’ll name it unziptststorageacct.</p> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az storage account create --name unziptststorageacct --resource-group rg-function-app-unzip-test --location eastus --sku Standard_LRS </span><br></pre></td></tr></table></figure></li><li><p>Create the Blob Containers: Finally, we’ll create our two containers, ‘Zipped’ and ‘Unzipped’ in the unziptststorageacct storage account.</p> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">az storage container create --name zipped --account-name unziptststorageacct</span><br><span class="line">az storage container create --name unzipped --account-name unziptststorageacct </span><br></pre></td></tr></table></figure><p>Now your Azure environment is ready with the specific resource group and storage account names you provided! We’ve got our storage account unziptststorageacct and two containers ‘Zipped’ and ‘Unzipped’ set up for our operations. The next step is to create our zip file.</p></li></ol><h2 id="Concocting-Our-Data-With-PowerShell"><a href="#Concocting-Our-Data-With-PowerShell" class="headerlink" title="Concocting Our Data With PowerShell"></a>Concocting Our Data With PowerShell</h2><p>Our next task is to create a large zip file filled with multiple 100MB files, all brimming with random text. 
In a real-world scenario you would already have these large files, but since we are simulating, let’s use PowerShell to create them.</p><article class="message is-success"> <div class="message-body"> <p>If you already have an existing zip file with large-ish files for testing, you can skip this step and use that file instead.</p> </div> </article><figure class="highlight powershell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Set the number of files we want to create</span></span><br><span class="line"><span class="variable">$fileCount</span> = <span class="number">10</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># The path where you want to create the TestFiles directory</span></span><br><span class="line"><span class="variable">$directory</span> = <span class="string">"C:\_Temp\TestFiles"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># Create a new directory for our files if it doesn't already exist</span></span><br><span class="line"><span class="keyword">if</span>(<span class="operator">-not</span> (<span class="built_in">Test-Path</span> <span class="literal">-Path</span> <span class="variable">$directory</span>)){</span><br><span class="line"> <span 
class="built_in">New-Item</span> <span class="literal">-ItemType</span> Directory <span class="literal">-Path</span> <span class="variable">$directory</span></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment"># Loop through and create our files</span></span><br><span class="line"><span class="keyword">for</span> (<span class="variable">$i</span>=<span class="number">1</span>; <span class="variable">$i</span> <span class="operator">-le</span> <span class="variable">$fileCount</span>; <span class="variable">$i</span>++){</span><br><span class="line"> <span class="comment"># Generate a 100MB file filled with random text and save it in our new directory</span></span><br><span class="line"> <span class="variable">$fileContent</span> = <span class="built_in">New-Object</span> byte[] <span class="number">104857600</span></span><br><span class="line"> (<span class="built_in">New-Object</span> Random).NextBytes(<span class="variable">$fileContent</span>)</span><br><span class="line"> [<span class="type">System.IO.File</span>]::WriteAllBytes(<span class="string">"<span class="variable">$directory</span>\File<span class="variable">$i</span>.txt"</span>, <span class="variable">$fileContent</span>)</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="comment"># Now that we have all our files, let's zip them up</span></span><br><span class="line"><span class="built_in">Compress-Archive</span> <span class="literal">-Path</span> <span class="string">"<span class="variable">$directory</span>\*"</span> <span class="literal">-DestinationPath</span> <span class="string">"<span class="variable">$directory</span>.zip"</span></span><br></pre></td></tr></table></figure><p>This is a simple script that is creating 10 files, each 100MB in size, and then zipping them up into a single file. 
The resulting zip file should be around 1GB in size.</p><blockquote><p>In case you are wondering how we end up with a 1GB+ file after compressing 1GB worth of data: we are generating files filled with random bytes. Compression algorithms work by finding and eliminating redundancy in the data. Since random data has no redundancy, it cannot be compressed. In fact, trying to compress random data can even result in output that is slightly larger than the input, due to the overhead of the compression format.</p></blockquote><p>We’ll use this file to test our Azure Function.</p><h2 id="Azure-Function-To-Unzip"><a href="#Azure-Function-To-Unzip" class="headerlink" title="Azure Function To Unzip"></a>Azure Function To Unzip</h2><p>We’re going to create a Function that magically springs into action the moment a blob (our zipped file) lands in the ‘Zipped’ container. This function will stream the data, unzip the files, and store them neatly as individual files in the ‘Unzipped’ container.</p><p>Before we begin, ensure that you’ve installed the <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=v4,windows,csharp,portal,bash#v2">Azure Functions Core Tools</a> locally. You’ll also need the <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions">Azure Functions Extension for Visual Studio Code</a>.</p><p>First, let’s use the CLI to create our Consumption plan function app. We’ll call it unzipfunctionapp123 and use the unziptststorageacct storage account we created earlier. We’ll also specify the runtime as dotnet and the functions version as 4. 
We are using the Consumption plan to demonstrate that this solution can work within its constraints, where the maximum memory is capped at 1.5GB.</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az functionapp create --resource-group rg-function-app-unzip-test --consumption-plan-location eastus --runtime dotnet --functions-version 4 --name unzipfunctionapp123 --storage-account unziptststorageacct</span><br></pre></td></tr></table></figure><article class="message is-warning"> <div class="message-body"> <p>You might need to change the function app name in the example above from ‘unzipfunctionapp123’, as it could already be taken; Azure function app names must be globally unique.<br>When you create an Azure function app, the name you specify becomes part of the URL: <azurefunctionname>.azurewebsites.net</p><p>If the function app name is already taken, you will get the error ‘Website with given name unzipfunctionapp already exists.’ when you run the CLI command above.</p> </div> </article><p>Now that we have the Consumption plan function infrastructure, let’s look at the full code that does the actual work of unzipping and uploading.<br>There are two code samples, and both are quite similar in their basic approach. They both handle the data in a streaming manner, which allows them to deal with large files without consuming a lot of memory.</p><p>However, there are some differences in the details of how they handle the streaming, which may have implications for their performance and resource usage:</p><blockquote><ul><li>The first code sample uses the ZipArchive class from the .NET Framework, which provides a high-level, user-friendly interface for dealing with zip files. 
The second code sample uses the ZipInputStream class from the SharpZipLib library, which provides a lower-level, more flexible interface.</li><li>In the first code sample, the ZipArchive automatically takes care of reading from the blob stream and unzipping the data. It provides an Open method for each entry in the zip file, which returns a stream that you can read the unzipped data from. In the second code sample, you manually read from the ZipInputStream and write to the blob stream using the StreamUtils.Copy method.</li><li>The second code sample manually handles the buffer size with new byte[4096] for copying data from the zip input stream to the blob output stream. In contrast, the first code sample relies on the default buffer size provided by the UploadFromStreamAsync method.</li></ul></blockquote><article class="message is-warning"> <div class="message-body"> <p>Memory-wise both are similar (i.e. neither downloads the entire zip file into memory), but the first script takes around 20 minutes to process a 1GB zip file (with 10 * 100 MB files), whereas the second script takes about 10 minutes for the same 1GB zip file. This mainly comes down to the custom buffer size and the optimizations in the SharpZipLib library.</p><p> The first script has the benefit of not importing any custom library, but cannot run on an Azure Consumption plan; at the time of writing, the Consumption plan has a maximum runtime of 10 minutes.<br> The second script can potentially run on a Consumption plan, but at the cost of importing a third-party library.</p> </div> </article><script src="//gist.github.com/Ricky-G/a9d670728a4f554b1234e4cb3ee74189.js"></script><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li>Thumbnail image <a href="https://azure.microsoft.com/svghandler">was taken from the Azure site</a></li><li>Main image generated by <a href="https://openai.com/blog/dall-e/">DALL-E</a></li></ul>]]></content>
<summary type="html"><p>Consider this situation: you have a zip file stored in an Azure Blob Storage container (or any other location for that matter). This isn’t just any zip file; it’s large, containing gigabytes of data. It could be big data sets for your machine learning projects, log files, media files, or backups. The specific content isn’t the focus - the size is.</p>
<p>The task? We need to unzip this massive file(s) and relocate its contents to a different Azure Blob storage container. This task might seem daunting, especially considering the size of the file and the potential number of files that might be housed within it.</p>
<p>Why do we need to do this? The use cases are numerous. Handling large data sets, moving data for analysis, making backups more accessible - these are just a few examples. The key here is that we’re looking for a scalable and reliable solution to handle this task efficiently.</p>
<p><strong>Azure Data Factory is arguably a better fit for this sort of task, but in this blog post, we will specifically demonstrate how to establish this process using Azure Functions</strong>. In particular, we will try to achieve this within the constraints of the Consumption plan tier, where the maximum memory is capped at 1.5GB, with the supporting roles of Azure CLI and PowerShell in our setup.</p>
<h2 id="Setting-Up-Our-Azure-Environment"><a href="#Setting-Up-Our-Azure-Environment" class="headerlink" title="Setting Up Our Azure Environment"></a>Setting Up Our Azure Environment</h2><p>Before we dive into scripting and code, we need to set the stage - that means setting up our Azure environment. We’re going to create a storage account with two containers, one for our Zipped files and the other for Unzipped files.</p>
<p>To create this setup, we’ll be using the Azure CLI. Why? Because it’s efficient and lets us script out the whole process if we need to do it again in the future.</p>
<ol>
<li><p>Install Azure CLI: If you haven’t already installed Azure CLI on your local machine, <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli">you can get it from here</a>.</p>
</li>
<li><p>Login to Azure: Open your terminal and type the following command to log in to your Azure account. You’ll be prompted to enter your credentials.</p>
<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az login </span><br></pre></td></tr></table></figure></li>
<li><p>Create a Resource Group: We’ll need a Resource Group to keep our resources organized. We’ll call this rg-function-app-unzip-test and create it in the eastus location (you can of course choose whichever region you like).</p>
<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az group create --name rg-function-app-unzip-test --location eastus </span><br></pre></td></tr></table></figure></li></ol></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/categories/Azure/Function-Apps/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/tags/Function-Apps/"/>
<category term="Azure Blob Storage" scheme="https://clouddev.blog/tags/Azure-Blob-Storage/"/>
<category term="PowerShell" scheme="https://clouddev.blog/tags/PowerShell/"/>
<category term="Azure CLI" scheme="https://clouddev.blog/tags/Azure-CLI/"/>
</entry>
<entry>
<title>Azure DevTest Labs Policies</title>
<link href="https://clouddev.blog/Azure/DevTest-Labs/azure-devtest-labs-policies/"/>
<id>https://clouddev.blog/Azure/DevTest-Labs/azure-devtest-labs-policies/</id>
<published>2023-01-31T11:00:00.000Z</published>
<updated>2023-05-21T22:20:12.482Z</updated>
<content type="html"><![CDATA[<p>Azure DevTest Labs offers a powerful cloud-based development workstation environment and a great alternative to a local development workstation/laptop when it comes to software development. This blog post is not so much about the benefits of DevTest Labs as it is about how to create policies for DevTest Labs using Bicep. Although there is good support for <a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.devtestlab/labs?pivots=deployment-language-bicep">deploying DevTest Labs with Bicep</a>, there is little to no documentation when it comes to creating policies for DevTest Labs in Bicep. In this blog post, we will focus on how to create policies for DevTest Labs using Bicep.</p><h2 id="A-Brief-Overview-of-Azure-DevTest-Labs"><a href="#A-Brief-Overview-of-Azure-DevTest-Labs" class="headerlink" title="A Brief Overview of Azure DevTest Labs"></a>A Brief Overview of Azure DevTest Labs</h2><p>Azure DevTest Labs is a managed service that enables developers to quickly create, manage, and share development and test environments. It provides a range of features and tools designed to streamline the development process, minimize costs, and improve overall productivity. By leveraging the power of the cloud, developers can easily spin up virtual machines (VMs) pre-configured with the necessary tools, frameworks, and software needed for their projects.</p><h2 id="Existing-Documentation-Limitations"><a href="#Existing-Documentation-Limitations" class="headerlink" title="Existing Documentation Limitations"></a>Existing Documentation Limitations</h2><p>While the existing documentation covers various aspects of Azure DevTest Labs, it lacks clear guidance on setting up policies for DevTest Labs in Bicep. This blog post aims to address that gap by providing a Bicep script for creating a DevTest Lab and applying policies to it.
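</p><p>For reference, once the lab and policy resources covered in this post are saved into a single Bicep file, they deploy like any other resource-group deployment via the Azure CLI. A quick sketch; the file name <code>main.bicep</code> and the resource group name are illustrative assumptions:</p>

```shell
# Sketch: deploy the DevTest Lab Bicep file to an existing resource group.
# The file name and resource-group name below are illustrative.
az deployment group create \
  --resource-group rg-devtest-lab-demo \
  --template-file main.bicep
```
<p>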
Shout out to my colleague <a href="https://www.linkedin.com/in/illian-yuan">Illian Y</a> for persisting, not giving up, finding a way around undocumented features, and showing me how.</p><span id="more"></span><h2 id="Existing-Documentation-For-Creating-a-DevTest-Lab"><a href="#Existing-Documentation-For-Creating-a-DevTest-Lab" class="headerlink" title="Existing Documentation For Creating a DevTest Lab"></a>Existing Documentation For Creating a DevTest Lab</h2><p><a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.devtestlab/labs?pivots=deployment-language-bicep">The existing documentation</a> for creating a DevTest Lab is pretty good, but when it comes to creating <a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.devtestlab/labs/policysets/policies?pivots=deployment-language-bicep">policies for DevTest Labs</a>, the documentation falls short. It does not provide a Bicep script for creating policies for DevTest Labs.</p><h2 id="Vanilla-DevTest-Lab"><a href="#Vanilla-DevTest-Lab" class="headerlink" title="Vanilla DevTest Lab"></a>Vanilla DevTest Lab</h2><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">resource lab 
'Microsoft.DevTestLab/labs@<span class="number">2018</span><span class="number">-09</span><span class="number">-15</span>' = <span class="punctuation">{</span></span><br><span class="line"> name<span class="punctuation">:</span> 'testLab'</span><br><span class="line"> location<span class="punctuation">:</span> 'australiacentral'</span><br><span class="line"> tags<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> tagName1<span class="punctuation">:</span> 'test-tag'</span><br><span class="line"> tagName2<span class="punctuation">:</span> 'test-tag1'</span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"> properties<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> environmentPermission<span class="punctuation">:</span> 'Contributor'</span><br><span class="line"> labStorageType<span class="punctuation">:</span> 'Premium'</span><br><span class="line"> mandatoryArtifactsResourceIdsLinux<span class="punctuation">:</span> <span class="punctuation">[</span><span class="punctuation">]</span></span><br><span class="line"> mandatoryArtifactsResourceIdsWindows<span class="punctuation">:</span> <span class="punctuation">[</span><span class="punctuation">]</span></span><br><span class="line"> premiumDataDisks<span class="punctuation">:</span> 'Disabled'</span><br><span class="line"> announcement<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> enabled<span class="punctuation">:</span> 'Disabled'</span><br><span class="line"> expired<span class="punctuation">:</span> <span class="literal"><span class="keyword">false</span></span></span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"> support<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> enabled<span class="punctuation">:</span> 'Enabled'</span><br><span 
class="line"> markdown<span class="punctuation">:</span> 'Test'</span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"><span class="punctuation">}</span></span><br></pre></td></tr></table></figure><h2 id="Creating-Policies-for-DevTest-Labs-in-Bicep"><a href="#Creating-Policies-for-DevTest-Labs-in-Bicep" class="headerlink" title="Creating Policies for DevTest Labs in Bicep"></a>Creating Policies for DevTest Labs in Bicep</h2><p>The documentation states all the possible policies that can be created under the fact name in <a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.devtestlab/labs/policysets/policies?pivots=deployment-language-bicep#policyproperties">PolicyProperties</a></p><p>Below is a list of three of those policies that can be created in Bicep.</p><ul><li>Allowed VM Sizes</li><li>Allowed VMs Per User</li><li>Allowed Premium SSD Per User</li></ul><h3 id="Linking-the-policies-to-the-DevTest-Labs"><a href="#Linking-the-policies-to-the-DevTest-Labs" class="headerlink" title="Linking the policies to the DevTest Labs"></a>Linking the policies to the DevTest Labs</h3><p>This is the important glue that is missing from the documentation, how to link the policies to the DevTest Labs. The way to do this is to create a resource policySetParent and link it to the DevTest Labs. 
The policySetParent resource is then used as the parent for the policies.</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">resource policySetParent 'Microsoft.DevTestLab/labs/policysets@<span class="number">2018</span><span class="number">-09</span><span class="number">-15</span>' existing = <span class="punctuation">{</span></span><br><span class="line"> parent<span class="punctuation">:</span> lab</span><br><span class="line"> name<span class="punctuation">:</span> 'default'</span><br><span class="line"><span class="punctuation">}</span></span><br></pre></td></tr></table></figure><h3 id="Allowed-VM-Sizes"><a href="#Allowed-VM-Sizes" class="headerlink" title="Allowed VM Sizes"></a>Allowed VM Sizes</h3><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">resource allowedVmSizesPolicies 'Microsoft.DevTestLab/labs/policysets/policies@<span class="number">2018</span><span class="number">-09</span><span class="number">-15</span>' = <span class="punctuation">{</span></span><br><span class="line"> name<span class="punctuation">:</span> 'allowedVmSizesPolicy'</span><br><span class="line"> location<span class="punctuation">:</span> location</span><br><span class="line"> parent<span class="punctuation">:</span> policySetParent</span><br><span class="line"> properties<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> 
evaluatorType<span class="punctuation">:</span> 'AllowedValuesPolicy'</span><br><span class="line"> factName<span class="punctuation">:</span> 'LabVmSize'</span><br><span class="line"> status<span class="punctuation">:</span> 'Enabled'</span><br><span class="line"> threshold<span class="punctuation">:</span> '<span class="punctuation">[</span><span class="string">"Standard_D4_v2"</span><span class="punctuation">,</span><span class="string">"Standard_E4_v2"</span><span class="punctuation">]</span>'</span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"><span class="punctuation">}</span></span><br></pre></td></tr></table></figure><h3 id="Allowed-VM’s-per-user"><a href="#Allowed-VM’s-per-user" class="headerlink" title="Allowed VM’s per user"></a>Allowed VM’s per user</h3><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">resource allowedVmsPerUserPolicies 'Microsoft.DevTestLab/labs/policysets/policies@<span class="number">2018</span><span class="number">-09</span><span class="number">-15</span>' = <span class="punctuation">{</span></span><br><span class="line"> name<span class="punctuation">:</span> 'allowedVmsPerUserPolicy'</span><br><span class="line"> location<span class="punctuation">:</span> location</span><br><span class="line"> parent<span class="punctuation">:</span> policySetParent</span><br><span class="line"> properties<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> evaluatorType<span class="punctuation">:</span> 'MaxValuePolicy'</span><br><span class="line"> 
factName<span class="punctuation">:</span> 'UserOwnedLabVmCount'</span><br><span class="line"> status<span class="punctuation">:</span> 'Enabled'</span><br><span class="line"> threshold<span class="punctuation">:</span> '<span class="number">4</span>'</span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"><span class="punctuation">}</span></span><br></pre></td></tr></table></figure><h3 id="Allowed-Premium-SSD-Per-User"><a href="#Allowed-Premium-SSD-Per-User" class="headerlink" title="Allowed Premium SSD Per User"></a>Allowed Premium SSD Per User</h3><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">resource allowedPremiumSSDPerUserPolicies 'Microsoft.DevTestLab/labs/policysets/policies@<span class="number">2018</span><span class="number">-09</span><span class="number">-15</span>' = <span class="punctuation">{</span></span><br><span class="line"> name<span class="punctuation">:</span> 'allowedPremiumSSDPerUserPolicy'</span><br><span class="line"> location<span class="punctuation">:</span> location</span><br><span class="line"> parent<span class="punctuation">:</span> policySetParent</span><br><span class="line"> properties<span class="punctuation">:</span> <span class="punctuation">{</span></span><br><span class="line"> evaluatorType<span class="punctuation">:</span> 'MaxValuePolicy'</span><br><span class="line"> factName<span class="punctuation">:</span> 'UserOwnedLabPremiumVmCount'</span><br><span class="line"> status<span class="punctuation">:</span> 'Enabled'</span><br><span class="line"> threshold<span 
class="punctuation">:</span> '<span class="number">4</span>'</span><br><span class="line"> <span class="punctuation">}</span></span><br><span class="line"><span class="punctuation">}</span></span><br></pre></td></tr></table></figure><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li>Main & thumbnail image <a href="https://azure.microsoft.com/">was taken from the Azure site</a></li></ul>]]></content>
<summary type="html"><p>Azure DevTest Labs offers a powerful cloud-based development workstation environment and a great alternative to a local development workstation&#x2F;laptop when it comes to software development. This blog post is not so much about the benefits of DevTest Labs as it is about how to create policies for DevTest Labs using Bicep. Although there is good support for <a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.devtestlab/labs?pivots=deployment-language-bicep">deploying DevTest Labs with Bicep</a>, there is little to no documentation when it comes to creating policies for DevTest Labs in Bicep. In this blog post, we will focus on how to create policies for DevTest Labs using Bicep.</p>
<h2 id="A-Brief-Overview-of-Azure-DevTest-Labs"><a href="#A-Brief-Overview-of-Azure-DevTest-Labs" class="headerlink" title="A Brief Overview of Azure DevTest Labs"></a>A Brief Overview of Azure DevTest Labs</h2><p>Azure DevTest Labs is a managed service that enables developers to quickly create, manage, and share development and test environments. It provides a range of features and tools designed to streamline the development process, minimize costs, and improve overall productivity. By leveraging the power of the cloud, developers can easily spin up virtual machines (VMs) pre-configured with the necessary tools, frameworks, and software needed for their projects.</p>
<h2 id="Existing-Documentation-Limitations"><a href="#Existing-Documentation-Limitations" class="headerlink" title="Existing Documentation Limitations"></a>Existing Documentation Limitations</h2><p>While the existing documentation covers various aspects of Azure DevTest Labs, it lacks clear guidance on setting up policies for DevTest Labs in Bicep. This blog post aims to address that gap by providing a Bicep script for creating a DevTest Lab and applying policies to it. Shout out to my colleague <a href="https://www.linkedin.com/in/illian-yuan">Illian Y</a> for persisting, not giving up, finding a way around undocumented features, and showing me how.</p></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="DevTest Labs" scheme="https://clouddev.blog/categories/Azure/DevTest-Labs/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Azure Dev Test Labs" scheme="https://clouddev.blog/tags/Azure-Dev-Test-Labs/"/>
<category term="Developer Environments" scheme="https://clouddev.blog/tags/Developer-Environments/"/>
<category term="Azure Policy" scheme="https://clouddev.blog/tags/Azure-Policy/"/>
</entry>
<entry>
<title>Azure Logic Apps Timeout</title>
<link href="https://clouddev.blog/Azure/Logic-Apps/azure-logic-apps-timeout/"/>
<id>https://clouddev.blog/Azure/Logic-Apps/azure-logic-apps-timeout/</id>
<published>2022-10-19T11:00:00.000Z</published>
<updated>2023-08-18T10:23:58.425Z</updated>
<content type="html"><![CDATA[<p>Recently I got pulled into a production incident where a logic app was running for a long time (a long time in this scenario being > 10 minutes), but the dev crew’s intention was for it to time out in 60 seconds. These logic apps were a combination of HTTP-triggered and timer-based.</p><h2 id="Logic-App-Default-Time-Limits"><a href="#Logic-App-Default-Time-Limits" class="headerlink" title="Logic App Default Time Limits"></a>Logic App Default Time Limits</h2><p>First, here are some default limits to keep in mind.</p><ol><li><p>If it’s an HTTP-based trigger, the <a href="https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config?tabs=consumption,azure-portal#timeout-duration">default timeout is around 3.9 minutes</a></p></li><li><p>For most others, the <a href="https://learn.microsoft.com/en-us/azure/logic-apps/edit-app-settings-host-settings?tabs=azure-portal#run-duration-and-history-retention">default max run duration of a logic app is 90 days and the minimum is 7 days</a></p></li></ol><h2 id="Ways-To-Change-Defaults"><a href="#Ways-To-Change-Defaults" class="headerlink" title="Ways To Change Defaults"></a>Ways To Change Defaults</h2><p>With that, here are a couple of quick ways to make sure your Logic App times out and terminates within the time frame you set. 
Let’s say we want our Logic App to run for no more than 60 seconds:</p><span id="more"></span><ol><li>You can change the setting <a href="https://learn.microsoft.com/en-us/azure/logic-apps/edit-app-settings-host-settings?tabs=azure-portal#run-duration-and-history-retention#:~:text=Runtime.Backend.FlowRunTimeout">Runtime.Backend.FlowRunTimeout</a> from the default 90 days to 7 days (keep in mind the minimum for this setting is 7 days, which is still quite large; refer to this issue: <a href="https://github.com/Azure/logicapps/issues/782#issuecomment-1609008805">https://github.com/Azure/logicapps/issues/782#issuecomment-1609008805</a>)</li></ol><blockquote><ul><li>PRO: This makes sure that the Logic App runs for a maximum of 7 days only (which is still quite large)</li><li>CON: This applies to all the Logic Apps in the host/tenant, meaning if you had 15 logic apps then all 15 will have the 7-day limit</li></ul></blockquote><ol start="2"><li>Have a branch within the Logic App itself to control the timeout (shown in the diagram below)</li></ol><blockquote><ul><li>PRO: You have full control of the timeout per Logic App, so some can have 30-second timeouts while others have 60 seconds, etc.</li><li>CON: There will be an extra branch/logic in your Logic App</li></ul></blockquote><h2 id="Time-Out-Branch-In-Logic-App"><a href="#Time-Out-Branch-In-Logic-App" class="headerlink" title="Time-Out Branch In Logic App"></a>Time-Out Branch In Logic App</h2><p>Below is what a timeout setting in a Logic App could look like. You create a “Delay” branch and set the desired time limit; in the example below it’s 2 minutes, so if the main flow takes longer than two minutes, the delay finishes first, the Logic App run is terminated, and a cancelled status is returned to the user. 
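</p><p>For option 1 above, on single-tenant (Standard) Logic Apps, <code>Runtime.Backend.FlowRunTimeout</code> is a host setting. Below is a sketch of where it could sit in <code>host.json</code>, with an illustrative 7-day value (the minimum); check the exact nesting against the current docs before relying on it:</p>

```json
{
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Backend.FlowRunTimeout": "7.00:00:00"
      }
    }
  }
}
```
<p>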
Shout out to my colleague <a href="https://www.linkedin.com/in/johnbilliris">John B</a> for this awesome idea.</p><p><img src="/Azure/Logic-Apps/azure-logic-apps-timeout/logic-apps-timeout.png" alt=" " title="Logic App Timeout Branch"></p><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li>Main image <a href="https://azure.microsoft.com/en-us/products/logic-apps/">was taken from the Azure site</a></li><li>Thumbnail image <a href="https://azure.microsoft.com/svghandler/logic-apps/?width=1280&height=720">was taken from Azure SVG icons</a></li></ul>]]></content>
<summary type="html"><p>Recently I got pulled into a production incident where a logic app was running for a long time (a long time in this scenario being &gt; 10 minutes), but the dev crew’s intention was for it to time out in 60 seconds. These logic apps were a combination of HTTP-triggered and timer-based.</p>
<h2 id="Logic-App-Default-Time-Limits"><a href="#Logic-App-Default-Time-Limits" class="headerlink" title="Logic App Default Time Limits"></a>Logic App Default Time Limits</h2><p>First, here are some default limits to keep in mind.</p>
<ol>
<li><p>If it’s an HTTP-based trigger, the <a href="https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config?tabs=consumption,azure-portal#timeout-duration">default timeout is around 3.9 minutes</a></p>
</li>
<li><p>For most others, the <a href="https://learn.microsoft.com/en-us/azure/logic-apps/edit-app-settings-host-settings?tabs=azure-portal#run-duration-and-history-retention">default max run duration of a logic app is 90 days and the minimum is 7 days</a></p>
</li>
</ol>
<h2 id="Ways-To-Change-Defaults"><a href="#Ways-To-Change-Defaults" class="headerlink" title="Ways To Change Defaults"></a>Ways To Change Defaults</h2><p>With that, here are a couple of quick ways to make sure your Logic App times out and terminates within the time frame you set. Let’s say we want our Logic App to run for no more than 60 seconds:</p></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="Logic Apps" scheme="https://clouddev.blog/categories/Azure/Logic-Apps/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Azure Logic Apps" scheme="https://clouddev.blog/tags/Azure-Logic-Apps/"/>
</entry>
<entry>
<title>Create A Multi User Experience For Single Threaded Applications Using Azure Container Apps</title>
<link href="https://clouddev.blog/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/"/>
<id>https://clouddev.blog/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/</id>
<published>2022-09-11T12:00:00.000Z</published>
<updated>2022-09-16T11:27:57.402Z</updated>
<content type="html"><![CDATA[<p>How to make a single-threaded app multi-threaded? This is the scenario I faced very recently. These were legacy web app(s) written to be single-threaded; in this context, single-threaded means the app can only serve one request at a time. <strong>I know this goes against everything that a web app should be</strong>, but it is what it is.</p><p>So we have a single-threaded (legacy) web app, and now all of a sudden we have a requirement to support multiple users at the same time. What are our options?</p><ol><li>Re-architect the app to be multi-threaded</li><li>Find a way to simulate multi-threaded behavior</li></ol><p>Both are great options, but in this scenario option 1 was out, due to the cost involved in re-writing this app to support multi-threading. So that leaves us with option 2: how can we, at a cloud infra level, <strong>easily</strong> simulate multi-threaded behavior? Turns out, if we containerize the app (in this case it was easy enough to do), we can orchestrate it so that each HTTP request is routed to a new container (i.e. every new HTTP request spins up a new container and the request is sent to it).</p><h2 id="Options-For-Running-Containers"><a href="#Options-For-Running-Containers" class="headerlink" title="Options For Running Containers"></a>Options For Running Containers</h2><p>When it comes to running a container in Azure, our main options are below.<br><img src="/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/container-options.png" alt=" " title="Container Options"></p><span id="more"></span><p>Here we need to orchestrate containers (i.e. at a minimum, spin up a new container for every new HTTP request), which means we only have two viable options: Azure Kubernetes Service (AKS) or Azure Container Apps (ACA). 
Both are valid options, each with its own pros/cons. With AKS it’s a lot more complex; we will need to:</p><blockquote><ul><li>Think of networking</li><li>Think of VMs/VM scale sets for nodes</li><li>Choose an ingress controller and set up ingress rules</li><li>Identity</li><li>Plus many more; <a href="https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/baseline-aks#network-topology">here is the baseline reference for AKS</a></li></ul></blockquote><p>So in short, as flexible as AKS is, it’s not as easy as something like ACA, a fully managed service that abstracts away the complexities of Kubernetes. So, to prove we can simulate a multi-threaded experience in this scenario, let’s go ahead with ACA.</p><h2 id="Sample-Single-Threaded-Program"><a href="#Sample-Single-Threaded-Program" class="headerlink" title="Sample Single Threaded Program"></a>Sample Single Threaded Program</h2><p>For this demo, below is a simple C# .NET app that simulates single-threaded behavior; essentially it takes a lock on a static object while handling a request, blocking the whole app for several seconds. 
So when we visit the /test endpoint we lock the whole app.</p><figure class="highlight csharp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">public</span> <span class="keyword">class</span> <span class="title">Program</span></span><br><span class="line">{</span><br><span class="line"> <span class="keyword">private</span> <span class="keyword">static</span> <span class="keyword">readonly</span> <span 
class="built_in">object</span> LockObject = <span class="keyword">new</span>();</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="keyword">static</span> <span class="keyword">void</span> <span class="title">Main</span>(<span class="params"><span class="built_in">string</span>[] args</span>)</span></span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">var</span> builder = WebApplication.CreateBuilder(args);</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Add services to the container.</span></span><br><span class="line"> builder.Services.AddAuthorization();</span><br><span class="line"></span><br><span class="line"> builder.Services.AddEndpointsApiExplorer();</span><br><span class="line"> builder.Services.AddSwaggerGen();</span><br><span class="line"></span><br><span class="line"> builder.Services.AddApplicationInsightsTelemetry();</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="keyword">var</span> app = builder.Build();</span><br><span class="line"></span><br><span class="line"> <span class="comment">// Configure the HTTP request pipeline.</span></span><br><span class="line"> <span class="keyword">if</span> (app.Environment.IsDevelopment())</span><br><span class="line"> {</span><br><span class="line"> app.UseSwagger();</span><br><span class="line"> app.UseSwaggerUI();</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> app.UseAuthorization();</span><br><span class="line"></span><br><span class="line"> app.MapGet(<span class="string">"/test"</span>, (HttpContext httpContext) =></span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">if</span> (Monitor.TryEnter(LockObject, <span class="keyword">new</span> TimeSpan(<span class="number">0</span>, <span class="number">0</span>, <span 
class="number">6</span>)))</span><br><span class="line"> {</span><br><span class="line"> <span class="keyword">try</span></span><br><span class="line"> {</span><br><span class="line"> Thread.Sleep(<span class="number">5000</span>);</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">finally</span></span><br><span class="line"> {</span><br><span class="line"> Monitor.Exit(LockObject);</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> (<span class="string">"Hello From Container: "</span> + System.Environment.MachineName);</span><br><span class="line"> });</span><br><span class="line"></span><br><span class="line"> app.Run();</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h2 id="Azure-Container-Apps"><a href="#Azure-Container-Apps" class="headerlink" title="Azure Container Apps"></a>Azure Container Apps</h2><p>For this demo the easiest way to create the Azure Container Apps environment is through Visual Studio, you right click, publish and go through the menus and in the end VS will create a Container Apps Environment and deploy the code as a container to ACA.<br><img src="/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/azure-container-app-create.png" alt=" " title="Single Threaded Container Apps"></p><p>Once this is all done, we should have a resource group like below<br><img src="/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/container-apps-resource-group.png" alt=" " title="Container Apps Resource Group"></p><h2 id="Azure-Container-Apps-Scaling"><a href="#Azure-Container-Apps-Scaling" class="headerlink" title="Azure Container Apps Scaling"></a>Azure Container Apps Scaling</h2><p>Next we go to the container app (the single threaded api we 
just deployed) and set up a simple http scale rule that will spin up a new container for every single incoming http request. In the example below we set min-replicas to 0 and max-replicas to 30; this means that when there is no traffic it will scale down to 0, and at peak it will run 30 containers.<br><img src="/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/container-options.png" alt=" " title="Container Apps Resource Group"></p><h2 id="Testing"><a href="#Testing" class="headerlink" title="Testing"></a>Testing</h2><p>Now go to the url of the container app and hit it simultaneously in multiple browser tabs. When I did this, about 7 out of 10 tabs were served by unique containers, and based on the test code above I can see the responses coming from different container ids</p><figure class="highlight json"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">Tab1<span class="punctuation">:</span> Hello From Container<span class="punctuation">:</span> single-threaded-api-app<span class="number">-20220731</span>--ps4yjjp<span class="number">-66</span>f4885b65-w5s6h</span><br><span class="line">Tab2<span class="punctuation">:</span> Hello From Container<span class="punctuation">:</span> single-threaded-api-app<span class="number">-20220731</span>--ps4yjjp<span class="number">-66</span>f4885b65-gs8qf</span><br><span class="line">Tab3<span class="punctuation">:</span> Hello From Container<span class="punctuation">:</span> single-threaded-api-app<span class="number">-20220731</span>--ps4yjjp<span class="number">-66</span>f4885b65-x7grl</span><br><span class="line">etc</span><br></pre></td></tr></table></figure><p>So it’s not 100% of requests going to a brand new container, but very easily and very quickly, without too much complexity, we were able to achieve 70 - 90% of requests being served by new containers. In essence we found a quick way to simulate a pseudo multi-threaded experience for our legacy single-threaded app without too much effort.</p>]]></content>
<summary type="html"><p>How to make a single-threaded app multi-threaded? This is the scenario I faced very recently. These were legacy web app(s) written to be single-threaded; in this context single-threaded means the app can only serve one request at a time. <strong>I know this goes against everything that a web app should be</strong>, but it is what it is.</p>
<p>So we have a single-threaded (legacy) web app, and all of a sudden we have a requirement to support multiple users at the same time. What are our options:</p>
<ol>
<li>Re-architect the app to be multi threaded</li>
<li>Find a way to simulate multi threaded behavior</li>
</ol>
<p>Both are great options, but in this scenario option 1 was out, due to the cost involved in re-writing this app to support multi-threading. So that leaves us with option 2: how can we, at a cloud infra level, <strong>easily</strong> simulate multi-threaded behavior? It turns out that if we containerize the app (in this case it was easy enough to do) we can orchestrate it so that each http request is routed to a new container (ie: every new http request spins up a new container and the request is sent to it).</p>
<h2 id="Options-For-Running-Containers"><a href="#Options-For-Running-Containers" class="headerlink" title="Options For Running Containers"></a>Options For Running Containers</h2><p>So when it comes to running a container in Azure our main options are below<br><img src="/Azure/Container-Apps/create-a-multi-user-experience-for-single-threaded-applications-using-azure-container-apps/container-options.png" alt=" " title="Container Options"></p></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="Container Apps" scheme="https://clouddev.blog/categories/Azure/Container-Apps/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Azure Container Apps" scheme="https://clouddev.blog/tags/Azure-Container-Apps/"/>
<category term="Containers" scheme="https://clouddev.blog/tags/Containers/"/>
<category term="Docker" scheme="https://clouddev.blog/tags/Docker/"/>
<category term="DotNet" scheme="https://clouddev.blog/tags/DotNet/"/>
<category term="Single Threaded Apps" scheme="https://clouddev.blog/tags/Single-Threaded-Apps/"/>
</entry>
<entry>
<title>Application Gateway Ingress Controller For AKS</title>
<link href="https://clouddev.blog/AKS/AGIC/application-gateway-ingress-controller-for-aks/"/>
<id>https://clouddev.blog/AKS/AGIC/application-gateway-ingress-controller-for-aks/</id>
<published>2022-08-19T12:00:00.000Z</published>
<updated>2023-04-04T10:49:14.496Z</updated>
<content type="html"><![CDATA[<p>Recently I ran into an interesting issue with an AKS cluster running 2000+ services. There is nothing wrong with running 2000+ services; that’s what Kubernetes is there for, scale! But the interesting aspect that caught my attention was trying to get the Application Gateway Ingress Controller (AGIC) to ingress to all these services. I had worked with Istio and NGINX for ingress into AKS with no issues, but never AGIC, so I had to try it to see where it worked well, what the advantages are and where the limitations are.</p><h2 id="Application-Gateway"><a href="#Application-Gateway" class="headerlink" title="Application Gateway"></a>Application Gateway</h2><p>Application Gateway (App Gateway) is a well-established layer 7 service that has been around for a while; some of the major features are:</p><ul><li>URL routing</li><li>Cookie-based affinity</li><li>SSL termination</li><li>End-to-end SSL</li><li>Support for public, private, and hybrid web sites</li><li>Integrated web application firewall</li><li>Zone redundancy</li><li>Connection draining</li></ul><p>This post isn’t focused on the App Gateway itself, it’s more about how and what it can do as an ingress controller for AKS. 
<a href="https://docs.microsoft.com/en-us/azure/application-gateway/features">You can find out more about App Gateway and all about its features here</a></p><span id="more"></span><h2 id="TLDR"><a href="#TLDR" class="headerlink" title="TLDR;"></a>TLDR;</h2><h3 id="Benefits-of-AGIC"><a href="#Benefits-of-AGIC" class="headerlink" title="Benefits of AGIC"></a>Benefits of AGIC</h3><blockquote><ul><li>Direct connection to the pods without an extra hop; <a href="https://azure.microsoft.com/en-au/blog/application-gateway-ingress-controller-for-azure-kubernetes-service/#:~:text=Solution%20performance">this results in a performance benefit of up to 50% lower network latency compared to in-cluster ingress</a></li><li>Could make a huge difference for performance- and latency-sensitive applications and workloads</li><li>If going the AKS add-on route it becomes fully managed and auto-updated</li><li>In-cluster ingress consumes and competes for AKS compute/memory resources, whereas App Gateway, being separate from the cluster, won’t be taking any of the AKS compute</li><li>Full benefits of the Application Gateway such as WAF, cookie-based affinity and ssl termination, amongst many others</li></ul></blockquote><h4 id="Limitations"><a href="#Limitations" class="headerlink" title="Limitations"></a>Limitations</h4><blockquote><ul><li><a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#application-gateway-limits">Application Gateway has some backend limits. Backend pools are limited to 100.</a></li><li><a href="https://azure.microsoft.com/en-us/pricing/details/application-gateway/#pricing">Application Gateway does have a pricing implication</a></li><li>Routing is directly to pod IPs rather than the ClusterIP of the service. 
<a href="https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1427">There is a feature request open for this</a></li></ul></blockquote><h3 id="Application-Gateway-Ingress-Controller-AGIC"><a href="#Application-Gateway-Ingress-Controller-AGIC" class="headerlink" title="Application Gateway Ingress Controller (AGIC)"></a>Application Gateway Ingress Controller (AGIC)</h3><p>AGIC went to GA around the end of 2019 and offered the possibility of hooking up an App Gateway as an attractive alternative for ingress into an AKS cluster. Before moving any further with AGIC, we need to understand at a high level how networking works in AKS.</p><p>There are two main network models:</p><ol><li><p>Kubenet networking</p><blockquote><ul><li>Default option for Kubernetes out of the box</li><li>Each node receives an IP from the Azure virtual network subnet</li><li>Pods in the node are not associated with the Azure vnet; they are assigned an IP address from the <em>PodIPCidr</em> and a route table is created by AKS</li></ul></blockquote></li><li><p>Azure Container Networking Interface networking (CNI)</p><blockquote><ul><li>Each pod itself receives an IP address from the Azure virtual network subnet</li><li>Pods can be directly reached via their private IP from connected networks</li><li>Pods can access resources in the vnet directly without issues (e.g. a function app in the same vnet)</li></ul></blockquote></li></ol><p>It’s important to note that once you create an AKS cluster with a given network model you can’t change it; you will have to create a new one. <a href="https://docs.microsoft.com/en-us/azure/aks/concepts-network#compare-network-models">There are advantages and disadvantages in both models which are listed in detail in this link</a>.</p><p>One key consideration to highlight is:</p><ul><li>Kubenet - a /24 IP range can support up to 251 nodes (Azure reserves 5 IP addresses in each subnet). 
Given the maximum of 110 pods per node in Kubenet, this configuration can support a maximum of 251 * 110 = 27,610 pods</li><li>CNI - the same /24 IP range can support a maximum of 8 nodes (CNI has a max of thirty pods per node). So, this configuration can support a maximum of 8 * 30 = 240 pods</li></ul><p>When it comes to CNI you will have to plan for the IP addresses; you might need a /16 range to get a bigger node count. <a href="https://docs.microsoft.com/en-us/azure/aks/configure-kubenet#limitations--considerations-for-kubenet">There are also limitations with kubenet that will need to be taken into consideration</a>.</p><p>With the AKS networking models out of the way, let’s look at AGIC; regardless of which model is chosen, the goal for AGIC is to ingress directly to the pod, and a simple representation of this can be seen below. AGIC, when deployed, runs in a pod in the AKS cluster and watches for changes; when changes are detected (i.e. a new pod has been added or an existing pod removed) these IP changes are propagated to the App Gateway via the Azure Resource Manager.</p><div class="mxgraph-container"> <div class="mxgraph" style="max-width:100%;border:1px solid transparent;" data-mxgraph="{"highlight":"#0000ff","lightbox":false,"nav":true,"resize":false,"page":0,"toolbar":"lightbox zoom layers pages","url":"https://raw.githubusercontent.com/Ricky-G/draw-io/main/AGIC-Ingress-AKS.drawio"}"></div></div><p>If we went with the CNI networking model, then the pod would get an IP address from the vnet and there would be a mapping in the App Gateway. 
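</p><p>As an aside, the subnet capacity arithmetic from the model comparison above can be sanity-checked in a few lines. A minimal sketch (the per-node pod limits are configurable in AKS; the 110 and 30 used here are just the defaults quoted above):</p>

```python
# Pod capacity of a /24 subnet under the two AKS network models (sketch).
# Azure reserves 5 IP addresses in every subnet, leaving 251 usable in a /24.
usable_ips = 2 ** (32 - 24) - 5                    # 251

# Kubenet: only nodes draw an IP from the subnet.
kubenet_pods_per_node = 110                        # default kubenet maximum
kubenet_capacity = usable_ips * kubenet_pods_per_node

# Azure CNI: every pod also draws an IP, so each node "costs" 1 + max-pods IPs.
cni_pods_per_node = 30                             # default CNI maximum
cni_nodes = usable_ips // (1 + cni_pods_per_node)  # 8 nodes
cni_capacity = cni_nodes * cni_pods_per_node       # 240 pods

print(kubenet_capacity, cni_nodes, cni_capacity)   # 27610 8 240
```

<p>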
Alternatively, with the Kubenet model <a href="https://azure.github.io/application-gateway-kubernetes-ingress/how-tos/networking/#with-kubenet">this is how App Gateway will be set up</a>; it will try to assign the same route table created by AKS to App Gateway’s subnet.</p><p>It’s important to note that whichever model you choose, the App Gateway will always connect directly to the pod, and this is by design.</p><h2 id="Deploying-AGIC"><a href="#Deploying-AGIC" class="headerlink" title="Deploying AGIC"></a>Deploying AGIC</h2><p>AGIC can be deployed in two ways, <a href="https://docs.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview#difference-between-helm-deployment-and-AKS-add-on">either using Helm or as an AKS add-on</a>. Each has its pros and cons; the key benefit of going via the AKS add-on is that it will be fully managed and auto-updated by Azure (i.e. all updates, patching etc. for AGIC will be taken care of automatically), whereas with Helm you will have to do that yourself.</p><p>Let’s go ahead and deploy a demo AKS cluster with AGIC and see it in action to understand exactly what is going on. 
For the sake of simplicity, this demo creates an AKS cluster with the CNI networking model and deploys AGIC as an AKS add-on.</p><h3 id="Create-an-AKS-cluster"><a href="#Create-an-AKS-cluster" class="headerlink" title="Create an AKS cluster"></a>Create an AKS cluster</h3><p><strong>Login and set the right subscription</strong></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">az login</span><br><span class="line">az account <span class="built_in">set</span> -s <span class="string">"your-subscription-id"</span></span><br></pre></td></tr></table></figure><p><strong>Create a new resource group</strong></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az group create --name agicTestResourceGroup --location eastus</span><br></pre></td></tr></table></figure><p>Here we are creating a new AKS cluster with the CNI networking model (--network-plugin azure) and setting up App Gateway as ingress; in this instance we are saying our App Gateway’s name is “testAppGateway”, which doesn’t exist and will be created for us</p><p><strong>Create AKS cluster</strong></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az aks create -n agicTestCluster -g agicTestResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testAppGateway --appgw-subnet-cidr <span class="string">"10.225.0.0/16"</span> --generate-ssh-keys</span><br></pre></td></tr></table></figure><p>If we go into the Azure Portal, we can see two resource groups (one of them is what we created, and this is where the Azure managed AKS control plane is); the other resource group (MC_agicTestResourceGroup_agicTestCluster_eastus) is where the node pool, 
vnet, App Gateway etc. all live; this resource group gets created automatically for us as part of the <em>az aks create</em> command.</p><p><img src="/AKS/AGIC/application-gateway-ingress-controller-for-aks/aks-resource-group.png" alt=" " title="AKS Resource Group"></p><p><img src="/AKS/AGIC/application-gateway-ingress-controller-for-aks/app-gateway-resource-group.png" alt=" " title="App Gateway Resource Group"></p><h2 id="Deploy-a-sample-API"><a href="#Deploy-a-sample-API" class="headerlink" title="Deploy a sample API"></a>Deploy a sample API</h2><p>Now that we have the AKS cluster up and running with AGIC deployed as an add-on, let’s deploy a sample API app and set up ingress through the App Gateway.</p><p><strong>Get credentials to the AKS cluster</strong></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az aks get-credentials -n agicTestCluster -g agicTestResourceGroup</span><br></pre></td></tr></table></figure><p><strong>Deploy a sample API</strong></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl apply -f https://gist.githubusercontent.com/Ricky-G/59eb109913bd45d3e9229f9cf0a97edc/raw/b336047feecd9fd89fbe1a9627ac385b525124fe/sample-api-aks-deployment.yaml</span><br></pre></td></tr></table></figure><p>The above sample API deployment yaml was taken from the <a href="https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/examples/aspnetapp.yaml">AGIC GitHub repo</a>; the only change made to it was adding a minimum of 10 replicas. We are saying we need 10 pods running this API. 
As soon as you run this you should see the app deployed as a service and 10 pods running successfully, and there is a cluster-IP set for this (the cluster-IP is a virtual IP that Kubernetes creates for the service; we just need to call this IP and our traffic will be forwarded to one of the 10 pods)</p><p><img src="/AKS/AGIC/application-gateway-ingress-controller-for-aks/sample-api-sevice.png" alt=" " title="Service Deployed to AKS"></p><p>Now if we go to the resource group where we have the actual Application Gateway and look at the backend pools, we can see there is one created by AGIC, and if we dig into the pool all the IP addresses of the 10 pods are listed there. So, we have direct ingress to the pods from the Application Gateway.</p><p><img src="/AKS/AGIC/application-gateway-ingress-controller-for-aks/app-gateway-backend-pool.png" alt=" " title="Application Gateway Backend Pool"></p><p>Finally, if we run the below command, we should see an ingress IP address for “aspnetapp”, which is our sample API. This is the public IP of the Application Gateway, which has been wired up to ingress all the way to the pod. 
If we paste this IP into the browser, we can see the sample aspnet site served from the pod.</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl get ingress</span><br></pre></td></tr></table></figure><p>Right, so we have successfully ingressed from the public ip, via the Application Gateway, all the way to our pod.</p><h2 id="Benefits-of-AGIC-1"><a href="#Benefits-of-AGIC-1" class="headerlink" title="Benefits of AGIC"></a>Benefits of AGIC</h2><ul><li>Direct connection to the pods without an extra hop; <a href="https://azure.microsoft.com/en-au/blog/application-gateway-ingress-controller-for-azure-kubernetes-service/#:~:text=Solution%20performance">this results in a performance benefit of up to 50% lower network latency compared to in-cluster ingress</a></li><li>Could make a huge difference for performance- and latency-sensitive applications and workloads</li><li>If going the AKS add-on route it becomes fully managed and auto-updated</li><li>In-cluster ingress consumes and competes for AKS compute/memory resources, whereas App Gateway, being separate from the cluster, won’t be taking any of the AKS compute</li><li>Full benefits of the Application Gateway such as WAF, cookie-based affinity and ssl termination, amongst many others</li></ul><h2 id="Limitations-1"><a href="#Limitations-1" class="headerlink" title="Limitations"></a>Limitations</h2><ul><li><a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#application-gateway-limits">Application Gateway has some backend limits. Backend pools are limited to 100.</a></li><li><a href="https://azure.microsoft.com/en-us/pricing/details/application-gateway/#pricing">Application Gateway does have a pricing implication</a></li><li>Routing is directly to pod IPs rather than the ClusterIP of the service. 
<a href="https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1427">There is a feature request open for this</a></li></ul><h2 id="Closing-Thoughts"><a href="#Closing-Thoughts" class="headerlink" title="Closing Thoughts"></a>Closing Thoughts</h2><p>The key thing to keep in mind is the backend pool limitation of 100. If you have more than 100 “ingress-able” services, then you would need multiple Application Gateways to cater for this. Although it is a supported scenario and straightforward to set up multiple App Gateways for one AKS cluster, your costs will pile up.</p><p>At the start of this post, I mentioned a scenario of 2000+ services; in this case we would need 20 App Gateways (2000 services / 100 = 20). Due to cost implications this won’t be palatable in most cases.</p><p>On the plus side you get a direct connection to the pod and can shave up to 50% off network latency. So, in this 2000+ services in one cluster scenario we could put the App Gateway as ingress for just the latency-sensitive apps/APIs and use another traditional in-cluster ingress for all the other services. This way you get the best of both worlds while still keeping below the App Gateway max backend pool limits.</p><p>One neat option for an in-cluster ingress could be <a href="https://docs.microsoft.com/en-us/azure/aks/web-app-routing">Web Application Routing</a>, which is still in preview at the time of writing this. 
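</p><p>The gateway count in that closing-thoughts arithmetic is just ceiling division over the 100 backend pool limit linked earlier; as a quick sketch (the function name here is mine, for illustration only):</p>

```python
import math

MAX_BACKEND_POOLS = 100  # Application Gateway backend pool limit cited above

def app_gateways_needed(service_count: int) -> int:
    # One backend pool per ingress-able service, rounded up to whole gateways.
    return math.ceil(service_count / MAX_BACKEND_POOLS)

print(app_gateways_needed(2000))  # 20 - the scenario in this post
print(app_gateways_needed(2001))  # 21 - one extra service needs another gateway
```

<p>Returning to Web Application Routing: 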
It’s a managed NGINX-based solution that should work well as an in-cluster ingress controller.</p><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li><a href="https://azure.microsoft.com/en-au/blog/application-gateway-ingress-controller-for-azure-kubernetes-service/">AGIC main documentation</a></li><li><a href="https://azure.github.io/application-gateway-kubernetes-ingress/">AGIC GitHub</a></li><li>Main image <a href="https://azure.microsoft.com/svghandler/application-gateway">was taken from the Azure site</a> and slightly modified</li></ul>]]></content>
<summary type="html"><p>Recently I ran into an interesting issue with an AKS cluster running 2000+ services. There is nothing wrong with running 2000+ services; that’s what Kubernetes is there for, scale! But the interesting aspect that caught my attention was trying to get the Application Gateway Ingress Controller (AGIC) to ingress to all these services. I had worked with Istio and NGINX for ingress into AKS with no issues, but never AGIC, so I had to try it to see where it worked well, what the advantages are and where the limitations are.</p>
<h2 id="Application-Gateway"><a href="#Application-Gateway" class="headerlink" title="Application Gateway"></a>Application Gateway</h2><p>Application Gateway (App Gateway) is a well-established layer 7 service that has been around for a while; some of the major features are:</p>
<ul>
<li>URL routing</li>
<li>Cookie-based affinity</li>
<li>SSL termination</li>
<li>End-to-end SSL</li>
<li>Support for public, private, and hybrid web sites</li>
<li>Integrated web application firewall</li>
<li>Zone redundancy</li>
<li>Connection draining</li>
</ul>
<p>This post isn’t focused on the App Gateway itself, it’s more about how and what it can do as an ingress controller for AKS. <a href="https://docs.microsoft.com/en-us/azure/application-gateway/features">You can find out more about App Gateway and all about its features here</a></p></summary>
<category term="AKS" scheme="https://clouddev.blog/categories/AKS/"/>
<category term="AGIC" scheme="https://clouddev.blog/categories/AKS/AGIC/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="AKS" scheme="https://clouddev.blog/tags/AKS/"/>
<category term="Ingress" scheme="https://clouddev.blog/tags/Ingress/"/>
<category term="AGIC" scheme="https://clouddev.blog/tags/AGIC/"/>
<category term="Application Gateway" scheme="https://clouddev.blog/tags/Application-Gateway/"/>
<category term="Kubernetes" scheme="https://clouddev.blog/tags/Kubernetes/"/>
</entry>
<entry>
<title>Deploying To IP Restricted Azure Function Apps Using GitHub Actions</title>
<link href="https://clouddev.blog/GitHub/Actions/deploying-to-ip-restricted-azure-function-apps-using-github-actions/"/>
<id>https://clouddev.blog/GitHub/Actions/deploying-to-ip-restricted-azure-function-apps-using-github-actions/</id>
<published>2022-08-06T12:00:00.000Z</published>
<updated>2022-08-20T12:04:19.270Z</updated>
<content type="html"><![CDATA[<a href="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/" title="In the previous post we blocked our function app to be available only to the APIM via ip restrictions">In the previous post we blocked our function app to be available only to the APIM via ip restrictions</a>. <p>This secures our function app and it isn’t available publicly; anyone who tries to access our function app url will get “HTTP 403 Forbidden”.</p><p>Now what about deploying code changes to the function app via GitHub Actions? We should be able to CI/CD to our function app, but there is a problem here. The GitHub Action will fail with the same “HTTP 403 Forbidden”; this is because GitHub Actions run on runners (a hosted virtual environment), and each time we run the Action we get a new runner, which can have a different ip address. So how can we get around this? <a href="https://api.github.com/meta">Do we white list the entire GitHub ip range?</a></p><p>GitHub’s ip ranges can change at any time, so we would have to keep scanning for changes to these ranges and proactively update our ip restrictions; this is not very scalable or practical. So what other ways are there of getting around this?</p><h2 id="Possible-Solutions"><a href="#Possible-Solutions" class="headerlink" title="Possible Solutions"></a>Possible Solutions</h2><p>There are two viable solutions here</p><span id="more"></span><h3 id="1-Use-a-self-hosted-runner"><a href="#1-Use-a-self-hosted-runner" class="headerlink" title="1. Use a self-hosted runner"></a>1. 
Use a self-hosted runner</h3><blockquote><p>Where you bring your own VMs with static ips and whitelist those static ips</p></blockquote><p><strong>Pros:</strong></p><ul><li>Full control over your devops agents</li><li>Can optimize/reuse these agents for various CI/CD workloads for your cloud and on-prem deployments</li></ul><p><strong>Cons:</strong></p><ul><li>You have to provision and maintain your own VMs and install all the build tooling on them; there is time and effort required for this</li><li>Extra costs to run your own VM(s), although this could be optimized by turning them off after hours etc</li><li>You miss out on the free GitHub Actions minutes you get</li></ul><h3 id="2-Do-some-extra-steps-in-the-existing-GitHub-Actions"><a href="#2-Do-some-extra-steps-in-the-existing-GitHub-Actions" class="headerlink" title="2. Do some extra steps in the existing GitHub Actions"></a>2. Do some extra steps in the existing GitHub Actions</h3><blockquote><ol><li>Use the Azure CLI</li><li>Do an az login</li><li>Grab the public ip of the GitHub runner; you could use a simple public api like the <a href="https://api.ipify.org/">ipify api</a> for this</li><li>Use az cli to update the ip restrictions to add this additional ip</li><li>Do your normal deployment</li><li>Use az cli to remove the ip added in step 4</li></ol></blockquote><p><strong>Pros:</strong></p><ul><li>You use the same GitHub runner and workflow</li><li>No effort in provisioning or maintaining extra virtual machines yourself</li><li>A little bit of extra code is all that is needed</li></ul><p><strong>Cons:</strong></p><ul><li>There is a possibility that the GitHub Action runner fails/crashes after doing step 4 but before it gets to step 6; you could be left with an extra ip address white listed in your app until you run the workflow again.</li></ul><p>This post is all about how to go 
about doing option 2 (do some extra steps in the existing GitHub Actions); although there is one con (ie: the GitHub runner crashing before the ip is removed, leaving the runner’s ip address white listed), in my view this is a very small risk. The chances of a crash precisely at that point are low, and even if it does happen, the risk of having the runner ip (only 1 extra ip) white listed for a short duration until your next run is very low.</p><h2 id="Show-me-the-code"><a href="#Show-me-the-code" class="headerlink" title="Show me the code"></a>Show me the code</h2><p>If you want to skip and just get to the code:</p><ul><li><a href="https://github.com/Ricky-G/github-cicd-samples/tree/main/functionapp">Here is the sample hello world function app (written in .net 6)</a></li><li><a href="https://github.com/Ricky-G/github-cicd-samples/blob/main/.github/workflows/azure-function-app-deploy.yml">Here is the GitHub Action that is deploying to the ip restricted app</a></li></ul><p>In the above GitHub Action it is deploying a hello world function app; it is doing a dotnet build, package and deploy. Those are all the standard bits of deploying a function app; let’s go over the interesting bits</p><ol><li>Getting the GitHub Runner’s public ip</li><li>Whitelisting this ip</li><li>After a successful deploy of our app, removing the ip added in step 2</li></ol><blockquote><p>For the first step we are using a public package <a href="https://github.com/marketplace/actions/public-ip">haythem/public-ip@v1.2</a> to get the ip. We can also manually do a curl ourselves to the <a href="https://api.ipify.org/">ipify api</a> and grab the public ip. 
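</p></blockquote><p>The manual curl route is only a few lines. Here is a minimal Python (stdlib only) sketch of that step: fetch the runner’s egress ip from the ipify api and sanity-check it before using it in an access restriction rule (the helper names here are mine, not from the workflow):</p>

```python
import re
import urllib.request

IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def looks_like_ipv4(value: str) -> bool:
    # Cheap sanity check before writing the value into an access restriction.
    return bool(IPV4_RE.match(value)) and all(
        0 <= int(part) <= 255 for part in value.split(".")
    )

def get_public_ip() -> str:
    # ipify returns the caller's public (egress) ip as plain text.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        ip = resp.read().decode("ascii").strip()
    if not looks_like_ipv4(ip):
        raise ValueError(f"unexpected ipify response: {ip!r}")
    return ip
```

<blockquote><p>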
For the purposes of this demo we will use this package.</p></blockquote><p><strong>Step 1 - getting the GitHub runner’s public ip</strong></p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">Public</span> <span class="string">IP</span></span><br><span class="line"> <span class="attr">id:</span> <span class="string">ip</span></span><br><span class="line"> <span class="attr">uses:</span> <span class="string">haythem/public-ip@v1.2</span></span><br></pre></td></tr></table></figure><blockquote><ul><li>For the second step we use the az cli to add the ip address</li><li>First we use az webapp config to set --use-same-restrictions-for-scm-site to false; here we are saying don’t apply the same restrictions as the main site to the scm site</li><li>Our main site is still safe with the right ip restrictions; our scm site is now ready for changes</li><li>Next we use az functionapp config access-restriction to add the GitHub runner ip to just the scm site</li></ul></blockquote><p><strong>Step 2 - white listing the GitHub runner’s public ip</strong></p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">'Allow Github Runner IpAddress'</span></span><br><span class="line"> <span class="attr">uses:</span> <span class="string">azure/CLI@v1</span></span><br><span class="line"> <span class="attr">with:</span></span><br><span class="line"> <span class="attr">azcliversion:</span> 
<span class="number">2.37</span><span class="number">.0</span></span><br><span class="line"> <span class="attr">inlineScript:</span> <span class="string">|</span></span><br><span class="line"><span class="string"> az webapp config access-restriction set -g $ -n func-app-iprest-demo --use-same-restrictions-for-scm-site false</span></span><br><span class="line"><span class="string"> az functionapp config access-restriction add -g $ -n func-app-iprest-demo --rule-name github_runner --action Allow --ip-address $ --priority 100 --scm-site true</span></span><br></pre></td></tr></table></figure><blockquote><p>Finally we remove the ip address we added from the previous step and set the scm site access the same as our main site</p></blockquote><p><strong>Step 3 - after successful deploy, remove the GitHub runner’s public ip</strong></p><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="bullet">-</span> <span class="attr">name:</span> <span class="string">'Remove Github Runner IpAddress'</span></span><br><span class="line"> <span class="attr">uses:</span> <span class="string">azure/CLI@v1</span></span><br><span class="line"> <span class="attr">with:</span></span><br><span class="line"> <span class="attr">azcliversion:</span> <span class="number">2.37</span><span class="number">.0</span></span><br><span class="line"> <span class="attr">inlineScript:</span> <span class="string">|</span></span><br><span class="line"><span class="string"> az functionapp config access-restriction remove -g $ -n func-app-iprest-demo --rule-name github_runner --scm-site true</span></span><br><span class="line"><span class="string"> az webapp config access-restriction set -g $ -n func-app-iprest-demo 
--use-same-restrictions-for-scm-site true</span></span><br></pre></td></tr></table></figure><p>And that’s it 👏! We can now deploy to IP-restricted function apps using GitHub Actions 🙌.</p><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><p>As always, a big thank you to <a href="https://unsplash.com/">Unsplash</a> for providing a huge range of images for free</p><ul><li>Cover image has been taken from <a href="https://unsplash.com/photos/842ofHC6MaI">https://unsplash.com/photos/842ofHC6MaI</a></li></ul>]]></content>
    <summary type="html"><a href="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/" title="In the previous post we restricted our function app, via IP restrictions, to be reachable only through the APIM">In the previous post we restricted our function app, via IP restrictions, to be reachable only through the APIM</a>.
<p>This secures our function app so it isn’t available publicly; anyone who tries to access the function app URL will get an “HTTP 403 Forbidden”.</p>
<p>So what about deploying code changes to the function app via GitHub Actions? We should be able to CI&#x2F;CD to our function app, but there is a problem: the GitHub Action will fail with the same “HTTP 403 Forbidden”. This is because GitHub Actions run on runners (hosted virtual environments); each time we run the Action we get a new runner, which can have a different IP address. So how can we get around this? <a href="https://api.github.com/meta">Do we whitelist the entire GitHub IP range?</a></p>
<p>GitHub’s IP ranges can change at any time, so we would have to keep scanning for changes to these ranges and proactively update our IP restrictions; this is neither scalable nor practical. Fortunately, there are a couple of ways around this.</p>
<h2 id="Possible-Solutions"><a href="#Possible-Solutions" class="headerlink" title="Possible Solutions"></a>Possible Solutions</h2><p>There are two viable solutions here.</p></summary>
<category term="GitHub" scheme="https://clouddev.blog/categories/GitHub/"/>
<category term="Actions" scheme="https://clouddev.blog/categories/GitHub/Actions/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/tags/Function-Apps/"/>
<category term="Azure App Service" scheme="https://clouddev.blog/tags/Azure-App-Service/"/>
<category term="GitHub" scheme="https://clouddev.blog/tags/GitHub/"/>
<category term="CI/CD" scheme="https://clouddev.blog/tags/CI-CD/"/>
<category term="Security" scheme="https://clouddev.blog/tags/Security/"/>
<category term="IP Restrictions" scheme="https://clouddev.blog/tags/IP-Restrictions/"/>
<category term="Serverless" scheme="https://clouddev.blog/tags/Serverless/"/>
</entry>
<entry>
<title>Securing Azure Functions and Logic Apps</title>
<link href="https://clouddev.blog/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/"/>
<id>https://clouddev.blog/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/</id>
<published>2022-07-31T12:00:00.000Z</published>
<updated>2022-08-09T10:58:49.555Z</updated>
    <content type="html"><![CDATA[<p>Here is a scenario that I recently encountered. Imagine we are building micro-services using serverless (a mix of Azure Function Apps and Logic Apps) with APIM in front. Let’s say we went with the APIM Standard tier, and all the logic and function apps run on the Consumption plan (it’s cheaper). This means we won’t get any VNet capability, and our function and logic apps will be exposed to the world (remember, to get VNet support with APIM we would have to go with the Premium tier; we are going with Standard here to save costs).</p><p>So how do we restrict our function and logic apps to only accept traffic through the APIM? In other words, all our function and logic apps <strong>must only</strong> be reachable through the APIM, and anyone trying to access them directly should get an “HTTP 403 Forbidden”.</p><p>Let’s visualize this scenario: we have a WAF-capable ingress endpoint, in this case Azure Front Door, forwarding traffic to APIM, which then sends the requests to the serverless apps.<br>The reason for having Front Door before APIM is that APIM doesn’t have a WAF natively, so we <a href="https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/api-management-security-baseline#ns-6-deploy-web-application-firewall">will need to put something in front of it that has that capability to be secure</a>. 
</p><p><a href="https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/api-management-security-baseline#ns-6-deploy-web-application-firewall">There are a few options, like Azure Firewall, Application Gateway, etc.</a>, but for the purposes of this scenario we have Azure Front Door in front of APIM (we can also have an APIM policy that only accepts traffic from Azure Front Door, but we won’t be going into that here; today we will stick to making our function apps available only via APIM)</p><p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/apim-azure-functions-backend.png" alt=" " title="Sample Scenario"></p><span id="more"></span><h2 id="Securing-the-function-app"><a href="#Securing-the-function-app" class="headerlink" title="Securing the function app"></a>Securing the function app</h2><ol><li>First we need to get the public IP address of the APIM</li><li>Whitelist this address in our function app’s network restrictions</li></ol><h2 id="Getting-the-public-ip-of-APIM"><a href="#Getting-the-public-ip-of-APIM" class="headerlink" title="Getting the public ip of APIM"></a>Getting the public ip of APIM</h2><p>You can go to the APIM resource in the Azure portal and get it from there<br><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/apim-public-ip.png" alt=" " title="APIM ip address"></p><p>Or you can use the CLI and run </p><figure class="highlight plaintext"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">az apim show --name "apim-name" --resource-group "resource-group-name"</span><br></pre></td></tr></table></figure><h2 id="White-listing-the-function-app"><a href="#White-listing-the-function-app" class="headerlink" title="White-listing the function app"></a>White-listing the function app</h2><ol><li>Go into Networking -> Access restriction</li><li>Only allow the APIM IP (once you enter this, the deny-all rule is added automatically, i.e. all other IPs are denied)</li><li>It’s important that the SCM site is also blocked. <a href="https://docs.microsoft.com/en-us/azure/app-service/resources-kudu">More about the Kudu service that powers the SCM site here</a></li></ol><p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/func-ip-restriction-1.png" alt=" " title="Function app ip restrictions"></p><p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/func-ip-restriction-2.png" alt=" " title="Function app block all ips except APIM"></p><p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/func-ip-restriction-3.png" alt=" " title="Make sure to block the SCM site also"></p><h2 id="What-happens-if-you-try-to-access-this-function"><a href="#What-happens-if-you-try-to-access-this-function" class="headerlink" title="What happens if you try to access this function"></a>What happens if you try to access this function</h2><p>Now that it’s all blocked, we get a nice HTTP 403 Forbidden</p><p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/func-ip-restriction-4.png" alt=" " title="HTTP 403 Forbidden"></p><h2 id="What-about-deploying-code-to-this-function-via-GitHub-Actions"><a href="#What-about-deploying-code-to-this-function-via-GitHub-Actions" class="headerlink" title="What about deploying code to this function via GitHub Actions"></a>What about deploying code to this function via GitHub Actions</h2><p>When you try to deploy to these functions using GitHub Actions, or even via Azure DevOps, you will get the same HTTP 403 and won’t be able to deploy. This is because the GitHub runner’s IP address will be blocked; remember, we are only allowing APIM in and all others are blocked.</p><p>There are a couple of ways to get around this. 
<a href="/GitHub/Actions/deploying-to-ip-restricted-azure-function-apps-using-github-actions/" title="I talk about this in the next post, you can check it out here">I talk about this in the next post, you can check it out here</a></p><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><ul><li>Cover image has been taken from <a href="https://azure.microsoft.com/en-us/services/functions/#overview">https://azure.microsoft.com/en-us/services/functions/#overview</a></li></ul>]]></content>
    <summary type="html"><p>Here is a scenario that I recently encountered. Imagine we are building micro-services using serverless (a mix of Azure Function Apps and Logic Apps) with APIM in front. Let’s say we went with the APIM Standard tier, and all the logic and function apps run on the Consumption plan (it’s cheaper). This means we won’t get any VNet capability, and our function and logic apps will be exposed to the world (remember, to get VNet support with APIM we would have to go with the Premium tier; we are going with Standard here to save costs).</p>
<p>So how do we restrict our function and logic apps to only accept traffic through the APIM? In other words, all our function and logic apps <strong>must only</strong> be reachable through the APIM, and anyone trying to access them directly should get an “HTTP 403 Forbidden”.</p>
<p>Let’s visualize this scenario: we have a WAF-capable ingress endpoint, in this case Azure Front Door, forwarding traffic to APIM, which then sends the requests to the serverless apps.<br>The reason for having Front Door before APIM is that APIM doesn’t have a WAF natively, so we <a href="https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/api-management-security-baseline#ns-6-deploy-web-application-firewall">will need to put something in front of it that has that capability to be secure</a>. </p>
<p><a href="https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/api-management-security-baseline#ns-6-deploy-web-application-firewall">There are a few options, like Azure Firewall, Application Gateway, etc.</a>, but for the purposes of this scenario we have Azure Front Door in front of APIM (we can also have an APIM policy that only accepts traffic from Azure Front Door, but we won’t be going into that here; today we will stick to making our function apps available only via APIM)</p>
<p><img src="/Azure/Function-Apps/Security/securing-azure-functions-and-logic-apps/apim-azure-functions-backend.png" alt=" " title="Sample Scenario"></p></summary>
<category term="Azure" scheme="https://clouddev.blog/categories/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/categories/Azure/Function-Apps/"/>
<category term="Security" scheme="https://clouddev.blog/categories/Azure/Function-Apps/Security/"/>
<category term="Azure" scheme="https://clouddev.blog/tags/Azure/"/>
<category term="Function Apps" scheme="https://clouddev.blog/tags/Function-Apps/"/>
<category term="Azure App Service" scheme="https://clouddev.blog/tags/Azure-App-Service/"/>
<category term="GitHub" scheme="https://clouddev.blog/tags/GitHub/"/>
<category term="CI/CD" scheme="https://clouddev.blog/tags/CI-CD/"/>
<category term="Security" scheme="https://clouddev.blog/tags/Security/"/>
<category term="IP Restrictions" scheme="https://clouddev.blog/tags/IP-Restrictions/"/>
<category term="Serverless" scheme="https://clouddev.blog/tags/Serverless/"/>
</entry>
<entry>
<title>Hello World 👋</title>
<link href="https://clouddev.blog/Blog/hello-world-%F0%9F%91%8B/"/>
<id>https://clouddev.blog/Blog/hello-world-%F0%9F%91%8B/</id>
<published>2022-07-26T12:00:00.000Z</published>
<updated>2022-08-09T12:10:48.103Z</updated>
    <content type="html"><![CDATA[<p>After sitting on this for a long time and wanting to blog / write down my thoughts, I’ve finally got my act together and started this. So many times I have been asked very good questions where I am sure the answer/solution/thoughts would have interested not just the person asking but many others. This is a way to write about those things and help the wider community searching for similar solutions.</p><p>I regularly answer on Stack Overflow, and in some cases I have written a question and answered it myself just in case someone was looking for something similar; that wasn’t really the ideal platform for it. There have been so many times that reading other people’s blogs has helped me and unblocked me on problems I was stuck with; this is, in a way, trying to give back to the community and help people on the lookout for a solution to a similar problem.</p><h1 id="How-to-power-the-blog"><a href="#How-to-power-the-blog" class="headerlink" title="How to power the blog"></a>How to power the blog</h1><p>There were so many choices out there, both for the frameworks and libraries to build the blog with and for where to host it.</p><h2 id="My-requirements-when-it-came-to-building-were-simple"><a href="#My-requirements-when-it-came-to-building-were-simple" class="headerlink" title="My requirements when it came to building were simple"></a>My requirements when it came to building were simple</h2><ul><li>Easy to author posts</li><li>Easy to build</li><li>Easy to maintain</li><li>Most customizations (e.g. search, ads, tags, categories) should come out of the box</li></ul><h2 id="My-requirements-when-it-came-to-hosting-were-even-simpler"><a href="#My-requirements-when-it-came-to-hosting-were-even-simpler" class="headerlink" title="My requirements when it came to hosting were even simpler"></a>My requirements when it came to hosting were even simpler</h2><ul><li>Has to be free</li><li>Has to be able to handle ‘some’ level of load</li><li>Easy to CI/CD</li></ul><span id="more"></span><h2 id="Main-choices-here-boiled-down-to"><a href="#Main-choices-here-boiled-down-to" class="headerlink" title="Main choices here boiled down to:"></a>Main choices here boiled down to:</h2><ul><li><a href="https://github.com/OrchardCMS/OrchardCore">Orchard CMS</a></li><li><a href="https://github.com/gohugoio/hugo">Hugo</a></li><li><a href="https://github.com/TryGhost/Ghost">Ghost</a></li><li><a href="https://jekyllrb.com/docs/github-pages/">Jekyll With Github Pages</a></li><li><a href="https://github.com/hexojs/hexo">Hexo</a></li></ul><p>All the options were good; I really liked Hugo, and it was so easy to create a site with. But all of them were geared towards building a CMS / generic site. I was looking for something that had everything a blog needs out of the box, without having to grab lots of plugins or write something custom.</p><p>Jekyll and GitHub Pages were really good and nailed most of the requirements, but I didn’t really want to go down the road of learning Jekyll just to host a blog. That left one option, and Hexo fit my requirements beautifully. It is a dedicated JavaScript framework that has everything I was looking for out of the box, with <a href="https://hexo.io/themes/">360+ themes available, all community-built and free</a>.</p><p>One thing I loved about Hexo is that it builds the source into a static site; you can use GitHub to host the static site and <a href="https://hexo.io/docs/github-pages">GitHub Actions</a> to build it from source.</p><p>This is what I went with in the end: Hexo to build the blog. 
I write everything in markdown files, Hexo builds them into a nice static site, and I host it using <a href="https://github.com/Ricky-G/ricky-g.github.io">GitHub Pages as a public repo</a>.</p><p>There are some <a href="https://docs.github.com/en/pages/getting-started-with-github-pages/about-github-pages#usage-limits">limits to hosting with GitHub Pages</a>; the main one is the soft limit of 100GB of bandwidth. Since this is just a static site, 100GB should be plenty, but if and when it comes to that I will look at putting a CDN in front.</p><h1 id="Final-Result"><a href="#Final-Result" class="headerlink" title="Final Result"></a>Final Result</h1><ul><li><a href="https://github.com/hexojs/hexo">Hexo</a> to build the blog into a static site</li><li><a href="https://github.com/ppoffice/hexo-theme-icarus">Icarus</a> theme</li><li><a href="https://pages.github.com/">GitHub Pages</a> to host the site</li><li><a href="https://bulma.io/">Bulma</a> to help enrich the markdown files with styling</li></ul><h2 id="References"><a href="#References" class="headerlink" title="References"></a>References</h2><p>As always, a big thank you to <a href="https://unsplash.com/">Unsplash</a> for providing a huge range of images for free</p><ul><li>Cover image has been taken from <a href="https://unsplash.com/photos/3SIXZisims4">https://unsplash.com/photos/3SIXZisims4</a></li></ul>]]></content>
    <summary type="html"><p>After sitting on this for a long time and wanting to blog &#x2F; write down my thoughts, I’ve finally got my act together and started this. So many times I have been asked very good questions where I am sure the answer&#x2F;solution&#x2F;thoughts would have interested not just the person asking but many others. This is a way to write about those things and help the wider community searching for similar solutions.</p>
<p>I regularly answer on Stack Overflow, and in some cases I have written a question and answered it myself just in case someone was looking for something similar; that wasn’t really the ideal platform for it. There have been so many times that reading other people’s blogs has helped me and unblocked me on problems I was stuck with; this is, in a way, trying to give back to the community and help people on the lookout for a solution to a similar problem.</p>
<h1 id="How-to-power-the-blog"><a href="#How-to-power-the-blog" class="headerlink" title="How to power the blog"></a>How to power the blog</h1><p>There were so many choices out there when it came to what frameworks and libraries to use to build the blog and what to use to host the blog.</p>
<h2 id="My-requirements-when-it-came-to-building-were-simple"><a href="#My-requirements-when-it-came-to-building-were-simple" class="headerlink" title="My requirements when it came to building were simple"></a>My requirements when it came to building were simple</h2><ul>
<li>Easy to author posts</li>
<li>Easy to build</li>
<li>Easy to maintain</li>
<li>Most customizations (eg: search, ads, tags, categories etc) should come out of the box</li>
</ul>
<h2 id="My-requirements-when-it-came-to-hosting-were-even-simpler"><a href="#My-requirements-when-it-came-to-hosting-were-even-simpler" class="headerlink" title="My requirements when it came to hosting were even simpler"></a>My requirements when it came to hosting were even simpler</h2><ul>
<li>Has to be free</li>
<li>Has to be able to handle ‘some’ level of load</li>
<li>Easy to CI&#x2F;CD</li>
</ul></summary>
<category term="Blog" scheme="https://clouddev.blog/categories/Blog/"/>
<category term="Hexo" scheme="https://clouddev.blog/tags/Hexo/"/>
<category term="Personal" scheme="https://clouddev.blog/tags/Personal/"/>
<category term="Blog" scheme="https://clouddev.blog/tags/Blog/"/>
</entry>
</feed>