Commit

fixing links

Duncanma committed May 11, 2024
1 parent 3dac714 commit c37270f
Showing 8 changed files with 12 additions and 12 deletions.
6 changes: 3 additions & 3 deletions content/Blog/adding-e-commerce-to-my-galleries.md
@@ -11,7 +11,7 @@ images:
- /images/photo-gallery/buy-links.png
description: As a weekend project, I decided to try coding against Stripe and adding the ability to buy original versions of my photos.
---
- {{% note %}}This is a follow-up to my last post on [creating a photo gallery feature for my site](/blog/adding-photo-galleries), so it might be worth a quick skim if you haven't read it before.{{% /note %}}
+ {{% note %}}This is a follow-up to my last post on [creating a photo gallery feature for my site](/blog/adding-photo-galleries/), so it might be worth a quick skim if you haven't read it before.{{% /note %}}

What if I wanted to integrate with Stripe to let people buy these photos? I don’t really need to make money from this hobby, but I’ve had a few people comment that various images would make great wall art, and I’ve wanted to try coding against Stripe’s APIs for real, so… here goes.

@@ -92,7 +92,7 @@ Some of my photos are not suitable for purchase because:

I make the calls to Stripe as part of generating each gallery, checking against a list of images “not for sale”. Each of the remaining images is turned into a Stripe Product in my account. Using the unique ID, we’ll check if a product already exists, and if so, just skip the creation. This allows us to run this code on the same input more than once without creating duplicate products.

- {{% note %}}Making calls to Stripe, using their .NET SDK, requires an API Key. I'm using a [Restricted Access Key](https://docs.stripe.com/keys#limit-access), limited to only being able to read and write Products, Prices and Payment Links as a second layer of security. Even if this key leaked, it has limited ability to cause trouble, and having a distinct key for different parts of my system means it is easy to roll just this key (replacing it with a new one and deactivating the leaked one) without impacting the rest of my code. In my next article, [order fulfillment](/blog/order-fulfillment), I use a different key with the set of permissions required for that code.
+ {{% note %}}Making calls to Stripe, using their .NET SDK, requires an API Key. I'm using a [Restricted Access Key](https://docs.stripe.com/keys#limit-access), limited to only being able to read and write Products, Prices and Payment Links as a second layer of security. Even if this key leaked, it has limited ability to cause trouble, and having a distinct key for different parts of my system means it is easy to roll just this key (replacing it with a new one and deactivating the leaked one) without impacting the rest of my code. In my next article, [order fulfillment](/blog/order-fulfillment/), I use a different key with the set of permissions required for that code.

![image of the key management area of the Stripe dashboard, showing the two restricted keys](/images/photo-gallery/restricted-access-key-gallery.png){{% /note %}}
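The create-if-missing flow described in that paragraph can be sketched as follows. This is a hedged illustration, not the post's actual code: the real implementation uses the Stripe .NET SDK, and the image IDs, the `NOT_FOR_SALE` list, and the `client` object here are all hypothetical stand-ins.

```python
# Sketch of idempotent product creation: skip any image on the
# "not for sale" list, and only create a product when one with
# that ID does not already exist. `client` stands in for any
# object exposing product_exists(id) and create_product(id, name).

NOT_FOR_SALE = {"img-0042"}  # hypothetical IDs of excluded images

def sync_products(image_ids, client):
    created = []
    for image_id in image_ids:
        if image_id in NOT_FOR_SALE:
            continue  # excluded from purchase
        if client.product_exists(image_id):
            continue  # already synced on a previous run
        client.create_product(image_id, name=f"Print of {image_id}")
        created.append(image_id)
    return created

class FakeStripe:
    """In-memory stand-in for the payment API, for illustration only."""
    def __init__(self):
        self.products = {}
    def product_exists(self, product_id):
        return product_id in self.products
    def create_product(self, product_id, name):
        self.products[product_id] = name

client = FakeStripe()
first = sync_products(["img-0001", "img-0042", "img-0007"], client)
second = sync_products(["img-0001", "img-0042", "img-0007"], client)
```

Running the sync twice creates nothing the second time, which is exactly the property that makes it safe to regenerate the galleries repeatedly.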

@@ -256,4 +256,4 @@ To provide the no-JS fallback, a page like [Olives & Spices](/albums/olives-and-

If I stopped there, I’d have a functional method to sell photos. In my Stripe dashboard, I could see a list of completed payments, and I could then manually email the original image to customers, keeping track on my own of which orders had been fulfilled. This is a fine solution, but I’m not excited about it. I don’t expect a lot of orders, so it isn’t the manual work that concerns me (in fact, coding up a solution here could be more work than processing a handful of orders), but the fact that it depends on me taking manual action on a regular basis. I get busy or travel, and suddenly people have paid me money and are not happy with the service. What I want instead is for this to be completely automated: someone orders a picture at 2am, they get what they paid for within a few hours, and I check the Stripe dashboard whenever I have time and see a list of happy orders.

- It feels like I’ve covered a lot in this article already, so I’m going to break out the [order fulfillment into its own piece](/blog/order-fulfillment).
+ It feels like I’ve covered a lot in this article already, so I’m going to break out the [order fulfillment into its own piece](/blog/order-fulfillment/).
2 changes: 1 addition & 1 deletion content/Blog/adding-photo-galleries.md
@@ -103,7 +103,7 @@ The Hugo theme was using [a cool image layout library](https://flickr.github.io/

![the gallery view, showing landscape and portrait images sized and positioned together](/images/photo-gallery/flexGallery.png)

- All of this, done first in HTML, then in Hugo, turns into two ‘layouts’ in my theme. A List template, which is used when showing [the homepage for my albums](/albums), and then a Single template that handles [an individual album](/albums/flowers). I built these first without any partials, but when I was building a later feature, I ended up moving the album card out of the list view into its own file and then doing the same with the individual image code. All of the templates, partials, css, etc. are included in [my blog repository](https://github.com/Duncanma/Blog).
+ All of this, done first in HTML, then in Hugo, turns into two ‘layouts’ in my theme. A List template, which is used when showing [the homepage for my albums](/albums/), and then a Single template that handles [an individual album](/albums/flowers/). I built these first without any partials, but when I was building a later feature, I ended up moving the album card out of the list view into its own file and then doing the same with the individual image code. All of the templates, partials, css, etc. are included in [my blog repository](https://github.com/Duncanma/Blog).

## Supporting multiple image resolutions

2 changes: 1 addition & 1 deletion content/Blog/cdn-advanced-functionality.md
@@ -12,7 +12,7 @@ techfeatured: false
---
The core purpose of a content distribution network (CDN) is to provide your users with endpoints to hit all over the world instead of having everyone go all the way back to your origin server. Caching is a core part of that, to bring some of your content closer to the user as well as the server they connect to. If you've looked into CDNs though, or tried to compare various companies, you will see they offer a lot of 'extras' on top of the basic functionality.

- > This post is one of [a series about CDNs](/tags/cdn), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
+ > This post is one of [a series about CDNs](/tags/cdn/), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
## Web Application Firewalls

4 changes: 2 additions & 2 deletions content/Blog/cdn-purge-and-invalidate.md
@@ -13,7 +13,7 @@ description: Long cache times are great, but sometimes you need to update conten

One of the key ways in which we configure and control our caching is through determining the length of time that content should be considered 'valid'. This time, often referred to as the time-to-live or TTL, determines how long that content will live in a user's local cache and how long the CDN will keep the previously retrieved content instead of going back to your server for an updated copy. Picking the right amount of time can be a challenge. What you want to think about is "how long am I ok with users continuing to get this version of this resource"? If the content is 'immutable', which is to say that it **never** changes, then you should set the cache time very high (a year for example).

- > This post is one of [a series about CDNs](/tags/cdn), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
+ > This post is one of [a series about CDNs](/tags/cdn/), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
This advice is often given for a site's CSS and JavaScript, but it is hardly believable that you will never change it, so how does that work in practice? Caches, including CDNs, use the URL of content online such as `https://mydomain.com/main.css` to uniquely identify it, so one technique is to change that URL whenever your content updates. One version of your CSS could be referred to as `https://mydomain.com/main.css?v=1`, and when you update it, the URL changes to `https://mydomain.com/main.css?v=2`. Those two URLs are considered different pieces of content to a cache, so the user or the CDN will need to fetch the v=2 content as if it has never seen this content before. This method, changing URLs, can be done through a query string, filename or path (`/main-v2.css`, `/2020-02-02/main.css`, etc.), and is generally referred to as 'cache busting'. Often, changing the URL of these resources, and updating all the places on your pages where you use it, is part of the build and deployment step for your site. Without that, it would be a tedious and error prone technique to update everything manually.
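Build tools often implement the URL-changing scheme just described by hashing the file's contents rather than keeping a manual `v=1`, `v=2` counter. A minimal Python sketch of that variant (the function name and the eight-character token length are arbitrary choices, not anything from the post):

```python
import hashlib

def busted_url(path, content):
    # Derive a short version token from the file's bytes, so the URL
    # changes exactly when the content does; any cache (browser or
    # CDN) treats each new URL as a brand-new resource.
    version = hashlib.sha256(content).hexdigest()[:8]
    return f"{path}?v={version}"

v1 = busted_url("/main.css", b"body { color: black; }")
v2 = busted_url("/main.css", b"body { color: navy; }")
```

Because the token is derived from the bytes, unchanged files keep their URL (and stay cached), while any edit produces a new URL automatically, with no counter to forget to bump.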

@@ -36,4 +36,4 @@ First, you could only set long cache times on URLs that you update whenever they

## Short client TTL with a long CDN cache and purging

- The second option takes advantage of the fact that we can [configure the CDN with a different cache time](https://www.keycdn.com/blog/http-cache-headers#s-maxage) than [what is sent to the user's browser](https://www.keycdn.com/blog/http-cache-headers#max-age). We tell the CDN to cache everything for a long time (a year or more), but for content that *might* change (HTML pages in the docs example) we send a much smaller cache time to the user's browser (10 minutes). This means that browsers will still need to request updated content after ten minutes, but the CDN will hang onto that content for a long time, so our server still won't get the request. When the content **does** change, we purge that specific piece of content from the CDN, and within ten minutes users will be seeing the update. This is how caching is handled here on `duncanmackenzie.net`, but in a bit of a brute force fashion. When I publish a new blog post, I would likely only need to update a few pages (the home page, maybe [some pages that list posts on a specific topic](/tags/cdn)), but I've set it up so that every update just purges all of the content at the CDN. Not a huge issue for me, since I only have a few thousand pages, but it **would** be an issue for Docs that has many millions of pages. This technique gets you the most offload of traffic from your server (good for load and cost), but has the downside of being more complex and needing an automatic way to purge the appropriate pieces of content from the CDN when they are updated.
+ The second option takes advantage of the fact that we can [configure the CDN with a different cache time](https://www.keycdn.com/blog/http-cache-headers#s-maxage) than [what is sent to the user's browser](https://www.keycdn.com/blog/http-cache-headers#max-age). We tell the CDN to cache everything for a long time (a year or more), but for content that *might* change (HTML pages in the docs example) we send a much smaller cache time to the user's browser (10 minutes). This means that browsers will still need to request updated content after ten minutes, but the CDN will hang onto that content for a long time, so our server still won't get the request. When the content **does** change, we purge that specific piece of content from the CDN, and within ten minutes users will be seeing the update. This is how caching is handled here on `duncanmackenzie.net`, but in a bit of a brute force fashion. When I publish a new blog post, I would likely only need to update a few pages (the home page, maybe [some pages that list posts on a specific topic](/tags/cdn/)), but I've set it up so that every update just purges all of the content at the CDN. Not a huge issue for me, since I only have a few thousand pages, but it **would** be an issue for Docs that has many millions of pages. This technique gets you the most offload of traffic from your server (good for load and cost), but has the downside of being more complex and needing an automatic way to purge the appropriate pieces of content from the CDN when they are updated.
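The split-TTL scheme above boils down to a single `Cache-Control` header: `max-age` governs the browser, while `s-maxage` overrides it for shared caches such as the CDN. A small sketch, using the ten-minute/one-year split from the example (the helper function itself is illustrative, not from the post):

```python
def cache_control(browser_ttl, cdn_ttl):
    # max-age is honored by the user's browser; s-maxage applies only
    # to shared caches (the CDN) and takes precedence there, letting
    # the edge hold content far longer than any individual client.
    return f"public, max-age={browser_ttl}, s-maxage={cdn_ttl}"

header = cache_control(600, 31536000)  # 10 minutes for browsers, ~1 year at the CDN
```

The origin sends this header once; from then on the CDN serves the cached copy for up to a year (or until purged), while each browser re-checks after ten minutes and gets the edge copy, not the origin's.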
2 changes: 1 addition & 1 deletion content/Blog/cdn-separate-domain.md
@@ -11,7 +11,7 @@ techfeatured: false
---
The short answer is "No, you shouldn't". You will see this quite often on sites, where the site itself is served at www.mydomain.com, and then static resources (CSS, JS, images, etc.) are served from a secondary domain like cdn.mydomain.com or something similar. The second domain is routed through a CDN, the main domain is not.

- > This post is one of [a series about CDNs](/tags/cdn), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
+ > This post is one of [a series about CDNs](/tags/cdn/), starting with [an overview of what they are and how they work]({{< relref "overview-of-cdn.md" >}}).
The argument for doing this generally follows one or both of these reasons:

2 changes: 1 addition & 1 deletion content/Blog/evolution-of-microsoft-documentation-sites.md
@@ -20,7 +20,7 @@ maybe I should post it up online for posterity's sake.
> understand what should live at that address. For anything else though,
> paths below that root, or subdomains, you are essentially creating a
> categorization of content, and that will evolve over time.
- > Also, [a related discussion of domain names and why you should always **use your domain** instead of making random new ones](/blog/domain-names).
+ > Also, [a related discussion of domain names and why you should always **use your domain** instead of making random new ones](/blog/domain-names/).
## The MSDN and TechNet days

4 changes: 2 additions & 2 deletions content/Blog/order-fulfillment.md
@@ -13,7 +13,7 @@ images:
description: As the final step in my series on adding photo galleries to my site, I explain how I use Azure Functions to process incoming orders.
---

- In the previous articles in this series, I covered how I [added photo galleries to my site](/blog/adding-photo-galleries), and then [how I enabled a feature to buy the original digital photo](/blog/adding-e-commerce-to-my-galleries). The last piece (for now at least) is how I automate delivery of those high-resolution images, so that everything *should* just happen without any manual steps.
+ In the previous articles in this series, I covered how I [added photo galleries to my site](/blog/adding-photo-galleries/), and then [how I enabled a feature to buy the original digital photo](/blog/adding-e-commerce-to-my-galleries/). The last piece (for now at least) is how I automate delivery of those high-resolution images, so that everything *should* just happen without any manual steps.
There might be a better way to handle this, but after reading even more pages on [Stripe Docs](https://docs.stripe.com), I decided to create a system using three [Azure Functions](https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview?pivots=programming-language-csharp).

![Diagram of the steps in my 3 Azure Functions, also described in the following paragraph](/images/photo-gallery/azure-function-diagram.png)
@@ -279,4 +279,4 @@ That's it. It ignores a few possible issues, what if the customer's email is inc

In creating this system, and then again in writing this article, I can see many ways to improve or add to it. I find this to be normal with any project, but it is important not to let it stop you from shipping. This code, the v1 as it were, works in my limited testing. If I end up getting tons of orders and that reveals some issues, well, that's a nice problem to have, and I'll evolve this code and process as needed.

- Feel free to check out [my photo albums](/albums), and since you made it this far, I've created a promo code, `THANKSFORREADING`, that will give you 50% off any image you want to buy.
+ Feel free to check out [my photo albums](/albums/), and since you made it this far, I've created a promo code, `THANKSFORREADING`, that will give you 50% off any image you want to buy.
2 changes: 1 addition & 1 deletion content/albums/_index.md
@@ -2,4 +2,4 @@
title: Albums
description: Some of my favorite photographs
---
- Details on [how I built these pages onto my Hugo site](/blog/adding-photo-galleries)
+ Details on [how I built these pages onto my Hugo site](/blog/adding-photo-galleries/)
