
Releases: naftulikay/titan

aws/2.0.0

18 Oct 23:03
aws/2.0.0
3791c0d

Changes:

  1. #68: Update to Terraform 1.0 syntax and fix the testing harness.
  2. #69: Add a nat_enabled boolean to allow creation of networks without NAT gateways (see the sketch below).
  3. #70: Move module to v2.

This release should be published to HashiCorp's Terraform module registry, now that everything is up to date and working with the latest Terraform.
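A minimal sketch of the new flag from #69, assuming nat_enabled defaults to true and can simply be set to false; the module source path and other inputs here are hypothetical:

module "network" {
    # hypothetical source path; see the repository for the actual module path and version pin
    source      = "github.com/naftulikay/titan//modules/aws/titan_network"
    nat_enabled = false # create the network without NAT gateways (#69)
    # ...
}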

gcp/0.1.0

29 May 03:00
gcp/v0.1.0
6729cd6

Initial release of Titan for Google Cloud 🚀.

TL;DR

Here's what to expect using Titan 0.1.0 on GCP:

  • Five layers: admin, data, dmz, routing, services - just like AWS.
  • Each layer is a group of five /23 subnets - just like AWS, but limited to 3-4 AZs depending on the region.
  • Public routing works as expected: tag your public instances with public to pick up the summary WAN route.
  • Private routing works as expected: tag your private instances with private to pick up three routes (by default) to WAN via three NAT gateway instances spread across availability zones; see the routing sketch after this list.
    • By default, ECMP is used to round-robin your streams to WAN via the NAT gateways.
    • If you need very low latency on WAN access behind NAT, tag your instances with nat-us-east1-a (nat- plus your availability zone) to get the route within the same AZ.
    • High availability of NAT is currently provided by routing packets across all three instances and AZs. In the future, redundant per-AZ HA routes will be added so that each AZ has two NAT gateway instances/routes.
  • The security model is exactly the same as AWS.
  • Each subnet in a layer also has a corresponding range for container networking; also five /23s.
  • For ultra high capacity (see the sharding sketch after this list):
    • Shard your instances across subnets in a layer.
    • For instance groups, use google_compute_region_instance_group_manager, though it can only place instances in a single subnet. Shard if you can.
    • If you have exhausted a subnet with instance groups, simply spin up the next instance group in a different subnet, though it is better to shard if possible.
  • Up to 2,560 logical addresses per layer for VMs (five /23s × 512 addresses) and up to 2,560 per layer for containers.
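Routing sketch: for illustration, tagging an instance into the private routing plane might look like the following. This is a hedged sketch; the instance name, machine type, image, zone, and subnetwork name are hypothetical, and only the network tag values come from the notes above.

resource "google_compute_instance" "example" {
    name         = "example-private"
    machine_type = "n1-standard-1"
    zone         = "us-east1-b"

    # "private" picks up the ECMP NAT routes; use "public" for the summary WAN route,
    # or "nat-us-east1-b" to pin NAT egress to the instance's own AZ
    tags = ["private"]

    boot_disk {
        initialize_params {
            image = "debian-cloud/debian-9"
        }
    }

    network_interface {
        # hypothetical subnet name from the services layer
        subnetwork = "services-1"
    }
}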
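Sharding sketch: similarly, sharding regional instance groups across a layer's subnets might look like this. The group names, templates, sizes, and region are hypothetical, and the top-level instance_template argument reflects the provider syntax of the time (newer providers use a version block); each shard's instance template should pin a different /23 subnet in the layer.

resource "google_compute_region_instance_group_manager" "services_shard_1" {
    name               = "services-shard-1"
    region             = "us-east1"
    base_instance_name = "services-1"
    instance_template  = "${google_compute_instance_template.services_shard_1.self_link}"
    target_size        = 200
}

# a second shard whose template points at another subnet in the same layer
resource "google_compute_region_instance_group_manager" "services_shard_2" {
    name               = "services-shard-2"
    region             = "us-east1"
    base_instance_name = "services-2"
    instance_template  = "${google_compute_instance_template.services_shard_2.self_link}"
    target_size        = 200
}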

Usage

Releases are prefixed so as not to break backward compatibility and support. Following SemVer, minor releases below 1.0 may break backward compatibility. Upgrading from, say, 0.1 to 0.2 will be documented in the release notes. Changes within 0.1 may add new resources and may modify resources, but may not cause destruction of any resources, even if only to recreate them.

module "network" {
    source = "github.com/naftulikay/titan/modules/gcp/v0/1/titan_network"
    # ...
}
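To pin to a specific tagged release instead of the default branch, the source can carry a git ref; the sketch below assumes the gcp/v0.1.0 tag from this release:

module "network" {
    source = "github.com/naftulikay/titan//modules/gcp/v0/1/titan_network?ref=gcp/v0.1.0"
    # ...
}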

Not Yet Supported

NAT as a Service

Google does not have a NAT-as-a-service solution yet. As the platform matures, it is expected that this will be added, much as AWS originally supported only NAT instances and later built a managed NAT service.

Interim solutions:

  • Continue and finish work on naftulikay/natd and use it as a NAT health check and management service.
  • Provide per-AZ highly available NAT gateway instances as an option, disabled by default. This is being discussed for 0.2.

Custom DHCP Options

Google does not currently support configuring cloud routers with custom DHCP options such as a search domain, additional resolvers, etc.

Custom Private Hosted Zones

Google does not currently support custom private hosted zones; we are at the mercy of whatever the auto-generated hosted zone name is.

An idea I had was to use Trust-DNS to write a DNS resolver which proxies and rewrites requests so that private.mycompany.com internally resolves to private.${autogenerated_hosted_zone_name}. There are a lot of challenges in doing this: DNS caching is paramount, and reverse private hosted zones should be supported as well.

The Goal

For a 1.0 release of Titan for GCP, NAT-as-a-service, custom DHCP options, and custom private hosted zones are mandatory. Until then, minor releases can serve to fake and bolt on these services, but the end goal, as on AWS, is to have no hacks and no running instances by default when using Titan.

The express goal of Titan is to provide a tunable, reusable, highly available, scalable, and secure networking framework. Having to run instances for NAT or for DNS is not ideal and will block a major 1.0 release.

1.0.1

22 Jan 00:16
v1.0.1
c16e0b7

Fix NAT Gateway destruction bug: #38.

1.0.0

21 Dec 01:08
v1.0.0
d4dd657

Initial release of Titan! A ton of work went into this release, along with a lot of manual validation, as there isn't exactly a good testing framework for networks. There is a continuous integration setup for validating changes and examples.

Documentation forthcoming.