
{{>toc}}

Admittance

Terminology

The terms “admittance” and “conductance” are loosely interchangeable in GUNNS. When we talk about the main system of equations that GUNNS solves, we call the [A] matrix in the [A] {p} = {w} system the “admittance matrix”. In Nodal Analysis, this same matrix is sometimes called the “conductance matrix”, but the two terms mean the same thing in this context.
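For concreteness, written out for a two-node network the system is:

$$
\begin{bmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{bmatrix}
\begin{Bmatrix} p_0 \\ p_1 \end{Bmatrix}
=
\begin{Bmatrix} w_0 \\ w_1 \end{Bmatrix}
$$

where {p} is the vector of node potentials and {w} is the source vector.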

We choose to call [A] the admittance matrix because it contains more than just the conductance effect. It also contains part of the Capacitance effect, and it would contain part of the Inductance effect (which we’ve not yet used in GUNNS). So, not only does [A] describe the ability of nodes to conduct between each other, but it also describes their ability to admit flow into and out of themselves through other means. Thus we think “admittance” is a better descriptor for it¹.

In conductance links and in the conductance effect in general, we sometimes refer to variations or linearizations of the link’s conductance as its “admittance”, just as a way to delineate the terms. Unfortunately, we weren’t always consistent about when to use which name. For instance, when fluid conductors linearize the standard GUNNS fluid flow equation, they call the method GunnsFluidUtils::computeAdmittance, passing in the link’s effective conductance, but they store the linearized result in a term called “system conductance” for eventual placement in the link’s admittance matrix. We realize this inconsistent naming may seem confusing, and maybe someday we’ll be able to refactor the code to clean this up!

Admittance Matrix

In the basic Nodal Analysis that GUNNS uses, the admittance matrix is always a square, symmetric, positive-definite matrix. This applies to both the network’s admittance matrix (encompassing the entire network) and every link’s admittance matrix (just its subset of the network). An easy way to remember what this matrix is supposed to look like is this:

  • Diagonals must always be >= 0
  • Off-diagonals must always be <= 0
  • Off-diagonals must always be symmetric, i.e. [1,2] = [2,1], etc.

Here are some examples of valid 2×2 admittance matrices:
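For illustration (the specific values below are arbitrary), all of these satisfy the rules above:

$$
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
\begin{bmatrix} 0.5 & -0.5 \\ -0.5 & 0.5 \end{bmatrix}, \qquad
\begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix}
$$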

And here are some bad admittance matrices, highlighting the values that violate the above rules:
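For example (again with arbitrary values):

$$
\begin{bmatrix} \mathbf{-1} & 0 \\ 0 & 1 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & \mathbf{0.5} \\ \mathbf{0.5} & 1 \end{bmatrix}, \qquad
\begin{bmatrix} 1 & \mathbf{-0.5} \\ \mathbf{-0.25} & 1 \end{bmatrix}
$$

The first has a negative diagonal, the second has positive off-diagonals, and the third has off-diagonals that are not symmetric.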

Note that the above is not a strictly correct definition of positive-definite matrices, but it’s close enough for our purposes.
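If you ever want to sanity-check a matrix while debugging, a hypothetical helper like this (not part of GUNNS) could test a flat, row-major matrix against those three rules:

```cpp
#include <cstddef>

// Hypothetical debugging helper (not a GUNNS function): checks a flat,
// row-major n x n admittance matrix against the three rules above.
// Returns true if all diagonals are >= 0, all off-diagonals are <= 0,
// and the matrix is symmetric.
bool checkAdmittanceRules(const double* a, const std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) {
        if (a[i*n + i] < 0.0) {
            return false;                       // rule 1: diagonals >= 0
        }
        for (std::size_t j = i + 1; j < n; ++j) {
            if (a[i*n + j] > 0.0 || a[j*n + i] > 0.0) {
                return false;                   // rule 2: off-diagonals <= 0
            }
            if (a[i*n + j] != a[j*n + i]) {
                return false;                   // rule 3: symmetry
            }
        }
    }
    return true;
}
```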

In the code, the matrix is stored as a single-dimension array, not in 2 dimensions. So a 2-port link’s admittance matrix is an array of size 4, and a 16-node network’s matrix (not counting the Ground node) is an array of size 16², i.e. 256. The matrix is stored in the array in row-major order, so element (row i, column j) of the n x n matrix is at array index i*n + j.

The diagonals (i, i) = { (0,0), (1,1), (2,2), … } of the n x n matrix are always at location i*n + i in the array.
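For example, a trivial sketch (not GUNNS code) of reading an element and a diagonal of such a flat array:

```cpp
// Illustrative only: element (row i, column j) of an n x n matrix stored
// row-major in a flat array of size n*n is at index i*n + j, and the
// diagonal (i, i) is at index i*n + i.
double element(const double* a, const int n, const int i, const int j)
{
    return a[i*n + j];
}

double diagonal(const double* a, const int n, const int i)
{
    return a[i*n + i];
}
```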

The Link’s Admittance Matrix

Every GUNNS link has its own admittance matrix, the class attribute mAdmittanceMatrix. This is a subset of the network’s admittance matrix and only consists of the link’s contributions to the rows and columns of the network matrix for the nodes that the link connects to. By default, the link’s admittance matrix is n x n in size, where n is the link’s number of ports, so a 2-port link has a 2 × 2 admittance matrix. However, links can customize the size and arrangement of their admittance matrix, to do things like use a compressed matrix format to conserve memory, etc. All links contain a GunnsBasicLinkAdmittanceMap object named mAdmittanceMap, which tells the solver the size of its admittance array and where each term of its array maps into the network’s admittance matrix. To use a custom admittance matrix & map, the link class should override the three virtual methods listed below (a minimal sketch of such an override appears after the list). You can refer to the default behavior of these in the base implementation in GunnsBasicLink:

  • virtual void createAdmittanceMap(). This calls the allocateMap function within the admittance map object, to allocate the admittance map array memory with a given array size and allocation name for the memory manager. The default implementation sizes the array to n x n. The map size should match the admittance matrix size (below), as it gives the 1:1 mapping of each index of the link’s admittance array to the index into the network’s admittance array.
  • virtual void allocateAdmittanceMatrix(). This allocates the link’s admittance matrix. The default implementation sizes the array to n x n.
  • virtual void updateAdmittanceMap(). This function is called by the link any time a port map is changed, such as during the initial mapping during initialization, and for any dynamic map changes during run. This function updates the link’s map to reflect changes in how the link’s admittance array maps into the network’s. The link can arrange its own admittance matrix in any form it wants, as long as the map is updated to tell the solver how it maps into the network’s matrix. The default implementation maps the default link n x n matrix to the network based on the link’s node mapping.
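To make the pattern concrete, here is a minimal, hypothetical sketch of a link declaring these overrides; the class name, access level, and include path are assumptions, and the bodies are omitted since they should follow the base implementations in GunnsBasicLink:

```cpp
#include "core/GunnsBasicLink.hh"   // GUNNS base link class (include path assumed)

// Hypothetical link with a custom (compressed) admittance matrix & map.  Only
// the structure is sketched here; the method bodies should follow the default
// implementations in GunnsBasicLink, adjusted for the custom matrix layout.
class MyCompressedLink : public GunnsBasicLink
{
    protected:
        /// Allocates the admittance map array (via mAdmittanceMap) to the
        /// custom, compressed size instead of the default n x n size.
        virtual void createAdmittanceMap();

        /// Allocates mAdmittanceMatrix to the same custom, compressed size.
        virtual void allocateAdmittanceMatrix();

        /// Recomputes how each term of the compressed matrix maps into the
        /// network's matrix whenever a port's node assignment changes, using
        /// -1 for any term that should be ignored (e.g. ports on Ground).
        virtual void updateAdmittanceMap();
};
```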

See the Compressed Link Admittance, Visualized section below for an example.

A few important notes about how to use the custom admittance matrix & map:

  • Links must produce a symmetrical contribution to the network’s matrix, i.e. both the upper and lower triangles are populated with symmetrical values. Although it would be more efficient for links to populate only the lower or upper triangle, since the matrix is symmetrical, the GUNNS solver requires both triangles to be populated. The solver uses the upper triangle to check for matrix conditioning, while the lower triangle is used for the Cholesky LDU decomposition, which is the default and most commonly used solution option.
  • When a link port is connected to the Ground node, or for whatever other reason a position in the link’s admittance matrix shouldn’t map to any location in the network’s matrix, the link’s map value for that position should be -1. This value tells the solver to ignore that location in the link’s matrix (such as for ports on the Ground node).
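As an illustration of that -1 convention (the node number, network size, and the flat-index encoding of the map are assumptions for this example), consider a 2-port link with port 0 on network node 3 and port 1 on Ground:

```cpp
// Illustrative only: the default 2 x 2 admittance map of a link whose port 0
// is on network node 3 and whose port 1 is on the Ground node, in a network
// whose [A] is N x N.  Every position involving the Ground port is -1, which
// tells the solver to skip it.
const int N = 16;               // assumed network matrix dimension
int admittanceMap[4] = {
    3*N + 3,                    // link (0,0) -> network (3,3)
    -1,                         // link (0,1) -> ignored (port 1 is on Ground)
    -1,                         // link (1,0) -> ignored
    -1                          // link (1,1) -> ignored
};
```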

The Network’s Admittance Matrix

The Gunns solver class builds the network’s [A] matrix as the sum of all of the links’ individual [A] matrices. Each link [A] element is added into the network’s [A] based on the link’s admittance map, described above. A link whose port 0 is mapped to network node 1 will have its row/column 0 terms added to the row/column 1 terms of the network’s matrix. The links own their node mapping; this allows them to change their mapping on the fly, i.e. change which nodes they connect to, and the network solution follows the change. The following picture illustrates how multiple links’ individual contributions are combined into the total network’s [A] matrix:
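A rough sketch of this assembly step, with made-up argument names and not the actual solver code:

```cpp
// Rough sketch (not the actual GUNNS solver code) of summing each link's
// admittance contribution into the network's flat [A] array using the link's
// admittance map.  Map values of -1 (e.g. for ports on Ground) are skipped.
void buildNetworkAdmittance(double*             networkA,      // network [A], size networkSize
                            const int           networkSize,   // N*N for an N-node network
                            const double* const linkA[],       // each link's admittance array
                            const int* const    linkMap[],     // each link's admittance map
                            const int           linkMapSize[], // size of each link's array & map
                            const int           numLinks)
{
    for (int i = 0; i < networkSize; ++i) {
        networkA[i] = 0.0;                             // clear the network matrix
    }
    for (int link = 0; link < numLinks; ++link) {
        for (int i = 0; i < linkMapSize[link]; ++i) {
            const int target = linkMap[link][i];       // flat index into network [A]
            if (target > -1) {
                networkA[target] += linkA[link][i];    // add the link's contribution
            }
        }
    }
}
```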

Compressed Link Admittance, Visualized

Given the descriptions of how the network and link admittances are related above, we can show an example of how a compressed link admittance matrix maps to the network’s matrix. In this example illustrated below, we have a network with a single 3-way valve in it. A 3-way valve is really two 2-way valves, making a V shape between 3 network nodes:

The 3-way valve link never has a conductance between its ports 0 and 1, so the [0,1] and [1,0] off-diagonals in its admittance matrix are always zero. So, we could compress the link’s admittance matrix to be size 7 instead of 9, saving those 2 words of memory and the cost of the solver adding those two zeroes into the network’s matrix. In the picture below, all 3 systems are equivalent: the network’s matrix is on the left, the uncompressed valve link’s matrix in the middle, and the compressed version on the right:
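To make the idea concrete in text form, here is one possible compressed layout and its map for a 3-way valve whose ports 0, 1 and 2 sit on network nodes 0, 1 and 2; the ordering of the compressed terms, the node numbering, and the conductance values Ga and Gb are illustrative assumptions:

```cpp
// Illustrative only: a 3-way valve link between network nodes 0, 1 and 2, with
// conductance Ga between ports 0 & 2 and Gb between ports 1 & 2.  The
// uncompressed 3x3 link matrix (flat, row-major, size 9) would be:
//   {  Ga,  0.0, -Ga,
//     0.0,   Gb, -Gb,
//     -Ga,  -Gb,  Ga + Gb }
const double Ga = 0.001;   // assumed conductance between ports 0 and 2
const double Gb = 0.002;   // assumed conductance between ports 1 and 2

// A possible compressed admittance array (7 terms instead of 9), dropping the
// always-zero (0,1) and (1,0) terms and keeping the rest in row order:
double linkAdmittance[7] = { Ga, -Ga, Gb, -Gb, -Ga, -Gb, Ga + Gb };

// Map of each compressed term into the 3x3 network [A] (flat index = row*3 + column):
int linkAdmittanceMap[7] = { 0,    // link (0,0) -> network (0,0)
                             2,    // link (0,2) -> network (0,2)
                             4,    // link (1,1) -> network (1,1)
                             5,    // link (1,2) -> network (1,2)
                             6,    // link (2,0) -> network (2,0)
                             7,    // link (2,1) -> network (2,1)
                             8 };  // link (2,2) -> network (2,2)
```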

This example doesn’t represent much of a savings, and in fact at the time of writing the GUNNS 3-way valve link doesn’t actually compress its admittance matrix. However, when this concept is scaled up to links with tens or hundreds of ports, the memory and time savings from a compressed matrix can be significant.

Decomposition & Debugging

The Gunns solver stores the network’s [A] matrix in its mAdmittanceMatrix array. As part of the network solution, this array is decomposed into an alternate form (see Cholesky_Method) and stored back into the same mAdmittanceMatrix term. This is mainly done to save computer memory, as the matrix gets very large for typical networks. GUNNS networks are typically very sparse, but we don’t compress the matrix because the network topology can change every pass (due to link node re-mapping), and re-ordering or optimizing the compression every pass would incur more CPU operations than it’s worth.
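As a generic, textbook-style illustration of the in-place idea (this is not the GUNNS Cholesky implementation), an LDL^T decomposition can overwrite the lower triangle and diagonal of the same flat array:

```cpp
// Textbook-style sketch of an in-place LDL^T ("Cholesky LDU") decomposition of
// a symmetric positive-definite n x n matrix stored row-major in a flat array.
// After the call, the strictly-lower triangle holds L (with an implied unit
// diagonal) and the diagonal holds D, overwriting the original matrix.
void decomposeInPlace(double* a, const int n)
{
    for (int j = 0; j < n; ++j) {
        double d = a[j*n + j];
        for (int k = 0; k < j; ++k) {
            d -= a[j*n + k] * a[j*n + k] * a[k*n + k];   // D_j = A_jj - sum(L_jk^2 * D_k)
        }
        a[j*n + j] = d;              // a zero d here means the matrix is singular
        for (int i = j + 1; i < n; ++i) {
            double s = a[i*n + j];
            for (int k = 0; k < j; ++k) {
                s -= a[i*n + k] * a[j*n + k] * a[k*n + k];
            }
            a[i*n + j] = s / d;      // L_ij
        }
    }
}
```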

Problems with the network solution’s linear algebra solver (Cholesky or SOR_Method, etc.) are usually caused by a badly-formed admittance matrix prior to the solution. GUNNS does not check the matrix against the validity rules described above because it would take too long; it lets a bad matrix die in the linear algebra solver to save time, but this usually results in a corrupted post-solution mAdmittanceMatrix. To find such problems you need visibility into the pre-solution mAdmittanceMatrix, and fortunately the Gunns class has debug features to let you see it. See solver#debugging.

Conditioning the Admittance Matrix

A significant issue in computational solutions of [A] {p} = {w} arises when the admittance matrix is ill-conditioned. In GUNNS, this happens for non-capacitive nodes, or nodes with a very small capacitance relative to the conductances incident on them. The problems that arise are:

1) Singular matrices are impossible to invert or decompose numerically; they are basically the linear algebra equivalent of a divide-by-zero, and the solution is undefined. An isolated group (island) of one or more non-capacitive nodes creates a singular matrix.

2) Ill-conditioned matrices can cause significant rounding errors in their computational solution, in some cases so bad that the solution has unacceptable noise or “walks-off”, i.e. slowly drifts up or down when the correct solution would be constant. This tends to occur when the node’s capacitance is smaller than the sum of the incident conductances, and worsens as the capacitance gets smaller relative to the conductance. This is a limitation of the nodal analysis method in general, and we don’t have a perfect solution for this in all cases. TODO elaborate workarounds

Since the singular matrix problem 1) arises from the use of non-capacitive nodes, we could have designed GUNNS to require that all nodes be capacitive. However, this was undesirable for the electrical aspect in particular: the vehicle power distribution systems we’ve tended to model in GUNNS so far have so little real capacitance that the modellers prefer to ignore it completely, and we’ve built very large electrical networks with no capacitance in them at all! Also, because of the mass overflow limitation in the fluid aspect, it is sometimes better to use non-capacitive nodes in certain places in fluid networks. Allowing the option of non-capacitive nodes in GUNNS gives our users greater flexibility.

To avoid the singular matrix problem 1), the solver cheats a little by massaging or “conditioning” the system of equations when necessary. This takes the form of a very small conductance to Ground, applied on the diagonal of [A] for non-capacitive nodes. This ties all non-capacitive nodes to Ground so that when they’re isolated, they solve to zero instead of going undefined. The value of this “leak” conductance to Ground is extremely small: just large enough for the Cholesky method to be able to decompose the matrix, and as small as possible to limit the leak flow and any significant drop in nearby potentials when the conditioned node is not isolated (see 4) below).

First, for each row in [A], we check whether the row needs conditioning. The row is well-conditioned if the diagonal A_ii is greater than the magnitude of the sum of the off-diagonals A_ij, j ≠ i; since the off-diagonals are always negative, we flip the sign of the off-diagonal sum to compare it to the diagonal. If the off-diagonal sum equals the diagonal, then the row is ill-conditioned and we proceed with conditioning it. The conditioning value is calculated as max(DBL_EPSILON, A_ii) * 1.0E-15. This is then added to A_ii, creating the conditioning conductance effect to Ground.
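A sketch of that check and adjustment over a flat, row-major [A] (this mirrors the description above but is not the actual solver code, and it uses <= rather than strict equality as a defensive choice):

```cpp
#include <algorithm>   // std::max
#include <cfloat>      // DBL_EPSILON

// Sketch of the conditioning step described above (not the actual solver
// code), applied to a flat, row-major n x n admittance matrix.  A row whose
// diagonal does not exceed the magnitude of its off-diagonal sum gets a tiny
// "leak" conductance to Ground added to its diagonal.
void conditionAdmittance(double* a, const int n)
{
    for (int i = 0; i < n; ++i) {
        double offDiagonalSum = 0.0;
        for (int j = 0; j < n; ++j) {
            if (j != i) {
                offDiagonalSum -= a[i*n + j];   // off-diagonals are <= 0, so flip the sign
            }
        }
        if (a[i*n + i] <= offDiagonalSum) {     // ill-conditioned (the equality case above)
            a[i*n + i] += std::max(DBL_EPSILON, a[i*n + i]) * 1.0E-15;
        }
    }
}
```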

In avoiding the singular matrix problem 1) above, there are some undesirable side-effects of this conditioning:

3) Non-capacitive nodes that are isolated from any other capacitance, potential or flow source effects go to zero potential. This includes islands of non-capacitive nodes that are connected to each other via some conductances but as a group are isolated from any of the above source effects. Technically this is consistent with physics: since the non-capacitive area represents zero “volume” there is zero quantity to hold potential, so potential is really undefined. Another way to look at it is in the electrical aspect: a location that can store zero charge would instantly discharge any voltage to its surroundings, since nothing really has infinite electrical resistance. This is why we recommend giving any node whose potential is sensed by the modeled system some capacitance — any real potential (voltage, temperature, pressure) sensor has to measure the average potential in some finite non-zero area that holds quantity.

4) We apply conditioning whether the node is really isolated from the source effects or not, because checking for their isolation requires the use of an islands-finding mode, and we don’t want to require the use of islands modes just for conditioning [A]. Therefore, we often apply the conditioning to nodes that don’t really need it. This creates a small leak drain on those areas of the system. TODO describe this impact further…

Note: between Releases 14.2 and 17.1, for bug fix #298, we attempted to get rid of the leak drain problem described in 4) above by conditioning island sub-matrices differently in the SOLVE island mode (mIslandMode = SOLVE). However, this caused the matrix decomposition for some islands to fail, and we had to abandon this change in the fix for #63. In release 17.2 and later, problem 4) can still occur, and this is a limitation of the solver.

In the past, we have tried to condition the matrix as a small capacitance or potential source instead of a leak conductance, with the goal being for isolated nodes to remain at a constant potential instead of going to zero. However this hasn’t worked very well; either the conditioning amount is so small that it causes large roundoff errors and very noisy potentials between connected non-capacitive nodes, or it’s so large that it causes an unacceptable capacitance or potential source effect in what should be an otherwise non-capacitive system. We’d rather have zero potential than a noisy potential or other effects that we can’t account for.

¹ We also inherited this convention from PFN.
