
Inconsistency in normal force magnitudes when varying 1) timestep size or 2) material properties #23

Closed
chungmin99 opened this issue Aug 24, 2021 · 8 comments


@chungmin99

First, I wanted to say that this is an amazing project, and I really appreciate the work you have put into it. I'm currently trying to see if there is a way to calculate the total force acting on a mesh (i.e., for each mesh passed in via the shapes input configuration), and I was hoping that the friction information export added in 24ab245 would be sufficient for this task.

For some initial testing, I placed a cube on top of a 20x20 pad, as shown below. Here, I expected that the normal forces should be approximately aligned with the y-axis, so the sum of the normal force magnitudes (from the lambda terms saved by the last call of save_friction_data within the simulation) should equal the force the cube applies to the pad through its weight.

[screenshot: cube resting on the 20x20 pad]

shapes input 2
input/tetMeshes/cube.msh -0.5 0.2 -0.5  0 0 0  1 1 1 material 1000 1e10 0.4
input/tetMeshes/mat20x20.msh 0 0 0  0 0 0  2 10 2 material 1000 1e10 0.4 DBC -5 0 -5 5 0 5 0 0 0 0 0 0

selfFric 0.1
time 5 1

fricIterAmt 4

When I run the script above, the normal force applied by the cube is calculated to be 9948.064, which is similar to the expected value of 9810 (the 1000 kg cube presses into the pad with a force of 1000 * 9.81 newtons).

For a more extensive study, I tried repeating the above test, while:

  • Changing timestep size: If I make the timestep smaller, from 1 to 0.1, the sum falls to 100.856, roughly two orders of magnitude below the expected value. I would have expected this sum to stay consistent as I vary the timestep size.
  • Changing material properties: If I change the density of the pad (1e10 -> 1e5), the total normal force falls to 8347, which is noticeably smaller than the original value of 9948.
  • Changing mesh resolution: If I vary the mesh resolution (going from mat20x20.msh to mat40x40.msh), I observe a similar trend: the total normal force is 9852 with timestep=1 and 99.37 with timestep=0.1. Mesh resolution does not seem to add inconsistency.

Is there something I’m missing? I would have expected all of these tests to return the same normal force values, but this does not appear to be the case.

@liminchen
Collaborator

Hi Chung Min,

Thanks a lot for your interest in our project! Your tests are very interesting, and there are certainly some bugs in the normal force calculation. We will look into it.
Note that across resolutions, higher resolution should give more accurate numbers. In fact, if you only count the vertical component, it should be identical to the weight; but if you sum the magnitudes of the total forces, the summed barrier approximation can introduce tangential errors, which vanish as the resolution goes higher. In addition, using an analytical plane won't have tangential errors.

@zfergus
Member

zfergus commented Aug 24, 2021

The save_friction_data function saves the normal force magnitudes that are used internally for computing the friction potential. This means a 1/h^2 factor has been cancelled out. If you want to compute the actual normal force value, you should divide by h^2, where h is the timestep size. This gets you pretty close (e.g., 100.856 / 0.1^2 = 10,085.6).
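For example, a minimal sketch of the conversion (using the numbers from this thread; in practice the magnitudes come from the saved fric_*_lambda.dmat file):

h = 0.1  # timestep size, from the time line of the config
lambda_sum = 100.856  # sum of the saved normal force magnitudes
force_sum = lambda_sum / h**2  # undo the h^2 scaling used for the friction potential
print(force_sum)  # 10085.6, close to the expected ~9810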

I will have to think more about why the density of the pad affects the normal force, but I think Minchen is right to try to refine the mesh and see if this improves the accuracy when the density is smaller.

@chungmin99
Author

chungmin99 commented Aug 24, 2021

Thank you so much for the quick response - I just realized that in my post, I made a typo; I intended to say Young's modulus (1e10), not density. Sorry about the miscommunication.


When the mesh is very refined (using mat225x225t40.msh), I find that the normal force is calculated to be about 9900 for a variety of E values (tested 1e5, 1e6, 1e10), which is now approximately the expected value. In fact, even just using the 40x40 mesh gives pretty good results with the cube!

However, if I swap the cube out for a sphere (sphere1K):
[screenshot: sphere resting on the pad]
All properties and scaling are kept the same as in the first post, except for the pad mesh resolution and Young's modulus, as stated below.

Using mat40x40 as the "pad" mesh resolution,
with E=1e6, the sum of all normal force magnitudes is calculated to be 317;
with E=1e10, the normal force is calculated to be 5441.
(expected upwards force is around 5136).

I do realize that I should account for only the upwards components to retrieve the actual normal force applied by the sphere, but the sum should overestimate the normal force (since it also adds up the forces along the x- and z-directions). Therefore, the force-sum with E=1e10 makes sense (an overestimate), but the sum with E=1e6 doesn't (a severe underestimate).

Do you possibly have any insight as to why this might be happening?

Edit: the contact area decreased compared to the cube, so this may simply be an indication that a higher mesh resolution is required; however, I expect that the softer pad (smaller E) should have a larger contact area than the stiffer pad (larger E), or at least a similar number of "localized forces" should be present. Yet, there is a large discrepancy between the two normal forces.

Edit2: Using mat225x225t40, E=1e6 for the pad settings, the sphere-contact returns a force-sum of ~292. Based on this result, it seems that the higher mesh resolution did not help the contact force estimation.

@liminchen
Collaborator

liminchen commented Aug 28, 2021

At static state, we have the following system of equations:

\nabla \Psi(x) + \kappa \nabla B(x) - f_g = 0,

where x is the stacked nodal positions, \Psi is the elasticity energy, B is the barrier energy, and f_g is the stacked nodal gravity force vector; the above equation has 3n rows, where n is the number of nodes.
For each contact pair, we have

\lambda_k = -\kappa \Delta t^2 \, b'(d_k),   (1)

where b is the barrier function and d_k is the distance of contact pair k.
So I'm not sure whether the sum of all \lambda_k/(\Delta t^2) should equal the sum of the gravity forces on all the nodes. In other words, we may want to calculate the net contact force for an object more carefully: for example, by summing up the contact force vectors on all nodes and then calculating the norm of this sum. Maybe this equals the weight of the object, which is also the norm of the sum of the gravity vectors at all the nodes.
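In symbols (a sketch of the distinction, with f_i denoting the contact force vector acting on node i): the first quantity is a sum of magnitudes, \sum_k \lambda_k / \Delta t^2, while the suggestion is the magnitude of a sum, \| \sum_i f_i \|. By the triangle inequality, \| \sum_i f_i \| <= \sum_i \| f_i \|, so summing magnitudes can only overestimate the net contact force.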

@chungmin99
Author

chungmin99 commented Aug 30, 2021

Based on your recommendation ("summing up the contact force vectors on all nodes and then calculating the norm of this sum"), I wrote up a simple Python-based contact-force calculator, shown below:

import os
import sys

import numpy as np

# pymesh.load_matrix reads dmat files; for details see
# https://pymesh.readthedocs.io/en/latest/api_misc.html
from pymesh import load_matrix

basedir = sys.argv[1]  # e.g., 'sphere1K-mat40x40_null_NH_BE_interiorPoint_20210824145722t8/'

# fric_4_7_* files correspond to the last saved friction-data files for the simulation saved in basedir
lambdas = load_matrix(os.path.join(basedir, 'fric_4_7_lambda.dmat'))
bases = load_matrix(os.path.join(basedir, 'fric_4_7_bases.dmat'))

# bases describe the tangent plane, use cross product to retrieve normal to plane
contact_normals = np.zeros((lambdas.shape[0], 3))
for i in range(lambdas.shape[0]):
    contact_normals[i] = np.cross(
        bases[3*i:3*(i+1), 0],
        bases[3*i:3*(i+1), 1],
    )
    # adjust contact normal to point upwards
    if contact_normals[i][1] < 0:
        contact_normals[i] *= -1

contact_forces = lambdas.reshape(-1, 1) * contact_normals  # scale each contact normal by its force magnitude
norm_of_sum = np.linalg.norm(np.sum(contact_forces, axis=0))
upwards_force = np.sum(contact_forces, axis=0)[1]
print(norm_of_sum, upwards_force)

However, I am still getting numbers similar to my previous reply (the config files are kept the same, with timestep=1, so lambda/h^2 = lambda).
The norm_of_sum and upwards_force values end up quite similar, so I'm reporting norm_of_sum here:

  • mat40x40, E=1e10Pa: 5441
  • mat40x40, E=1e6Pa: 317
  • mat225x225, E=1e6Pa: 287

I also re-ran the calculator without the sign-flipping lines (if contact_normals[i][1] < 0: ...); then the values are:

  • mat40x40, E=1e10Pa: 5431
  • mat40x40, E=1e6Pa: 128.7 (upwards_force is -128.47)
  • mat225x225, E=1e6Pa: 76

Also, a side question, but is IPC deterministic? So far, when I re-run scenes, the contact forces appear to stay the same.

@liminchen
Collaborator

So I see this is

F = \sum_k \lambda_k n_k,

where n_k is the contact normal of stencil k.
However, this is not what I suggested: within each stencil, the contact force acting on each node can have a different direction, not necessarily n_k.
For what I suggested, I think you can take the barrier energy gradient (which, on each DOF, is the negated contact force times dt^2), sum up each DOF's 3D vector, and take the negation to obtain the contact force. Remember to take the sum only over the DOF of a single object.
The barrier energy gradient vector (of length 3n, where n is the total number of nodes) is computed in lines 3456-3513 of Optimizer.cpp. The DOF of objects you added first in config.txt appear at the front of the 3n vector.
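A minimal sketch of that bookkeeping in Python (assuming, hypothetically, that the 3n gradient vector has been exported to a barrier_gradient.dmat file, and that the sphere is the first object in the config so its 1760 nodes come first):

import numpy as np
from pymesh import load_matrix  # reads .dmat files

# hypothetical export of the 3n barrier energy gradient (from Optimizer.cpp)
grad = np.asarray(load_matrix('barrier_gradient.dmat')).ravel()

dt = 1.0            # timestep size from the config
n_obj_nodes = 1760  # nodes of the first object in the config (the sphere)

# each row is one node's 3D gradient entry; negate and undo the dt^2 scaling
forces = -grad.reshape(-1, 3) / dt**2
net_force = forces[:n_obj_nodes].sum(axis=0)  # sum only over the first object's DOF
print(np.linalg.norm(net_force), net_force[1])  # net magnitude and upward component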

@chungmin99
Author

Based on your feedback from last time, I believe that you were referring to the gradient variable, which includes the deformation energy gradient term (shown below)

switch (animConfig.timeIntegrationType) {
case TIT_BE: {
    energyTerms[0]->computeGradient(data, redoSVD, svd, F,
        dtSq * energyParams[0], gradient_ET[0], projectDBC);
    gradient = gradient_ET[0];
    for (int eI = 1; eI < energyTerms.size(); eI++) {
        energyTerms[eI]->computeGradient(data, redoSVD, svd, F,
            dtSq * energyParams[eI], gradient_ET[eI], projectDBC);
        gradient += gradient_ET[eI];
    }
    break;
}
case TIT_NM: {
    energyTerms[0]->computeGradient(data, redoSVD, svd, F,
        dtSq * beta_NM * energyParams[0], gradient_ET[0], projectDBC);
    gradient = gradient_ET[0];
    for (int eI = 1; eI < energyTerms.size(); eI++) {
        energyTerms[eI]->computeGradient(data, redoSVD, svd, F,
            dtSq * beta_NM * energyParams[eI], gradient_ET[eI], projectDBC);
        gradient += gradient_ET[eI];
    }
    break;
}
}
and the effects of considering the constraints (shown below):
Eigen::VectorXd constraintVal;
for (int coI = 0; coI < animConfig.collisionObjects.size(); ++coI) {
    int startCI = constraintVal.size();
    animConfig.collisionObjects[coI]->evaluateConstraints(data, activeSet[coI], constraintVal);
    for (int cI = startCI; cI < constraintVal.size(); ++cI) {
        compute_g_b(constraintVal[cI], dHat, constraintVal[cI]);
    }
    animConfig.collisionObjects[coI]->leftMultiplyConstraintJacobianT(data, activeSet[coI],
        constraintVal.segment(startCI, activeSet[coI].size()), gradient, kappa);
    // friction
    if (activeSet_lastH[coI].size() && fricDHat > 0.0 && animConfig.collisionObjects[coI]->friction > 0.0) {
        animConfig.collisionObjects[coI]->augmentFrictionGradient(data.V, result.V_prev, activeSet_lastH[coI],
            lambda_lastH[coI], gradient, fricDHat, 1.0);
    }
}
for (int coI = 0; coI < animConfig.meshCollisionObjects.size(); ++coI) {
    int startCI = constraintVal.size();
    animConfig.meshCollisionObjects[coI]->evaluateConstraints(data, MMActiveSet[coI], constraintVal);
    for (int cI = startCI; cI < constraintVal.size(); ++cI) {
        compute_g_b(constraintVal[cI], dHat, constraintVal[cI]);
    }
    animConfig.meshCollisionObjects[coI]->leftMultiplyConstraintJacobianT(data, MMActiveSet[coI],
        constraintVal.segment(startCI, MMActiveSet[coI].size()), gradient, kappa);
    animConfig.meshCollisionObjects[coI]->augmentParaEEGradient(data,
        paraEEMMCVIDSet[coI], paraEEeIeJSet[coI], gradient, dHat, kappa);
}
if (animConfig.isSelfCollision) {
    int startCI = constraintVal.size();
    SelfCollisionHandler<dim>::evaluateConstraints(data, MMActiveSet.back(), constraintVal);
    for (int cI = startCI; cI < constraintVal.size(); ++cI) {
        compute_g_b(constraintVal[cI], dHat, constraintVal[cI]);
    }
    SelfCollisionHandler<dim>::leftMultiplyConstraintJacobianT(data, MMActiveSet.back(),
        constraintVal.segment(startCI, MMActiveSet.back().size()), gradient, kappa);
    SelfCollisionHandler<dim>::augmentParaEEGradient(data,
        paraEEMMCVIDSet.back(), paraEEeIeJSet.back(), gradient, dHat, kappa);
    if (MMActiveSet_lastH.back().size() && fricDHat > 0.0 && animConfig.selfFric > 0.0) {
        SelfCollisionHandler<dim>::augmentFrictionGradient(data.V, result.V_prev, MMActiveSet_lastH.back(),
            MMLambda_lastH.back(), MMDistCoord.back(), MMTanBasis.back(), gradient, fricDHat, animConfig.selfFric);
To check the value of gradient, I exported this vector alongside the friction data at the
saveFrictionData(data);
call, as total_gradient in the JSON file ("total" because it incorporates friction and deformation effects, unlike the pre-existing gradient entry, which stores only the friction gradient data).

Reading this, I ran the following lines to retrieve the gradient sum for the first-object DOFs:

>>> import json
>>> import numpy as np
>>> # results is the sphere-mat40x40 interaction with mat40x40's E set to 1e6
>>> results = 'sphere1K-mat40x40_null_NH_BE_interiorPoint_20210905212324t8'
>>> with open(results + '/friction_data_12.json') as f:  # _12 is the last written friction_data file
...     mat_E_1e6 = json.load(f)
...
>>> mat_E_1e6.keys()
dict_keys(['mmcvids', 'dhat_squared', 'barrier_stiffness', 'epsv_times_h_squared', 'mu', 'normal_force_magnitudes', 'closest_point_coordinates', 'tangent_bases', 'energy', 'gradient', 'total_gradient', 'hessian_triplets', 'V_start', 'V_lagged', 'V_end'])
>>> np.array(mat_E_1e6['total_gradient']).shape
(14880,)
>>> np.array(mat_E_1e6['V_start']).shape
(4960, 3)
>>> 4960 * 3
14880

The sphere mesh (consisting of 1760 nodes) was placed before the mat40x40 mesh (3200 nodes), and since the barrier energy gradient vector consisted of all the vertices, I summed up the 3D DOF vectors for the first 1760 vertices as shown below:

>>> # mat_E_1e10 is created similarly to mat_E_1e6, but with E=1e10
>>> np.array(mat_E_1e6['total_gradient']).reshape((-1, 3))[:1760].sum(axis=0)
array([-3.46058609e+00,  4.75051200e+03, -5.11991339e+00])
>>> np.array(mat_E_1e10['total_gradient']).reshape((-1, 3))[:1760].sum(axis=0)
array([-8.04705607e-02, -2.69475376e+02,  1.13115857e-01])

I found that:

  1. The outputs for the two Young's modulus values still do not match, and
  2. Neither matches the expected value (around 5136).

@liminchen
Collaborator

I think you only want the contact part of the gradient, and you should look only at the sphere. The total_gradient should be nearly zero everywhere once the scene becomes static.
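For example, a minimal sketch, assuming a hypothetical contact_gradient entry (holding only the barrier term's gradient) were exported alongside total_gradient in the friction-data JSON:

import json
import numpy as np

with open('friction_data_12.json') as f:
    data = json.load(f)

dt = 1.0  # timestep size from the config
# hypothetical key: the barrier (contact) term's gradient only, of length 3n
contact_grad = np.array(data['contact_gradient']).reshape(-1, 3)
# negate, undo the dt^2 scaling, and sum over the sphere's 1760 nodes only
f_net = -contact_grad[:1760].sum(axis=0) / dt**2
print(np.linalg.norm(f_net), f_net[1])  # compare against the expected ~5136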
