How to draw a custom Mesh on top of existing earth. #6
Yes, it is possible. There are several ways to do so.

1. Use `Map::fabricateResourceFreeLayerGeodata`, `Map::setResourceFreeLayerGeodata` and `Map::setResourceFreeLayerStyle` to provide the mesh to the vts browser. This approach is the easiest and offers great flexibility in how the mesh is rendered.
2. After calling `Camera::renderUpdate` and before `RenderView::render`, you may modify the draw commands. This approach is much more performant if the meshes are changing or moving, but it requires some more setup and restricts the format of the meshes.
3. The renderer library provides access to the depth buffer it uses for rendering. This allows you to render your meshes in any way you want. This approach is the most flexible and can be made the most performant, but requires even more setup. Alternatively, you may also use vts alongside other rendering engines/libraries if you manage to synchronize the depth and color buffers and other state.

Use the `Map::convert` function for conversions between different srs. Hope this helps.
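In pseudocode, the second approach could look roughly like this. This is only a sketch: the `camera`, `view` and `myGpuMesh` objects, the matrix source, and the exact `DrawSurfaceTask` fields are assumptions to be checked against the vts-browser headers of your version.

```cpp
// Pseudocode sketch of approach 2 (inject custom draw commands).
camera->renderUpdate();                  // let vts prepare its own draw commands

vts::DrawSurfaceTask task;               // describe the custom mesh
task.mesh = myGpuMesh;                   // a mesh previously loaded through the renderer
for (int i = 0; i < 16; i++)
    task.mv[i] = modelViewMatrix[i];     // model-view matrix in physical srs
camera->draws().opaque.push_back(task);  // append to the opaque draw list

view->render();                          // the renderer now draws vts meshes and ours
```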
I have moved the question to this repository because it is about how to use the library, not how to build it.
Thanks for the insight on the various ways to do it:

1. Monolithic geodata free layer
2. Inject custom draw commands
3. Custom rendering

Is there any sample that sets up the custom rendering? If not, can you provide some pseudo code/steps to set it up? I'm going step by step, so first of all I'm trying to get a cube drawn with the second approach. Can you please guide me on that first? I would sincerely appreciate your help.
Hi,
I suspect we have such code somewhere. Could you please open an issue for that in the vts-libs repository: https://github.com/melowntech/vts-libs
I did not check the rest, but you had an error with the buffers:

```cpp
// Set up the cube mesh.
vts::Buffer vertBuffer;
vertBuffer.resize(vertices.size() * sizeof(vts::vec3f));
//vts::vec3f *bufPos = (vts::vec3f*)vertBuffer.data();
//bufPos = &vertices[0]; // this does not copy the actual data!
memcpy(vertBuffer.data(), vertices.data(), vertBuffer.size()); // this copies the data

vts::Buffer indBuffer;
indBuffer.resize(indices.size() * sizeof(uint32));
//uint32 *bufInd = (uint32*)indBuffer.data();
//bufInd = &indices[0]; // same mistake: reassigns the pointer instead of copying
memcpy(indBuffer.data(), indices.data(), indBuffer.size());
```

b. Your observation seems correct :D Also set the uvClip to -1, -1, 2, 2; it defines clipping planes in the external uv coordinate space, so this should be ok with all uvs zero.

EDIT: Your mesh setup is also missing the attributes field. It defines the layout of the vertices buffer. In your case, you should enable the first attribute and set it to vec3 (components = 3, type = float).

Your snippet goes in the right direction. Hope this helps.
Thanks a lot, @malytomas, that definitely helped.

1. Monolithic geodata free layer

2. Inject custom draw commands

I have a few more questions. (Please bear with me for just a few more doubts :D)

a. My concern is the orientation of the custom geometry; it doesn't seem to be correct right now (it is not perpendicular to the earth). Is there any similar method in the desktop version? How can we get/calculate this matrix?

b. Is there any rule of thumb for what altitude/height we should use in order to draw an object just above the ground (basically, the translation component of the model matrix)? Or is that managed by the spaceMatrix given above?

c. I see no way to provide texture coordinates (uvs) on vts::GpuMeshSpec, yet RenderViewImpl::drawSurface() expects the texture. These do not quite match.

d. One more thing: if vts::DrawSurfaceTask has the flatShading flag true, the RenderViewImpl::drawSurface() method shouldn't check for the texture (or, if vts::DrawSurfaceTask::texColor is null, it should automatically be treated as flat shading).

I'm not sure, but c and d look like bugs to me. Please correct me if I'm missing something.

3. Custom rendering

Thanks for your help.
There is no such function at the moment. You can write it yourself. Convert three points from navigation to physical srs: the center, one point offset to the north, and one point offset to the east (or up); then use the two difference vectors to create your rotation matrix.
Navigation srs uses altitude above the ellipsoid.

TLDR: My expectation is that you already have exact 3D coordinates prepared. You can take the coordinates of any point in the vts-browser-desktop app: point the mouse at the point of interest and press M, which will add the point to the list. Do not forget to switch to viewing navigation srs coordinates in the gui (the default is the public srs, which is different and may not work).

To dynamically place an object exactly on a surface, one approach is to use a physics engine: place the object arbitrarily high and let it "fall" to the ground. I use this approach for spawning zombies in my Unity examples: https://github.com/melowntech/Vtsmageddon. However, that requires a lot of additional work without Unity. Instead of a fully featured physics engine, you can also raytrace the meshes. Alternatively, there is a function in vts to acquire the approximate height of the surface above the ellipsoid, but it is not exposed in the api now, and it really is just an approximation.
The buffer with vertex coordinates may also contain additional attributes. You can interleave them or place all at the end etc. The attributes array defines the layout. It is directly used for https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glVertexAttribPointer.xhtml
The flatShading was a poor design decision and I plan to remove it. Flat shading is a debugging feature and should not have been exposed in the browser api. It will be reworked to match the wireframe option.
Yes, that is exactly the intention. Cheers
Regarding the orientation, I just remembered that I have code for it in Unity. That code is specific to the lat-lon navigation srs, whereas the method I described in the previous answer is more general. However, this approach might be simpler to understand and/or implement and may also have better performance.
Hello @malytomas, thanks for all your inputs. I've finally set up the OpenGL layer successfully for custom rendering.

1. Unity logic

```cpp
vts::mat4 MakeLocal(vts::vec3 navPos, std::shared_ptr<vts::Map> map)
{
    vts::vec3 p;
    map->convert(&navPos[0], &p[0], vts::Srs::Navigation, vts::Srs::Physical);
    /*{ // swap YZ
        double tmp = p[1];
        p[1] = p[2];
        p[2] = tmp;
    }*/
    //Vector3 v = Vector3.Scale(VtsUtil.V2U3(p), umap.transform.localScale);
    vts::mat4 mat;
    if (map->getMapProjected())
    {
        // This block never got a hit in my case so far, hence ignoring it for now.
        //(mat[12] = p.x, mat[13] = p.y, mat[14] = p.z, mat[15] = 1.0);
        //umap.transform.position = -v;
    }
    else
    {
        float m = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
        vts::mat4 northAlignMat = vts::rotationMatrix(1, (float)navPos[0] + 90.0f); // align to north
        //umap.transform.rotation =
        //    Quaternion.Euler(0, (float)navPt[0] + 90.0f, 0) // align to north
        //    * Quaternion.FromToRotation(-v, umap.transform.position); // latlon
        Eigen::Quaterniond quat = Eigen::Quaterniond::FromTwoVectors(-p, vts::vec3(0.0, -m, 0.0)); // latlon
        vts::mat3 rot = quat.toRotationMatrix();
        vts::mat4 rot4 = vts::mat3to4(rot);
        mat = northAlignMat * rot4;
    }
    return mat;
}
```

I'm not sure if swapping yz and the local scale are needed in my case. I think they are specific to unity, but I tried them too.

2. vts-browser.js logic

```cpp
vts::mat4 getNED(vts::vec3 navPos, std::shared_ptr<vts::Map> map)
{
    vts::vec3 center;
    map->convert(&navPos[0], &center[0], vts::Srs::Navigation, vts::Srs::Physical);
    vts::vec3 upCoords, rightCoords;
    if (map->getMapProjected())
    {
        map->convert({ navPos[0], navPos[1] + 100, 0 }, &upCoords[0], vts::Srs::Navigation, vts::Srs::Physical);
        map->convert({ navPos[0] + 100, navPos[1], 0 }, &rightCoords[0], vts::Srs::Navigation, vts::Srs::Physical);
    }
    else
    {
        float cy = (navPos[1] + 90) - 0.0001;
        float cx = (navPos[0] + 180) + 0.0001;
        if (cy < 0.00 || cx > 180.00)
        {
            // I couldn't find a corresponding method or class for getGeodesic() in vts
            // desktop to write the c++ version of the following js code. And since almost
            // all the calls for the given geolocation hit this branch, I couldn't check
            // the correctness of the remaining code.
            //var geodesic = this.getGeodesic();
            //// up coords
            //var r = geodesic.Direct(coords[1], coords[0], 0, -100);
            //upCoords = this.convert.convertCoords([r.lon2, r.lat2, 0], 'navigation', 'physical');
            //// right coords
            //r = geodesic.Direct(coords[1], coords[0], 90, 100);
            //rightCoords = this.convert.convertCoords([r.lon2, r.lat2, 0], 'navigation', 'physical');
        }
        else
        {
            // subtraction instead of addition is probably a case of complicated view matrix calculation
            map->convert({ navPos[0], navPos[1] - 0.0001, 0 }, &upCoords[0], vts::Srs::Navigation, vts::Srs::Physical);
            map->convert({ navPos[0] + 0.0001, navPos[1], 0 }, &rightCoords[0], vts::Srs::Navigation, vts::Srs::Physical);
        }
    }
    vts::vec3 up = upCoords - center;
    vts::vec3 right = rightCoords - center;
    up.normalize();
    right.normalize();
    vts::vec3 dir = vts::cross(up, right);
    dir.normalize();
    double mat[9];
    mat[0] = right.x(); mat[1] = right.y(); mat[2] = right.z();
    mat[3] = up.x();    mat[4] = up.y();    mat[5] = up.z();
    mat[6] = dir.x();   mat[7] = dir.y();   mat[8] = dir.z();
    vts::mat3 mat3 = vts::rawToMat3(mat);
    return vts::mat3to4(mat3);
}
```

Could you please correct me if I'm doing something wrong? It would really help if you could also add such a method to the c++ version (maybe somewhere inside the vts::Map class). Thanks,
A quick thought: `Eigen::Quaterniond::FromTwoVectors(vts::vec3(0, 1, 0), p);` Also, you probably want to swap the order of multiplication of the two matrices.
I found that with vts-browser.js we can draw a custom mesh on top of the existing earth drawn by vts. In fact, there are APIs on the renderer to draw a mesh and on the map to convert points from navigation coordinates to camera coordinates.

Is it possible to draw a custom mesh with this desktop/cpp wrapper version? If not yet possible, is it in the implementation plan for the near future or targeted for any upcoming release?