ProConcepts Geometry
The ArcGIS.Core.Geometry namespace contains the geometry classes and members for creating, modifying, deleting, and converting geometry objects, as well as spatial operators and methods, accessed through GeometryEngine.Instance, for manipulating geometry instances.
ArcGIS.Core.dll
Language: C#
Subject: Geometry
Contributor: ArcGIS Pro SDK Team <arcgisprosdk@esri.com>
Organization: Esri, http://www.esri.com
Date: 10/06/2024
ArcGIS Pro: 3.4
Visual Studio: 2022
- Immutable geometries
- Building geometries
- Representing geometries as JSON objects
- GeometryEngine.Instance
- Datum transformation
- Summary
Geometries are immutable. This means they are read only. Once instantiated, their content cannot be changed.
Why are geometries immutable? The vast majority of geometries in the system are actually static and unchanging—they are features in a geodatabase or a non-editable layer, or returned from a query, geocode, network trace, or geoprocessing tool. Using immutable geometries reflects reality in most cases and helps make behaviors more predictable. Immutable geometries do not change (even by accident). Because geometries cannot change, there is no need for event handlers/listeners to deal with changing geometries in cases where different parts of the system have a reference to essentially the same geometry. Another advantage of immutable geometry objects is their inherent thread safety. Immutable objects are simpler to understand and avoid potential concurrency issues in the multithreaded environment of ArcGIS Pro.
Since geometries are immutable, meaning that you cannot change a geometry instance or any of its properties once it is created, the API provides builder classes for each geometry type (including spatial references). The builder classes are flexible and represent a geometry that is being constructed or edited. They also provide a consistent way of creating and modifying geometries.
When building geometries, there are typically two scenarios: either you know the entire state of the geometry up front (e.g., you have the x-, y-, and z-coordinates of the point you want to create, or you have the set of segments to create a polygon), or you have a workflow that requires defining the geometry step by step (e.g., you want to build a multi-part polygon). If you know the entire state of the geometry up front, you can use the static convenience methods on the builder classes to generate a geometry instance. If your workflow requires defining the geometry step by step, create an instance of the appropriate geometry builder class, use it to modify the geometry properties, and then create an immutable geometry instance via its ToGeometry() method.
The static convenience methods, as well as most builder constructors and methods, can run on any thread. A few builder methods must run on the MCT (the Main CIM Thread); when a method must be called on the MCT, it is noted in the summary of the method.
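For example, a member documented as MCT-only can be wrapped in QueuedTask.Run, while convenience methods run anywhere (a sketch; the MCT-only call is a placeholder for whichever builder member your workflow needs):

```csharp
using ArcGIS.Core.Geometry;
using ArcGIS.Desktop.Framework.Threading.Tasks;

// Convenience methods and most builder members are thread-agnostic.
MapPoint anyThreadPoint = MapPointBuilderEx.CreateMapPoint(1.0, 2.0);

// A builder member whose summary notes it must run on the MCT
// should be wrapped in QueuedTask.Run:
await QueuedTask.Run(() =>
{
  // ... call the MCT-only builder member here ...
});
```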
Let's start by examining the simplest of geometries: the MapPoint and the MapPointBuilderEx. Here are some examples of using both the convenience methods and the MapPointBuilderEx class. Note the values of the HasZ, HasM, and HasID properties of the resulting MapPoint, which are derived according to the function parameters.
// create a point with x,y
MapPoint pt1 = MapPointBuilderEx.CreateMapPoint(1.0, 2.0);
// pt1.HasZ = false
// pt1.HasM = false
// pt1.HasID = false
// create a point with x,y,z
MapPoint pt2 = MapPointBuilderEx.CreateMapPoint(1.0, 2.0, 3.0);
// pt2.HasZ = true
// pt2.HasM = false
// pt2.HasID = false
// create a point with x,y,z,m
MapPoint pt3 = MapPointBuilderEx.CreateMapPoint(1.0, 2.0, 3.0, 4.0);
// pt3.HasZ = true
// pt3.HasM = true
// pt3.HasID = false
MapPoint pt4 = null;
MapPoint pt5 = null;
MapPoint pt6 = null;
// create a point with x,y
MapPointBuilderEx mapPointBuilder1 = new MapPointBuilderEx(1.0, 2.0);
// properties on the builder are derived from the parameters
// mapPointBuilder1.HasZ = false
// mapPointBuilder1.HasM = false
// mapPointBuilder1.HasID = false
pt4 = mapPointBuilder1.ToGeometry();
// properties on the MapPoint are as per the builder properties
// pt4.HasZ = false
// pt4.HasM = false
// pt4.HasID = false
// create a point with x,y,z
MapPointBuilderEx mapPointBuilder2 = new MapPointBuilderEx(1.0, 2.0, 3.0);
// properties on the builder are derived from the parameters
// mapPointBuilder2.HasZ = true
// mapPointBuilder2.HasM = false
// mapPointBuilder2.HasID = false
pt5 = mapPointBuilder2.ToGeometry();
// properties on the MapPoint are as per the builder properties
// pt5.HasZ = true
// pt5.HasM = false
// pt5.HasID = false
// create a point with x,y,z,m
MapPointBuilderEx mapPointBuilder3 = new MapPointBuilderEx(1.0, 2.0, 3.0, 4.0);
// properties on the builder are derived from the parameters
// mapPointBuilder3.HasZ = true
// mapPointBuilder3.HasM = true
// mapPointBuilder3.HasID = false
pt6 = mapPointBuilder3.ToGeometry();
// properties on the MapPoint are as per the builder properties
// pt6.HasZ = true
// pt6.HasM = true
// pt6.HasID = false
// create a point from another point
MapPoint pt7 = MapPointBuilderEx.CreateMapPoint(pt6);
// properties on the MapPoint are derived from the parameters
// pt7.HasZ = true
// pt7.HasM = true
// pt7.HasID = false
// create a point from another point
MapPointBuilderEx multipointBuilder4 = new MapPointBuilderEx(pt2);
// properties on the builder are derived from the parameters
// multipointBuilder4.HasZ = true
// multipointBuilder4.HasM = false
// multipointBuilder4.HasID = false
MapPoint pt8 = multipointBuilder4.ToGeometry();
// properties on the MapPoint are as per the builder properties
// pt8.HasZ = true
// pt8.HasM = false
// pt8.HasID = false
What if you want to create a copy of a MapPoint but ensure the Z value is 10? Clearly the convenience methods cannot be used; they always produce an immutable geometry, which cannot be altered. This is where the power of the geometry builder classes lies. They give you the ability to manipulate and set properties of the geometry before it is created. Here is a snippet showing how a cloned MapPoint with a z-value of 10 is created.
MapPoint pt = MapPointBuilderEx.CreateMapPoint(1, 2);
// pt.HasZ = false
// pt.HasM = false
// pt.HasID = false
MapPoint ptWithZ = null;
// create a point from another point
MapPointBuilderEx mapPointBuilder = new MapPointBuilderEx(pt);
// initially HasZ, HasM, HasID properties on the builder are derived according to the
// HasZ, HasM, HasID values of the 'pt' parameter
// we want a point with Z value of 10
// set the Z value
mapPointBuilder.Z = 10;
// return the geometry
ptWithZ = mapPointBuilder.ToGeometry();
// ptWithZ.Z = 10
// ptWithZ.HasZ = true
// ptWithZ.HasM = false (inherited from pt)
// ptWithZ.HasID = false (inherited from pt)
Note that setting the z-value on the MapPointBuilderEx automatically sets the HasZ attribute to true. Similarly, setting the m-value or ID-value automatically sets the HasM or HasID attribute to true.
Next, let's look at the Polyline and PolylineBuilderEx classes. As with the MapPoint, if you know the entire geometry up front, you have the option of using one of the many CreatePolyline convenience methods on the PolylineBuilderEx class; otherwise, create an instance of the PolylineBuilderEx class, manipulate its properties, and then use the ToGeometry() method to obtain the polyline.
As we saw earlier, the HasZ, HasM, and HasID attributes are derived from the parameters when using the convenience methods. If you want to control which attributes are derived from the parameters, there are overloaded convenience methods that take the AttributeFlags enumeration. For builder classes other than MapPointBuilderEx, the constructors also use the AttributeFlags enumeration to control how attributes are derived, unless the input geometry is the same type as the builder, in which case the attributes are inherited from the input geometry.
List<MapPoint> list3D = new List<MapPoint>();
list3D.Add(MapPointBuilderEx.CreateMapPoint(1.0, 1.0, 1.0, 2.0));
list3D.Add(MapPointBuilderEx.CreateMapPoint(1.0, 2.0, 3.0, 6.0));
list3D.Add(MapPointBuilderEx.CreateMapPoint(2.0, 2.0, 1.0, 2.0));
list3D.Add(MapPointBuilderEx.CreateMapPoint(2.0, 1.0, 1.0, 2.0));
Polyline polyline = PolylineBuilderEx.CreatePolyline(list3D);
// attributes are defined from the parameters
// (list of MapPoints with HasZ = true, HasM = true)
// polyline.HasZ = true
// polyline.HasM = true
// polyline.HasID = false
Polyline polylineNoAttrs = PolylineBuilderEx.CreatePolyline(list3D, AttributeFlags.None);
// attributes are defined by the attribute flags
// polylineNoAttrs.HasZ = false
// polylineNoAttrs.HasM = false
// polylineNoAttrs.HasID = false
PolylineBuilderEx polylineBuilder = new PolylineBuilderEx(list3D, AttributeFlags.HasZ | AttributeFlags.HasM);
// use bitwise OR operator to specify more than one attribute (or AttributeFlags.AllAttributes)
// polylineBuilder.HasZ = true
// polylineBuilder.HasM = true
// polylineBuilder.HasID = false
Polyline polylineWithZM = polylineBuilder.ToGeometry();
// polylineWithZM.HasZ = true
// polylineWithZM.HasM = true
// polylineWithZM.HasID = false
Polyline polylineClone = PolylineBuilderEx.CreatePolyline(polylineWithZM);
// attributes are defined from the parameters
// polylineClone.HasZ = true
// polylineClone.HasM = true
// polylineClone.HasID = false
PolylineBuilderEx polylineBuilder2 = new PolylineBuilderEx(polylineWithZM);
// because the input geometry matches the geometry type of the builder
// the attribute values are derived from the parameter
// polylineBuilder2.HasZ = true
// polylineBuilder2.HasM = true
// polylineBuilder2.HasID = false
Polyline polyline2 = polylineBuilder2.ToGeometry();
// polyline2.HasZ = true
// polyline2.HasM = true
// polyline2.HasID = false
Builder classes exist for all the other geometry types: PolygonBuilderEx, EnvelopeBuilderEx, MultipointBuilderEx, MultipatchBuilderEx, and GeometryBagBuilderEx. Each of these builder classes has numerous constructor overloads along with multiple convenience methods to facilitate geometry creation.
There are also builder classes for creating segments: EllipticArcBuilderEx, CubicBezierBuilderEx, and LineBuilderEx. Finally, there is a SpatialReferenceBuilder for building SpatialReference objects.
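As a brief illustration, the segment and envelope builders follow the same convenience-method pattern (a sketch; the coordinate values are arbitrary):

```csharp
using ArcGIS.Core.Geometry;

SpatialReference wgs84 = SpatialReferenceBuilder.CreateSpatialReference(4326);

MapPoint start = MapPointBuilderEx.CreateMapPoint(0, 0, wgs84);
MapPoint end = MapPointBuilderEx.CreateMapPoint(1, 1, wgs84);

// A straight line segment between two points.
LineSegment line = LineBuilderEx.CreateLineSegment(start, end);

// An envelope defined by two opposite corners.
Envelope env = EnvelopeBuilderEx.CreateEnvelope(start, end);
```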
In this topic, you'll learn how to create Polygon and Polyline geometries from scratch using Coordinate2D and MapPoint geometries. You'll learn how to access the segments of a polygon and how to change its content using the builder classes. Finally, you'll apply some spatial operations using GeometryEngine.Instance to form a new polygon.
Since you're creating a geometry of type Polygon, the use of PolygonBuilderEx is the obvious choice.
The polygon you're going to build is a rectangle.
The polygon consists of four points with four linear segments connecting the points. The points are described by their coordinates containing the x and y values for the location using the Web Mercator spatial reference.
Looking at the PolygonBuilderEx class, you'll notice multiple overloaded constructors helping you initialize the builder. You'll also notice static convenience methods that allow you to fully describe the state of the geometry and produce a geometry in a single line of code.
The polygon is described by its four corners, so you need to create the points first. Points are geometries of type MapPoint and use a builder class of type MapPointBuilderEx. The MapPoint class contains the location values and the spatial reference information. The SpatialReference is immutable as well and uses a similar builder approach as the other geometry types.
Lightweight alternatives, Coordinate2D and Coordinate3D, are also available. These structs are useful when you want to avoid the overhead of creating a MapPoint for use in the construction of a higher-level geometry such as a Polygon or Polyline. Because they are structs, Coordinate2D and Coordinate3D possess the same advantages as immutable geometries with respect to thread safety, and they can be freely passed to any thread within ArcGIS Pro.
With this information, you can now create the polygon with the following code using Coordinate2D:
// Create a spatial reference using the WKID (well-known ID)
// for the Web Mercator coordinate system
SpatialReference mercatorSR = SpatialReferenceBuilder.CreateSpatialReference(3857);
// Create a list of coordinates describing the polygon vertices
var vertices = new List<Coordinate2D>();
vertices.Add(new Coordinate2D(-13046167.65, 4036393.78));
vertices.Add(new Coordinate2D(-13046167.65, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036393.78));
// Use the builder to create the polygon object
Polygon polygon1 = PolygonBuilderEx.CreatePolygon(vertices, mercatorSR);
or using MapPoint:
// Use the builder to create points that will become vertices
MapPoint corner1Point = MapPointBuilderEx.CreateMapPoint(-13046167.65, 4036393.78);
MapPoint corner2Point = MapPointBuilderEx.CreateMapPoint(-13046167.65, 4036404.5);
MapPoint corner3Point = MapPointBuilderEx.CreateMapPoint(-13046161.693, 4036404.5);
MapPoint corner4Point = MapPointBuilderEx.CreateMapPoint(-13046161.693, 4036393.78);
// Create a list of all map points describing the polygon vertices.
var points = new List<MapPoint>() { corner1Point, corner2Point, corner3Point, corner4Point };
// Use the builder to create the polygon container.
PolygonBuilderEx polygonBuilder = new PolygonBuilderEx(points, AttributeFlags.None, mercatorSR);
// Manipulate the builder as needed, then build the geometry.
Polygon polygon = polygonBuilder.ToGeometry();
There are a couple of things to note. It is assumed you will provide a spatial reference when calling CreatePolygon or using the PolygonBuilderEx constructor; the spatial reference of each point is ignored.
The order of the coordinates/map points is important. Based on the above sketch, the order is Point1, Point2, Point3, and Point4. Point1 and Point2 form line segment 1, Point2 and Point3 form line segment 2, and so on. The direction of the segments is oriented clockwise and as such describes an exterior part of the polygon. Polygons are closed, meaning their last segment goes back to the start point of the first segment. Even though you only provided four points, enough to construct three segments, the polygon itself has four segments and five points; the additional information is generated by the builder to close the polygon.
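You can observe the closing behavior by inspecting the built polygon (a sketch; it assumes the polygon variable built from the four Web Mercator points above):

```csharp
// The builder closed the ring: the polygon has five points
// (the start point is repeated at the end) and four segments.
int pointCount = polygon.PointCount; // 5, per the closing behavior above
int partCount = polygon.PartCount;   // 1 ring
```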
Based on the previous explanation, consider the difference if you used a polyline builder as opposed to a polygon builder with the same set of points.
// Create a spatial reference using the WKID (well-known ID)
// for the Web Mercator coordinate system.
SpatialReference mercatorSR = SpatialReferenceBuilder.CreateSpatialReference(3857);
// Create a list of coordinates describing the polyline vertices.
var vertices = new List<Coordinate2D>();
vertices.Add(new Coordinate2D(-13046167.65, 4036393.78));
vertices.Add(new Coordinate2D(-13046167.65, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036404.5));
vertices.Add(new Coordinate2D(-13046161.693, 4036393.78));
// Use the builder to create the polyline object.
Polyline polyline = PolylineBuilderEx.CreatePolyline(vertices, mercatorSR);
The result is now the following:
The line geometry has one part, containing only the original four points and three linear segments you provided. The fourth segment was not created by the builder.
From the Polygon class, you get read-only access to the parts of the polygon. Polygon parts, or rings, can be retrieved as either segment collections or point collections derived from the segment collection vertices.
The builder classes allow read-write access to the properties of a geometry.
Using the builder class, you can add and remove segments to reshape the geometry. Here you remove the last segment, indicated by the -1 argument.
You replace the deleted segment with an elliptic arc constructed with three points, which you add at the end of the polygon builder segment collection, effectively closing the geometry.
// Replace the last segment, a line segment, from the 1st part of the polygon
// with an elliptic arc segment.
// Create a coordinate through which the elliptic arc needs to pass.
var interiorCoordinate = new Coordinate2D(-13046164.6, 4036383);
// Build a new segment of type EllipticArc.
var arcSegment = EllipticArcBuilderEx.CreateCircularArc(corner4Point, corner1Point, interiorCoordinate);
// Replace the line segment with the new segment.
polygonBuilder.ReplaceSegment(0, -1, arcSegment);
The resulting polygon looks like this:
See the Multipatches concepts document for specifics about working with multipatch geometries.
For a description of the JSON representation for points, multipoints, and linear segment based polylines and polygons, please refer to the ArcGIS REST API documentation.
In the ArcGIS Pro Geometry API, a circular arc, an elliptic arc, and a Bézier curve can be represented as a JSON curve object. A curve object is given in a compact "curve to" manner, with the first element representing the "to" (end) point. The "from" (start) point is derived from the previous segment or curve object.
The supported curve objects are as follows:
- Circular Arc "c"
  - Converted to the EllipticArcSegment class.
  - Defined by an end point and an interior point, where the interior point is a point on the arc between the start point and the end point.
  - {"c": [[x, y, <z>, <m>], [interior_x, interior_y]]}
- Arc "a"
  - Elliptic Arc
    - Converted to the EllipticArcSegment class.
    - Defined by:
      - end point
      - center point
      - minor: 1 if the arc is minor, 0 if the arc is major
      - clockwise: 1 if the arc is oriented clockwise, 0 if the arc is oriented counterclockwise
      - rotation: angle of rotation of the major axis in radians, with a positive value being counterclockwise
      - axis: length of the semi-major axis
      - ratio: ratio of the minor axis to the major axis
    - {"a": [[x, y, <z>, <m>], [center_x, center_y], minor, clockwise, rotation, axis, ratio]}
  - Circular Arc (old format)
    - A special case of the elliptic arc; excludes rotation, axis, and ratio.
    - Converted to the EllipticArcSegment class.
    - {"a": [[x, y, <z>, <m>], [center_x, center_y], minor, clockwise]}
- Bézier Curve "b"
  - Converted to the CubicBezierSegment class.
  - Defined by an end point and two control points.
  - {"b": [[x, y, <z>, <m>], [x, y], [x, y]]}
A JSON string representing a polyline with curves contains an array of curvePaths and an optional "spatialReference". Each curve path is represented as an array containing points and curve objects.
{"curvePaths": [[start point, next point or curve object, … ]]}
- A polyline which is a circular arc from (0, 0) to (3, 3) through (1, 4).
{"curvePaths": [[[0,0], {"c": [[3,3], [1,4]]}]]}
- A polyline containing a line segment from (6, 3) to (5, 3), a Bézier curve from (5, 3) to (3, 2) with control points (6, 1) and (2, 4), a line segment from (3, 2) to (1, 2) and an elliptic arc from (1, 2) to (0, 2) with center point (0, 3), minor = 0, clockwise = 0, rotation = 2.094395102393195 (120 degrees), semi-major axis = 1.78, ratio = 0.323.
{
"curvePaths":
[[
[6,3], [5,3],
{"b": [[3,2], [6,1], [2,4]]},
[1,2],
{"a": [[0,2], [0,3],0,0,2.094395102393195,1.78,0.323]}
]]
}
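A curve JSON string like the ones above can be turned into a geometry using the builder classes (a sketch; it assumes the FromJson convenience method on the builder and reuses the circular-arc polyline from the first example):

```csharp
using ArcGIS.Core.Geometry;

string json = "{\"curvePaths\": [[[0,0], {\"c\": [[3,3], [1,4]]}]]}";

// Import the JSON into a Polyline; the "c" curve object becomes
// an EllipticArcSegment in the resulting geometry.
Polyline polylineWithArc = PolylineBuilderEx.FromJson(json);

// Export the geometry back to a JSON string.
string exported = polylineWithArc.ToJson();
```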
A JSON string representing a polygon with curves contains an array of curveRings and an optional spatialReference. Each curve ring is represented as an array containing points and curve objects.
{"curveRings": [ [ start point, next point or curve object, … ] ] }
A multipart polygon with m-values. The first part contains three line segments and a Bézier curve from (11, 12) to (15, 15) with control points (10, 17) and (18, 20), and is closed with a line segment back to (11, 11). The second part contains a circular arc from (22, 16) to (17, 15) through (22, 14), closed with a line segment back to (22, 16).
{
"hasM": true,
"curveRings":
[
[
[11,11,1], [10,10,2], [10,11,3], [11,12,4],
{"b": [[15,15,5], [10,17], [18,20]]},
[11,11,1]
],
[
[22,16,1],
{"c": [[17,15,2], [22,14]]},
[22,16,1]
]
]
}
Use GeometryEngine.Instance for performing geometric operations. The methods of this interface allow you to discover relations between two geometries (for example, Touches, Within, Contains, and so on) as well as to construct new geometries based on topological relationships between existing geometries (for example, Union, ConvexHull, Cut, and so on). The result of each operation is a new geometry instance.
Polyline and Polygon inherit from Multipart, meaning they can be composed of more than one part. In the first step, you created a polygon from an instance of a polygon builder. You will now use GeometryEngine.Instance to create new geometries, which you'll assemble into a final multi-part geometry.
First, you move your polygon by 15 units (meters, based on the units of the spatial reference) in the y-direction.
// Move an existing polygon in the y-direction.
var movedPolygon = GeometryEngine.Instance.Move(polygon, 0, 15) as Polygon;
Because geometries are immutable, the resulting movedPolygon is a new geometry instance.
Here's another example. You create a polygon that is half the size of the original geometry.
// Scale an existing polygon around the label point,
// i.e., reduce the polygon by 50%.
var smallerPolygon = GeometryEngine.Instance.Scale(
polygon, GeometryEngine.Instance.LabelPoint(polygon), 0.5, 0.5) as Polygon;
Add the moved polygon as a new part into the polygonBuilder instance.
// Add a second part into the polygon builder.
polygonBuilder.AddPart(movedPolygon.Points);
As a last step, you'll perform a topological operation by calculating the difference between the shrunken polygon, smallerPolygon, and the composite two-part polygon created from your original points and the movedPolygon. Since a geometry is expected, you need to convert (or build) the current state of the builder into an instance by calling its ToGeometry() method.
// Use GeometryEngine.Instance to cut a hole out of the first part.
var finalPolygon = GeometryEngine.Instance.Difference(polygonBuilder.ToGeometry(), smallerPolygon);
The difference operation will cut a hole out of the polygonBuilder geometry. You'll use the subtracted area of the hole as the third part of the finalPolygon geometry, describing an interior part.
The segments of the interior part are oriented counter-clockwise.
The resulting polygon geometry and the orientation of the segments looks like the following:
The predefined relational operations in GeometryEngine.Instance are Contains, Crosses, Disjoint, Equals, Intersects, Overlaps, Touches, and Within. There is also a Relate method, which allows you to create custom relational operations. You can read more about the Relate method in Relate and the dimensionally extended nine-intersection model (DE-9IM) below.
To see how the relational operations work, first review the definitions of dimensionality, interiors, boundaries, and exteriors for the basic geometry types.
- All point and multipoint shapes are zero dimensional.
- All polyline shapes are one dimensional.
- All polygon shapes are two dimensional.
Note that the presence of z-coordinates or m-coordinates does not affect the dimensionality of the geometry.
Each type of geometry has an interior, a boundary, and an exterior, which are important in understanding relational operators.
- Point—A point represents a single location in space. The interior of a point is the point itself, the boundary is the empty set, and the exterior is all other points.
- Multipoint—A multipoint is an ordered collection of points. The interior of a multipoint is the set of points in the collection, the boundary is the empty set, and the exterior is the set of points that are not in the collection.
- Polyline—A polyline is an ordered collection of paths where each path is a collection of contiguous segments. A segment has a start and an end point. The boundary of a polyline is the set of start and end points of each path, the interior is the set of points in the polyline that are not in the boundary, and the exterior is the set of points that are not in the boundary or the interior. For the polyline shown below, the set of points comprising the boundary is shown in red. The interior of the polyline is shown in black.
- Polygon—A polygon is defined by a collection of rings. Each ring is a collection of contiguous segments such that the start point and the end point are the same.
The boundary of a polygon is the collection of rings by which the polygon is defined. The boundary contains one or more outer rings and zero or more inner rings. An outer ring is oriented clockwise, while an inner ring is oriented counter-clockwise. Imagine walking clockwise along an outer ring. The area to your immediate right is the interior of the polygon and to your left is the exterior.
Similarly, if you were to walk counter-clockwise along an inner ring, the area to your immediate right is the interior of the polygon and to your left is the exterior.
In the following images, the blue geometry is A, and the red geometry is B.
The predefined relational operations in GeometryEngine.Instance are
- Contains—One geometry contains another if the contained geometry is a subset of the container geometry and their interiors have at least one point in common. Contains is the inverse of Within.
- Crosses—Two polylines cross if they meet at points only, and at least one of the shared points is internal to both polylines. A polyline and polygon cross if a connected part of the polyline is partly inside and partly outside the polygon.
- Disjoint—Two geometries are disjoint if they don’t have any points in common.
- Equals—Two geometries are equal if they occupy the same space.
- Intersects—Two geometries intersect if they share at least one point in common.
- Overlaps—Two geometries overlap if they have the same dimension, and their intersection also has the same dimension but is different from both of them.
- Touches—Geometry A touches Geometry B if the intersection of their interiors is empty, but the intersection of Geometry A and Geometry B is not empty.
- Within—One geometry is within another if it is a subset of the other geometry and their interiors have at least one point in common. Within is the inverse of Contains.
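As a quick illustration of the predefined operators (a sketch; the coordinate values and spatial reference are arbitrary, and the envelopes stand in for any two geometries):

```csharp
using ArcGIS.Core.Geometry;

SpatialReference sr = SpatialReferenceBuilder.CreateSpatialReference(3857);

// An outer square and a smaller square strictly inside it.
Envelope outer = EnvelopeBuilderEx.CreateEnvelope(
  new Coordinate2D(0, 0), new Coordinate2D(10, 10), sr);
Envelope inner = EnvelopeBuilderEx.CreateEnvelope(
  new Coordinate2D(2, 2), new Coordinate2D(8, 8), sr);

bool contains = GeometryEngine.Instance.Contains(outer, inner); // true
bool within = GeometryEngine.Instance.Within(inner, outer);     // true; Within is the inverse of Contains
bool disjoint = GeometryEngine.Instance.Disjoint(outer, inner); // false
```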
The GeometryEngine.Instance.Relate method allows you to create custom relational operations using a Dimensionally Extended Nine-Intersection Model (DE-9IM) formatted string. At this time, the Relate method does not support geometries with curves, so you must first densify a geometry if it contains curve segments.
All of the predefined relational operations (Contains, Crosses, Disjoint, Equals, Intersects, Overlaps, Touches, and Within) can be defined using the GeometryEngine.Instance.Relate method, but it offers much more. A review of the predefined relational operations can be found in Performing relational operations above.
An explanation of DE-9IM, as well as examples, is given below. More information about DE-9IM can be found at https://en.wikipedia.org/wiki/DE-9IM or by downloading the OGC specification "OpenGIS Simple Features Specification For SQL Revision 1.1" from http://www.opengis.org.
For any geometry A, let I(A) be the interior of A, B(A) be the boundary of A, and E(A) be the exterior of A. For any set x of geometries, let dim(x) be the maximum dimension (-1, 0, 1, or 2) of the geometries in x, where -1 is the dimension of the empty set. A DE-9IM has the following form:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | dim(I(A)∩I(B)) | dim(I(A)∩B(B)) | dim(I(A)∩E(B)) |
| B(A) | dim(B(A)∩I(B)) | dim(B(A)∩B(B)) | dim(B(A)∩E(B)) |
| E(A) | dim(E(A)∩I(B)) | dim(E(A)∩B(B)) | dim(E(A)∩E(B)) |
For example, consider two overlapping polygons. The associated DE-9IM is:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | 2 | 1 | 2 |
| B(A) | 1 | 0 | 1 |
| E(A) | 2 | 1 | 2 |

The intersection of the interiors, I(A)∩I(B), is a polygon that has dimension 2. The intersection of the interior of A and the boundary of B, I(A)∩B(B), is a line that has dimension 1, and so forth.
A pattern matrix represents all acceptable values for the DE-9IM of a spatial relationship predicate on two geometries. The possible pattern values for any cell, where x is the intersection set for that cell, are {T, F, *, 0, 1, 2}, where:
- T => dim(x) ϵ {0, 1, 2}, i.e., x is not empty
- F => dim(x) = -1, i.e., x is empty
- \* => dim(x) ϵ {-1, 0, 1, 2}, i.e., don't care
- 0 => dim(x) = 0
- 1 => dim(x) = 1
- 2 => dim(x) = 2
The pattern matrix can be represented as a string of nine characters listed row by row from left to right. For example, the pattern matrix given above for overlapping polygons can be represented by the string “212101212”. The string representing two geometries, not necessarily polygons, that overlap is “T*T***T**”.
The GeometryEngine.Instance.Relate method has the following signature:
bool Relate(Geometry geometry1, Geometry geometry2, string relateString)
where relateString is a string representation of a pattern matrix.
If the spatial relationship between the two geometries corresponds to the values as represented in the string, the Relate method returns true. Otherwise, the Relate method returns false.
In the following examples, the blue geometry is Geometry A, and the red geometry is Geometry B.
Recall that Geometry A contains Geometry B if:
- Geometry B is a subset of Geometry A, and
- Their interiors have at least one point in common
Clearly, for Geometry A to contain Geometry B, the set I(A)∩I(B), representing the intersection of the interiors, must not be empty. Also, if no part of B is outside of A, the intersection of the exterior of A with the interior of B and with the boundary of B must be empty. In other words, E(A)∩I(B) must be empty, and E(A)∩B(B) must be empty. There are no other requirements, so the rest of the cells are filled with *.
The pattern matrix for the Contains relationship is:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | T | \* | \* |
| B(A) | \* | \* | \* |
| E(A) | F | F | \* |
The string that you pass to the GeometryEngine.Instance.Relate method is “T*****FF*”.
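A custom Relate call with this pattern should agree with the predefined Contains operator (a sketch; the geometries and spatial reference are arbitrary):

```csharp
using ArcGIS.Core.Geometry;

SpatialReference sr = SpatialReferenceBuilder.CreateSpatialReference(3857);

Polygon outer = PolygonBuilderEx.CreatePolygon(EnvelopeBuilderEx.CreateEnvelope(
  new Coordinate2D(0, 0), new Coordinate2D(10, 10), sr));
Polygon inner = PolygonBuilderEx.CreatePolygon(EnvelopeBuilderEx.CreateEnvelope(
  new Coordinate2D(2, 2), new Coordinate2D(8, 8), sr));

// Custom relate using the Contains pattern matrix...
bool relateContains = GeometryEngine.Instance.Relate(outer, inner, "T*****FF*");
// ...should agree with the predefined operator.
bool contains = GeometryEngine.Instance.Contains(outer, inner);
```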
The power of the GeometryEngine.Instance.Relate method is that you can create custom relationships. Suppose you want to know if A completely contains B.
As before, B must be a subset of the interior of A, so I(A)∩I(B) must not be empty, E(A)∩I(B) must be empty, and E(A)∩B(B) must be empty (or T*****FF*).
Now there is the extra requirement that Geometry A completely contains Geometry B. This means that the boundary of A must not intersect the interior or the boundary of B. In other words, the intersection of the boundary of A with the interior of B and the boundary of B must be empty or B(A)∩I(B) must be empty and B(A)∩B(B) must be empty. Add two more entries to the DE-9IM matrix giving:
The pattern matrix for the Completely Contains relationship is:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | T | \* | \* |
| B(A) | F | F | \* |
| E(A) | F | F | \* |
The string that you pass to the GeometryEngine.Instance.Relate method is now “T**FF*FF*”.
Recall that two geometries, A and B, touch if the intersection of their interiors is empty, but the intersection of A and B is not empty. The first requirement is that the intersection of their interiors, I(A)∩I(B), is empty. Given the first requirement, what does it mean to say that the intersection of A and B is not empty? It means that one of the following must be true:
- Boundary of A intersect interior of B is not empty
- Interior of A intersect boundary of B is not empty
- Boundary of A intersect boundary of B is not empty
The DE-9IM matrix for the first case, where the boundary of A intersects the interior of B, is:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | F | \* | \* |
| B(A) | T | \* | \* |
| E(A) | \* | \* | \* |
The string that you pass to the GeometryEngine.Instance.Relate method is “F**T*****”.
What geometry types does this case apply to? The boundary of A is not empty, so you know that A is not a point or a multipoint. Therefore, A is a polygon or a polyline. B cannot be a polygon because then the interiors would intersect. If A is a polygon, then B must be a point or multipoint. If A is a polyline, then B must be a point, multipoint, or polyline.
The DE-9IM matrix for the second case, where the interior of A intersects the boundary of B, is:

|      | I(B) | B(B) | E(B) |
|------|------|------|------|
| I(A) | F | T | \* |
| B(A) | \* | \* | \* |
| E(A) | \* | \* | \* |
The string that you pass to the GeometryEngine.Instance.Relate method is “FT*******”.
The second case is the inverse of the first: B is a polygon or polyline. If B is a polygon, then A is a point or multipoint. If B is a polyline, then A is a point, multipoint, or polyline.
The DE-9IM matrix for Case 3 (the boundary of A intersects the boundary of B) is:
The string that you pass to the GeometryEngine.Instance.Relate method is “F***T****”.
What geometry types does Case 3 apply to? Neither A nor B has an empty boundary, so both A and B must be a polygon or a polyline.
The relate string for Touches is “F**T*****” or “FT*******” or “F***T****”. Which string you use depends on the geometry types. For example, to find out if Point A touches Polygon B, you pass the string “FT*******” to the GeometryEngine.Instance.Relate method.
From Case 3, you know that the relate string for Polygon A touches Polygon B is “F***T****”. Now you are specifying that not only do their boundaries intersect, they intersect along lines only. In other words, dim(B(A)∩B(B)) = 1.
The DE-9IM matrix for this example is:
The string that you pass to the GeometryEngine.Instance.Relate method is “F***1****”.
In this example, you want to know if Geometry A and Geometry B are disjoint but only if A is a polyline and B is a multipoint.
Clearly, the interiors and boundaries must not intersect. This is the matrix so far:
Notice that the intersection of the interior of A and the exterior of B is equal to A. In other words, I(A)∩E(B) = A and dim(A) = 1. You can fill in another cell of the matrix.
The intersection of the boundary of A and the exterior of B is equal to the boundary of A or B(A)∩E(B) = B(A). The boundary of A is the set of endpoints of the polyline, so dim(B(A)) = 0. Another cell gets filled in.
Looking at the last row in the matrix, you see that E(A)∩I(B) = B, E(A)∩B(B) is the empty set, and E(A)∩E(B) is everything except A and B.
The completed DE-9IM matrix is:
The string that you pass to the GeometryEngine.Instance.Relate method is “FF1FF00F2”.
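To make the pattern strings concrete, the matching rules can be written out in a few lines of standalone C#. This sketch is illustrative only; it is not part of the SDK, and the De9Im type and the -1-for-empty convention are assumptions for the example:

```csharp
using System;

static class De9Im
{
    // matrix holds the nine intersection dimensions in row-major order
    // (II, IB, IE, BI, BB, BE, EI, EB, EE): -1 means the intersection is
    // empty; 0, 1, or 2 is its dimension.
    // pattern is nine characters drawn from { T, F, *, 0, 1, 2 }.
    public static bool Matches(int[] matrix, string pattern)
    {
        if (matrix.Length != 9 || pattern.Length != 9)
            throw new ArgumentException("A DE-9IM matrix has exactly nine entries.");

        for (int i = 0; i < 9; i++)
        {
            switch (pattern[i])
            {
                case '*': break;                                   // anything matches
                case 'T': if (matrix[i] < 0) return false; break;  // non-empty, any dimension
                case 'F': if (matrix[i] >= 0) return false; break; // must be empty
                default:  if (matrix[i] != pattern[i] - '0') return false; break;
            }
        }
        return true;
    }
}
```

For example, the completed matrix from the disjoint polyline/multipoint case, { -1, -1, 1, -1, -1, 0, 0, -1, 2 }, matches the pattern “FF1FF00F2” but not “T**FF*FF*”.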
Acceleration is the process of constructing data structures used during relational operations, such as a spatial index, and pinning them in memory to be reused. This process can have performance benefits when performing relational operations.
Acceleration is only applicable to the relational operations, i.e. Contains, Crosses, Disjoint, Disjoint3D, Equals, Intersects, Relate, Touches, and Within. There is no harm in passing an accelerated geometry to any other operation; the acceleration structures are simply ignored.
If you will perform relational operations that compare the same geometry against many different geometries, consider accelerating that geometry. Accelerating a geometry can be time consuming, so if you are going to use the geometry in a relational operation only once or twice, don't accelerate it. The break-even point between the time to accelerate and the performance gain also depends on the number of vertices in the geometry. If the geometry has a fairly large number of vertices, say more than 10,000, you can see a performance gain from acceleration even if you use it only ten times. If the geometry has a small number of vertices, say fewer than 200, you may not see a performance gain unless you use it more than 100 or 200 times.
To accelerate a geometry, call GeometryEngine.Instance.AccelerateForRelationalOperations. A copy of the original geometry is returned with the proper data structures already created.
You can accelerate more than one geometry, but the relational operation will benefit only if the first argument passed to the function is the accelerated geometry.
Suppose you have a polygon and a list of points, and you want to see which points intersect the polygon. If there are more than just a few points in the list, then this is a situation where acceleration will be a benefit to the performance of the operation.
// Acquire the polygon.
var polygon = ....;
// Acquire the list of points.
var listOfPoints = ...;
// Accelerate the polygon.
var acceleratedPolygon =
    GeometryEngine.Instance.AccelerateForRelationalOperations(polygon);
// Test "Intersects".
List<int> intersectedPoints = new List<int>();
int numPoints = listOfPoints.Count;
for (int i = 0; i < numPoints; i++)
{
    // Note the accelerated polygon is passed in as the first argument.
    bool intersects = GeometryEngine.Instance.Intersects(acceleratedPolygon, listOfPoints[i]);
    if (intersects)
        intersectedPoints.Add(i);
}
There are two Cut methods in the GeometryEngine class. The signatures of the methods are

IReadOnlyList&lt;Geometry&gt; Cut(Multipart multipart, Polyline cutter)

and

IReadOnlyList&lt;Geometry&gt; Cut(Multipart cuttee, Polyline cutter, bool considerTouch)

where the first parameter is the multipart to be cut, and the second parameter is the polyline that will perform the cut. The considerTouch parameter indicates whether to consider a touch event to be a cut. It applies only when the cuttee is a polyline. The Cut method that doesn't have the considerTouch parameter behaves as if considerTouch is set to true.
The goal of the Cut operation is to divide the cuttee into two sets of paths: those to the left of the cutter and those to the right of the cutter. In addition, there can be uncut cuttee paths, coincident cuttee paths that overlap cutter segments, and undefined cuttee paths that intersect the cutter in some way but cannot be classified as left or right of the cutter.
When a polyline/polygon is cut, it is split where it intersects the cutter polyline. For polylines, all left cuts will be grouped together in the first polyline, right cuts and coincident cuts will be grouped together in the second polyline, and each undefined cut and any uncut parts are output as separate polylines. For polygons, all left cuts are grouped together in the first polygon, all right cuts are grouped together in the second polygon, and each undefined cut and any uncut parts are output as separate polygons.
Some parts may be empty or non-existent. If there are no cuts, then an empty list is returned. If the left cut doesn't exist but the right cut does exist, then the first geometry will be empty. Similarly, if the right cut doesn't exist but the left cut does exist, the second geometry will be empty. If both the left and right cuts don't exist, an empty list is returned.
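The left/right classification is relative to the direction of travel along the cutter. As a purely illustrative aside (this is not the SDK's Cut algorithm), the side of a directed segment that a point falls on can be determined with a 2D cross product:

```csharp
using System;

static class SideOfLine
{
    // Returns +1 if point p lies to the left of the directed segment
    // from a to b, -1 if it lies to the right, and 0 if it is collinear.
    public static int Classify(
        (double X, double Y) a, (double X, double Y) b, (double X, double Y) p)
    {
        // 2D cross product of (b - a) and (p - a); its sign gives the side.
        double cross = (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);
        return Math.Sign(cross);
    }
}
```

With a cutter running from (0, 0) to (0, 10), the point (-1, 5) is classified as left and (1, 5) as right.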
In the images shown below, the cuttee is shown in blue and the cutter is shown in red. Start points are shown in green, and end points are red.
In the first set of examples, considerTouch can be true or false, and it won't make a difference because the cutter either crosses or overlaps the cuttee.
- The cutter intersects the cuttee, creating a path that is to the left of it and another that is to the right.
Basic cut decision
- No undefined parts
Two left parts and one right part
- An undefined part exists because the cutter assigns conflicting left/right labels to the same cuttee segment.
A left part, a right part, and an undefined part
- Each cuttee part is treated separately, so two parts that don’t intersect the cutter are uncut even though they share a vertex with the right part.
Three part cuttee that has two uncut parts in addition to the left and right parts
- As a result of treating parts separately, an intersecting segment could have different classifications if it belongs to different parts.
Intersects with conflicting edge classifications
- Cutter parts that overlap in the same direction at an intersection behave as a single cutter segment.
Same segment duplicated
- Cutter parts that overlap in an anti-parallel way at a cuttee intersection assign conflicting labels to the cuttee and produce undefined parts.
Same segment but start and end points are switched
- Cutters that overlap the cuttee or have an endpoint touch with the cuttee can produce different kinds of output, depending on what else happens to the cuttee. Here, an overlapping cutter is the only interaction with the cuttee and causes the adjacent cuttee segments to be classified as undefined. The overlapping segments themselves are always grouped with the right output part.
Cutter only overlaps cuttee
- If additional information is available about what happens to a cuttee segment next to an overlap, that information is used.
Cutter overlaps and crosses cuttee
- Overlapping segments are always put into the right part, so the conflicting classification caused by the non-overlapping parts of the cutter does not matter.
Cutter overlaps and has non-intersecting segments
The following examples show the behavior when considerTouch = true and the cuttee is a polyline.
- A cutter that ends in the interior of a cuttee still divides the cuttee into left and right parts, based on the cutter orientation (moving towards the cuttee or away from it). Note that if considerTouch = false for this example, there are no cuts and an empty list is returned.
Cutter touches cuttee
- A cuttee that stays to one side of a cutter is split into parts but has the same label for both parts. Note that if considerTouch = false for this example, there are no cuts and an empty list is returned.
Cutter touches cuttee
The output of the GeometryEngine.Instance.SimplifyAsFeature method is a “simple” geometry. Similarly, the GeometryEngine.Instance.IsSimpleAsFeature method determines whether the input geometry is “simple”.
A simple geometry is one that is topologically correct so that it can be stored in a geodatabase. Furthermore, some operations may have undefined behavior if the input geometry is not simple.
- An empty geometry is simple.
- A non-empty geometry must have finite x- and y-coordinates to be simple.
- Assuming all x- and y-coordinates are finite:
  - A MapPoint is simple.
  - A Multipoint is simple if the distance between all points is >= 2 * sqrt(2) * xy-tolerance of the spatial reference.
  - A Polyline with no degenerate segments is simple. Given a segment, if the HasZ property is false or the segment is a curve, it is degenerate if its 2D-length is less than or equal to 2 * xy-resolution of the spatial reference. If the HasZ property is true and the segment is a line segment, it is degenerate if its 2D-length is less than or equal to 2 * xy-resolution and its 3D-length is less than or equal to the z-tolerance of the spatial reference. For a quick reference, a segment is degenerate if:
    - Not 3D and 2D-length <= 2 * xy-resolution
    - Is 3D, is a line and 2D-length <= 2 * xy-resolution and 3D-length <= z-tolerance
    - Is 3D, not a line and 2D-length <= 2 * xy-resolution
  - A Polygon is simple if it has the following properties:
    - Exterior rings are clockwise and interior rings (holes) are counterclockwise. The order of the rings doesn't matter.
    - If a ring touches another ring, it does so at a finite number of points.
    - If a ring is self-tangent, it is so at a finite number of points, and there are vertices at those points.
    - All segments have length >= 2 * sqrt(2) * xy-tolerance of the spatial reference.
    - Vertices are either exactly coincident or the distance between them is >= 2 * sqrt(2) * xy-tolerance.
    - If a vertex is not the boundary point of a segment, then the distance between it and any segment is >= sqrt(2) * xy-tolerance.
    - Each ring has at least three non-equal vertices.
    - No empty rings.
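The degenerate-segment quick reference can be captured in a small standalone helper. This is illustrative only; the SDK performs these checks internally, and the method name and parameters here are hypothetical:

```csharp
using System;

static class SegmentChecks
{
    // Mirrors the quick-reference rules above. length2D and length3D are the
    // segment's lengths; xyResolution and zTolerance come from the spatial
    // reference; isLine is false for curves (elliptic arcs, Beziers).
    public static bool IsDegenerate(
        bool hasZ, bool isLine,
        double length2D, double length3D,
        double xyResolution, double zTolerance)
    {
        bool shortIn2D = length2D <= 2 * xyResolution;

        // Not 3D, or a 3D curve: the 2D length alone decides.
        if (!hasZ || !isLine)
            return shortIn2D;

        // A 3D line segment must be short in both 2D and 3D.
        return shortIn2D && length3D <= zTolerance;
    }
}
```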
Let's look at some examples of non-simple vs. simple polygons. The green circles are the vertices of the polygon, and the lavender colored area represents the interior of the polygon.
Non-simple (top row) vs. simple (bottom row):

| Self-intersection | Self-intersection | Dangling segment | Overlapping rings | Dangling segment |
| :---: | :---: | :---: | :---: | :---: |
| No self-intersection | Self-intersection at vertex | No dangling segment | No overlapping rings | No dangling segment |
The GeometryEngine.Instance.SimplifyOgc and GeometryEngine.Instance.IsSimpleOgc methods use the Open Geospatial Consortium (OGC) validation specification. The specification can be downloaded here: Download OGC Specification.
The output of the SimplifyOgc method is an "OGC simple" geometry. Similarly, the IsSimpleOgc method determines whether the input geometry is "OGC simple" and, if it is not, returns the reason why.
The OGC specification is more restrictive than the validation used for SimplifyAsFeature and IsSimpleAsFeature; that is, if a geometry is OGC simple, then it is also simple. The OGC validation for MapPoint and Multipoint is the same as for SimplifyAsFeature and IsSimpleAsFeature.
- A Polyline is OGC simple if it has the following properties:
  - It is simple.
  - For a given path, there can be no intersections between segments, with the exception of the first and last points of the path. The first and last points of a path can coincide, which forms a closed path with no boundary points. Different paths can only intersect at the boundary points.
- A Polygon is OGC simple if it has the following properties:
  - It is simple.
  - Rings do not have any self-intersections or self-tangencies.
  - Rings must be sorted such that each exterior ring is followed by its immediate interior rings (holes).
  - The interior must be a connected set, that is, any two points in the interior can be connected by a path that contains only interior points.
  - An additional condition is also enforced for polygons: exterior rings must be oriented clockwise, and holes counterclockwise. On export to the OGC formats (WKT or WKB), the ring orientation is reversed so that exteriors are counterclockwise and holes are clockwise.
ArcGIS Pro uses the even-odd rule for rendering polygons. The even-odd rule determines the interior of a polygon for drawing purposes only. To apply it, draw a ray from a point on the polygon in any direction towards infinity, then count the number of paths of the polygon that the ray crosses. If this number is odd, the point is in the interior of the polygon; if it is even, the point is in the exterior. Why is this important? It means that the orientation of the rings in a polygon doesn't matter when drawing the polygon, and overlapping rings will be drawn as a hole.
Odd => interior | Even => exterior | Odd => interior | Rendered polygon in ArcGIS Pro
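The even-odd test described above can be sketched in plain C#, independent of the SDK. A ring is given as a closed loop of vertices, and a horizontal ray is cast from the test point toward +x; as noted, the result does not depend on ring orientation:

```csharp
using System;

static class EvenOddRule
{
    // Counts crossings of a ray cast from (x, y) toward +x against the
    // ring's edges; an odd count means the point is in the interior.
    public static bool IsInside((double X, double Y)[] ring, double x, double y)
    {
        bool inside = false;
        for (int i = 0, j = ring.Length - 1; i < ring.Length; j = i++)
        {
            var (xi, yi) = ring[i];
            var (xj, yj) = ring[j];

            // The edge straddles the ray's y, and the crossing point lies
            // to the right of (x, y).
            bool crosses = (yi > y) != (yj > y)
                && x < (xj - xi) * (y - yi) / (yj - yi) + xi;
            if (crosses)
                inside = !inside; // each crossing flips interior/exterior
        }
        return inside;
    }
}
```

For the unit square { (0, 0), (1, 0), (1, 1), (0, 1) }, the point (0.5, 0.5) is interior and (1.5, 0.5) is exterior; reversing the ring's orientation gives the same answers.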
The most common form of datum transformation is the geographic transformation. A geographic transformation is a mathematical operation that converts coordinates from one geographic coordinate system into a different system based on a different datum/spheroid.
When the spheroids are different, your data is unprojected from the projected coordinate system (PCS) A1 into the geographic coordinate system (GCS) A. The latitude and longitude values are then converted from GCS A to GCS B; this requires a geographic, or datum, transformation. The last step is to project GCS B into PCS B2. All of this is done under the covers when you use the Project operation.
In the following code sample, you're using a Mercator projection based on the European reference system as the input. The output coordinate system is the Web Mercator projection based on the World Geodetic System of 1984 (WGS 84). A datum transformation is then used to reproject the European location into a system that uses the WGS 84 reference system.
// Create the UTM spatial reference using the ETRS 1989 (European Terrestrial Reference System) datum.
// This will be the input spatial reference.
SpatialReference etrs_utmZone32N = SpatialReferenceBuilder.CreateSpatialReference(5652);
// Define the Web Mercator spatial reference using the WGS 1984 datum.
// This will be the output spatial reference.
SpatialReference webMercator = SpatialReferenceBuilder.CreateSpatialReference(3857);
// Set up the datum transformation to be used in the projection.
ProjectionTransformation transformation = ProjectionTransformation.Create(etrs_utmZone32N, webMercator);
// Define a location in Germany.
var mapPointInGermany = MapPointBuilderEx.CreateMapPoint(32693081.69, 5364738.25, etrs_utmZone32N);
// Perform the projection of the initial map point.
var projectedPoint = GeometryEngine.Instance.ProjectEx(mapPointInGermany, transformation);
Beginning with ArcGIS Pro 1.4, the Project operator will consider the height component of the coordinates in a geometry being projected. The height component is represented as a z-coordinate. The coordinate system of the height is called a vertical coordinate system (VCS), and it is part of the spatial reference object. To learn more about vertical coordinate systems, visit the ArcGIS help page What are vertical coordinate systems?
A vertical coordinate system can be referenced to two different types of surfaces known as datums: gravity-related (geoidal) or spheroidal (ellipsoidal). A gravity-related VCS may set its zero point through a local mean sea level or a benchmark. A spheroidal VCS defines heights that are referenced to a spheroid of a geographic coordinate system (GCS). To learn more about vertical datums, visit the ArcGIS help pages Vertical datums and Geoid.
In order to project the z-coordinates of a geometry, the input and output spatial references must have a vertical coordinate system in addition to a horizontal coordinate system. If the input and output vertical coordinate systems used in the Project operator differ from one another, a vertical datum transformation is used to transform the z-coordinates. A list of all geographic and vertical coordinate systems can be found on the ArcGIS help page Geographic Coordinate Systems, and a list of all geographic and vertical transformations can be found on the Geographic Transformations page. Another helpful list is that of all projected coordinate systems which can be found on the ArcGIS help page Projected Coordinate Systems.
Some transformations require grid files that are not installed by default and require a separate installation. Download the "ArcGIS Pro Coordinate Systems Data" setup from my.esri.com and choose which grid files to install.
To project the height or z-coordinates of a geometry, both the input and output spatial references must have vertical coordinate systems, and you must call the GeometryEngine.Instance.ProjectEx method and pass a ProjectionTransformation object.
Two classes in the ArcGIS.Core.Geometry namespace support this: HVDatumTransformation and CompositeHVDatumTransformation. A CompositeHVDatumTransformation contains one or more HVDatumTransformation objects. Either an HVDatumTransformation or a CompositeHVDatumTransformation can be used to create a ProjectionTransformation object, which is then passed to the GeometryEngine.Instance.ProjectEx method.
If the required transformation is unknown, call ProjectionTransformation.CreateWithVertical to create the ProjectionTransformation from the input and output spatial references and, optionally, an extent of interest. A default transformation is selected based on the spatial references and extent of interest.
Project a z-enabled polyline, i.e. its HasZ property is true, when the vertical transformation is known.
Suppose there is a polyline in the Pacific Ocean using the horizontal coordinate system WGS 84 and the vertical coordinate system EGM 84. You want to project the polyline to the NAD 83 horizontal coordinate system and NAD 83 PA 11 vertical coordinate system.
The transformations to use are known: the inverse of WGS_1984_To_EGM_1984_Geoid_1 and the forward transformation WGS_1984_(ITRF08)_To_NAD_1983_PA11.
Here is a code sample to perform the projection.
// Create input spatial reference with horizontal GCS_WGS_1984, vertical EGM84_Geoid
SpatialReference inSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5798);
// Create the polyline to project
List<Coordinate3D> coordinates = new List<Coordinate3D>()
{
new Coordinate3D(-160.608606, 21.705238, 3000),
new Coordinate3D(-159.426811, 21.075439, 5243),
new Coordinate3D(-156.151956, 20.765497, 10023),
new Coordinate3D(-155.511224, 19.526748, 13803)
};
Polyline polyline = PolylineBuilderEx.CreatePolyline(coordinates, inSR);
// Create output spatial reference with horizontal GCS_NAD_1983_PA11, vertical NAD_1983_PA11
SpatialReference outSR = SpatialReferenceBuilder.CreateSpatialReference(6322, 115762);
// Create a composite horizontal/vertical transformation
List<HVDatumTransformation> hvTransforms = new List<HVDatumTransformation>()
{
HVDatumTransformation.Create(110008, false),
HVDatumTransformation.Create(108365, true)
};
CompositeHVDatumTransformation compositeHVTransform = CompositeHVDatumTransformation.Create(hvTransforms);
// Create the projection transformation from the composite horizontal/vertical transformation
ProjectionTransformation projectionTransformation =
ProjectionTransformation.CreateEx(inSR, outSR, compositeHVTransform);
// Now project the polyline. Call ProjectEx to transform the z-coordinates as well as the xy-coordinates.
Polyline projectedPolyline =
GeometryEngine.Instance.ProjectEx(polyline, projectionTransformation) as Polyline;
// Print the coordinates of the projected polyline
Console.WriteLine("Input polyline: ");
ReadOnlyPointCollection points = polyline.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Console.WriteLine("Output polyline: ");
points = projectedPolyline.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Output from the code sample is the following:
Input polyline:
(-160.608606, 21.705238, 3000)
(-159.426811, 21.075439, 5243)
(-156.151956, 20.765497, 10023)
(-155.511224, 19.526748, 13803)
Output polyline:
(-160.608588836171, 21.7052329085819, 3004.24183148582)
(-159.426793801716, 21.0754339009134, 5247.3481890005)
(-156.151938777436, 20.7654919748718, 10023.6100020141)
(-155.511206748496, 19.5267429579366, 13804.0194606541)
Project a z-enabled polygon, i.e. its HasZ property is true, letting the software pick the best transformation by using the ProjectionTransformation.CreateWithVertical method.
Suppose there is a polygon in Redlands, CA using the NAD 83 horizontal and vertical coordinate systems. You want to project the polygon to the WGS 84 horizontal and vertical coordinate systems. To perform the projection, call ProjectionTransformation.CreateWithVertical, and the software will pick the best horizontal-vertical transformation based on the spatial references. Optionally, an envelope representing the extent of interest can be supplied to get a more relevant transformation based on the data.
Here is a code sample to perform the projection.
// Create the input spatial reference with horizontal GCS_North_American_1983, vertical NAD_1983
SpatialReference inSR = SpatialReferenceBuilder.CreateSpatialReference(4269, 115702);
// Create the output spatial reference with horizontal GCS_WGS_1984, vertical WGS_1984
SpatialReference outSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 115700);
// Create a projection transformation from the spatial references. The software will pick the best
// horizontal-vertical transformation for the spatial references.
ProjectionTransformation projectionTransformation = ProjectionTransformation.CreateWithVertical(inSR, outSR);
// Check which transformation was chosen
CompositeHVDatumTransformation chvTransformation = projectionTransformation.Transformation as CompositeHVDatumTransformation;
int numTransformations = chvTransformation.Count; // numTransformations = 1
HVDatumTransformation hvTransformation = chvTransformation.Transformations[0];
string name = hvTransformation.Name; // name = "NAD_1983_To_WGS_1984_1"
int wkid = hvTransformation.Wkid; // wkid = 1188
bool isForward = hvTransformation.IsForward; // isForward = true
// Create the polygon to project
Coordinate3D[] coordinates = new Coordinate3D[]
{
new Coordinate3D(-117.19561775, 34.06158972, 100),
new Coordinate3D(-117.19359293, 34.06162097, 90),
new Coordinate3D(-117.19358417, 34.06105325, 110),
new Coordinate3D(-117.19560899, 34.06102201, 80)
};
Polygon polygon = PolygonBuilderEx.CreatePolygon(coordinates, inSR);
// Now project the polygon. Call ProjectEx to transform the z-coordinates as well as the xy-coordinates.
Polygon projectedPolygon = GeometryEngine.Instance.ProjectEx(polygon, projectionTransformation) as Polygon;
// Print the coordinates of the original and projected polygon
Console.WriteLine("Using ProjectionTransformation from spatial references ...");
Console.WriteLine("Input polygon: ");
ReadOnlyPointCollection points = polygon.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Console.WriteLine("");
Console.WriteLine("Output polygon: ");
points = projectedPolygon.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Console.WriteLine("");
// Create an envelope representing the extent of interest to use when creating the ProjectionTransformation
// XMin = -118, YMin = 34, ZMin = 80, XMax = -117, YMax = 35, ZMax = 110
Coordinate3D minCoordinate = new Coordinate3D(-118, 34, 80);
Coordinate3D maxCoordinate = new Coordinate3D(-117, 35, 110);
Envelope extentOfInterest = EnvelopeBuilderEx.CreateEnvelope(minCoordinate.X, minCoordinate.Y, maxCoordinate.X, maxCoordinate.Y, inSR);
// Create the ProjectionTransformation using the extent of interest
projectionTransformation = ProjectionTransformation.CreateWithVertical(inSR, outSR, extentOfInterest);
// Check which transformation was chosen.
chvTransformation = projectionTransformation.Transformation as CompositeHVDatumTransformation;
numTransformations = chvTransformation.Count; // numTransformations = 1
hvTransformation = chvTransformation.Transformations[0];
name = hvTransformation.Name; // name = "WGS_1984_(ITRF00)_To_NAD_1983"
wkid = hvTransformation.Wkid; // wkid = 108190
isForward = hvTransformation.IsForward; // isForward = false
// Now project the polygon. Call ProjectEx to transform the z-coordinates as well as the xy-coordinates.
projectedPolygon = GeometryEngine.Instance.ProjectEx(polygon, projectionTransformation) as Polygon;
// Print the coordinates of the original and projected polygon
Console.WriteLine("Using ProjectionTransformation from spatial references with extent of interest ...");
Console.WriteLine("Input polygon: ");
points = polygon.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Console.WriteLine("");
Console.WriteLine("Output polygon: ");
points = projectedPolygon.Points;
foreach (MapPoint p in points)
Console.WriteLine("(" + p.X + ", " + p.Y + ", " + p.Z + ")");
Output from the code sample is the following:
Using ProjectionTransformation from spatial references ...
Input polygon:
(-117.19561775, 34.06158972, 100)
(-117.19359293, 34.06162097, 90)
(-117.19358417, 34.06105325, 110)
(-117.19560899, 34.06102201, 80)
(-117.19561775, 34.06158972, 100)
Output polygon:
(-117.19561775, 34.061589719124164, 99.99996719442285)
(-117.19359293000001, 34.06162096912416, 89.99996719490018)
(-117.19358417, 34.061053249124164, 109.9999671936136)
(-117.19560899000001, 34.061022009124166, 79.99996719578466)
(-117.19561775, 34.061589719124164, 99.99996719442285)
Using ProjectionTransformation from spatial references with extent of interest ...
Input polygon:
(-117.19561775, 34.06158972, 100)
(-117.19359293, 34.06162097, 90)
(-117.19358417, 34.06105325, 110)
(-117.19560899, 34.06102201, 80)
(-117.19561775, 34.06158972, 100)
Output polygon:
(-117.19562975925349, 34.06159463605143, 99.26227334307153)
(-117.19360493893622, 34.06162588618495, 89.26222252656491)
(-117.19359617882895, 34.06105816608851, 109.26221116227481)
(-117.19562099926547, 34.061026926018705, 79.26226200637458)
(-117.19562975925349, 34.06159463605143, 99.26227334307153)
Project only the z-coordinate of a point when the vertical transformation is unknown.
In this case, let the system choose the best vertical transformation based on the input and output spatial references.
Consider the point (50, 41, 10), that is, x = 50, y = 41, z = 10. The point is in the horizontal coordinate system WGS 84 and vertical coordinate system Baltic (depth). The point is located in the Caspian Sea.
Project only the height of the point, so the output horizontal coordinate system will be the same as the input, WGS 84, and the output vertical coordinate system will be Caspian (height).
Here is a code sample to perform the projection.
// Create the input spatial reference with horizontal GCS_WGS_1984, vertical Baltic_depth
SpatialReference inSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5612);
// Create the point to project
MapPoint point = MapPointBuilderEx.CreateMapPoint(50, 41, 10, inSR);
// Create the output spatial reference with horizontal GCS_WGS_1984, vertical Caspian_height
SpatialReference outSR = SpatialReferenceBuilder.CreateSpatialReference(4326, 5611);
// Create the projection transformation from the spatial references.
ProjectionTransformation projectionTransformation = ProjectionTransformation.CreateWithVertical(inSR, outSR);
// Now project the point. Call ProjectEx to transform the z-coordinate.
MapPoint projectedPoint = GeometryEngine.Instance.ProjectEx(point, projectionTransformation) as MapPoint;
// Print the coordinates of the points
Console.WriteLine("Input point: (" + point.X + ", " + point.Y + ", " + point.Z + ")");
Console.WriteLine("Output point: (" + projectedPoint.X + ", " + projectedPoint.Y + ", " + projectedPoint.Z + ")");
Output from the code sample is the following:
Input point: (50, 41, 10)
Output point: (50, 41, 18)
- All geometry instances are read-only (immutable), so they cannot be changed once created.
- Geometry builder classes allow you to grow or modify a geometry.
- Polygon and Polyline are multipart geometries, and each part contains one or more segments and two or more points.
- A segment collection can be any mix of two-point line segments, elliptic arcs, or cubic Bézier curves.
- Polygon parts are always closed, so the end point of the very last segment of each part or ring coincides with the start point of the first segment of that part or ring.
- GeometryEngine.Instance contains convenience methods for performing spatial (relational and topological) operations.
- To project the height or z-coordinates of a geometry, both the input and output spatial references must have vertical coordinate systems, and GeometryEngine.Instance.ProjectEx must be called with an argument of type ProjectionTransformation. If the transformation is unknown, create the ProjectionTransformation from the input and output spatial references and, possibly, the extent of interest. A transformation will be picked based on the spatial references and extent of interest.