iobjectspy package

Submodules

iobjectspy.analyst module

The analyst module provides commonly used spatial data processing and analysis functions. Users can use the analyst module to perform buffer analysis (create_buffer() ), overlay analysis (overlay() ), Thiessen polygon creation (create_thiessen_polygons() ), topology region construction (topology_build_regions() ), kernel density analysis (kernel_density() ), interpolation analysis (interpolate() ), raster algebra operations (expression_math_analyst() ) and other functions.

In all interfaces of the analyst module, input data parameters that require a dataset (Dataset, DatasetVector, DatasetImage, DatasetGrid) accept a dataset object (Dataset) directly, a combination of datasource alias and dataset name (for example, 'alias/dataset_name' or 'alias\dataset_name'), and also a combination of datasource connection information and dataset name (for example, 'E:/data.udb/dataset_name').

-Supports setting a dataset object

>>> ds = Datasource.open('E:/data.udb')
>>> create_buffer(ds['point'], 10, 10, unit='Meter', out_data='E:/buffer_out.udb')

-Supports setting a combination of datasource alias and dataset name

>>> create_buffer(ds.alias + '/point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')
>>> create_buffer(ds.alias + '\point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')
>>> create_buffer(ds.alias + '|point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')

-Supports setting a combination of UDB file path and dataset name

>>> create_buffer('E:/data.udb/point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')

-Supports setting a combination of datasource connection information and dataset name. Datasource connection information includes DCF files, XML strings, etc. For details, please refer to DatasourceConnectionInfo.make()

>>> create_buffer('E:/data_ds.dcf/point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')

In all interfaces of the analyst module, output data parameters that require a datasource (Datasource) accept a Datasource object or a DatasourceConnectionInfo object. The alias of a datasource in the current workspace is also supported, as are UDB file paths, DCF file paths, etc.

-Supports setting a UDB file path

>>> create_buffer('E:/data.udb/point', 10, 10, unit='Meter', out_data='E:/buffer_out.udb')

-Supports setting a datasource object

>>> ds = Datasource.open('E:/buffer_out.udb')
>>> create_buffer('E:/data.udb/point', 10, 10, unit='Meter', out_data=ds)
>>> ds.close()

-Supports setting a datasource alias

>>> ds_conn = DatasourceConnectionInfo('E:/buffer_out.udb', alias='my_datasource')
>>> create_buffer('E:/data.udb/point', 10, 10, unit='Meter', out_data='my_datasource')
iobjectspy.analyst.create_buffer(input_data, distance_left, distance_right=None, unit=None, end_type=None, segment=24, is_save_attributes=True, is_union_result=False, out_data=None, out_dataset_name='BufferResult', progress=None)

Create a buffer of vector datasets or recordsets.

Buffer analysis is the process of generating one or more regions around spatial objects, using one or more distance values from these objects (called buffer radii) as the radius. A buffer can also be understood as the influence or service range of a spatial object.

The basic objects of buffer analysis are points, lines and areas. SuperMap supports buffer analysis for two-dimensional point, line, and area datasets (or record sets) and network datasets. When performing buffer analysis on a network dataset, the edges are buffered. Buffer analysis can generate single buffers (also called simple buffers) and multiple buffers. The following takes the simple buffer as an example to introduce point, line and area buffers respectively.

  • Point buffer: a point buffer is a circular area generated with the point object as the center and the given buffer distance as the radius. When the buffer distance is large enough, the buffers of two or more point objects may overlap. When merging buffers is selected, the overlapping parts are merged, and the resulting buffer is a complex area object.

    ../_images/PointBuffer.png
  • Line buffer: the buffer of a line is a closed area formed by moving a certain distance to both sides of the line object along its normal direction, joined by the round (or flat) caps formed at the ends of the line. Similarly, when the buffer distance is large enough, the buffers of two or more line objects may overlap. Merging buffers has the same effect as merging point buffers.

    ../_images/LineBuffer.png

    The buffer widths on the two sides of a line object can differ, producing unequal left and right buffers; you can also create a single-sided buffer on only one side of the line object, in which case only flat-ended buffers can be generated.

    ../_images/LineBuffer_1.png
  • Surface buffer

    A surface buffer is generated in a similar way to a line buffer. The difference is that the surface buffer only expands or contracts on one side of the surface boundary: when the buffer radius is positive, the buffer expands outward from the boundary of the area object; when it is negative, it shrinks inward. Similarly, when the buffer distance is large enough, the buffers of two or more area objects may overlap. You can also choose to merge buffers, with the same effect as merging point buffers.

    ../_images/RegionBuffer.png
  • Multiple buffers refer to creating a specified number of buffer rings around the geometric objects according to the given buffer radii. For line objects, single-sided multiple buffers can also be created, but note that multiple buffers are not supported for network datasets.

    ../_images/MultiBuffer.png

Buffer analysis is often used in GIS spatial analysis, and is often combined with overlay analysis to jointly solve practical problems. Buffer analysis has applications in many fields such as agriculture, urban planning, ecological protection, flood prevention and disaster relief, military, geology, and environment.

For example, when widening a road, you can create a buffer zone around the road according to the widening width, overlay the buffer layer with the building layer, and use overlay analysis to find the buildings that fall into the buffer zone and need to be demolished. As another example, to protect the environment and arable land, buffer analysis can be performed on wetland, forest, grassland and arable land, and industrial construction can be prohibited inside the buffer zones.

Description:

  • For area objects, it is best to run a topology check before buffer analysis to rule out self-intersections. A self-intersection means that an area object intersects itself, as shown in the figure; the numbers in the figure represent the node order of the area object.
../_images/buffer_regioninter.png
  • Explanation of “negative radius”

    • If the buffer radius is numeric, only area data supports a negative radius;
    • If the buffer radius is a field or field expression and its value is negative: for point and line data, the absolute value is used; for area data, the absolute value is used if the buffers are merged, otherwise it is treated as a negative radius.
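The rules above can be sketched as a plain-Python helper (a hypothetical function for illustration only; create_buffer applies these rules internally, and this helper is not part of the iobjectspy API):

```python
def effective_radius(value, geometry_type, is_union_result):
    """Illustrate how a negative field-based buffer radius is treated.

    geometry_type: 'point', 'line' or 'region' (hypothetical labels).
    """
    if value >= 0:
        return value
    if geometry_type in ('point', 'line'):
        return abs(value)  # points and lines: the absolute value is taken
    # area data: absolute value only if buffers are merged,
    # otherwise the negative radius shrinks the region inward
    return abs(value) if is_union_result else value

print(effective_radius(-10, 'point', False))   # 10
print(effective_radius(-10, 'region', True))   # 10
print(effective_radius(-10, 'region', False))  # -10
```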
Parameters:
  • input_data (Recordset or DatasetVector or str) – The source vector dataset or record set for which to create buffers. Point, line and area datasets and record sets are supported.
  • distance_left (float or str) – The (left) buffer distance. If it is a string, it indicates the field that stores the (left) buffer distance; that is, each geometric object uses the value stored in the field as its buffer radius when creating the buffer. For line objects it represents the left buffer radius; for point and area objects it represents the buffer radius.
  • distance_right (float or str) – The right buffer distance. If it is a string, it indicates the field that stores the right buffer distance; that is, each line object uses the value stored in the field as its right buffer radius when creating the buffer. This parameter is only valid for line objects.
  • unit (Unit or str) – Buffer distance radius unit, only distance unit is supported, angle and radian unit are not supported.
  • end_type (BufferEndType or str) – The end type of the buffer, used to distinguish whether the ends of a line object's buffer are round or flat. For point or area objects, only round-end buffers are supported.
  • segment (int) – The number of semicircular edge segments, that is, how many line segments are used to simulate a semicircle, must be greater than or equal to 4.
  • is_save_attributes (bool) – Whether to preserve the field attributes of the buffered objects. This parameter is invalid when the result buffers are merged, i.e. it only takes effect when the is_union_result parameter is False.
  • is_union_result (bool) – Whether to merge the buffers, that is, whether to merge all the buffer areas generated by each object of the source data and return. For area objects, the area objects in the source dataset are required to be disjoint.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
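
The segment parameter (the number of edges used to simulate a semicircle) can be illustrated geometrically with a plain-Python sketch of a point buffer ring (a hedged illustration; the actual vertex layout produced by create_buffer may differ):

```python
import math

def point_buffer_ring(x, y, radius, segment=24):
    """Approximate a circular point buffer as a closed vertex ring.

    `segment` is the number of edges per semicircle, so the full
    circle uses 2 * segment edges (mirroring create_buffer's parameter).
    """
    n = 2 * segment
    ring = [(x + radius * math.cos(2 * math.pi * i / n),
             y + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    ring.append(ring[0])  # close the ring
    return ring

ring = point_buffer_ring(0.0, 0.0, 10.0, segment=24)
print(len(ring))  # 49 vertices: 48 edges plus the closing point
```

A larger segment value gives a smoother circle at the cost of more vertices, which is why the interface requires it to be at least 4.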

iobjectspy.analyst.overlay(source_input, overlay_input, overlay_mode, source_retained=None, overlay_retained=None, tolerance=1e-10, out_data=None, out_dataset_name='OverlayOutput', progress=None, output_type=OverlayAnalystOutputType.INPUT, is_support_overlap_in_layer=False)

Overlay analysis performs various overlay operations between two input datasets or record sets, such as clip, erase, union, intersect, identity, XOR and update. Overlay analysis is a very important spatial analysis function in GIS. It refers to the process of generating a new dataset through a series of set operations on two datasets under a unified spatial reference system. Overlay analysis is widely used in resource management, urban construction assessment, land management, agriculture, forestry and animal husbandry, statistics and other fields. Through overlay analysis, spatial data can be processed and analyzed, new spatial geometric information required by the user can be extracted, and the attribute information of the data can be processed.

-One of the two datasets for overlay analysis is called the input dataset (the first dataset in SuperMap GIS); its type can be point, line, area, etc. The other is called the overlay dataset (the second dataset in SuperMap GIS) and is generally of polygon type.
-Note that the polygon dataset or record set itself should avoid containing overlapping areas, otherwise the overlay analysis results may be wrong.
-The data for overlay analysis must share the same geographic reference, including input data and result data.
-When the amount of data for overlay analysis is large, a spatial index should be created on the result dataset to improve the display speed of the data.
-None of the overlay analysis results consider the system fields of the dataset.
Requires attention:

-When source_input is a dataset, overlay_input can be a dataset, a record set, or a list of region geometry objects.
-When source_input is a record set, overlay_input can be a dataset, a record set, or a list of region geometry objects.
-When source_input is a list of geometry objects, overlay_input can be a dataset, a record set, or a list of region geometry objects.
-When source_input is a list of geometry objects, valid result datasource information must be set.
Parameters:
  • source_input (DatasetVector or Recordset or list[Geometry]) – The source data of the overlay analysis, which can be a dataset, a record set, or a list of geometric objects. When the overlay analysis mode is update, xor and union, the source data only supports surface data. When the overlay analysis mode is clip, intersect, erase and identity, the source data supports point, line and surface.
  • overlay_input (DatasetVector or Recordset or list[Geometry]) – The overlay data involved in the calculation must be surface type data, which can be a dataset, a record set, or a list of geometric objects
  • overlay_mode (OverlayMode or str) – overlay analysis mode
  • source_retained (list[str] or str) – The fields to retain from the source dataset or record set. When source_retained is a str, multiple fields can be separated with ',', for example "field1,field2,field3"
  • overlay_retained (list[str] or str) – The fields to retain from the overlay data involved in the calculation. When overlay_retained is a str, multiple fields can be separated with ',', for example "field1,field2,field3". Invalid for CLIP and ERASE
  • tolerance (float) – Tolerance value of overlay analysis
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result data is saved. If it is empty, the result dataset is saved to the datasource where the overlay analysis source dataset is located.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
  • output_type (str or OverlayAnalystOutputType) – The type of the result dataset. For region intersection, you can choose to return a point dataset.
  • is_support_overlap_in_layer (bool) – Whether to support overlapping area objects within the dataset. The default is False, that is, overlaps are not supported. If the area dataset contains overlaps, the overlay analysis result may contain errors. If set to True, a new algorithm is used for the calculation to support overlaps within the area dataset.
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.multilayer_overlay(inputs, overlay_mode, output_attribute_type='ONLYATTRIBUTES', tolerance=1e-10, out_data=None, out_dataset_name='OverlayOutput', progress=None)

Multi-layer overlay analysis supports overlay analysis of multiple data sets or multiple record sets.

>>> ds = open_datasource('E:/data.udb')
>>> input_dts = [ds['dltb_2017'], ds['dltb_2018'], ds['dltb_2019']]
>>> result_dt = multilayer_overlay(input_dts, 'intersect', 'OnlyID', 1.0e-7)
>>> assert result_dt is not None
Parameters:
  • inputs (list[DatasetVector] or list[Recordset] or list[list[Geometry]]) – datasets or record sets participating in the overlay analysis
  • overlay_mode (OverlayMode or str) – overlay analysis mode; only intersect (OverlayMode.INTERSECT) and union (OverlayMode.UNION) are supported
  • output_attribute_type (OverlayOutputAttributeType or str) – return type of the field attributes of the multi-layer overlay analysis result
  • tolerance (float) – node tolerance
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result data is saved. If it is empty, the result dataset is saved to the datasource where the overlay analysis source dataset is located. When the inputs are all lists of geometry objects, the datasource for saving the result data must be set.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
iobjectspy.analyst.create_random_points(dataset_or_geo, random_number, min_distance=None, clip_bounds=None, out_data=None, out_dataset_name=None, progress=None)

Randomly generate points within a geometric object. When generating random points, you can specify the number of random points and the minimum distance between them. When both the number of random points and the minimum distance are specified, the minimum distance is satisfied first; that is, the distance between the generated random points must be greater than the minimum distance, so the number of generated points may be less than the specified number.

>>> ds = open_datasource('E:/data.udb')
>>> dt = ds['dltb']
>>> polygon = dt.get_geometries('SmID == 1')[0]
>>> points = create_random_points(polygon, 10)
>>> print(len(points))
10
>>> points = create_random_points(polygon, 10, 1500)
>>> print(len(points))
9
>>> assert compute_distance(points[0], points[1]) > 1500
>>>
>>> random_dataset = create_random_points(dt, 10, 1500, None, ds, 'random_points', None)
>>> print(random_dataset.type)
'Point'
Parameters:
  • dataset_or_geo (GeoRegion or GeoLine or Rectangle or DatasetVector or str) – The geometric object or dataset in which to create random points. When a single geometric object is specified, line and area geometric objects are supported. When a dataset is specified, point, line and area datasets are supported.
  • random_number (str or float) – The number of random points, or the name of the field that stores the number of random points. A field name can only be specified when generating random points in a dataset.
  • min_distance (str or float) – The minimum distance between random points, or the name of the field that stores the minimum distance. When the value is None or 0, the generated random points do not consider the distance limit between two points. When it is greater than 0, the distance between any two random points must be greater than the specified distance; in this case, the number of generated random points may be less than the specified number. A field name can only be specified when generating random points in a dataset.
  • clip_bounds (Rectangle) – The range in which to generate random points, which can be None. When it is None, random points are generated in the entire dataset or geometric object.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

The list of random points, or the dataset where the random points are stored. When random points are generated in a geometric object, list[Point2D] is returned. When random points are generated in a dataset, a dataset is returned.

Return type:

list[Point2D] or DatasetVector or str
iobjectspy.analyst.regularize_building_footprint(dataset_or_geo, offset_distance, offset_distance_unit=None, regularize_method=RegularizeMethod.ANYANGLE, min_area=0.0, min_hole_area=0.0, prj=None, is_attribute_retained=False, out_data=None, out_dataset_name=None, progress=None)

Perform regularization processing on the area type building object to generate a regularized object covering the original area object.

>>> ds = open_datasource('E:/data.udb')
>>> dt = ds['building']
>>> polygon = dt.get_geometries('SmID == 1')[0]

Regularize the geometric objects, the offset distance is 0.1, the unit is the data set coordinate system unit:

>>> regularize_result = regularize_building_footprint(polygon, 0.1, None, RegularizeMethod.ANYANGLE,
...                                                   prj=dt.prj_coordsys)
>>> assert regularize_result.type == GeometryType.GEOREGION

Regularize the data set:

>>> regularize_dt = regularize_building_footprint(dt, 0.1, 'meter', RegularizeMethod.RIGHTANGLES, min_area=10.0,
...                                               min_hole_area=2.0, is_attribute_retained=False)
>>> assert regularize_dt.type == DatasetType.REGION

Parameters:
  • dataset_or_geo (GeoRegion or Rectangle or DatasetVector or str) – the building area object or area dataset to be processed
  • offset_distance (float) – The maximum distance by which a regularized object may be offset from the boundary of its original object. Together with offset_distance_unit, the value can be given in the linear unit of the data's coordinate system, or in a distance unit.
  • offset_distance_unit (Unit or str) – The unit of the regularization offset distance. The default is None, that is, the unit of the data is used.
  • regularize_method (RegularizeMethod or str) – regularize method
  • min_area (float) – The minimum area of a regularized object. Objects smaller than this area will be deleted. It is valid when the value is greater than 0. When the specified coordinate system or the coordinate system of the dataset is projected or geographic (latitude/longitude), the area unit is square meters. When the coordinate system is None or a planar coordinate system, the area unit corresponds to the unit of the data.
  • min_hole_area (float) – The minimum area of a hole in a regularized object. Holes smaller than this area will be deleted. It is valid when the value is greater than 0. When the specified coordinate system or the coordinate system of the dataset is projected or geographic (latitude/longitude), the area unit is square meters. When the coordinate system is None or a planar coordinate system, the area unit corresponds to the unit of the data.
  • prj (PrjCoordSys or str) – The coordinate system of the building area object. It is valid only when the input is a geometric object.
  • is_attribute_retained (bool) – Whether to save the attribute field values of the original objects. Only valid when the input is a dataset.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

The regularized area object or area dataset. If the input is an area object, the result is also an area object. If the input is an area dataset, the result is also an area dataset, and a status field is generated in the result dataset: a value of 0 means regularization failed and the stored object is the original object; a value of 1 means regularization succeeded and the stored object is the regularized object.

Return type:

DatasetVector or GeoRegion
iobjectspy.analyst.tabulate_area(class_dataset, zone_dataset, class_field=None, zone_field=None, resolution=None, out_data=None, out_dataset_name=None, progress=None)

Tabulate areas: compute the area of each category within each zone and output an attribute table, making it easy to view the area summary of each category in each zone. In the resulting attribute table:

-Each zone in the zone dataset has one record
-Each unique value in the category dataset has a field
-Each record stores the area of each category in the corresponding zone

../_images/TabulateArea.png
Parameters:
  • class_dataset (DatasetGrid or DatasetVector or str) – The category dataset for which areas are tabulated. Raster, point, line and area datasets are supported; a raster dataset is recommended. If a point or line dataset is used, the area that intersects the features is output.
  • zone_dataset (DatasetGrid or DatasetVector or str) – The zone dataset, which supports raster and point, line and area datasets. A zone is defined as all areas with the same value in the input, and the areas do not need to be connected. A raster dataset is recommended. If a vector dataset is used, it will be converted internally using vector-to-raster conversion.
  • class_field (str) – The category field for area statistics. When class_dataset is a DatasetVector, a valid category field must be specified.
  • zone_field (str) – The zone value field. When zone_dataset is a DatasetVector, a valid zone value field must be specified.
  • resolution (float) – The rasterization resolution used when vector datasets are converted internally
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the result attribute table dataset
Return type:

DatasetVector
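
The shape of the result table can be illustrated with a plain-Python cross tabulation over toy raster values (a conceptual sketch with a hypothetical cell area, not the iobjectspy implementation):

```python
from collections import defaultdict

# Toy aligned rasters: each cell has a zone value and a class value,
# and every cell covers the same (hypothetical) area of 100 square meters.
zone_cells  = [1, 1, 2, 2, 2, 1]
class_cells = ['forest', 'water', 'forest', 'forest', 'water', 'forest']
CELL_AREA = 100.0

# One record per zone, one field per unique class value.
table = defaultdict(lambda: defaultdict(float))
for zone, cls in zip(zone_cells, class_cells):
    table[zone][cls] += CELL_AREA

print(dict(table[1]))  # {'forest': 200.0, 'water': 100.0}
print(dict(table[2]))  # {'forest': 200.0, 'water': 100.0}
```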

iobjectspy.analyst.auto_compute_project_points(point_input, line_input, max_distance, out_data=None, out_dataset_name=None, progress=None)

Automatically compute the foot of the perpendicular from each point to the lines.

>>> ds = open_datasource('E:/data.udb')
>>> auto_compute_project_points(ds['point'], ds['line'], 10.0)
Parameters:
  • point_input (DatasetVector or Recordset) – input point dataset or record set
  • line_input (DatasetVector or Recordset) – input line dataset or record set
  • max_distance (float) – Maximum query distance, in the same unit as the dataset's coordinate system. When the value is less than 0, the search distance is not limited.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the feet of the perpendiculars from the points to the lines

Return type:

DatasetVector or str
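
The geometry behind the perpendicular foot can be sketched in plain Python as a projection of a point onto a line segment (an illustrative helper, not the iobjectspy implementation):

```python
def project_point_to_segment(p, a, b):
    """Return the foot of the perpendicular from point p to segment a-b.

    The projection parameter t is clamped to [0, 1] so the foot
    always lies on the segment itself.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return a  # degenerate segment: the foot is the point itself
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

print(project_point_to_segment((5, 3), (0, 0), (10, 0)))  # (5.0, 0.0)
```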
iobjectspy.analyst.compute_natural_breaks(input_dataset_or_values, number_zones, value_field=None)

Calculate natural break points. Jenks natural breaks is a statistical method that groups and classifies values according to their statistical distribution. It maximizes the differences between classes, that is, it makes the variance within each group as small as possible and the variance between groups as large as possible.

The features are divided into multiple levels or categories, and the boundaries between these levels or categories are set at locations where the data values differ relatively greatly.

Parameters:
  • input_dataset_or_values (DatasetGrid or DatasetVector or list[float]) – The dataset or list of floating-point numbers to be analyzed. Raster datasets and vector datasets are supported.
  • number_zones (int) – number of groups
  • value_field (str) – The name of the field used for natural break classification. When the input dataset is a vector dataset, a valid field name must be set.
Returns:

a list of natural break points; the value of each break point is the maximum value of its group

Return type:

list[float]
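
The grouping criterion can be illustrated with a brute-force sketch for small inputs, minimizing the total within-group variance over all possible splits (real Jenks implementations use dynamic programming; this is an illustration only, not the iobjectspy implementation):

```python
from itertools import combinations

def natural_breaks(values, number_zones):
    """Brute-force Jenks-style breaks for small inputs.

    Returns one break per group, each break being the maximum value
    of its group (mirroring compute_natural_breaks' return contract).
    """
    data = sorted(values)
    n = len(data)

    def ssd(group):  # sum of squared deviations within a group
        mean = sum(group) / len(group)
        return sum((v - mean) ** 2 for v in group)

    best = None
    # choose number_zones - 1 split positions between the sorted values
    for splits in combinations(range(1, n), number_zones - 1):
        bounds = (0,) + splits + (n,)
        groups = [data[bounds[i]:bounds[i + 1]] for i in range(number_zones)]
        cost = sum(ssd(g) for g in groups)
        if best is None or cost < best[0]:
            best = (cost, [g[-1] for g in groups])
    return best[1]

print(natural_breaks([1, 2, 3, 10, 11, 12, 50, 51], 3))  # [3, 12, 51]
```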
iobjectspy.analyst.erase_and_replace_raster(input_data, replace_region, replace_value, out_data=None, out_dataset_name=None, progress=None)

Erase and fill the raster or image data set, that is, you can modify the raster value of the specified area.

>>> region = Rectangle(875.5, 861.2, 1172.6, 520.9)
>>> result = erase_and_replace_raster(data_dir + 'example_data.udbx/seaport', region, (43, 43, 43))

Process raster data:

>>> region = Rectangle(107.352104894652, 30.1447395778174, 107.979276445055, 29.6558796240814)
>>> result = erase_and_replace_raster(data_dir + 'example_data.udbx/DEM', region, 100)
Parameters:
  • input_data (DatasetImage or DatasetGrid or str) – the raster or image dataset to be erased
  • replace_region (Rectangle or GeoRegion) – the erase region
  • replace_value (float or int or tuple[int,int,int]) – The replacement value for the erased area; replace_value replaces the raster values in the specified erase region.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or DatasetImage or str
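
The effect on raster values can be sketched with a plain-Python grid (a toy stand-in for the dataset operation, with a hypothetical row/column region in place of the real Rectangle/GeoRegion; not the iobjectspy implementation):

```python
def erase_and_replace(grid, region, value):
    """Replace every cell whose (row, col) falls inside the region.

    `region` is (min_row, min_col, max_row, max_col), a toy stand-in
    for the Rectangle or GeoRegion accepted by the real interface.
    """
    r0, c0, r1, c1 = region
    return [[value if r0 <= r <= r1 and c0 <= c <= c1 else cell
             for c, cell in enumerate(row)]
            for r, row in enumerate(grid)]

dem = [[5, 5, 5],
       [5, 9, 5],
       [5, 5, 5]]
print(erase_and_replace(dem, (1, 1, 1, 1), 100))
# [[5, 5, 5], [5, 100, 5], [5, 5, 5]]
```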
iobjectspy.analyst.dissolve(input_data, dissolve_type, dissolve_fields, field_stats=None, attr_filter=None, is_null_value_able=True, is_preprocess=True, tolerance=1e-10, out_data=None, out_dataset_name='DissolveResult', progress=None)

Dissolve refers to combining objects with the same dissolve field values into a simple object or a complex object. It applies to line objects and area objects. Sub-objects are the basic objects that make up simple objects and complex objects: a simple object consists of one sub-object, that is, the simple object itself; a complex object is composed of two or more sub-objects of the same type.

Parameters:
  • input_data (DatasetVector or str) – The vector dataset to be fused. Must be a line dataset or a polygon dataset.
  • dissolve_type (DissolveType or str) – dissolve type
  • dissolve_fields (list[str] or str) – Dissolve fields. Only records with the same values in the dissolve fields are dissolved. When dissolve_fields is a str, multiple fields can be separated with ',', for example "field1,field2,field3"
  • field_stats (list[tuple[str,StatisticsType]] or list[tuple[str,str]] or str) – The names of the statistics fields and the corresponding statistics types. field_stats is a list in which each element is a tuple; the first element of the tuple is the field to be counted, and the second element is the statistics type. When field_stats is a str, multiple fields can be separated with ',', for example "field1:SUM,field2:MAX,field3:MIN"
  • attr_filter (str) – The filter expression of the object when the dataset is fused
  • tolerance (float) – fusion tolerance
  • is_null_value_able (bool) – Whether to deal with objects whose fusion field value is null
  • is_preprocess (bool) – Whether to perform topology preprocess
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result data is saved. If it is empty, the result dataset is saved to the datasource where the input dataset is located.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

>>> result = dissolve('E:/data.udb/zones', 'SINGLE', 'SmUserID', 'Area:SUM', tolerance=0.000001, out_data='E:/dissolve_out.udb')
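
The dissolve-field and field_stats semantics can be illustrated with plain-Python grouping over toy records (a conceptual sketch of the 'Area:SUM' statistic, not the iobjectspy implementation):

```python
from collections import defaultdict

# Toy records: (dissolve field value, area value to be summarized).
records = [('A', 10.0), ('A', 15.0), ('B', 7.0), ('A', 5.0), ('B', 3.0)]

# Records sharing the same dissolve field value are merged into one
# output object; an 'Area:SUM' statistic totals their area field.
dissolved = defaultdict(float)
for zone, area in records:
    dissolved[zone] += area

print(sorted(dissolved.items()))  # [('A', 30.0), ('B', 10.0)]
```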
iobjectspy.analyst.aggregate_points(input_data, min_pile_point, distance, unit=None, class_field=None, out_data=None, out_dataset_name='AggregateResult', progress=None)

Cluster a point dataset using a density clustering algorithm and return the cluster categories or the regions formed by the points of each cluster. The spatial positions of the point set are clustered with the density clustering method DBSCAN, which divides regions of sufficiently high density into clusters and can find clusters of arbitrary shape in spatial data containing noise. It defines a cluster as the largest collection of density-connected points. DBSCAN uses a threshold e and MinPts to control the generation of clusters. The area within radius e of a given object is called the e-neighborhood of the object. If the e-neighborhood of an object contains at least MinPts objects, the object is called a core object. Given a set of objects D, if P is in the e-neighborhood of Q and Q is a core object, we say that object P is directly density-reachable from object Q. DBSCAN looks for clusters by checking the e-neighborhood of each point in the data. If the e-neighborhood of a point P contains more than MinPts points, a new cluster with P as the core object is created. DBSCAN then repeatedly looks for objects that are directly density-reachable from these core objects and adds them to the cluster, until no new points can be added.

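The clustering described above can be sketched as a minimal plain-Python DBSCAN over 2D points (an illustration of the e-neighborhood/MinPts mechanism, not the iobjectspy implementation):

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):  # the e-neighborhood of point i (includes i itself)
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:       # not a core object: mark as noise
            labels[i] = -1
            continue
        cluster += 1                   # start a new cluster at core object i
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                   # grow by direct density reachability
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster    # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # j is also a core object: expand
                queue.extend(j_seeds)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=1.5, min_pts=2))  # [0, 0, 0, 1, 1, -1]
```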
Parameters:
  • input_data (DatasetVector or str) – input point dataset
  • min_pile_point (int) – The threshold for the number of density clustering points, which must be greater than or equal to 2. The larger the threshold value, the harsher the conditions for clustering into a cluster.
  • distance (float) – The radius of density clustering.
  • unit (Unit or str) – The unit of the density cluster radius.
  • class_field (str) – The field in the input point dataset used to store the result of density clustering. If it is not empty, it must be a valid field name in the point dataset. The field type is required to be INT16, INT32 or INT64. If the field name is valid but does not exist, an INT32 field will be created. If the parameter is valid, the cluster category will be saved in this field.
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource information. The result datasource and class_field cannot both be empty. If the result datasource is valid, result region objects will be generated.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The result dataset or the name of the dataset. If the input result datasource is empty, a boolean value will be returned. True means clustering is successful, False means clustering fails.

Return type:

DatasetVector or str or bool

>>> result = aggregate_points('E:/data.udb/point', 4, 100, 'Meter', 'SmUserID', out_data='E:/aggregate_out.udb')
iobjectspy.analyst.smooth_vector(input_data, smoothness, out_data=None, out_dataset_name=None, progress=None, is_save_topology=False)

Smooth vector datasets. Line datasets, region datasets and network datasets are supported.

  • Smooth purpose

    When a polyline or polygon boundary has too many line segments, it may misrepresent the original feature, hinder further processing or analysis, or give an unsatisfactory display or printing result, so the data needs to be simplified. Common simplification methods are resampling (resample_vector()) and smoothing. Smoothing replaces the original polyline with curves or straight segments by adding nodes. Note that after smoothing, the length of a polyline usually becomes shorter and the direction of its segments can change significantly, although the relative positions of its two endpoints do not change; similarly, the area of a region object usually becomes smaller after smoothing.

  • Setting of smoothing method and smoothing coefficient

    This method uses the B-spline method to smooth the vector dataset; for an introduction to the B-spline method, please refer to the SmoothMethod class. The smoothness coefficient (the smoothness parameter of this method) controls the degree of smoothing: the larger the coefficient, the smoother the result data. The recommended range for the smoothness coefficient is [2, 10]. This method supports smoothing line datasets, region datasets and network datasets.

    • Smoothing effects of different smoothing coefficients on a line dataset:
    ../_images/Smooth_1.png
    • Smoothing effects of different smoothing coefficients on a region dataset:
    ../_images/Smooth_2.png
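iobjectspy smooths with B-splines (see SmoothMethod). As a self-contained stand-in that shows the same idea of adding nodes to round corners, here is a minimal Chaikin corner-cutting sketch; this is an assumption for illustration, not the library's algorithm. As noted above, the smoothed polyline keeps its endpoints but its length shrinks:

```python
def chaikin_smooth(points, iterations=2):
    """Corner-cutting smoothing sketch: each pass replaces every interior
    corner with two points at 1/4 and 3/4 of the adjoining segments.
    Endpoints are kept, so the two ends of the polyline do not move."""
    for _ in range(iterations):
        out = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(points[-1])
        points = out
    return points

line = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
smoothed = chaikin_smooth(line, iterations=2)
```

Each pass doubles the node count (the analogue of a larger smoothness coefficient) and pulls the apex down, so the result is smoother and shorter than the input.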
Parameters:
  • input_data (DatasetVector or str) – the dataset to be smoothed; line datasets, region datasets and network datasets are supported
  • smoothness (int) – The specified smoothness coefficient. A value greater than or equal to 2 is valid. The larger the value, the more the number of nodes on the boundary of the line object or area object, and the smoother it will be. The recommended value range is [2,10].
  • out_data (Datasource or DatasourceConnectionInfo or str) – the result datasource. If this parameter is empty, the original data is smoothed directly, that is, the original data is changed. If it is not empty, the original data is first copied into this datasource and the copy is then smoothed. The datasource pointed to by out_data may be the same as the datasource containing the source dataset.
  • out_dataset_name (str) – The name of the result dataset. It is valid only when out_data is not empty.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
  • is_save_topology (bool) – whether to save the object topology
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.resample_vector(input_data, distance, resample_type=VectorResampleType.RTBEND, is_preprocess=True, tolerance=1e-10, is_save_small_geometry=False, out_data=None, out_dataset_name=None, progress=None, is_save_topology=False)

Resample vector datasets; line datasets, region datasets and network datasets are supported. Vector data resampling removes some nodes according to certain rules in order to simplify the data (as shown in the figure below). The result may differ depending on the resampling method used. SuperMap provides two resampling methods; please refer to :py:class:.VectorResampleType

../_images/VectorResample.png

This method can resample line datasets, region datasets and network datasets. Resampling a region dataset is essentially resampling the boundaries of its region objects. For a boundary shared by several region objects, if topology preprocessing is performed, the shared boundary is resampled only once for one of the polygons, and the shared boundaries of the other polygons are adjusted to fit that result, so no gaps appear.

Note: When the resampling tolerance is too large, the correctness of the data may be affected, such as the intersection of two polygons at the common boundary.
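The Douglas(-Peucker) algorithm offered by VectorResampleType can be sketched as follows; this is an illustrative implementation, not iobjectspy's:

```python
def douglas_peucker(points, tolerance):
    """Douglas-Peucker line simplification sketch: find the point farthest
    from the chord between the endpoints; recurse on both halves if it
    exceeds the tolerance, otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance from each interior point to the chord
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / length
             for x, y in points[1:-1]]
    index = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[index - 1] <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right   # drop the duplicated split point

line = [(0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 5.0), (4.0, 6.0), (5.0, 7.0)]
simplified = douglas_peucker(line, tolerance=1.0)
```

A larger tolerance removes more nodes, matching the note above that a larger resampling tolerance gives a more simplified result (at some risk to data correctness).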

Parameters:
  • input_data (DatasetVector or str) – the vector dataset to be resampled; line datasets, region datasets and network datasets are supported
  • distance (float) – Set the resampling distance. The unit is the same as the dataset coordinate system unit. The resampling distance can be set to a floating-point value greater than 0. But if the set value is less than the default value, the default value will be used. The larger the set re-sampling tolerance, the more simplified the sampling result data
  • resample_type (VectorResampleType or str) – resampling method. Two methods are supported, RTBEND and the Douglas algorithm; for details, please refer to :py:class:.VectorResampleType. RTBEND is used by default.
  • is_preprocess (bool) – whether to perform topology preprocessing. Valid only for region datasets. If the dataset is not topologically preprocessed, gaps may appear, unless the node coordinates of the boundary shared by two adjacent regions are guaranteed to be exactly the same.
  • tolerance (float) – Node capture tolerance when performing topology preprocessing, the unit is the same as the dataset unit.
  • is_save_small_geometry (bool) – whether to keep small objects. A small object is an object whose area is 0; small objects may be generated during resampling. True means keep small objects, False means discard them.
  • out_data (Datasource or DatasourceConnectionInfo or str) – the result datasource. If this parameter is empty, the original data is resampled directly, that is, the original data is changed. If it is not empty, the original data is first copied into this datasource and the copy is then resampled. The datasource pointed to by out_data may be the same as the datasource containing the source dataset.
  • out_dataset_name (str) – The name of the result dataset. It is valid only when out_data is not empty.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
  • is_save_topology (bool) – whether to save the object topology
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.create_thiessen_polygons(input_data, clip_region, field_stats=None, out_data=None, out_dataset_name=None, progress=None)

Create Thiessen polygons. The Dutch climatologist A. H. Thiessen proposed a method of computing average rainfall from the rainfall of discretely distributed weather stations: connect all adjacent weather stations into triangles and draw the perpendicular bisectors of the sides of these triangles, so that the perpendicular bisectors around each weather station enclose a polygon. The rainfall intensity of the single weather station contained in a polygon is used to represent the rainfall intensity of that polygonal area, and such a polygon is called a Thiessen polygon.

Characteristics of Thiessen polygons:

- Each Thiessen polygon contains exactly one discrete point.
- Every location inside a Thiessen polygon is closer to its discrete point than to any other discrete point.
- A point on the edge of a Thiessen polygon is equidistant from the discrete points on either side of that edge.
- Thiessen polygons can be used for qualitative analysis, statistical analysis, proximity analysis, and so on. For example, the attributes of a discrete point can describe the attributes of its Thiessen polygon area, and the data of a discrete point can be used to compute data for its Thiessen polygon area.
- Adjacency between discrete points can be read directly from the Thiessen polygons: if a Thiessen polygon has n sides, its point is adjacent to n discrete points. When a data point falls inside a Thiessen polygon, it is closest to that polygon's discrete point, and no distance computation is needed.

Proximity analysis is one of the most basic analysis functions in GIS and is used to discover proximity relationships between things. The proximity analysis provided here is the construction of Thiessen polygons: Thiessen polygons are built from the supplied point data, which yields the neighbouring relationships between the points. Thiessen polygons assign the area surrounding each point of the point set to that point, so that any location inside the area owned by a point (that is, the Thiessen polygon associated with the point) is closer to that point than to any other point. The polygons built here also satisfy all the properties of Thiessen polygons described above.

How are Thiessen polygons created? The following steps describe the process:

- Scan the point data from left to right and top to bottom. If the distance between a point and a previously scanned point is less than the given proximity tolerance, the analysis ignores that point.
- Build a triangulated irregular network from all points that pass the scan, that is, construct a Delaunay triangulation.
- Draw the perpendicular bisector of each triangle side. These bisectors form the sides of the Thiessen polygons, and their intersections are the vertices of the polygons.
- Each point used to create the Thiessen polygons becomes the anchor point of its polygon.
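The defining property of the result, that every location belongs to its nearest discrete point, can be illustrated with a small nearest-seed assignment over a grid. This is a discrete stand-in for the construction, not how create_thiessen_polygons works internally:

```python
def nearest_seed(point, seeds):
    """Index of the seed point closest to `point` (by squared distance)."""
    px, py = point
    return min(range(len(seeds)),
               key=lambda i: (seeds[i][0] - px) ** 2 + (seeds[i][1] - py) ** 2)

seeds = [(1.0, 1.0), (4.0, 1.0), (2.5, 4.0)]

# Assign every cell of a coarse grid to its nearest seed; the three
# regions that emerge are a discrete picture of the Thiessen polygons.
grid = {(x, y): nearest_seed((x + 0.5, y + 0.5), seeds)
        for x in range(5) for y in range(5)}
```

Cells near the lower-left belong to the first seed, the lower-right to the second, and the top to the third, exactly as the corresponding Thiessen polygons would partition the plane.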
Parameters:
  • input_data (DatasetVector or Recordset or list[Point2D]) – The input point data, which can be a point dataset, a point record set or a list of Point2D
  • clip_region (GeoRegion) – the region used to clip the result data. This parameter can be empty; if it is empty, the result dataset is not clipped
  • field_stats (list[tuple(str,StatisticsType)] or list[tuple(str,str)] or str) – the names of the fields to be counted and the corresponding statistic types. The input is a list in which each element is a tuple of size 2: the first element is the name of the field to be counted and the second element is the statistic type. When field_stats is a str, multiple fields can be separated with ',', for example 'field1:SUM,field2:MAX,field3:MIN'
  • out_data (Datasource or DatasourceConnectionInfo or str) – the datasource where the result region objects are stored. If out_data is empty, the generated Thiessen polygon region geometry objects are returned directly
  • out_dataset_name (str) – The name of the result dataset. It is valid only when out_data is not empty.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

If out_data is empty, list[GeoRegion] will be returned, otherwise the result dataset or dataset name will be returned.

Return type:

DatasetVector or str or list[GeoRegion]

iobjectspy.analyst.summary_points(input_data, radius, unit=None, stats=None, is_random_save_point=False, is_save_attrs=False, out_data=None, out_dataset_name=None, progress=None)

Thin the point dataset by a specified distance, that is, use one point to represent all points within that distance. This method supports different units, allows choosing how the representative point is selected, and can compute statistics over the original points that each representative point stands for. Two new fields, SourceObjID and StatisticsObjNum, are created in the result dataset: SourceObjID stores the SmID, in the original dataset, of the point object kept after thinning, and StatisticsObjNum stores the number of points represented by the current point, including the thinned-out points and the point itself.
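A minimal pure-Python sketch of this kind of thinning, illustrating the radius and representative-point idea (the is_random_save_point=False case); this is not the summary_points implementation:

```python
from math import hypot

def thin_points(points, radius):
    """Greedy point-thinning sketch: group the points within `radius` of a
    seed point, then keep the member whose total distance to the rest of
    the group is smallest. Returns (kept point, represented count) pairs,
    mirroring the StatisticsObjNum idea described above."""
    remaining = list(range(len(points)))
    result = []
    while remaining:
        seed = remaining[0]
        group = [i for i in remaining
                 if hypot(points[i][0] - points[seed][0],
                          points[i][1] - points[seed][1]) <= radius]
        # representative = point with the smallest sum of distances
        keep = min(group, key=lambda i: sum(
            hypot(points[i][0] - points[j][0], points[i][1] - points[j][1])
            for j in group))
        result.append((points[keep], len(group)))
        remaining = [i for i in remaining if i not in group]
    return result

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (10.0, 10.0)]
thinned = thin_points(pts, radius=2.0)
```

The three clustered points collapse to the central one (count 3), while the far point survives on its own (count 1).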

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset to be thinned
  • radius (float) – the thinning radius. All points within this radius of a chosen point are represented by that single point. Note that the unit of the thinning radius should be set with the unit parameter.
  • unit (Unit or str) – The unit of the radius of the thinning point.
  • stats (list[StatisticsField] or str) – statistics over the original points represented by each thinned point. The field name to be counted, the field name of the statistical result and the statistic mode need to be set; an empty list means no statistics. When stats is a str, multiple StatisticsFields can be separated with ';', and within each StatisticsField the parts 'source_field,stat_type,result_name' are separated with ',', for example: 'field1,AVERAGE,field1_avg;field2,MINVALUE,field2_min'
  • is_random_save_point (bool) – whether to choose the kept point randomly. True means a point is chosen at random from the point set within the thinning radius; False means the point with the smallest sum of distances to all the other points within the thinning radius is kept.
  • is_save_attrs (bool) – whether to retain attribute fields
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.clip_vector(input_data, clip_region, is_clip_in_region=True, is_erase_source=False, out_data=None, out_dataset_name=None, progress=None)

The vector dataset is cropped and the result is stored as a new vector dataset.

Parameters:
  • input_data (DatasetVector or str) – the vector dataset to be clipped. Point, line, region, text and CAD datasets are supported.
  • clip_region (GeoRegion) – the specified clipping region
  • is_clip_in_region (bool) – whether to keep the data inside the clipping region. If True, the data inside the clipping region is retained; if False, the data outside the clipping region is retained.
  • is_erase_source (bool) – Specify whether to erase the crop area. If it is True, it means the crop area will be erased. If it is False, the crop area will not be erased.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.update_attributes(source_data, target_data, spatial_relation, update_fields, interval=1e-06)

Update the attributes of a vector dataset: the attributes of source_data are written into the target_data dataset according to the spatial relationship specified by spatial_relation. For example, given point data and region data, to average an attribute of the points and write the value into the region object containing those points, the following code can be used:

>>> result = update_attributes('ds/points', 'ds/zones', 'WITHIN', [('trip_distance', 'mean'), ('', 'count')])

The spatial_relation parameter specifies the spatial relationship between the source dataset (source_data) and the target dataset being updated (target_data).
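Conceptually, the example above aggregates point values into the region that contains them. A self-contained sketch of that idea, using a ray-casting point-in-polygon test and the 'mean' and 'count' statistics; this is illustrative only, not how update_attributes works internally:

```python
def point_in_region(point, ring):
    """Ray-casting point-in-polygon test; `ring` is a list of (x, y) vertices."""
    x, y = point
    inside = False
    for (x0, y0), (x1, y1) in zip(ring, ring[1:] + ring[:1]):
        # Count crossings of a horizontal ray extending to the left of (x, y)
        if (y0 > y) != (y1 > y) and x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
            inside = not inside
    return inside

# One square region and some attribute-bearing points (point, value)
zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
points = [((1.0, 1.0), 10.0), ((2.0, 3.0), 20.0), ((9.0, 9.0), 99.0)]

inside = [v for p, v in points if point_in_region(p, zone)]
mean_value = sum(inside) / len(inside)   # the 'mean' statistic
count = len(inside)                      # the 'count' statistic
```

Only the two points inside the square contribute, so the region would receive mean 15.0 and count 2; the point outside is ignored, just as objects failing the spatial relationship are ignored.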

Parameters:
  • source_data (DatasetVector or str) – The source dataset. The source dataset provides attribute data, and the attribute values in the source dataset are updated to the target dataset according to the spatial relationship.
  • target_data (DatasetVector or str) – target dataset. The dataset to which the attribute data is written.
  • spatial_relation (SpatialQueryMode or str) – Spatial relationship type, the spatial relationship between the source data (query object) and the target data (query object), please refer to:py:class:SpatialQueryMode
  • update_fields (list[tuple(str,AttributeStatisticsMode)] or list[tuple(str,str)] or str) – field statistics. Several source data objects may satisfy the spatial relationship with a single target data object, so the attribute field values of the source data must be summarized before the statistical result is written into the target dataset. The input is a list in which each element is a tuple of size 2: the first element is the name of the field to be counted and the second element is the statistic type.
  • interval (float) – node tolerance
Returns:

Whether the attribute update was successful. Return True if the update is successful, otherwise False.

Return type:

bool

iobjectspy.analyst.simplify_building(source_data, width_threshold, height_threshold, save_failed=False, out_data=None, out_dataset_name=None)

Fit region objects with right-angled polygons. If the distance from a series of consecutive nodes to the lower edge of the minimum-area bounding rectangle is greater than height_threshold, and the total width spanned by those nodes is greater than width_threshold, the consecutive nodes are fitted.

Parameters:
  • source_data (DatasetVector or str) – the region dataset to be processed
  • width_threshold (float) – The threshold value from the point to the left and right boundary of the minimum area bounding rectangle
  • height_threshold (float) – The threshold value from the point to the upper and lower boundary of the minimum area bounding rectangle
  • save_failed (bool) – Whether to save the source area object when the area object fails to be orthogonalized. If it is False, the result dataset does not contain the failed area object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset.
  • out_dataset_name (str) – result dataset name
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.resample_raster(input_data, new_cell_size, resample_mode, out_data=None, out_dataset_name=None, progress=None)

Raster data is resampled and the result dataset is returned.

After geometric operations such as registration, rectification or projection, the centre of a raster cell usually shifts, and its position in the input raster is generally not an integer row and column number. The input raster therefore needs to be resampled: based on the position of each output cell in the input raster, cell values are interpolated according to certain rules and a new raster matrix is built. Resampling is also needed when performing algebraic operations between rasters of different resolutions, where the cell sizes must first be unified to a specified resolution.

There are three common methods for raster resampling: nearest neighbor method, bilinear interpolation method and cubic convolution method. For a more detailed introduction to these three resampling methods, please refer to the ResampleMode class.
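Of the three, nearest-neighbour is the simplest. A minimal sketch of resampling a grid to a coarser cell size, illustrative rather than the iobjectspy implementation:

```python
def resample_nearest(grid, old_cell, new_cell):
    """Nearest-neighbour raster resampling sketch: for each output cell,
    take the value of the input cell whose centre is closest."""
    rows, cols = len(grid), len(grid[0])
    scale = new_cell / old_cell
    out_rows = max(1, round(rows / scale))
    out_cols = max(1, round(cols / scale))
    out = []
    for r in range(out_rows):
        # Map the centre of output row r back into input row coordinates
        src_r = min(rows - 1, int((r + 0.5) * scale))
        row = []
        for c in range(out_cols):
            src_c = min(cols - 1, int((c + 0.5) * scale))
            row.append(grid[src_r][src_c])
        out.append(row)
    return out

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
# Double the cell size: a 4x4 grid becomes 2x2
coarse = resample_nearest(grid, old_cell=10.0, new_cell=20.0)
```

Bilinear interpolation and cubic convolution differ only in how the value at the mapped position is computed from the surrounding input cells (4 and 16 neighbours respectively); see ResampleMode.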

Parameters:
  • input_data (DatasetImage or DatasetGrid or str) – the dataset to be resampled. Grid datasets and image datasets, including multi-band images, are supported
  • new_cell_size (float) – the cell size of the specified result grid
  • resample_mode (ResampleMode or str) – Resampling calculation method
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetImage or DatasetGrid or str

class iobjectspy.analyst.ReclassSegment(start_value=None, end_value=None, new_value=None, segment_type=None)

Bases: object

Raster reclassification interval class. This class is mainly used for the related settings of reclassification interval information, including the start value and end value of the interval.

This class is used to set the parameters of each reclassification interval in the reclassification mapping table during reclassification. The attributes that need to be set are different for different reclassification types.

- When the reclassification type is single-value reclassification, use the set_start_value() method to specify the single source-raster value to be re-assigned, and use the set_new_value() method to set the new value corresponding to it.
- When the reclassification type is range reclassification, use the set_start_value() method to specify the start value of the source-raster value range to be re-assigned, use the set_end_value() method to set the end value of the range, and use the set_new_value() method to set the new value corresponding to the interval. The set_segment_type() method can additionally set whether the interval is "left open, right closed" or "left closed, right open".
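To make the interval semantics concrete, here is a hypothetical sketch of applying such segments to a value. The tuple layout mirrors the 'start,end,new_value,type' string form accepted by ReclassMappingTable.set_segments, and the type names 'CLOSEOPEN'/'OPENCLOSE' are assumptions for illustration:

```python
def apply_segments(value, segments):
    """Range-reclassification sketch. Each segment is a tuple
    (start, end, new_value, segment_type), where segment_type is
    'CLOSEOPEN' for [start, end) or 'OPENCLOSE' for (start, end].
    Returns None when the value falls in no segment (an unclassified cell)."""
    for start, end, new_value, segment_type in segments:
        if segment_type == 'CLOSEOPEN' and start <= value < end:
            return new_value
        if segment_type == 'OPENCLOSE' and start < value <= end:
            return new_value
    return None

segments = [(0, 100, 50, 'CLOSEOPEN'), (100, 200, 150, 'CLOSEOPEN')]
```

With left-closed, right-open intervals, the boundary value 100 falls into the second segment, not the first, which is exactly what the segment type controls.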

Construct a grid reclassified interval object

Parameters:
  • start_value (float) – the starting value of the grid reclassification interval
  • end_value (float) – the end value of the grid reclassification interval
  • new_value (float) – the interval value of the grid reclassification or the new value corresponding to the old value
  • segment_type (ReclassSegmentType or str) – Raster reclassification interval type
end_value

float – the end value of the raster reclassification interval

from_dict(values)

Read information from dict

Parameters:values (dict) – a dict containing ReclassSegment information
Returns:self
Return type:ReclassSegment
static make_from_dict(values)

Read information from dict to construct ReclassSegment object

Parameters:values (dict) – a dict containing ReclassSegment information
Returns:Raster reclassification interval object
Return type:ReclassSegment
new_value

float – the interval value of the grid reclassification or the new value corresponding to the old value

segment_type

ReclassSegmentType – Raster reclassification interval type

set_end_value(value)

The end value of the grid reclassification interval

Parameters:value (float) – the end value of the grid reclassification interval
Returns:self
Return type:ReclassSegment
set_new_value(value)

The interval value of the grid reclassification or the new value corresponding to the old value

Parameters:value (float) – The interval value of the grid reclassification or the new value corresponding to the old value
Returns:self
Return type:ReclassSegment
set_segment_type(value)

Set the type of grid reclassification interval

Parameters:value (ReclassSegmentType or str) – Raster reclassification interval type
Returns:self
Return type:ReclassSegment
set_start_value(value)

Set the starting value of the grid reclassification interval

Parameters:value (float) – the starting value of the grid reclassification interval
Returns:self
Return type:ReclassSegment
start_value

float – the starting value of the grid reclassification interval

to_dict()

Output current object information to dict

Returns:dict object containing current object information
Return type:dict
class iobjectspy.analyst.ReclassMappingTable

Bases: object

Raster reclassification mapping table class. Provides single-value or range reclassification of the source raster dataset, and includes the processing of non-valued data and unclassified cells.

The reclassification mapping table is used to illustrate the correspondence between the source data and the result data value. This corresponding relationship is expressed by these parts: reclassification type, reclassification interval set, processing of non-valued and unclassified data.

-Types of reclassification
There are two types of reclassification: single-value reclassification and range reclassification. Single-value reclassification re-assigns particular single values; for example, cells with the value 100 in the source raster are assigned the value 1 in the result raster. Range reclassification re-assigns the values in an interval to a single value; for example, cells whose source raster value lies in the range [100, 500) are assigned the value 200 in the result raster. The reclassification type is set through the set_reclass_type() method of this class.
-Reclassification interval collection
The reclassification interval set specifies the correspondence between a single raster value, or the raster values in an interval, of the source raster and the new value after reclassification; it is set through the set_segments() method of this class. The set consists of several ReclassSegment objects. Each ReclassSegment holds the information of one reclassification interval: the single value, or the start and end values of the interval, of the source raster to be re-assigned, the interval type, and the new value corresponding to the interval or single value. See the :py:class:.ReclassSegment class for details.
-Handling of non-valued and unclassified data

For no-value cells in the source raster data, whether to keep them as no-value can be set through the set_retain_no_value() method of this class. If it is False, that is, they are not kept as no-value, the set_change_no_value_to() method can be used to specify a value for no-value data.

For raster values not covered by the reclassification mapping table, the set_retain_missing_value() method of this class can be used to set whether their original values are kept. If it is False, that is, the original values are not kept, the set_change_missing_value_to() method can be used to specify a value for them.
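Putting the pieces together, a sketch of how a mapping table might resolve one cell: no-value cells first, then the segment set, then the handling of unclassified values. The dict keys and the NO_VALUE marker here are assumptions for illustration, not the iobjectspy API:

```python
NO_VALUE = -9999.0   # assumed no-value marker for this sketch

def reclassify_cell(value, mapping, segments):
    """Resolve one cell against a reclassification mapping table sketch."""
    if value == NO_VALUE:
        return NO_VALUE if mapping['retain_no_value'] else mapping['change_no_value_to']
    for start, end, new_value in segments:      # [start, end) intervals
        if start <= value < end:
            return new_value
    # The value matched no segment: keep it or replace it
    return value if mapping['retain_missing_value'] else mapping['change_missing_value_to']

mapping = {'retain_no_value': True, 'change_no_value_to': 0.0,
           'retain_missing_value': False, 'change_missing_value_to': -1.0}
segments = [(0.0, 100.0, 1.0), (100.0, 500.0, 2.0)]

cells = [50.0, 200.0, 900.0, NO_VALUE]
result = [reclassify_cell(v, mapping, segments) for v in cells]
```

Here 50 and 200 are mapped by the two intervals, 900 is unclassified and replaced by -1.0, and the no-value cell is kept as no-value.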

In addition, this class provides methods for exporting the reclassification mapping table as an XML string or XML file, and for importing it from an XML string or file. When several input rasters need the same classification ranges, the ranges can be exported once as a reclassification mapping table file; classifying subsequent data is then just a matter of importing that file, so raster data can be processed in batches. For the format and tag meanings of the raster reclassification mapping table file, please refer to the to_xml method.

change_missing_value_to

float – Return the specified value of the grid that is not within the specified interval or single value.

change_no_value_to

float – Return the specified value of no-value data

from_dict(values)

Read the reclassification mapping table information from the dict object

Parameters:values (dict) – dict object containing reclassification mapping table information
Returns:self
Return type:ReclassMappingTable
static from_xml(xml)

Import the parameter value stored in the XML format string into the mapping table data, and return a new object.

Parameters:xml (str) – XML format string
Returns:Raster reclassification mapping table object
Return type:ReclassMappingTable
static from_xml_file(xml_file)

Import the mapping table data from the saved XML format mapping table file and return a new object.

Parameters:xml_file (str) – XML file
Returns:Raster reclassification mapping table object
Return type:ReclassMappingTable
is_retain_missing_value

bool – Whether the data in the source dataset that is not in the specified interval or outside the single value retain the original value

is_retain_no_value

bool – Return whether to keep the non-valued data in the source dataset as non-valued.

static make_from_dict(values)

Read the reclassification mapping table information from the dict object to construct a new object

Parameters:values (dict) – dict object containing reclassification mapping table information
Returns:reclassification mapping table object
Return type:ReclassMappingTable
reclass_type

ReclassType – Return the type of raster reclassification

segments

list[ReclassSegment] – Return the set of reclassification intervals. Each ReclassSegment is an interval range or the correspondence between an old value and a new value.

set_change_missing_value_to(value)

Set the value assigned to cells that are not within any specified interval or single value. If is_retain_missing_value is True, this setting is invalid.

Parameters:value (float) – the specified value of the grid that is not within the specified interval or single value
Returns:self
Return type:ReclassMappingTable
set_change_no_value_to(value)

Set the specified value of no-value data. When is_retain_no_value() is True, this setting is invalid.

Parameters:value (float) – the specified value of no value data
Returns:self
Return type:ReclassMappingTable
set_reclass_type(value)

Set the grid reclassification type

Parameters:value (ReclassType or str) – Raster reclassification type, the default value is UNIQUE
Returns:self
Return type:ReclassMappingTable
set_retain_missing_value(value)

Set whether data in the source dataset that is not within any specified interval or single value retains its original value.

Parameters:value (bool) – Whether data in the source dataset that is not within any specified interval or single value retains its original value.
Returns:self
Return type:ReclassMappingTable
set_retain_no_value(value)

Set whether to keep the no-value data in the source dataset as no-value.

- When the set_retain_no_value method is set to True, the no-value data in the source dataset is kept as no-value.
- When the set_retain_no_value method is set to False, the no-value data in the source dataset is set to the specified value (set_change_no_value_to()).

Parameters:value (bool) –
Returns:self
Return type:ReclassMappingTable
set_segments(value)

Set reclassification interval collection

Parameters:value (list[ReclassSegment] or str) – Reclassification interval collection. When the value is str, it is supported to use’;’ to separate multiple ReclassSegments, and each ReclassSegment uses’,’ to separate the start value, end value, new value and partition type. E.g: ‘0,100,50,CLOSEOPEN; 100,200,150,CLOSEOPEN’
Returns:self
Return type:ReclassMappingTable
to_dict()

Output current information to dict

Returns:a dictionary object containing current information
Return type:dict
to_xml()

Output current object information as xml string

Returns:xml string
Return type:str
to_xml_file(xml_file)

This method writes the parameter settings of the reclassification mapping table object into an XML file, called a raster reclassification mapping table file, with the suffix .xml.

The meaning of each tag in the reclassification mapping table file is as follows:

- <SmXml:ReclassType></SmXml:ReclassType> tag: reclassification type. 1 means single-value reclassification, 2 means range reclassification.
- <SmXml:SegmentCount></SmXml:SegmentCount> tag: reclassification interval collection; the count parameter indicates the number of reclassification levels.
- <SmXml:Range></SmXml:Range> tag: a reclassification interval, used when the reclassification type is range reclassification. The format is: interval start value-interval end value:new value-interval type. For the interval type, 0 means left-open right-closed, 1 means left-closed right-open.
- <SmXml:Unique></SmXml:Unique> tag: a reclassification entry, used when the reclassification type is single-value reclassification. The format is: original value:new value.
- <SmXml:RetainMissingValue></SmXml:RetainMissingValue> tag: whether unclassified cells retain their original values. 0 means not retained, 1 means retained.
- <SmXml:RetainNoValue></SmXml:RetainNoValue> tag: whether no-value data remains no-value. 0 means not kept, 1 means kept.
- <SmXml:ChangeMissingValueTo></SmXml:ChangeMissingValueTo> tag: the value assigned to unclassified cells.
- <SmXml:ChangeNoValueTo></SmXml:ChangeNoValueTo> tag: the value assigned to no-value data.

Parameters:xml_file (str) – xml file path
Returns:Return True if export is successful, otherwise False
Return type:bool
iobjectspy.analyst.reclass_grid(input_data, re_pixel_format, segments=None, reclass_type='UNIQUE', is_retain_no_value=True, change_no_value_to=None, is_retain_missing_value=False, change_missing_value_to=None, reclass_map=None, out_data=None, out_dataset_name=None, progress=None)

Raster data is reclassified and the result raster dataset is returned. Raster reclassification reclassifies the values of the source raster data and assigns new values according to new classification standards, replacing the original raster values with new ones. For known raster data, reclassification is sometimes needed to make trends easier to see, to find patterns in the raster values, or to facilitate further analysis:

- Through reclassification, new values can replace old cell values to update the data. For example, when handling changes in land types, assign new raster values to wasteland that has been cultivated into farmland;
- Through reclassification, a large number of raster values can be grouped and classified, with cells in the same group given the same value, to simplify the data. For example, categorize dry land, irrigated land and paddy fields as agricultural land;
- Through reclassification, various raster data can be classified according to a uniform standard. For example, if the factors affecting a building's location include soil and slope, the input soil-type and slope rasters can be reclassified on a grading scale of 1 to 10 to facilitate further site-selection analysis;
- Through reclassification, cells that should not participate in the analysis can be set to no-value, or newly measured values can be assigned to cells that were originally no-value, to facilitate further analysis and processing.

For example, it is often necessary to perform slope analysis on the raster surface to obtain slope data to assist in terrain-related analysis. But we may need to know which grade the slope belongs to instead of the specific slope value, to help us understand the steepness of the terrain, so as to assist in further analysis, such as site selection and road paving. At this time, you can use reclassification to divide different slopes into corresponding classes.

Parameters:
  • input_data (DatasetImage or DatasetGrid or str) – The specified dataset to be reclassified. Image datasets, including multi-band images, are supported
  • re_pixel_format (ReclassPixelFormat) – The storage type of the raster value of the result dataset
  • segments (list[ReclassSegment] or str) – Reclassification interval collection. When segments is a str, multiple ReclassSegments can be separated with ';', and within each ReclassSegment the start value, end value, new value and interval type are separated with ','. For example: '0,100,50,CLOSEOPEN;100,200,150,CLOSEOPEN'
  • reclass_type (ReclassType or str) – raster reclass type
  • is_retain_no_value (bool) – Whether to keep the no-value data in the source dataset as no-value
  • change_no_value_to (float) – The specified value for no-value data. Valid only when is_retain_no_value is set to False.
  • is_retain_missing_value (bool) – Whether cells in the source dataset that are not within the specified intervals or single values retain their original values
  • change_missing_value_to (float) – The specified value for cells not within the specified intervals or single values. Valid only when is_retain_missing_value is set to False.
  • reclass_map (ReclassMappingTable) – Raster reclassification mapping table class. If the object is not empty, use the value set by the object to reclassify the grid.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or DatasetImage or str
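The range-reclassification rule described above can be sketched in plain Python. This is an illustration only, not the iobjectspy implementation: parse_segments and reclass_value are hypothetical helpers that mimic the segments string format and the is_retain_missing_value / change_missing_value_to behavior.

```python
def parse_segments(text):
    """Parse 'start,end,new,TYPE;...' into (start, end, new, type) tuples."""
    result = []
    for part in text.split(';'):
        start, end, new, seg_type = part.strip().split(',')
        result.append((float(start), float(end), float(new), seg_type.strip()))
    return result

def reclass_value(v, segments, retain_missing=True, missing_to=-9999.0):
    """Map one cell value through range-reclassification segments."""
    for start, end, new, seg_type in segments:
        hit = start <= v < end if seg_type == 'CLOSEOPEN' else start < v <= end
        if hit:
            return new
    # value falls outside every interval: keep it or replace it,
    # mirroring is_retain_missing_value / change_missing_value_to
    return v if retain_missing else missing_to

segs = parse_segments('0,100,50,CLOSEOPEN; 100,200,150,CLOSEOPEN')
```

With these segments, a cell value of 10 maps to 50, and 100 falls in the second close-open interval and maps to 150.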

iobjectspy.analyst.aggregate_grid(input_data, scale, aggregation_type, is_expanded, is_ignore_no_value, out_data=None, out_dataset_name=None, progress=None)

Raster data aggregation; returns the result raster dataset. The raster aggregation operation reduces the raster resolution by an integer multiple to generate a new raster with a coarser resolution. Each cell of the new raster is aggregated from a group of cells in the original raster data, and its value is determined by the values of the original cells it contains: it can be their sum, maximum, minimum, average or median. If the resolution is reduced by n (an integer greater than 1) times, the number of rows and columns of the aggregated raster is 1/n of the original, that is, the cell size is n times the original. Aggregation can eliminate unnecessary information or remove minor errors by generalizing the data.

Note: If the number of rows and columns of the original raster data is not an integer multiple of scale, the is_expanded parameter determines how the fractional remainder is handled.

- If is_expanded is true, the rows and columns are expanded to an integer multiple of scale. The raster values in the expanded range are all no-value, so the extent of the result dataset is larger than the original.

- If is_expanded is false, the fractional rows and columns are discarded, so the extent of the result dataset is smaller than the original.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset for aggregation operation.
  • scale (int) – The ratio of the specified grid size between the result grid and the input grid. The value is an integer value greater than 1.
  • aggregation_type (AggregationType) – aggregation operation type
  • is_expanded (bool) – Specify whether to deal with fractions. When the number of rows and columns of the original raster data is not an integer multiple of the scale, fractions will appear at the grid boundary.
  • is_ignore_no_value (bool) – The calculation method of the aggregation operation when there is no value data in the aggregation range. If it is True, use the rest of the grid values in the aggregation range except no value for calculation; if it is False, the aggregation result is no value.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
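The aggregation rule can be illustrated with a small pure-Python sketch (not the iobjectspy implementation). It averages scale × scale blocks of a 2D grid and shows how is_ignore_no_value decides the result when a block contains no-value cells; NO_VALUE and aggregate_mean are hypothetical names, and the is_expanded handling is omitted.

```python
NO_VALUE = None  # stand-in for the raster's no-value marker

def aggregate_mean(grid, scale, ignore_no_value=True):
    """Average scale x scale blocks of a 2D list; assumes the row and column
    counts are exact multiples of scale (is_expanded handling is omitted)."""
    out = []
    for r in range(0, len(grid), scale):
        row = []
        for c in range(0, len(grid[0]), scale):
            block = [grid[r + i][c + j] for i in range(scale) for j in range(scale)]
            valid = [v for v in block if v is not NO_VALUE]
            if not valid or (not ignore_no_value and len(valid) < len(block)):
                row.append(NO_VALUE)  # aggregation result is no-value
            else:
                row.append(sum(valid) / len(valid))  # remaining cells only
        out.append(row)
    return out
```

For a 2×4 grid aggregated with scale 2, the result is a 1×2 grid of block means; a block containing a no-value cell yields the mean of the remaining cells when ignore_no_value is True, and no-value otherwise.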

iobjectspy.analyst.slice_grid(input_data, number_zones, base_output_zones, out_data=None, out_dataset_name=None, progress=None)

Natural-breaks reclassification, suitable for unevenly distributed data.

Jenks natural breaks method:

The reclassification uses the Jenks natural breaks method, which is based on natural groupings inherent in the data. It is a form of variance-minimization classification. The breaks are usually uneven and are placed where values change sharply, so the method groups similar values appropriately and maximizes the differences between classes. Because it puts similar (clustered) values in the same class, this method is well suited to unevenly distributed data.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset to be reclassified.
  • number_zones (int) – The number of zones to reclassify the raster dataset.
  • base_output_zones (int) – The value of the lowest zone in the result raster dataset
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

Set the number of classes to 9, dividing the raster values from minimum to maximum into 9 natural classes. The lowest class value is set to 1, so the reclassified values start at 1 and increase by 1 per class.

>>> slice_grid('E:/data.udb/DEM', 9, 1, 'E:/Slice_out.udb')
iobjectspy.analyst.compute_range_raster(input_data, count, progress=None)

Calculate the natural-breaks break values of raster cell values

Parameters:
  • input_data (DatasetGrid or str) – raster dataset
  • count (int) – the number of natural segments
  • progress – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The natural-breaks break values (including the minimum and maximum cell values)

Return type:

Array

iobjectspy.analyst.compute_range_vector(input_data, value_field, count, progress=None)

Calculate the natural-breaks break values of a vector dataset field

Parameters:
  • input_data (DatasetVector or str) – vector dataset
  • value_field (str) – the field used for classification
  • count (int) – the number of natural segments
  • progress – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The natural-breaks break values (including the minimum and maximum attribute values)

Return type:

Array
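The variance-minimization idea behind the Jenks natural breaks used by slice_grid(), compute_range_raster() and compute_range_vector() can be sketched with a brute-force version (illustrative only; natural_breaks is a hypothetical helper, and real implementations use a dynamic-programming algorithm rather than enumerating all cuts):

```python
from itertools import combinations

def natural_breaks(values, k):
    """Brute-force natural breaks: choose the k classes that minimize the total
    within-class sum of squared deviations (Jenks' criterion). Fine for small inputs."""
    vals = sorted(values)
    n = len(vals)

    def ssd(seg):  # within-class sum of squared deviations
        mean = sum(seg) / len(seg)
        return sum((x - mean) ** 2 for x in seg)

    best, best_cost = None, float('inf')
    for cuts in combinations(range(1, n), k - 1):  # k - 1 cut positions
        bounds = (0,) + cuts + (n,)
        cost = sum(ssd(vals[a:b]) for a, b in zip(bounds, bounds[1:]))
        if cost < best_cost:
            best_cost, best = cost, bounds
    # like compute_range_*, return breaks including the minimum and maximum
    return [vals[0]] + [vals[c] for c in best[1:-1]] + [vals[-1]]
```

For the clustered values [1, 2, 3, 10, 11, 12] with two classes, the break falls where the values jump, yielding [1, 10, 12].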

class iobjectspy.analyst.NeighbourShape

Bases: object

Neighbourhood shape base class. By shape, neighbourhoods are divided into rectangular, circular, annular and fan-shaped (wedge) neighbourhoods. This class holds the related neighbourhood-shape parameter settings.

shape_type

NeighbourShapeType – neighborhood shape type used in neighborhood analysis

class iobjectspy.analyst.NeighbourShapeRectangle(width, height)

Bases: iobjectspy._jsuperpy.analyst.sa.NeighbourShape

Rectangular neighborhood shape class

Construct a rectangular neighborhood shape class object

Parameters:
  • width (float) – the width of the rectangular neighborhood
  • height (float) – the height of the rectangular neighborhood
height

float – height of rectangular neighborhood

set_height(value)

Set the height of the rectangular neighborhood

Parameters:value (float) – the height of the rectangular neighborhood
Returns:self
Return type:NeighbourShapeRectangle
set_width(value)

Set the width of the rectangular neighborhood

Parameters:value (float) – the width of the rectangular neighborhood
Returns:self
Return type:NeighbourShapeRectangle
width

float – the width of the rectangular neighborhood

class iobjectspy.analyst.NeighbourShapeCircle(radius)

Bases: iobjectspy._jsuperpy.analyst.sa.NeighbourShape

Circular neighborhood shape class

Construct a circular neighborhood shape class object

Parameters:radius (float) – the radius of the circular neighborhood
radius

float – radius of circular neighborhood

set_radius(value)

Set the radius of the circular neighborhood

Parameters:value (float) – the radius of the circular neighborhood
Returns:self
Return type:NeighbourShapeCircle
class iobjectspy.analyst.NeighbourShapeAnnulus(inner_radius, outer_radius)

Bases: iobjectspy._jsuperpy.analyst.sa.NeighbourShape

Annular (ring-shaped) neighborhood shape class

Construct an annular neighborhood shape object

Parameters:
  • inner_radius (float) – inner ring radius
  • outer_radius (float) – outer ring radius
inner_radius

float – inner ring radius

outer_radius

float – outer ring radius

set_inner_radius(value)

Set inner ring radius

Parameters:value (float) – inner ring radius
Returns:self
Return type:NeighbourShapeAnnulus
set_outer_radius(value)

Set the radius of the outer ring

Parameters:value (float) – outer ring radius
Returns:self
Return type:NeighbourShapeAnnulus
class iobjectspy.analyst.NeighbourShapeWedge(radius, start_angle, end_angle)

Bases: iobjectspy._jsuperpy.analyst.sa.NeighbourShape

Sector neighborhood shape class

Construct a fan-shaped neighborhood shape class object

Parameters:
  • radius (float) – the radius of the fan-shaped neighborhood
  • start_angle (float) – The starting angle of the fan-shaped neighborhood. The unit is degrees. It is specified that the horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.
  • end_angle (float) – The ending angle of the fan-shaped neighborhood. The unit is degrees. It is specified that the horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.
end_angle

float – The ending angle of the fan-shaped neighborhood. The unit is degrees. The horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.

radius

float – the radius of the fan-shaped neighborhood

set_end_angle(value)

Set the ending angle of the fan-shaped neighborhood. The unit is degrees. It is specified that the horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.

Parameters:value (float) – the ending angle of the fan-shaped neighborhood
Returns:self
Return type:NeighbourShapeWedge
set_radius(value)

Set the radius of the fan-shaped neighborhood

Parameters:value (float) – the radius of the fan-shaped neighborhood
Returns:self
Return type:NeighbourShapeWedge
set_start_angle(value)

Set the starting angle of the fan-shaped neighborhood. The unit is degrees. It is specified that the horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.

Parameters:value (float) – The starting angle of the fan-shaped neighborhood. The unit is degrees. It is specified that the horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.
Returns:self
Return type:NeighbourShapeWedge
start_angle

float – The starting angle of the fan-shaped neighborhood. The unit is degree. The horizontal right is 0 degrees, and the angle is calculated by rotating clockwise.
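To make the wedge angle convention concrete, here is a sketch (not iobjectspy code; in_wedge is a hypothetical helper) that tests whether an offset (dx, dy) falls inside a wedge neighborhood, assuming a y-up coordinate system so that a clockwise angle is the negative of the usual mathematical angle:

```python
import math

def in_wedge(dx, dy, radius, start_angle, end_angle):
    """True if the offset (dx, dy) falls inside the wedge neighborhood.
    Angles are in degrees, 0 at horizontal right, increasing clockwise
    (assuming a y-up coordinate system, so clockwise = negated math angle)."""
    if math.hypot(dx, dy) > radius:
        return False
    angle = (-math.degrees(math.atan2(dy, dx))) % 360.0
    start, end = start_angle % 360.0, end_angle % 360.0
    if start <= end:
        return start <= angle <= end
    return angle >= start or angle <= end  # wedge spanning the 0-degree direction
```

For example, the offset (1, -1) lies 45 degrees clockwise from horizontal right, so it falls inside a wedge from 0 to 90 degrees, while (0, 1) lies at 270 degrees and does not.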

iobjectspy.analyst.kernel_density(input_data, value_field, search_radius, resolution, bounds=None, out_data=None, out_dataset_name=None, progress=None)

Perform kernel density analysis on the point dataset or line dataset, and return the analysis result. Kernel density analysis uses a kernel function to calculate the value per unit area within the neighborhood of a point or line. The result is a smooth surface with a large intermediate value and a small peripheral value, which drops to 0 at the boundary of the neighborhood.

Parameters:
  • input_data (DatasetVector or str) – The point dataset or line dataset on which to perform kernel density analysis.
  • value_field (str) – The name of the field that stores the measured value used for density analysis. If None is passed, all geometric objects will be treated as 1. Text type fields are not supported.
  • search_radius (float) – The search radius used to calculate density in the neighborhood of a cell. The unit is the same as that of the dataset used for analysis. When calculating the unknown value at a position, that position is taken as the circle center and this value as the radius; the sampling objects falling within this range participate in the calculation, that is, the predicted value at the position is determined by the measured values of the sampling objects within the range. The larger the search radius, the smoother and more generalized the generated density raster; the smaller the value, the more detailed the information shown in the generated raster.
  • resolution (float) – The resolution of the raster data of the density analysis result
  • bounds (Rectangle) – The range of density analysis, used to determine the range of the raster dataset obtained from the running result
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

>>> kernel_density(data_dir + 'example_data.udb/taxi', 'passenger_count', 0.01, 0.001, out_data=out_dir + 'density_result.udb')
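As an illustration of the "smooth surface dropping to 0 at the neighborhood boundary", here is a quartic kernel sketch. The quartic kernel is one common choice for kernel density; the exact kernel function used by kernel_density() is not documented here, so treat this as illustrative only.

```python
import math

def quartic_kernel(d, radius):
    """Weight per unit area at distance d from a point: largest at the center,
    falling smoothly to 0 at d == radius; integrates to 1 over the disk."""
    if d >= radius:
        return 0.0
    t = 1.0 - (d / radius) ** 2
    return 3.0 / math.pi * t * t / radius ** 2
```

The weight is 3/π at the center for a unit radius, decreases monotonically, and is exactly 0 at and beyond the search radius.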
iobjectspy.analyst.point_density(input_data, value_field, resolution, neighbour_shape, neighbour_unit='CELL', bounds=None, out_data=None, out_dataset_name=None, progress=None)

Perform point density analysis on the point dataset and return the analysis result. Simple point density analysis is to calculate the value per unit area within the specified neighborhood shape of each point. The calculation method is the specified measurement value divided by the neighborhood area. Where the neighbors of the points overlap, their density values are also added. The density of each output raster is the sum of the density values of all neighborhoods superimposed on the raster. The unit of the result raster value is the reciprocal of the square of the original dataset unit, that is, if the original dataset unit is meters, the unit of the result raster value is per square meter. Note that for geographic coordinate datasets, the unit of the result raster value is “per square degree”, which is meaningless.

Parameters:
  • input_data (DatasetVector or str) – The point dataset on which to perform point density analysis.
  • value_field (str) – The name of the field that stores the measured value used for density analysis. If None is passed, all geometric objects will be treated as 1. Text type fields are not supported.
  • resolution (float) – The resolution of the raster data of the density analysis result
  • neighbour_shape (NeighbourShape or str) – The neighborhood shape used to calculate density. If the input value is a str, the required format is:
    - 'CIRCLE,radius', e.g. 'CIRCLE,10'
    - 'RECTANGLE,width,height', e.g. 'RECTANGLE,5.0,10.0'
    - 'ANNULUS,inner_radius,outer_radius', e.g. 'ANNULUS,5.0,10.0'
    - 'WEDGE,radius,start_angle,end_angle', e.g. 'WEDGE,10.0,0,45'
  • neighbour_unit (NeighbourUnitType or str) – The unit type of neighborhood statistics. You can use grid coordinates or geographic coordinates.
  • bounds (Rectangle) – The range of density analysis, used to determine the range of the raster dataset obtained from the running result
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

>>> point_density(data_dir + 'example_data.udb/taxi', 'passenger_count', 0.0001, 'CIRCLE,0.001', 'MAP', out_data=out_dir + 'density_result.udb')
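The "measured value divided by neighborhood area" rule can be made concrete with a sketch (illustrative only; neighbourhood_area and point_density_value are hypothetical helpers, and the parameters follow the neighbour_shape string formats above):

```python
import math

def neighbourhood_area(shape, *params):
    """Area of a neighborhood; parameters follow the string formats above."""
    if shape == 'CIRCLE':
        (r,) = params
        return math.pi * r * r
    if shape == 'RECTANGLE':
        w, h = params
        return w * h
    if shape == 'ANNULUS':
        inner, outer = params
        return math.pi * (outer * outer - inner * inner)
    if shape == 'WEDGE':
        r, start, end = params
        return math.pi * r * r * (((end - start) % 360.0) / 360.0)
    raise ValueError(shape)

def point_density_value(measured_sum, shape, *params):
    # density = total measured value within the neighborhood / neighborhood area
    return measured_sum / neighbourhood_area(shape, *params)
```

If the dataset unit is meters, the returned density is "per square meter", matching the note above about the reciprocal of the squared dataset unit.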
iobjectspy.analyst.clip_raster(input_data, clip_region, is_clip_in_region=True, is_exact_clip=False, out_data=None, out_dataset_name=None, progress=None)

The raster or image dataset is clipped, and the result is stored as a new raster or image dataset. Sometimes the study area or region of interest is small and involves only part of the current raster data. In that case the raster data can be clipped: a GeoRegion object is used as the clip region, and the raster data inside (or outside) the region is extracted to generate a new dataset. In addition, you can choose between exact clipping and display clipping.

Parameters:
  • input_data (DatasetGrid or DatasetImage or str) – The specified dataset to be cropped. It supports raster datasets and image datasets.
  • clip_region (GeoRegion or Rectangle) – clipping region
  • is_clip_in_region (bool) – Whether to clip the dataset in the clipping region. If True, the dataset in the cropping area will be cropped; if False, the dataset outside the cropping area will be cropped.
  • is_exact_clip (bool) –

    Whether to use exact clipping. True means the raster or image dataset is clipped using exact clipping; False means display clipping is used:

    - With display clipping, the system divides the data into blocks according to pixel size (see the DatasetGrid.block_size_option and DatasetImage.block_size_option methods for details) and clips the raster or image dataset block by block. Only the data in the clip region is retained; that is, if the boundary of the clip region does not coincide with cell boundaries, cells are split and the parts inside the clip region remain. Cells outside the clip region but within the total extent of a partially clipped block still hold raster values, but they are not displayed. This method is suitable for clipping big data.

    - With exact clipping, the system decides whether to keep a cell on the boundary of the clip region according to the position of the cell's center point relative to the region. With in-region clipping, a cell is retained if its center point is inside the clip region, and discarded otherwise.

  • out_data (Datasource or DatasourceConnectionInfo or str) – the datasource where the result dataset is located or directly generate the tif file
  • out_dataset_name (str) – The name of the result dataset. If it is set to generate tif file directly, this parameter is invalid.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name or third-party image file path.

Return type:

DatasetGrid or DatasetImage or str

>>> clip_region = Rectangle(875.5, 861.2, 1172.6, 520.9)
>>> result = clip_raster(data_dir +'example_data.udb/seaport', clip_region, True, False, out_data=out_dir +'clip_seaport.tif')
>>> result = clip_raster(data_dir +'example_data.udb/seaport', clip_region, True, False, out_data=out_dir +'clip_out.udb')
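The exact-clipping rule for boundary cells, keep a cell if its center lies in the clip region, can be sketched as follows (illustrative only; keep_cell is a hypothetical helper for a rectangular region, not iobjectspy code):

```python
def keep_cell(col, row, origin_x, origin_y, cell_size, region, clip_inside=True):
    """Exact-clipping decision for one cell: test the cell's center point.
    region is (left, bottom, right, top); (origin_x, origin_y) is the raster's
    lower-left corner."""
    cx = origin_x + (col + 0.5) * cell_size
    cy = origin_y + (row + 0.5) * cell_size
    left, bottom, right, top = region
    inside = left <= cx <= right and bottom <= cy <= top
    return inside if clip_inside else not inside
```

Setting clip_inside=False inverts the decision, mirroring the is_clip_in_region parameter.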
class iobjectspy.analyst.InterpolationDensityParameter(resolution, search_radius=0.0, expected_count=12, bounds=None)

Bases: iobjectspy._jsuperpy.analyst.sa.InterpolationParameter

Point density (Density) interpolation parameter class. The point density interpolation method is used to express the density distribution of sampling points. The resolution of the result raster needs to be chosen together with the extent of the point dataset: generally, keeping the row and column counts of the result raster (that is, the extent of the result raster dataset divided by the resolution) within 500 reflects the density trend well. Because point density interpolation currently only supports the fixed-length search mode, the search_radius setting is especially important; it should be set by the user according to the distribution of the points to be interpolated and the extent of the point dataset.

Construct a point density interpolation parameter object

Parameters:
  • resolution (float) – the resolution used during interpolation
  • search_radius (float) – Find the search radius of the points involved in the operation
  • expected_count (int) – The number of points expected to participate in the interpolation operation
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
expected_count

int – Return the number of points expected to participate in the interpolation operation, indicating the minimum number of samples expected to participate in the operation

search_mode

SearchMode – During interpolation operation, the way to find points involved in the operation, only supports fixed-length search (KDTREE_FIXED_RADIUS)

search_radius

float – Find the search radius of the points involved in the operation

set_expected_count(value)

Set the number of points expected to participate in the interpolation operation

Parameters:value (int) – Indicates the minimum number of samples expected to participate in the operation
Returns:self
Return type:InterpolationDensityParameter
set_search_radius(value)

Set the search radius for finding the points involved in the operation. The unit is the same as that of the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a position, that position is taken as the circle center and search_radius as the radius, and the sampling points within this range participate in the calculation, that is, the predicted value at the position is determined by the values of the sampling points within the range.

Parameters:value (float) – Find the search radius of the points involved in the operation
Returns:self
Return type:InterpolationDensityParameter
class iobjectspy.analyst.InterpolationIDWParameter(resolution, search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, power=1, bounds=None)

Bases: iobjectspy._jsuperpy.analyst.sa.InterpolationParameter

Inverse Distance Weighted parameter class,

Construct IDW interpolation parameter class.

Parameters:
  • resolution (float) – the resolution used during interpolation
  • search_mode (SearchMode or str) – search mode, QUADTREE is not supported
  • search_radius (float) – Find the search radius of the points involved in the operation
  • expected_count (int) – The number of points expected to participate in the interpolation operation
  • power (int) – The power of the distance weight calculation. The lower the power value, the smoother the interpolation result. The higher the power value, the more detailed the details of the interpolation result. This parameter should be a value greater than 0. If this parameter is not specified, the method will set it to 1 by default
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
expected_count

int – The number of points expected to participate in the interpolation operation. If search_mode is set to KDTREE_FIXED_RADIUS, this also specifies the number of points involved in the interpolation; when the number of points in the search range is less than the specified number, a null value is assigned.

power

int – the power of distance weight calculation

search_mode

SearchMode – The way of finding the points involved in the interpolation operation. QUADTREE is not supported

search_radius

float – Find the search radius of the points involved in the operation

set_expected_count(value)

Set the number of points expected to participate in the interpolation operation. If search_mode is set to KDTREE_FIXED_RADIUS, and the number of points participating in the interpolation operation is specified at the same time, when the number of points in the search range is less than the specified number of points, a null value is assigned.

Parameters:value (int) – Indicates the minimum number of samples expected to participate in the operation
Returns:self
Return type:InterpolationIDWParameter
set_power(value)

Set the power of distance weight calculation. The lower the power value, the smoother the interpolation result, and the higher the power value, the more detailed the interpolation result. This parameter should be a value greater than 0. If this parameter is not specified, the method will set it to 1 by default.

Parameters:value (int) – the power of distance weight calculation
Returns:self
Return type:InterpolationIDWParameter
set_search_mode(value)

Set the way to find the points involved in the interpolation operation. Does not support QUADTREE

Parameters:value (SearchMode or str) – In interpolation operation, find the way to participate in the operation
Returns:self
Return type:InterpolationIDWParameter
set_search_radius(value)

Set the search radius for finding the points involved in the operation. The unit is the same as that of the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a position, that position is taken as the circle center and search_radius as the radius, and the sampling points within this range participate in the calculation, that is, the predicted value at the position is determined by the values of the sampling points within the range.

If search_mode is set to KDTREE_FIXED_COUNT and a search range for the points involved is also specified, then when the number of points in the search range is less than the specified number, a null value is assigned; when it is greater than the specified number, the specified number of points nearest to the interpolated position are used for the interpolation.

Parameters:value (float) – Find the search radius of the points involved in the operation
Returns:self
Return type:InterpolationIDWParameter
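The inverse-distance weighting itself is simple enough to sketch in plain Python. This is illustrative only: idw_predict is a hypothetical helper, and the fixed-count/fixed-radius search logic of the class above is omitted.

```python
import math

def idw_predict(samples, x, y, power=1):
    """samples: list of (px, py, value). Weight each sample by 1 / d**power."""
    num = den = 0.0
    for px, py, value in samples:
        d = math.hypot(px - x, py - y)
        if d == 0.0:
            return value  # exact hit on a sample point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

A point midway between two samples gets their average; raising power pulls the prediction toward the nearest sample, which is why higher powers give more detailed, less smooth results.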
class iobjectspy.analyst.InterpolationKrigingParameter(resolution, krighing_type=InterpolationAlgorithmType.KRIGING, search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, max_point_count_in_node=50, max_point_count_for_interpolation=200, variogram=VariogramMode.SPHERICAL, angle=0.0, mean=0.0, exponent=Exponent.EXP1, nugget=0.0, range_value=0.0, sill=0.0, bounds=None)

Bases: iobjectspy._jsuperpy.analyst.sa.InterpolationParameter

Kriging (Kriging) interpolation method parameters.

The Kriging method is a spatial-data interpolation method from geostatistics. Its main idea is to use the variance between data points to derive the weight relationship between an unknown point and each known point, and then infer the value of the unknown point from the values of the data points and those weights. The biggest feature of the Kriging method is that it not only provides a predicted value with minimum estimation error, but also explicitly indicates the magnitude of that error. Generally, many geological parameters, such as topography itself, are continuous, so any two points within a short distance have a spatial relationship; conversely, two points on an irregular surface that are far apart can be regarded as statistically independent. This spatial continuity, which changes with distance, can be expressed by the semivariogram. Therefore, to infer the value of an unknown point from known scattered points, the semivariogram can be used to derive the spatial relationship between the known points and the point to be estimated; from the semivariogram between data points, the weight relationship between the unknown point and the known points can be derived, and hence the value of the unknown point. The advantage of the Kriging method is that it rests on spatial statistics as a solid theoretical foundation, with clear physical meaning; it can estimate not only the spatial distribution of the measured parameter but also the variance distribution of the parameter.
The disadvantage of the Kriging method is that the calculation steps are cumbersome, the computation is heavy, and the variogram sometimes needs to be chosen manually based on experience.

The Kriging interpolation method can obtain the sampling points involved in the interpolation in one of two ways: either all sampling points within a certain range around the location to be predicted are used, or a fixed number of sampling points around that location are used. In both cases the predicted value at the location is then obtained through a specific interpolation formula.

The Kriging interpolation process consists of multiple steps:

-Create a variogram and covariance function to estimate the statistical correlation (also known as spatial autocorrelation) among the sampling points.
-Predict the unknown values at the locations to be calculated.

Semivariogram and semivariogram plot:

-For every pair of sampling points separated by a distance h, compute the semivariance value. Plotting the pair distances h on the X axis against the corresponding semivariance values on the Y axis yields the semivariogram. At small distances the semivariance is small; as the distance increases, the spatial dependence between any two points weakens and the semivariance tends to a stable value. This stable value is called the sill (Sill), and the minimum h at which the sill is reached is called the autocorrelation threshold, or range (Range).

Nugget effect:

-When the distance between points is 0 (for example, step size = 0), the semivariance is 0. Within an infinitely small distance, however, the semivariogram usually shows a nugget effect, a value greater than zero: if the intercept of the semivariogram on the Y axis is 2, the nugget value is 2.
-The nugget effect reflects measurement error, spatial variation at distances smaller than the sampling step, or both. Measurement error is mainly caused by the inherent error of the observation instrument. Natural phenomena vary over a wide range of scales; variation at scales smaller than the step size appears as part of the nugget.
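As a concrete illustration of the computation above, the empirical semivariance for a given lag distance can be sketched in plain Python (independent of iobjectspy): for every pair of sampling points whose separation is close to the lag h, average half the squared difference of their values.

```python
from itertools import combinations

def empirical_semivariance(points, values, lag, tol):
    """gamma(h): half the mean squared value difference over point pairs
    whose separation distance is within `tol` of `lag`."""
    sq_diffs = []
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        h = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        if abs(h - lag) <= tol:
            sq_diffs.append((values[i] - values[j]) ** 2)
    return sum(sq_diffs) / (2 * len(sq_diffs)) if sq_diffs else None

# Four sampling points on a line with their measured values.
pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
vals = [1.0, 2.0, 2.5, 2.7]
gamma_1 = empirical_semivariance(pts, vals, lag=1.0, tol=0.1)  # 0.215
```

Plotting gamma against increasing lags gives the empirical semivariogram to which a spherical, exponential, or Gaussian model is then fitted.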

Obtaining the semivariogram is one of the key steps in spatial interpolation prediction. A main application of the Kriging method is to predict the attribute values of unsampled points. The semivariogram provides the spatial autocorrelation information of the sampling points; based on it, a suitable semivariogram model is chosen, that is, a curve model that fits the empirical semivariogram.

Different models affect the prediction results. The steeper the semivariogram curve is near the origin, the greater the influence of nearby points on the predicted value, and the less smooth the output surface will be.

The semivariogram models supported by SuperMap include the exponential, spherical and Gaussian models. See the VariogramMode class for details.

Construct the Kriging interpolation parameter object.

Parameters:
  • resolution (float) – the resolution used during interpolation
  • krighing_type (InterpolationAlgorithmType or str) – The algorithm type of the interpolation analysis. Supports KRIGING, SimpleKRIGING and UniversalKRIGING; KRIGING is used by default.
  • search_mode (SearchMode or str) – Search mode.
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.
  • expected_count (int) – The number of points expected to participate in the interpolation operation. When the search method is variable length search, it indicates the maximum number of points expected to participate in the operation.
  • max_point_count_in_node (int) – The maximum number of points to find in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.
  • max_point_count_for_interpolation (int) – Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, you can set the maximum number of points involved in interpolation
  • variogram (VariogramMode or str) – The semivariogram type for Kriging interpolation. The default value is VariogramMode.SPHERICAL
  • angle (float) – Rotation angle value in Kriging algorithm
  • mean (float) – The average value of the interpolation field, that is, the sum of the interpolation field values of the sampling points divided by the number of sampling points.
  • exponent (Exponent or str) – the order of the trend surface equation in the sample data used for interpolation
  • nugget (float) – Nugget effect value.
  • range_value (float) – Autocorrelation threshold.
  • sill (float) – sill value
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
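A minimal sketch of constructing the parameter object from the signature above; the resolution and point counts are illustrative values, not recommendations, and the string forms of the enum values rely on the "or str" variants the parameter list documents. The import is placed inside the function so the sketch reads standalone.

```python
def make_kriging_parameter():
    # Import inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import InterpolationKrigingParameter

    # Ordinary Kriging with a spherical semivariogram model.
    return InterpolationKrigingParameter(
        resolution=100.0,                  # cell size of the result raster
        krighing_type='KRIGING',           # note the API's own spelling
        search_mode='KDTREE_FIXED_COUNT',  # variable length search
        expected_count=12,                 # points used per prediction
        variogram='SPHERICAL')
```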
angle

float – Rotation angle value in Kriging algorithm

expected_count

int – the number of points expected to participate in the interpolation operation

exponent

Exponent – the order of the trend surface equation in the sample data used for interpolation

max_point_count_for_interpolation

int – The maximum number of points involved in interpolation during block search

max_point_count_in_node

int – Maximum number of points to find in a single block

mean

float – The average value of the interpolation field, that is, the sum of the sampling point interpolation field values divided by the number of sampling points.

nugget

float – nugget effect value.

range

float – autocorrelation threshold

search_mode

SearchMode – During interpolation, the way to find points involved in the operation

search_radius

float – Find the search radius of the points involved in the operation

set_angle(value)

Set the rotation angle value in the Kriging algorithm

Parameters:value (float) – Rotation angle value in Kriging algorithm
Returns:self
Return type:InterpolationKrigingParameter
set_expected_count(value)

Set the number of points expected to participate in the interpolation operation

Parameters:value (int) – Indicates the minimum number of samples expected to participate in the operation
Returns:self
Return type:InterpolationKrigingParameter
set_exponent(value)

Set the order of the trend surface equation in the sample data for interpolation

Parameters:value (Exponent or str) – The order of the trend surface equation in the sample point data used for interpolation
Returns:self
Return type:InterpolationKrigingParameter
set_max_point_count_for_interpolation(value)

Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, this sets the maximum number of points involved in interpolation.

Parameters:value (int) – Maximum number of points involved in interpolation during block search
Returns:self
Return type:InterpolationKrigingParameter
set_max_point_count_in_node(value)

Set the maximum number of search points in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.

Parameters:value (int) – The maximum number of search points in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in the block
Returns:self
Return type:InterpolationKrigingParameter
set_mean(value)

Set the average value of the interpolation field, that is, the sum of the sampling point interpolation field values divided by the number of sampling points.

Parameters:value (float) – The average value of the interpolation field, that is, the sum of the sampling point interpolation field values divided by the number of sampling points.
Returns:self
Return type:InterpolationKrigingParameter
set_nugget(value)

Set the value of the nugget effect.

Parameters:value (float) – Nugget effect value.
Returns:self
Return type:InterpolationKrigingParameter
set_range(value)

Set autocorrelation threshold

Parameters:value (float) – autocorrelation threshold
Returns:self
Return type:InterpolationKrigingParameter
set_search_mode(value)

Set the way to find the points involved in the interpolation operation

Parameters:value (SearchMode or str) – The way to find the points involved in the interpolation operation
Returns:self
Return type:InterpolationKrigingParameter
set_search_radius(value)

Set the search radius for finding the points involved in the operation. The unit is the same as that of the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.

When the search mode is set to “variable length search” (KDTREE_FIXED_COUNT), a fixed number of sample points within the maximum search radius is used for interpolation. The maximum search radius is 0.2 times the length of the diagonal of the bounding rectangle of the point dataset.

Parameters:value (float) – The search radius for finding the points involved in the operation
Returns:self
Return type:InterpolationKrigingParameter
set_sill(value)

Set the sill value

Parameters:value (float) – sill value
Returns:self
Return type:InterpolationKrigingParameter
set_variogram_mode(value)

Set the semi-variable function type for Kriging interpolation. The default value is VariogramMode.SPHERICAL

Parameters:value (VariogramMode or str) – The semivariogram type for Kriging interpolation
Returns:self
Return type:InterpolationKrigingParameter
sill

float – sill value

variogram_mode

VariogramMode – Kriging interpolation semi-variable function type. The default value is VariogramMode.SPHERICAL

class iobjectspy.analyst.InterpolationRBFParameter(resolution, search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, max_point_count_in_node=50, max_point_count_for_interpolation=200, smooth=0.100000001490116, tension=40, bounds=None)

Bases: iobjectspy._jsuperpy.analyst.sa.InterpolationParameter

Radial Basis Function (RBF) interpolation parameter class

Construct a parameter class object of radial basis function interpolation.

Parameters:
  • resolution (float) – the resolution used during interpolation
  • search_mode (SearchMode or str) – Search mode.
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.
  • expected_count (int) – The number of points expected to participate in the interpolation operation. When the search method is variable length search, it indicates the maximum number of points expected to participate in the operation.
  • max_point_count_in_node (int) – The maximum number of points to find in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.
  • max_point_count_for_interpolation (int) – Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, you can set the maximum number of points involved in interpolation
  • smooth (float) – smoothing coefficient, the value range is [0,1]
  • tension (float) – Tension coefficient
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
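A minimal sketch of constructing the RBF parameter object from the signature above; the values are illustrative, and the import is placed inside the function so the sketch reads standalone.

```python
def make_rbf_parameter():
    # Import inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import InterpolationRBFParameter

    # Illustrative values; smooth must lie in [0, 1].
    return InterpolationRBFParameter(
        resolution=100.0,
        search_mode='KDTREE_FIXED_COUNT',
        expected_count=12,
        smooth=0.1,   # smoothing coefficient
        tension=40)   # tension coefficient
```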
expected_count

int – The number of points expected to participate in the interpolation operation

max_point_count_for_interpolation

int – The maximum number of points involved in interpolation during block search

max_point_count_in_node

int – Maximum number of search points in a single block

search_mode

SearchMode – The way to find the points involved in the interpolation operation. KDTREE_FIXED_RADIUS is not supported

search_radius

float – Find the search radius of the points involved in the operation

set_expected_count(value)

Set the number of points expected to participate in the interpolation operation

Parameters:value (int) – Indicates the minimum number of samples expected to participate in the operation
Returns:self
Return type:InterpolationRBFParameter
set_max_point_count_for_interpolation(value)

Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, this sets the maximum number of points involved in interpolation.

Parameters:value (int) – Maximum number of points involved in interpolation during block search
Returns:self
Return type:InterpolationRBFParameter
set_max_point_count_in_node(value)

Set the maximum number of search points in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.

Parameters:value (int) – The maximum number of search points in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in the block
Returns:self
Return type:InterpolationRBFParameter
set_search_mode(value)

Set the way to find the points involved in the interpolation operation.

Parameters:value (SearchMode or str) – The way to find the points involved in the interpolation operation
Returns:self
Return type:InterpolationRBFParameter
set_search_radius(value)

Set the search radius for finding the points involved in the operation. The unit is the same as that of the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.

When the search mode is set to “variable length search” (KDTREE_FIXED_COUNT), a fixed number of sample points within the maximum search radius is used for interpolation. The maximum search radius is 0.2 times the length of the diagonal of the bounding rectangle of the point dataset.

Parameters:value (float) – The search radius for finding the points involved in the operation
Returns:self
Return type:InterpolationRBFParameter
set_smooth(value)

Set smooth coefficient

Parameters:value (float) – smoothness coefficient
Returns:self
Return type:InterpolationRBFParameter
set_tension(value)

Set tension coefficient

Parameters:value (float) – Tension coefficient
Returns:self
Return type:InterpolationRBFParameter
smooth

float – smoothness coefficient

tension

float – tension coefficient

iobjectspy.analyst.interpolate(input_data, parameter, z_value_field, pixel_format, z_value_scale=1.0, out_data=None, out_dataset_name=None, progress=None)

Interpolation analysis. This function interpolates discrete point data to obtain a raster dataset. Interpolation analysis uses limited sampling point data to predict the values around the sampling points, so as to grasp the overall distribution of data in the study area; the sampled discrete points then reflect not only the values at their own locations but also the value distribution of the surrounding area.

Why is interpolation required?

Geographic features are spatially correlated: things that are adjacent tend to be homogeneous, that is, to have the same or similar characteristics. For example, if it is raining on one side of a street, in most cases it is also raining on the other side; over a larger area, the climate of one township is similar to that of a neighboring township. Based on this reasoning, information at known locations can be used to indirectly obtain information about adjacent places. Interpolation analysis is based on this idea, which is also one of its important application values.

Interpolating sampling point data in an area to generate raster data is actually rasterizing the study area according to a given cell size (resolution). Each cell of the raster data corresponds to a region, and its value is calculated from the values of its neighboring sampling points by some interpolation method, so the values around the sampling points can be predicted and the value distribution of the entire area understood. The interpolation methods mainly include inverse distance weighting (IDW), Kriging, and radial basis function (RBF) interpolation. Interpolation analysis can predict unknown values of any geographic point data, such as elevation, rainfall, chemical concentration, noise level and so on.

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset or point record set that needs interpolation analysis
  • parameter (InterpolationParameter) – parameter information required by the interpolation method
  • z_value_field (str) – The name of the field that stores the value used for interpolation analysis. Interpolation analysis does not support text type fields.
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
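A minimal, hypothetical sketch of calling interpolate(). The UDB paths, the dataset name 'sample_points' and the field 'ELEV' are illustrative assumptions, not names from the library; the parameter object is the InterpolationKrigingParameter documented above. The imports sit inside the function so the sketch reads standalone.

```python
def run_interpolate():
    # Imports inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import interpolate, InterpolationKrigingParameter

    # Kriging parameters; the resolution is illustrative.
    param = InterpolationKrigingParameter(resolution=100.0)

    # 'E:/data.udb/sample_points' and 'ELEV' are assumed example names.
    return interpolate('E:/data.udb/sample_points',
                       parameter=param,
                       z_value_field='ELEV',
                       pixel_format='SINGLE',
                       out_data='E:/out.udb',
                       out_dataset_name='kriging_result')
```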

iobjectspy.analyst.interpolate_points(points, values, parameter, pixel_format, prj, out_data, z_value_scale=1.0, out_dataset_name=None, progress=None)

Perform interpolation analysis on the point array and return the analysis result

Parameters:
  • points (list[Point2D]) – point data that needs interpolation analysis
  • values (list[float]) – The values corresponding to the point array for interpolation analysis.
  • parameter (InterpolationParameter) – parameter information required by the interpolation method
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • prj (PrjCoordSys) – The coordinate system of the point array. The generated result dataset also refers to this coordinate system.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name
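A hypothetical sketch of interpolating a raw point list. It assumes Point2D and PrjCoordSys can be imported from the package root and that PrjCoordSys accepts an EPSG code; the coordinates, values and output path are illustrative. The imports sit inside the function so the sketch reads standalone.

```python
def run_interpolate_points():
    # Imports inside the function so the sketch can be read without
    # iobjectspy installed; import locations are assumptions.
    from iobjectspy import Point2D, PrjCoordSys
    from iobjectspy.analyst import interpolate_points, InterpolationKrigingParameter

    points = [Point2D(0, 0), Point2D(50, 20), Point2D(120, 80)]
    values = [12.5, 14.1, 13.2]        # one value per point
    param = InterpolationKrigingParameter(resolution=10.0)

    # PrjCoordSys(4326) (WGS84) is an assumed construction.
    return interpolate_points(points, values, param,
                              pixel_format='SINGLE',
                              prj=PrjCoordSys(4326),
                              out_data='E:/out.udb',
                              out_dataset_name='points_result')
```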

iobjectspy.analyst.idw_interpolate(input_data, z_value_field, pixel_format, resolution, search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, power=1, bounds=None, z_value_scale=1.0, out_data=None, out_dataset_name=None, progress=None)

Use the IDW interpolation method to interpolate a point dataset or record set. For details, refer to interpolate() and InterpolationIDWParameter.

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset or point record set that needs interpolation analysis
  • z_value_field (str) – The name of the field that stores the value used for interpolation analysis. Interpolation analysis does not support text type fields.
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • resolution (float) – the resolution used during interpolation
  • search_mode (SearchMode or str) – The way to find the points involved in the interpolation operation. QUADTREE is not supported
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range. If search_mode is set to KDTREE_FIXED_COUNT and a search range is also specified, a null value is assigned when the number of points within the search range is less than the specified count; when it is greater, the specified number of points nearest to the interpolation point are used for interpolation.
  • expected_count (int) – The number of points expected to participate in the interpolation operation. If search_mode is set to KDTREE_FIXED_RADIUS and the number of points participating in the interpolation is also specified, a null value is assigned when the number of points within the search range is less than the specified count.
  • power (int) – The power used in the distance-weight calculation. The lower the power, the smoother the interpolation result; the higher the power, the more detailed the result. This parameter must be greater than 0; if not specified, it defaults to 1.
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
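A hypothetical sketch of the IDW convenience function; the paths, dataset name and field name are illustrative assumptions, and the import sits inside the function so the sketch reads standalone.

```python
def run_idw():
    # Import inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import idw_interpolate

    # 'E:/data.udb/sample_points' and 'ELEV' are assumed example names.
    return idw_interpolate('E:/data.udb/sample_points',
                           z_value_field='ELEV',
                           pixel_format='SINGLE',
                           resolution=100.0,
                           search_mode='KDTREE_FIXED_COUNT',
                           expected_count=12,
                           power=2,   # higher power -> more detail
                           out_data='E:/out.udb',
                           out_dataset_name='idw_result')
```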

iobjectspy.analyst.density_interpolate(input_data, z_value_field, pixel_format, resolution, search_radius=0.0, expected_count=12, bounds=None, z_value_scale=1.0, out_data=None, out_dataset_name=None, progress=None)

Use the point density interpolation method to interpolate a point dataset or record set. For details, refer to interpolate() and InterpolationDensityParameter.

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset or point record set that needs interpolation analysis
  • z_value_field (str) – The name of the field that stores the value used for interpolation analysis. Interpolation analysis does not support text type fields.
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • resolution (float) – the resolution used during interpolation
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.
  • expected_count (int) – The number of points expected to participate in the interpolation operation
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
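A hypothetical sketch of the point density convenience function; the paths and field name are illustrative assumptions, and the import sits inside the function so the sketch reads standalone.

```python
def run_density():
    # Import inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import density_interpolate

    # 'E:/data.udb/sample_points' and 'ELEV' are assumed example names;
    # the search radius is in dataset units.
    return density_interpolate('E:/data.udb/sample_points',
                               z_value_field='ELEV',
                               pixel_format='SINGLE',
                               resolution=100.0,
                               search_radius=500.0,
                               out_data='E:/out.udb',
                               out_dataset_name='density_result')
```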

iobjectspy.analyst.kriging_interpolate(input_data, z_value_field, pixel_format, resolution, krighing_type='KRIGING', search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, max_point_count_in_node=50, max_point_count_for_interpolation=200, variogram_mode=VariogramMode.SPHERICAL, angle=0.0, mean=0.0, exponent=Exponent.EXP1, nugget=0.0, range_value=0.0, sill=0.0, bounds=None, z_value_scale=1.0, out_data=None, out_dataset_name=None, progress=None)

Use the Kriging interpolation method to interpolate a point dataset or record set. For details, refer to interpolate() and InterpolationKrigingParameter.

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset or point record set that needs interpolation analysis
  • z_value_field (str) – The name of the field that stores the value used for interpolation analysis. Interpolation analysis does not support text type fields.
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • resolution (float) – the resolution used during interpolation
  • krighing_type (InterpolationAlgorithmType or str) – The algorithm type of the interpolation analysis. Supports KRIGING, SimpleKRIGING and UniversalKRIGING; KRIGING is used by default.
  • search_mode (SearchMode or str) – Search mode.
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.
  • expected_count (int) – The number of points expected to participate in the interpolation operation. When the search method is variable length search, it indicates the maximum number of points expected to participate in the operation.
  • max_point_count_in_node (int) – The maximum number of points to find in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.
  • max_point_count_for_interpolation (int) – Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, you can set the maximum number of points involved in interpolation
  • variogram_mode (VariogramMode or str) – The semivariogram type for Kriging interpolation. The default value is VariogramMode.SPHERICAL
  • angle (float) – Rotation angle value in Kriging algorithm
  • mean (float) – The average value of the interpolation field, that is, the sum of the interpolation field values of the sampling points divided by the number of sampling points.
  • exponent (Exponent or str) – the order of the trend surface equation in the sample data used for interpolation
  • nugget (float) – Nugget effect value.
  • range_value (float) – Autocorrelation threshold.
  • sill (float) – sill value
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
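A hypothetical sketch of the Kriging convenience function. The paths and field name are illustrative assumptions, and the nugget/range/sill values are placeholders that would normally come from a fitted empirical semivariogram; the import sits inside the function so the sketch reads standalone.

```python
def run_kriging():
    # Import inside the function so the sketch can be read without
    # iobjectspy installed.
    from iobjectspy.analyst import kriging_interpolate

    # Ordinary Kriging with an explicit spherical model; nugget,
    # range_value and sill are illustrative placeholder values.
    return kriging_interpolate('E:/data.udb/sample_points',
                               z_value_field='ELEV',
                               pixel_format='SINGLE',
                               resolution=100.0,
                               krighing_type='KRIGING',   # API's spelling
                               variogram_mode='SPHERICAL',
                               nugget=0.1,
                               range_value=1200.0,
                               sill=2.5,
                               out_data='E:/out.udb',
                               out_dataset_name='kriging_result')
```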

iobjectspy.analyst.rbf_interpolate(input_data, z_value_field, pixel_format, resolution, search_mode=SearchMode.KDTREE_FIXED_COUNT, search_radius=0.0, expected_count=12, max_point_count_in_node=50, max_point_count_for_interpolation=200, smooth=0.100000001490116, tension=40, bounds=None, z_value_scale=1.0, out_data=None, out_dataset_name=None, progress=None)

Use the radial basis function (RBF) interpolation method to interpolate a point dataset or record set. For details, refer to interpolate() and InterpolationRBFParameter.

Parameters:
  • input_data (DatasetVector or str or Recordset) – point dataset or point record set that needs interpolation analysis
  • z_value_field (str) – The name of the field that stores the value used for interpolation analysis. Interpolation analysis does not support text type fields.
  • pixel_format (PixelFormat or str) – Specify the pixels stored in the result raster dataset, BIT64 is not supported
  • resolution (float) – the resolution used during interpolation
  • search_mode (SearchMode or str) – Search mode.
  • search_radius (float) – The search radius for finding the points involved in the operation, in the same unit as the point dataset (or the dataset to which the record set belongs) used for interpolation. The search radius determines the search range of the points involved in the calculation: when calculating the unknown value at a location, a circle centered at that location with radius search_radius is used, and the sampling points within it participate in the calculation, that is, the predicted value at the location is determined by the values of the sampling points within that range.
  • expected_count (int) – The number of points expected to participate in the interpolation operation. When the search method is variable length search, it indicates the maximum number of points expected to participate in the operation.
  • max_point_count_in_node (int) – The maximum number of points to find in a single block. When using QuadTree to find interpolation points, you can set the maximum number of points in a block.
  • max_point_count_for_interpolation (int) – Set the maximum number of points involved in interpolation during block search. Note that this value must be greater than zero. When using QuadTree to find interpolation points, you can set the maximum number of points involved in interpolation
  • smooth (float) – smoothing coefficient, the value range is [0,1]
  • tension (float) – Tension coefficient
  • bounds (Rectangle) – The range of interpolation analysis, used to determine the range of running results
  • z_value_scale (float) – The scaling ratio of the interpolation analysis value
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
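
The idea behind RBF interpolation can be sketched in pure Python. This is an illustration of the method itself, not iobjectspy's implementation; the function name and the tiny dense solver are hypothetical, and `smooth` here simply plays the role of a multiquadric shape parameter:

```python
import math

def rbf_interpolate_sketch(samples, query, smooth=0.1):
    """Interpolate a value at `query` from (x, y, z) samples with a
    multiquadric radial basis function. Illustrative sketch only."""
    def phi(r):
        # multiquadric kernel; `smooth` acts as the shape parameter
        return math.sqrt(r * r + smooth * smooth)

    n = len(samples)
    # Build the n x n system A @ w = z with A[i][j] = phi(|p_i - p_j|)
    a = [[phi(math.hypot(samples[i][0] - samples[j][0],
                         samples[i][1] - samples[j][1]))
          for j in range(n)] for i in range(n)]
    z = [s[2] for s in samples]

    # Solve by Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        z[col], z[pivot] = z[pivot], z[col]
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= f * a[col][k]
            z[row] -= f * z[col]
    w = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = z[row] - sum(a[row][k] * w[k] for k in range(row + 1, n))
        w[row] = s / a[row][row]

    # The interpolant is a weighted sum of kernels centred on the samples,
    # so it reproduces the sample values exactly at the sample points.
    return sum(w[i] * phi(math.hypot(query[0] - samples[i][0],
                                     query[1] - samples[i][1]))
               for i in range(n))
```

rbf_interpolate() applies this kind of interpolant cell by cell over the output raster, restricting the participating samples with search_mode, search_radius and expected_count.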

iobjectspy.analyst.vector_to_raster(input_data, value_field, clip_region=None, cell_size=None, pixel_format=PixelFormat.SINGLE, out_data=None, out_dataset_name=None, progress=None, no_value=-9999, is_all_touched=True)

Convert a vector dataset to a raster dataset by specifying the conversion parameter settings.

Parameters:
  • input_data (DatasetVector or str) – The vector dataset to be converted. Support point, line and area dataset
  • value_field (str) – The field for storing raster values in the vector dataset
  • clip_region (GeoRegion or Rectangle) – effective region for conversion
  • cell_size (float) – the cell size of the result raster dataset
  • pixel_format (PixelFormat or str) – If the vector data is converted to a raster dataset with pixel formats of UBIT1, UBIT4 and UBIT8, the objects with a value of 0 in the vector data will be lost in the result raster.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
  • no_value (float) – No value of raster dataset
  • is_all_touched (bool) – Whether to convert all cells touched by the polyline; the default is True. If False, Bresenham's rasterization method is used.
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
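
For point input the conversion amounts to mapping each point's coordinates to a cell index and burning its field value into the grid, with uncovered cells keeping no_value. A minimal sketch (hypothetical helper, not the library's implementation, which also handles lines and areas):

```python
def rasterize_points(points, bounds, cell_size, no_value=-9999.0):
    """Burn (x, y, value) points into a regular grid. Cells not covered
    by any point keep `no_value`. Illustrative sketch only."""
    left, bottom, right, top = bounds
    cols = int(round((right - left) / cell_size))
    rows = int(round((top - bottom) / cell_size))
    grid = [[no_value] * cols for _ in range(rows)]
    for x, y, value in points:
        col = int((x - left) / cell_size)
        row = int((top - y) / cell_size)   # row 0 is the top of the grid
        if 0 <= row < rows and 0 <= col < cols:
            grid[row][col] = value
    return grid
```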

iobjectspy.analyst.raster_to_vector(input_data, value_field, out_dataset_type=DatasetType.POINT, back_or_no_value=-9999, back_or_no_value_tolerance=0.0, specifiedvalue=None, specifiedvalue_tolerance=0.0, valid_region=None, is_thin_raster=True, smooth_method=None, smooth_degree=0.0, out_data=None, out_dataset_name=None, progress=None)

Convert the raster dataset to a vector dataset by specifying the conversion parameter settings.

Parameters:
  • input_data (DatasetGrid or DatasetImage or str) – The raster dataset or image dataset to be converted
  • value_field (str) – The field where the value is stored in the result vector dataset
  • out_dataset_type (DatasetType or str) – The result dataset type, supporting point, line and area datasets. When the result dataset type is a line dataset, is_thin_raster, smooth_method and smooth_degree are valid.
  • back_or_no_value (int or tuple) –

    Set the background color of the raster or a value indicating no value, which is only valid when the raster is converted to a vector. Allows the user to specify a value identifying cells that do not need to be converted:

    - When the converted raster data is a raster dataset, cells whose raster value equals the specified value are regarded as having no value and are not converted, while the original no-value cells of the raster are regarded as valid values and participate in the conversion.
    - When the converted raster data is an image dataset, cells whose raster value equals the specified value are regarded as the background color and do not participate in the conversion.

    It should be noted that the raster value in an image dataset represents a color or a color index value, depending on its pixel format (PixelFormat):

    - For image datasets in BIT32, UBIT32, RGBA, RGB and BIT16 format, the raster value corresponds to an RGB color; a tuple or an int can be used to represent the RGB or RGBA value.
    - For image datasets in UBIT8 and UBIT4 format, the raster value corresponds to a color index value, so the value set for this parameter should be the index value of the color regarded as the background color.

  • back_or_no_value_tolerance (int or float or tuple) –

    The tolerance of the raster background color or of the no-value, which is only valid when the raster is converted to a vector. Used together with back_or_no_value (which specifies the raster no-value or background color) to determine which values in the raster data will not be converted:

    - When the converted raster data is a raster dataset, if the value specified as no-value is a and the specified no-value tolerance is b, then cells whose raster value falls within [a-b, a+b] are regarded as having no value. Note that this tolerance applies to the user-specified no-value and has nothing to do with the original no-value in the raster.
    - When the converted raster data is an image dataset, the tolerance is a 32-bit integer value or a tuple, the tuple representing an RGB or RGBA value. The meaning of this value depends on the pixel format of the image dataset: for an image dataset whose raster value corresponds to an RGB color, this value is converted internally into three tolerance values corresponding to R, G and B.

    For example, if the color specified as the background color is (100,200,60) and the specified tolerance value is 329738, whose corresponding RGB value is (10,8,5), then all colors between (90,192,55) and (110,208,65) are treated as background colors. For an image dataset whose raster value is a color index value, the tolerance is the tolerance of the color index value, and raster values within the tolerance range are regarded as the background color.
  • specifiedvalue (int or float or tuple) – The specified raster value when the raster is converted to a vector by value. Only the raster with this value is converted to a vector.
  • specifiedvalue_tolerance (int or float or tuple) – the tolerance of the specified raster value when the raster is converted to a vector by value
  • valid_region (GeoRegion or Rectangle) – valid region for conversion
  • is_thin_raster (bool) – Whether to perform raster refinement before conversion.
  • smooth_method (SmoothMethod or str) – smooth method, only valid when the raster is converted to vector line data
  • smooth_degree (int) –

    Smoothness. The greater the smoothness value, the higher the smoothness of the resulting vector line. It is valid when smooth_method is not NONE. The effective value of the smoothness is related to the smoothing method; the smoothing methods include the B-spline method and the angle grinding method:

    - When the smoothing method is the B-spline method, the effective value of smoothness is an integer greater than or equal to 2, and the recommended value range is [2,10].
    - When the smoothing method is the angle grinding method, the smoothness represents the number of angle grindings in one smoothing process, and it is effective when set to an integer greater than or equal to 1.

  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
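
The per-channel tolerance decomposition described for back_or_no_value_tolerance can be sketched as follows. The R-in-the-low-byte packing order is an assumption inferred from the (100,200,60) / 329738 example above, and both helper names are hypothetical:

```python
def unpack_rgb_tolerance(value):
    """Split a 32-bit tolerance value into (R, G, B) channel tolerances.
    Packing order (R in the low byte) is inferred from the worked example."""
    return value & 0xFF, (value >> 8) & 0xFF, (value >> 16) & 0xFF

def is_background(color, back_color, tolerance):
    """True if `color` lies within the per-channel tolerance box around
    `back_color`, i.e. it would be treated as background and skipped."""
    tol = unpack_rgb_tolerance(tolerance)
    return all(abs(c - b) <= t for c, b, t in zip(color, back_color, tol))
```

With back_color (100,200,60) and tolerance 329738 this reproduces the documented range (90,192,55) to (110,208,65).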

iobjectspy.analyst.cost_distance(input_data, cost_grid, max_distance=-1.0, cell_size=None, out_data=None, out_distance_grid_name=None, out_direction_grid_name=None, out_allocation_grid_name=None, progress=None)

According to the given parameters, a cost distance grid, a cost direction grid and a cost allocation grid are generated.

In practical applications, the straight-line distance often cannot meet requirements. For example, suppose the straight-line distance from B to its nearest source A is the same as that from C to A. If traffic on the BA section is congested while traffic on the CA section is smooth, the time consumed must differ. In addition, the path corresponding to the straight-line distance to the nearest source is often not feasible: when obstacles such as rivers and mountains are encountered, a detour is needed, and the cost distance must be considered instead.

This method generates a corresponding cost distance grid, cost direction grid (optional) and cost allocation grid (optional) from the source dataset and the cost grid. The source data can be vector data (point, line, area) or raster data. For raster data, cells other than those identifying the source are required to have no value.

  • The value of the cost distance grid represents the minimum cost value from the cell to the nearest source (this can be any kind of cost factor, or a weighted combination of cost factors of interest). The nearest source is the source, among all sources, that costs the least to reach from the current cell. Cells with no value in the cost raster will still have no value in the output cost distance raster.

    The cost for a cell to reach the source is calculated by starting from the center of the cell, multiplying the distance that the least-cost path to the nearest source travels across each cell by the value of the corresponding cell in the cost grid, and accumulating these values along the path. The calculation of the cost distance is therefore related to the cell size and the cost grid. In the following schematic diagram, the cell size (cell_size) of the source raster and the cost raster are both 2, and the minimum-cost route from cell (2,1) to the source (0,0) is shown by the red line on the right:

    ../_images/CostDistance_1.png

    Then the minimum cost (that is, the cost distance) for cell (2,1) to reach the source is:

    ../_images/CostDistance_2.png
  • The value of the cost direction grid expresses the travel direction of the least-cost path from the cell to the nearest source. In the cost direction grid, there are eight possible directions of travel (true north, true south, true west, true east, northwest, southwest, southeast and northeast), and these eight directions are coded using the eight integers from 1 to 8, as shown below. Note that the cell where the source is located has a value of 0 in the cost direction grid, and cells with no value in the cost grid are assigned a value of 15 in the output cost direction grid.

../_images/CostDistance_3.png
  • The value of the cost allocation grid is the value of the cell's nearest source (when the source is a raster, it is the value of the nearest source; when the source is a vector object, it is the SMID of the nearest source), so from the cost allocation grid one can know the nearest source of each cell. Cells with no value in the cost grid will still have no value in the output cost allocation grid.

    The figure below is a schematic diagram of generating the cost distance. On the cost grid, blue arrows mark the travel route of each cell to its nearest source, and the value of the cost direction grid indicates the travel direction of the least-cost route from the current cell to the nearest source.

    ../_images/CostDistance_4.png

The figure below is an example of generating cost-distance rasters, where the source dataset is a point dataset and the cost raster is the reclassification result of the slope raster of the corresponding area. The cost distance raster, cost direction raster and cost allocation raster are generated.

../_images/CostDistance.png
Parameters:
  • input_data (DatasetVector or DatasetGrid or str) – The source dataset for generating the distance raster. The source refers to the research objects or features of interest, such as schools, roads, or fire hydrants; the dataset containing the sources is the source dataset. The source dataset can be a point, line or area dataset, or a raster dataset. In a raster dataset, cells with valid values are sources; cells with no value are considered to have no source.
  • cost_grid (DatasetGrid) – Cost grid. The grid value cannot be negative. The dataset is a raster dataset, and the value of each cell represents the unit cost when passing through this cell.
  • max_distance (float) – The maximum distance of the generated distance grid. If the shortest distance between a cell and its nearest source is greater than this value, that cell takes no value in the result dataset.
  • cell_size (float) – The resolution of the result dataset, which is an optional parameter for generating a distance grid
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_distance_grid_name (str) – The name of the result distance grid dataset. If the name is empty, a valid dataset name will be automatically obtained.
  • out_direction_grid_name (str) – the name of the direction raster dataset, if it is empty, no direction raster dataset will be generated
  • out_allocation_grid_name (str) – the name of the allocated raster dataset, if it is empty, the allocated raster dataset will not be generated
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

If the generation is successful, return the result datasets or a tuple of dataset names, where the first is the distance raster dataset, the second is the direction raster dataset, and the third is the allocation raster dataset. If the direction raster dataset name or the allocation raster dataset name is not set, the corresponding value is None

Return type:

tuple[DatasetGrid] or tuple[str]
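
The accumulation rule described above can be sketched with a standard shortest-path search over the grid. This is a hypothetical pure-Python illustration, not the library's implementation; it uses one common formulation in which each step costs the mean of the two cells' unit costs times the distance travelled:

```python
import heapq
import math

def cost_distance_sketch(cost, sources, cell_size=1.0):
    """Minimum accumulated cost from every cell to its nearest source.
    A step between adjacent cells costs the mean of their unit costs
    times the distance travelled (cell_size, or cell_size * sqrt(2)
    for a diagonal move). Illustrative sketch only."""
    rows, cols = len(cost), len(cost[0])
    dist = [[math.inf] * cols for _ in range(rows)]
    heap = []
    for r, c in sources:
        dist[r][c] = 0.0            # source cells cost nothing to reach
        heapq.heappush(heap, (0.0, r, c))
    moves = [(dr, dc, math.hypot(dr, dc) * cell_size)
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    while heap:                     # Dijkstra over the 8-connected grid
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue
        for dr, dc, step in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + step * (cost[r][c] + cost[nr][nc]) / 2.0
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

With a uniform cost of 1 and cell_size 2, an orthogonal neighbour of the source accumulates 2 and a diagonal neighbour 2·√2, matching the schematic above.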

iobjectspy.analyst.cost_path(input_data, distance_dataset, direction_dataset, compute_type, out_data=None, out_dataset_name=None, progress=None)

According to the cost distance grid and cost direction grid, analyze the shortest-path grid from each target to its nearest source. This method calculates the shortest path, that is, the minimum-cost path, from each target object to the nearest source based on the given target dataset and the cost distance grid and cost direction grid obtained from the cost_distance() function. This method does not need to specify the dataset where the source is located, because the location of the source is reflected in the distance grid and the direction grid, namely the cells with a grid value of 0. The generated shortest-path raster is a binary raster: cells with a value of 1 represent the path, and all other cells have a value of 0.

For example, take shopping malls (a point dataset) as sources and residential areas (a polygon dataset) as targets, and analyze how to reach the nearest shopping mall from each residential area. The process is: first generate a distance grid and a direction grid for the sources (shopping malls), then, using the residential areas as targets, obtain the shortest path from each residential area (target) to the nearest shopping mall (source) through shortest-path analysis. The shortest path has two meanings: with a straight-line distance grid and a straight-line direction grid, the path with the smallest straight-line distance is obtained; with a cost distance grid and a cost direction grid, the least costly path is obtained.

Note that in this method the input cost distance grid and cost direction grid must match, that is, both should be generated by the same call to the cost_distance() function. In addition, there are three ways to calculate the shortest path: pixel path, area path and single path. For their specific meanings, please refer to the ComputeType class.

Parameters:
  • input_data (DatasetVector or DatasetGrid or DatasetImage or str) – The dataset where the targets are located. It can be a point, line, area, or raster dataset. If it is raster data, cells other than those identifying the target are required to have no value.
  • distance_dataset (DatasetGrid or str) – Cost distance raster dataset
  • direction_dataset (DatasetGrid or str) – Cost direction raster dataset
  • compute_type (ComputeType or str) – the calculation method of the grid distance shortest path analysis
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
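
The core of the analysis is a traceback through the direction grid from a target cell until a source cell (code 0) is reached, marking visited cells with 1. A hypothetical sketch; the code-to-offset mapping below is an assumption for illustration, since the actual 1-8 encoding is defined by the product's figure:

```python
def trace_cost_path(direction, start):
    """Follow a direction grid from `start` back to the source (code 0),
    returning the binary path raster described above. Sketch only; the
    1-8 code-to-offset mapping here is assumed, not the documented one."""
    # assumed encoding: 1=N, 2=NE, 3=E, 4=SE, 5=S, 6=SW, 7=W, 8=NW
    offsets = {1: (-1, 0), 2: (-1, 1), 3: (0, 1), 4: (1, 1),
               5: (1, 0), 6: (1, -1), 7: (0, -1), 8: (-1, -1)}
    rows, cols = len(direction), len(direction[0])
    path = [[0] * cols for _ in range(rows)]
    r, c = start
    while True:
        path[r][c] = 1            # cells on the path get value 1
        code = direction[r][c]
        if code == 0:             # reached a source cell
            return path
        dr, dc = offsets[code]
        r, c = r + dr, c + dc
```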

iobjectspy.analyst.cost_path_line(source_point, target_point, cost_grid, smooth_method=None, smooth_degree=0, progress=None, barrier_regions=None)

According to the given source point, target point and cost grid, calculate the least costly path between the source point and the target point (a two-dimensional vector line object).

The figure below is an example of calculating the minimum cost path between two points. In this example, the reclassification result of the slope of the DEM grid is used as the cost grid to analyze the least costly path between the given source point and the target point.

../_images/CostPathLine.png
Parameters:
  • source_point (Point2D) – the specified source point
  • target_point (Point2D) – the specified target point
  • cost_grid (DatasetGrid) – Cost grid. The grid value cannot be negative. The dataset is a raster dataset, and the value of each cell represents the unit cost when passing through this cell.
  • smooth_method (SmoothMethod or str) – The method of smoothing the resulting route when calculating the shortest path between two points (source and target)
  • smooth_degree (int) – When calculating the shortest path between the two points (source and target), smooth the resulting route. The greater the smoothness value, the higher the smoothness of the resulting vector line. It is valid when smooth_method is not NONE. The effective value of the smoothness is related to the smoothing method; the smoothing methods include the B-spline method and the angle grinding method: when the smoothing method is the B-spline method, the effective value of smoothness is an integer greater than or equal to 2, and the recommended value range is [2,10]; when the smoothing method is the angle grinding method, the smoothness represents the number of angle grindings in one smoothing process, and it is effective when set to an integer greater than or equal to 1.
  • progress (function) – progress information processing function, please refer to StepEvent
  • barrier_regions (DatasetVector or str or GeoRegion or list[GeoRegion]) – Obstacle surface dataset or surface object, which will bypass the obstacle surface during analysis
Returns:

Return the line object representing the shortest path and the cost of the shortest path

Return type:

tuple[GeoLine,float]

iobjectspy.analyst.path_line(target_point, distance_dataset, direction_dataset, smooth_method=None, smooth_degree=0)

According to the distance grid and the direction grid, the shortest path (a two-dimensional vector line object) from the target point to the nearest source is analyzed. This method analyzes the shortest path from a given target point to the nearest source based on the distance grid and the direction grid. The distance grid and the direction grid can be a cost distance grid and a cost direction grid, or a surface distance grid and a surface direction grid.

- When the distance grid is a cost distance grid and the direction grid is a cost direction grid, this method calculates the least costly path. The cost distance grid and cost direction grid can be generated by the cost_distance() method; note that both must be the result of the same call.
- When the distance grid is a surface distance grid and the direction grid is a surface direction grid, this method calculates the shortest surface-distance path. The surface distance grid and surface direction grid can be generated by the surface_distance() method; similarly, both must be the result of the same call.

The location of the source can be reflected in the distance grid and the direction grid, that is, the cell with a grid value of 0. There can be one source or multiple sources. When there are multiple sources, the shortest path is the path from the destination point to its nearest source.

The following figure shows the source, surface grid, cost raster and target point. The cost raster is the result of reclassification after calculating the slope of the surface raster.

../_images/PathLine_2.png

Using the source and surface grid shown in the figure above, generate the surface distance grid and the surface direction grid, and then calculate the shortest surface-distance path from the target point to the nearest source; using the source and cost grid, generate the cost distance grid and cost direction grid, and then calculate the least costly path from the target point to the nearest source. The resulting paths are shown in the figure below:

../_images/PathLine_3.png
Parameters:
  • target_point (Point2D) – The specified target point.
  • distance_dataset (DatasetGrid) – The specified distance grid. It can be a cost distance grid or a surface distance grid.
  • direction_dataset (DatasetGrid) – The specified direction grid. Corresponding to the distance grid, it can be a cost direction grid or a surface direction grid.
  • smooth_method (SmoothMethod or str) – The method of smoothing the resulting route when calculating the shortest path between two points (source and target)
  • smooth_degree (int) – When calculating the shortest path between the two points (source and target), smooth the resulting route. The greater the smoothness value, the higher the smoothness of the resulting vector line. It is valid when smooth_method is not NONE. The effective value of the smoothness is related to the smoothing method; the smoothing methods include the B-spline method and the angle grinding method: when the smoothing method is the B-spline method, the effective value of smoothness is an integer greater than or equal to 2, and the recommended value range is [2,10]; when the smoothing method is the angle grinding method, the smoothness represents the number of angle grindings in one smoothing process, and it is effective when set to an integer greater than or equal to 1.
Returns:

Return the line object representing the shortest path and the cost of the shortest path

Return type:

tuple[GeoLine,float]

iobjectspy.analyst.straight_distance(input_data, max_distance=-1.0, cell_size=None, out_data=None, out_distance_grid_name=None, out_direction_grid_name=None, out_allocation_grid_name=None, progress=None)

According to the given parameters, a straight-line distance grid, a straight-line direction grid and a straight-line distribution grid are generated.

This method is used to generate the corresponding straight-line distance grid, straight-line direction grid (optional) and straight-line distribution grid (optional) for the source dataset. The areal extent of the three result datasets is consistent with the extent of the source dataset. The source data for generating the straight-line distance raster can be vector data (point, line, area) or raster data. For raster data, cells other than those identifying the source are required to have no value.

  • The value of the straight-line distance grid represents the Euclidean distance (i.e. the straight-line distance) from the cell to the nearest source. The nearest source is the source with the shortest straight-line distance from the current cell among all sources. For each cell, the length of the line connecting its center to the center of the source is the distance from the cell to the source.

The distance is calculated from the two legs of the right-angled triangle formed by the two centers, so the straight-line distance is related only to the cell size (i.e. resolution). The figure below is a schematic diagram of straight-line distance calculation, where the cell size (cell_size) of the source grid is 10.

../_images/StraightDistance_1.png

Then the distance L from the cell in the third row and third column to the source is:

../_images/StraightDistance_2.png
  • The value of the straight-line direction grid represents the azimuth angle from the cell to the nearest source, in degrees, with true east as 90 degrees, true south as 180 degrees, true west as 270 degrees, and true north as 360 degrees. The range is 0-360 degrees, and the cells corresponding to sources are assigned the value 0.
  • The value of the straight-line distribution grid is the value of the cell's nearest source (when the source is a raster, it is the value of the nearest source; when the source is a vector object, it is the SMID of the nearest source), so from the straight-line distribution grid one can know the nearest source of each cell.

The figure below is a schematic diagram of generating a straight line distance. The cell size is 2.

../_images/StraightDistance_3.png

The straight-line distance grid is usually used to analyze situations where the route passed has no obstacles or has equal cost. For example, when a rescue aircraft flies to the nearest hospital, there are no obstacles in the air, so every route has the same cost. The straight-line distance grid determines the distance from the location of the rescue aircraft to the surrounding hospitals; from the straight-line distribution grid, the hospital nearest to the rescue aircraft can be obtained; and the straight-line direction grid determines the direction of the nearest hospital from the location of the rescue aircraft.

However, in the example of a rescue vehicle driving to the nearest hospital, because there are various kinds of obstacles on the surface, the cost of different routes is not the same. In this case, the cost distance grid needs to be used for analysis; please refer to the cost_distance() method.

The following figure is an example of generating a straight-line distance grid, where the source dataset is a point dataset, and a straight-line distance grid, a straight-line direction grid, and a straight-line distribution grid are generated.

../_images/StraightDistance.png

Note: When the minimum bounds of the source dataset falls into one of the following special cases, the Bounds of the result dataset are determined according to these rules:

  • When the height and width of the Bounds of the source dataset are both 0 (for example, there is only one vector point), the height and width of the Bounds of the result dataset are both equal to whichever of the left boundary value (Left) and the bottom boundary value (Bottom) of the source dataset's Bounds has the smaller absolute value.
  • When the Bounds height of the source dataset is 0 but the width is not 0 (for example, there is only one horizontal line), the Bounds height and width of the result dataset are equal to the width of the source dataset Bounds.
  • When the Bounds width of the source dataset is 0 and the height is not 0 (for example, there is only one vertical line), the height and width of the Bounds of the result dataset are equal to the height of the source dataset Bounds.
Parameters:
  • input_data (DatasetVector or DatasetGrid or DatasetImage or str) – The source dataset for generating the distance raster. The source refers to the research objects or features of interest, such as schools, roads, or fire hydrants; the dataset containing the sources is the source dataset. The source dataset can be a point, line or area dataset, or a raster dataset. In a raster dataset, cells with valid values are sources; cells with no value are considered to have no source.
  • max_distance (float) – The maximum distance of the generated distance grid. If the shortest distance between a cell and its nearest source is greater than this value, that cell takes no value in the result dataset.
  • cell_size (float) – The resolution of the result dataset, which is an optional parameter for generating a distance grid
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_distance_grid_name (str) – The name of the result distance grid dataset. If the name is empty, a valid dataset name will be automatically obtained.
  • out_direction_grid_name (str) – the name of the direction raster dataset, if it is empty, no direction raster dataset will be generated
  • out_allocation_grid_name (str) – the name of the allocated raster dataset, if it is empty, the allocated raster dataset will not be generated
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

If the generation is successful, return the result datasets or a tuple of dataset names, where the first is the distance raster dataset, the second is the direction raster dataset, and the third is the allocation raster dataset. If the direction raster dataset name or the allocation raster dataset name is not set, the corresponding value is None

Return type:

tuple[DatasetGrid] or tuple[str]
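
The distance and azimuth conventions above can be sketched in pure Python (a hypothetical brute-force illustration, not the library's implementation, which works on datasets rather than in-memory grids):

```python
import math

def straight_distance_sketch(rows, cols, sources, cell_size=1.0,
                             max_distance=-1.0, no_value=-9999.0):
    """Straight-line distance from each cell centre to its nearest source
    cell, plus the azimuth convention described above (east = 90,
    south = 180, west = 270, north = 360; source cells get 0)."""
    dist = [[no_value] * cols for _ in range(rows)]
    direction = [[no_value] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # nearest source by Euclidean distance (brute force)
            sr, sc = min(sources,
                         key=lambda s: math.hypot(r - s[0], c - s[1]))
            d = math.hypot(r - sr, c - sc) * cell_size
            if 0 <= max_distance < d:
                continue                  # beyond max_distance: no value
            dist[r][c] = d
            if (r, c) == (sr, sc):
                direction[r][c] = 0.0     # the source cell itself
            else:
                # azimuth from the cell to the source, clockwise from
                # north; rows grow southward, so north is -row
                az = math.degrees(math.atan2(sc - c, r - sr)) % 360.0
                direction[r][c] = az if az > 0 else 360.0
    return dist, direction
```

With cell_size 10 and a single source, the cell two rows and two columns away from the source gets a distance of √(20² + 20²) ≈ 28.28, consistent with the schematic description above.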

iobjectspy.analyst.surface_distance(input_data, surface_grid_dataset, max_distance=-1.0, cell_size=None, max_upslope_degrees=90.0, max_downslope_degree=90.0, out_data=None, out_distance_grid_name=None, out_direction_grid_name=None, out_allocation_grid_name=None, progress=None)
According to the given parameters, a surface distance grid, a surface direction grid and a surface allocation grid are generated. This method generates corresponding surface distance grid, surface direction grid (optional) and surface allocation grid (optional) according to the source dataset and surface grid.
The source data can be vector data (point, line, area) or raster data. For raster data, cells other than those identifying the source are required to have no value.
  • The value of the surface distance grid represents the shortest surface distance from the cell to the nearest source along the surface grid. The nearest source is the source with the shortest surface distance from the current cell among all sources. Cells with no value in the surface raster will still have no value in the output surface distance raster. The surface distance d from the current cell (denoted g1) to the next cell (denoted g2) is calculated as:

    ../_images/SurfaceDistance_1.png

    Here, b is the difference between the grid value (i.e., elevation) of g1 and that of g2; a is the planar distance between the center points of g1 and g2. When g2 is one of the four cells directly above, below, to the left, or to the right of g1, a equals the cell size; when g2 is one of the four diagonally adjacent cells, a equals the cell size multiplied by the square root of 2.

    The distance value from the current cell to the nearest source is the surface distance along the shortest path. In the diagram below, the cell size (CellSize) of both the source grid and the surface grid is 1. The shortest surface path from cell (2,1) to the source (0,0) is shown by the red line in the right figure:

    ../_images/SurfaceDistance_2.png

    Then the shortest surface distance of cell (2,1) to the source is:

    ../_images/SurfaceDistance_3.png
  • The value of the surface direction grid expresses the direction of travel along the shortest surface-distance path from a cell to its nearest source. There are eight possible directions of travel (true north, true south, true west, true east, northwest, southwest, southeast, northeast), encoded with the integers 1 to 8, as shown in the figure below. Note that cells where a source is located have the value 0 in the surface direction grid, and cells with no value in the surface grid are assigned the value 15 in the output surface direction grid.

    ../_images/CostDistance_3.png
  • The value of the surface allocation grid is the value of the cell's nearest source (when the sources are raster cells, the nearest source's raster value; when the sources are vector objects, the nearest source's SMID), where the nearest source is the one with the shortest surface distance. Cells with no value in the surface raster still have no value in the output surface allocation raster. The figure below is a schematic diagram of the generated surface distance; on the surface grid, blue arrows mark the direction of travel from each cell to its nearest source, according to the result surface direction grid.

    ../_images/SurfaceDistance_4.png

From the above, combining the surface distance grid with the corresponding direction and allocation grids tells us, for each cell on the surface grid, which source is nearest, what the surface distance to it is, and how to reach it.
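The per-step formula described above can be sketched in plain Python. This is an illustration of the documented formula only, not the library's implementation; the function name is illustrative:

```python
import math

def step_surface_distance(z1, z2, cell_size, diagonal=False):
    """Surface distance between two adjacent cells: d = sqrt(a^2 + b^2),
    where b is the elevation difference and a is the planar distance
    between cell centers (cell_size, or cell_size * sqrt(2) for a
    diagonal neighbor)."""
    a = cell_size * math.sqrt(2) if diagonal else cell_size
    b = z2 - z1
    return math.hypot(a, b)

# Orthogonal neighbor, cell size 1, elevation difference 1: sqrt(2)
print(step_surface_distance(0.0, 1.0, 1.0))
# Diagonal neighbor, same elevation difference: sqrt(3)
print(step_surface_distance(0.0, 1.0, 1.0, diagonal=True))
```

The shortest surface distance to a source is then the sum of these per-step values along the shortest path.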

Note that you can specify a maximum upslope angle (max_upslope_degrees) and a maximum downslope angle (max_downslope_degree) when generating the surface distance, so that the search for the nearest source avoids cells whose slope angle exceeds the specified values. Traveling from the current cell to a next cell with higher elevation is an uphill step; the upslope angle is the angle between the direction of travel and the horizontal plane, and if it exceeds the given value, that direction of travel is not considered. Likewise, traveling to a next cell with lower elevation is a downhill step; the downslope angle is the angle between the direction of travel and the horizontal plane, and if it exceeds the given value, that direction is not considered either. If the current cell cannot reach any source because of these slope-angle limits, its value in the surface distance grid is no value, and likewise in the direction grid and the allocation grid.
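The slope-angle restriction can be illustrated with a small helper (a sketch of the documented rule; the function name and signature are illustrative only):

```python
import math

def step_allowed(z_from, z_to, planar_dist, max_up_deg=90.0, max_down_deg=90.0):
    """Check whether moving between two adjacent cells respects the
    maximum upslope/downslope angles (the angle between the direction
    of travel and the horizontal plane)."""
    angle = math.degrees(math.atan2(abs(z_to - z_from), planar_dist))
    if z_to > z_from:       # uphill step
        return angle <= max_up_deg
    if z_to < z_from:       # downhill step
        return angle <= max_down_deg
    return True             # level step is always allowed

# ~5.71 degrees uphill exceeds a 5-degree limit: not allowed
print(step_allowed(100.0, 101.0, 10.0, max_up_deg=5.0))
# ~5.71 degrees downhill is within a 10-degree limit: allowed
print(step_allowed(100.0, 99.0, 10.0, max_down_deg=10.0))
```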

The following figure is an example of generating a surface distance grid, where the source dataset is a point dataset, and the surface grid is a DEM grid of the corresponding area. The surface distance grid, surface direction grid and surface allocation grid are generated.

../_images/SurfaceDistance.png
Parameters:
  • input_data (DatasetVector or DatasetGrid or DatasetImage or str) – The source dataset for generating the distance raster. A source refers to the research objects or features of interest, such as schools, roads, or fire hydrants; the dataset containing the sources is the source dataset. It can be a point, line, or region dataset, or a raster dataset. For a raster dataset, cells with valid values are treated as sources, and cells with no value are not regarded as sources.
  • surface_grid_dataset (DatasetGrid or str) – surface grid
  • max_distance (float) – The maximum distance for the generated distance grid; cells whose computed distance exceeds this value are set to no value. That is, if the shortest distance from a cell to its nearest source is greater than this value, the cell has no value in the result dataset.
  • cell_size (float) – The resolution of the result dataset, which is an optional parameter for generating a distance grid
  • max_upslope_degrees (float) – Maximum uphill angle. The unit is degree, and the value range is greater than or equal to 0. The default value is 90 degrees, that is, the uphill angle is not considered. If the maximum uphill angle is specified, the uphill angle of the terrain will be considered when choosing the route. Moving from the current cell to the next cell with higher elevation is the uphill, and the uphill angle is the angle between the uphill direction and the horizontal plane. If the uphill angle is greater than the given value, this direction of travel will not be considered, that is, the given route will not pass through the area where the uphill angle is greater than this value. It is conceivable that there may be no eligible routes due to the setting of this value. In addition, because the slope is expressed in the range of 0 to 90 degrees, although it can be specified as a value greater than 90 degrees, the effect is the same as that of specifying 90 degrees, that is, the upward slope angle is not considered.
  • max_downslope_degree (float) – Set the maximum downslope angle. The unit is degree, and the value range is greater than or equal to 0. If the maximum downhill angle is specified, the downhill angle of the terrain will be considered when choosing the route. Traveling from the current cell to the next cell whose elevation is less than the current elevation is downhill, and the downhill angle is the angle between the downhill direction and the horizontal plane. If the downhill angle is greater than the given value, this direction of travel will not be considered, that is, the given route will not pass through the area where the downhill angle is greater than this value. It is conceivable that there may be no eligible routes due to the setting of this value. In addition, because the slope is expressed in the range of 0 to 90 degrees, although it can be specified as a value greater than 90 degrees, the effect is the same as the specified 90 degrees, that is, the downslope angle is not considered.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_distance_grid_name (str) – The name of the result distance grid dataset. If the name is empty, a valid dataset name will be automatically obtained.
  • out_direction_grid_name (str) – the name of the direction raster dataset, if it is empty, no direction raster dataset will be generated
  • out_allocation_grid_name (str) – the name of the allocated raster dataset, if it is empty, the allocated raster dataset will not be generated
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

If the generation is successful, returns the result datasets or a tuple of dataset names, where the first element is the distance raster dataset, the second is the direction raster dataset, and the third is the allocation raster dataset. If the direction raster dataset name or the allocation raster dataset name is not set, the corresponding element is None.

Return type:

tuple[DatasetGrid] or tuple[str]

iobjectspy.analyst.surface_path_line(source_point, target_point, surface_grid_dataset, max_upslope_degrees=90.0, max_downslope_degree=90.0, smooth_method=None, smooth_degree=0, progress=None, barrier_regions=None)

According to the given parameters, calculates the shortest surface-distance path (a two-dimensional vector line object) between the source point and the target point over the given surface grid.

Setting the maximum upslope angle (max_upslope_degrees) and the maximum downslope angle (max_downslope_degree) keeps the analyzed route from passing through overly steep terrain. Note, however, that if you specify these limits you may not get an analysis result; whether you do depends on the limit values and on the terrain expressed by the surface grid. The figure below shows the shortest surface-distance paths when the maximum upslope and downslope angles are set to 5 degrees, 10 degrees, and 90 degrees (i.e., no slope-angle restriction), respectively. Because of the slope-angle restriction, each shortest surface-distance path is found under the premise that the maximum up and down slope angles are not exceeded.

../_images/SurfacePathLine.png
Parameters:
  • source_point (Point2D) – The specified source point.
  • target_point (Point2D) – The specified target point.
  • surface_grid_dataset (DatasetGrid or str) – surface grid
  • max_upslope_degrees (float) – Maximum upslope angle. The unit is degree, and the value range is greater than or equal to 0. The default value is 90 degrees, that is, the uphill angle is not considered. If the maximum uphill angle is specified, the uphill angle of the terrain will be considered when choosing the route. Moving from the current cell to the next cell with higher elevation is the uphill, and the uphill angle is the angle between the uphill direction and the horizontal plane. If the uphill angle is greater than the given value, this direction of travel will not be considered, that is, the given route will not pass through the area where the uphill angle is greater than this value. It is conceivable that there may be no eligible routes due to the setting of this value. In addition, because the slope is expressed in the range of 0 to 90 degrees, although it can be specified as a value greater than 90 degrees, the effect is the same as that of specifying 90 degrees, that is, the upward slope angle is not considered.
  • max_downslope_degree (float) – Set the maximum downslope angle. The unit is degree, and the value range is greater than or equal to 0. If the maximum downhill angle is specified, the downhill angle of the terrain will be considered when choosing the route. Traveling from the current cell to the next cell whose elevation is less than the current elevation is downhill, and the downhill angle is the angle between the downhill direction and the horizontal plane. If the downhill angle is greater than the given value, this direction of travel will not be considered, that is, the given route will not pass through the area where the downhill angle is greater than this value. It is conceivable that there may be no eligible routes due to the setting of this value. In addition, since the slope is expressed in the range of 0 to 90 degrees, although it can be specified as a value greater than 90 degrees, the effect is the same as the specified 90 degrees, that is, the downslope angle is not considered.
  • smooth_method (SmoothMethod or str) – The method of smoothing the resulting route when calculating the shortest path between two points (source and target)
  • smooth_degree (int) – The degree of smoothing applied to the resulting route when calculating the shortest path between the two points (source and target). The greater the value, the smoother the resulting vector line. Valid only when smooth_method is not NONE. The effective value depends on the smoothing method, which can be the B-spline method or the angle-grinding method: for the B-spline method, the degree is an integer greater than or equal to 2, with a recommended range of [2,10]; for the angle-grinding method, the degree is the number of grinding passes in one smoothing operation and is valid as an integer greater than or equal to 1.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
  • barrier_regions (DatasetVector or GeoRegion or list[GeoRegion]) – Obstacle surface dataset or surface object, which will bypass the obstacle surface during analysis
Returns:

Return the line object representing the shortest path and the cost of the shortest path

Return type:

tuple[GeoLine,float]

iobjectspy.analyst.calculate_hill_shade(input_data, shadow_mode, azimuth, altitude_angle, z_factor, out_data=None, out_dataset_name=None, progress=None)

A three-dimensional shaded map is a raster map that conveys terrain relief by simulating the shading and cast shadows of the actual ground surface. A hypothetical light source illuminates the surface, and the gray value of each cell is derived from the slope and aspect information obtained from the raster data: slopes facing the light source get higher gray values, while slopes facing away get lower gray values and form shaded areas, visually conveying the relief of the actual surface. A hillshade map calculated from raster data often has a very realistic three-dimensional effect, hence the name three-dimensional shaded map.

../_images/CalculateHillShade.png

The three-dimensional shaded map has important value in describing the three-dimensional condition of the surface and terrain analysis. When other thematic information is superimposed on the three-dimensional shaded map, the application value and intuitive effect of the three-dimensional shaded map will be improved.

When generating a three-dimensional shaded image, you need to specify the position of the imaginary light source, which is determined by the azimuth and height angle of the light source. The azimuth angle determines the direction of the light source, and the elevation angle is the angle of inclination when the light source is illuminated. For example, when the azimuth angle of the light source is 315 degrees and the elevation angle is 45 degrees, its relative position to the ground is shown in the figure below.

../_images/CalculateHillShade_1.png

There are three rendering types of three-dimensional shaded maps: shadow, rendering effect, and a combination of the two, specified through the :py:class:.ShadowMode class.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset of the 3D shaded image to be generated
  • shadow_mode (ShadowMode or str) – The rendering type of the 3D shaded image
  • azimuth (float) –

    The azimuth angle of the specified light source, used to determine the direction of the light source. It is the angle measured clockwise from the true north direction line at the light source to the direction line toward the target, with a range of 0-360 degrees; true north is 0 degrees, increasing clockwise.

    ../_images/Azimuth.png
  • altitude_angle (float) –

    The altitude angle of the specified light source, used to determine the inclination of the light source. It is the angle between the line from the light source to the target and the horizontal plane, with a range of 0-90 degrees. When the altitude angle is 90 degrees, the light source shines vertically down on the ground.

    ../_images/AltitudeAngle.png
  • z_factor (float) – The specified elevation zoom factor. This value refers to the unit conversion coefficient of the grid value (Z coordinate, that is, the elevation value) relative to the X and Y coordinates in the grid. Usually, in calculations where X, Y, and Z are all involved, the elevation value needs to be multiplied by an elevation scaling factor to make the three units consistent. For example, the unit in the X and Y directions is meters, and the unit in the Z direction is feet. Since 1 foot is equal to 0.3048 meters, you need to specify a zoom factor of 0.3048. If it is set to 1.0, it means no scaling.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
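To illustrate how slope, aspect, and light-source position combine into a gray value, here is a widely used hillshade formulation in plain Python. It is an assumption that iobjectspy uses this exact formula, so treat it as a conceptual sketch only:

```python
import math

def hillshade_value(slope_deg, aspect_deg, azimuth=315.0, altitude=45.0):
    """Common hillshade formulation: a gray value in [0, 255] computed
    from cell slope and aspect plus the light source's azimuth and
    altitude angles (all in degrees)."""
    zenith = math.radians(90.0 - altitude)   # angle from vertical
    slope = math.radians(slope_deg)
    rel = math.radians(azimuth - aspect_deg)  # light direction vs. aspect
    v = (math.cos(zenith) * math.cos(slope)
         + math.sin(zenith) * math.sin(slope) * math.cos(rel))
    return max(0.0, 255.0 * v)

# A flat cell lit from a 45-degree altitude gets cos(45 deg) * 255 ~ 180:
print(round(hillshade_value(0.0, 0.0)))
# A slope facing the light (aspect 315) is brighter than one facing away:
print(hillshade_value(30.0, 315.0) > hillshade_value(30.0, 135.0))
```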

iobjectspy.analyst.calculate_slope(input_data, slope_type, z_factor, out_data=None, out_dataset_name=None, progress=None)

Calculate the slope and return the slope raster dataset, that is, the slope map. Slope is the angle formed between the tangent plane at a point on the ground surface and the horizontal plane; the larger the slope value, the steeper the terrain.

Note

When calculating the slope, the unit of the raster values (i.e., elevation) must be the same as the unit of the x and y coordinates. If they are inconsistent, they can be adjusted through the elevation scaling factor (the z_factor parameter of this method). Note, however, that when the conversion between the elevation unit and the coordinate unit cannot be expressed as a fixed factor, the data needs to be processed by other means. The most common case is a DEM grid in a geographic coordinate system, where the x and y coordinates are in degrees while the elevation values are in meters. In that case, it is recommended to project the DEM raster so that the x and y coordinates are converted to planar coordinates.

Parameters:
  • input_data (DatasetGrid or str) – the specified raster dataset to be calculated slope
  • slope_type (SlopeType or str) – unit type of slope
  • z_factor (float) – The specified elevation zoom factor. This value refers to the unit conversion coefficient of the grid value (Z coordinate, that is, the elevation value) relative to the X and Y coordinates in the grid. Usually, in calculations where X, Y, and Z are all involved, the elevation value needs to be multiplied by an elevation scaling factor to make the three units consistent. For example, the unit in the X and Y directions is meters, and the unit in the Z direction is feet. Since 1 foot is equal to 0.3048 meters, you need to specify a zoom factor of 0.3048. If it is set to 1.0, it means no scaling.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
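The role of z_factor and of the slope unit types can be illustrated for a single elevation gradient. This is a conceptual sketch of the documented unit handling, not the dataset-level computation; the function name is illustrative:

```python
import math

def slope_at(dz, run, z_factor=1.0, slope_type="degree"):
    """Slope from an elevation change dz (in Z units) over a horizontal
    run (in XY units); z_factor converts Z units into XY units."""
    ratio = (dz * z_factor) / run
    if slope_type == "degree":
        return math.degrees(math.atan(ratio))
    if slope_type == "radian":
        return math.atan(ratio)
    if slope_type == "percent":
        return ratio * 100.0
    raise ValueError(slope_type)

# Elevations in feet, coordinates in meters: z_factor = 0.3048.
print(round(slope_at(100.0, 100.0, z_factor=0.3048), 2))   # ~16.95 degrees
print(slope_at(30.48, 100.0, slope_type="percent"))        # 30.48 percent
```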

iobjectspy.analyst.calculate_aspect(input_data, out_data=None, out_dataset_name=None, progress=None)

Calculate the aspect and return the aspect raster dataset, that is, the aspect map. Aspect refers to the orientation of the slope surface, the steepest downslope direction at a given location on the terrain surface; it reflects the direction the slope faces. The aspect of a slope can be any direction from 0 to 360 degrees, so the result of the aspect calculation ranges from 0 to 360 degrees, measured clockwise from true north (0 degrees).
Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset of the aspect to be calculated
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
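The angular convention (degrees clockwise from true north) can be illustrated by converting a downslope direction vector into an aspect value. This sketch shows only the convention, not the library's raster computation:

```python
import math

def aspect_from_downslope(de, dn):
    """Aspect as degrees clockwise from true north (0-360), given the
    east (de) and north (dn) components of the steepest downslope
    direction."""
    return math.degrees(math.atan2(de, dn)) % 360.0

print(aspect_from_downslope(0.0, 1.0))   # 0.0: slope faces north
print(aspect_from_downslope(1.0, 0.0))   # ~90: faces east
print(aspect_from_downslope(0.0, -1.0))  # ~180: faces south
```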

iobjectspy.analyst.compute_point_aspect(input_data, specified_point)

Calculate the aspect of a specified point on a DEM grid. The aspect of the specified point is calculated in the same way as for the aspect map (the calculate_aspect() method): the 3 × 3 plane formed by the cell containing the point and its eight neighboring cells is used as the calculation unit, and the horizontal and vertical elevation change rates are computed by the third-order inverse-distance-squared weighted difference method to obtain the aspect. For more details, refer to the calculate_aspect() method.

Note

When the cell of the specified point has no value, the calculation result is -1, which is different from generating an aspect map; when the specified point is outside the range of the DEM grid dataset, the calculation result is -1.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset of the aspect to be calculated
  • specified_point (Point2D) – The specified geographic coordinate point.
Returns:

The aspect of the specified point. The unit is degrees.

Return type:

float

iobjectspy.analyst.compute_point_slope(input_data, specified_point, slope_type, z_factor)

Calculate the slope at the specified point on the DEM grid. The slope at a specified point on the DEM grid is calculated in the same way as the slope map (calculate_slope method). The 3 × 3 plane formed by the cell where the point is located and the eight adjacent cells around it is used as the calculation unit. The slope is calculated by calculating the horizontal elevation change rate and the vertical elevation change rate through the third-order inverse distance square weight difference method. For more introduction, please refer to the calculate_slope method.

Note

When the cell of the specified point has no value, the calculation result is -1, which is different from generating a slope map; when the specified point is outside the range of the DEM grid dataset, the calculation result is -1.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset whose slope is to be calculated
  • specified_point (Point2D) – The specified geographic coordinate point.
  • slope_type (SlopeType or str) – Specified slope unit type. It can be expressed in degrees, radians, or percentages. Taking the angle used as an example, the result range of the slope calculation is 0 to 90 degrees.
  • z_factor (float) – The specified elevation zoom factor. This value refers to the unit transformation coefficient of the grid value (Z coordinate, that is, the elevation value) relative to the X and Y coordinates in the DEM grid. Usually, in calculations where X, Y, and Z are all involved, the elevation value needs to be multiplied by an elevation scaling factor to make the three units consistent. For example, the unit in the X and Y directions is meters, and the unit in the Z direction is feet. Since 1 foot is equal to 0.3048 meters, you need to specify a zoom factor of 0.3048. If it is set to 1.0, it means no scaling.
Returns:

The slope at the specified point. The unit is the type specified by the type parameter.

Return type:

float
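The 3 × 3 calculation unit described above is commonly implemented with (1, 2, 1) weighted finite differences, often called Horn's method. Whether iobjectspy's third-order inverse-distance-squared weighting matches this exactly is an assumption, so the sketch below is purely illustrative:

```python
import math

def slope_from_window(z, cell_size, z_factor=1.0):
    """Slope (degrees) from a 3x3 elevation window using (1, 2, 1)
    weighted finite differences (Horn's method), a common form of
    third-order inverse-distance-squared weighting."""
    (a, b, c), (d, e, f), (g, h, i) = z
    dz_dx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8.0 * cell_size)
    dz_dy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8.0 * cell_size)
    rise = math.hypot(dz_dx, dz_dy) * z_factor
    return math.degrees(math.atan(rise))

# A uniform west-to-east ramp rising 1 unit per cell (cell size 1)
# has a 45-degree slope:
window = [[0, 1, 2],
          [0, 1, 2],
          [0, 1, 2]]
print(round(slope_from_window(window, 1.0)))
```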

iobjectspy.analyst.calculate_ortho_image(input_data, colors, no_value_color, out_data=None, out_dataset_name=None, progress=None)

Generate orthographic 3D images according to the given color set.

Orthoimages use digital differential correction technology: a plausible illumination intensity at the current point is derived from the elevations of neighboring cells, and orthographic correction is performed.

Parameters:
  • input_data (DatasetGrid or str) – The specified DEM grid of the 3D orthoimage to be calculated.
  • colors (Colors or dict[float,tuple]) – The set of colors used for the 3D projection. If a dict is given, it maps elevation values to color values. It is not necessary to list every raster value (elevation value) of the grid and its corresponding color in the elevation-color lookup table; for elevation values not listed, the color in the result image will be obtained by interpolation.
  • no_value_color (tuple or int) – color of non-value grid
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.compute_surface_area(input_data, region)

Calculate the surface area, that is, calculate the total surface area of the three-dimensional surface fitted by the DEM grid in the selected polygon area.

Parameters:
  • input_data (DatasetGrid or str) – Specifies the DEM grid of the surface area to be calculated.
  • region (GeoRegion) – the specified polygon used to calculate the surface area
Returns:

The value of the surface area. The unit is square meter. Return -1 to indicate that the calculation failed.

Return type:

float
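The intuition behind surface area can be shown with a single-cell approximation: the planar cell area divided by the cosine of the slope. This is a conceptual sketch, not the fitting method used by compute_surface_area:

```python
import math

def cell_surface_area(cell_size, slope_deg):
    """Approximate 3D surface area of one cell from its planar area and
    slope: planar_area / cos(slope). A flat cell keeps its planar area;
    steeper cells fit more surface into the same footprint."""
    return (cell_size ** 2) / math.cos(math.radians(slope_deg))

print(cell_surface_area(10.0, 0.0))          # 100.0 for a flat cell
print(round(cell_surface_area(10.0, 60.0)))  # a 60-degree slope doubles it
```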

iobjectspy.analyst.compute_surface_distance(input_data, line)

Calculate the grid surface distance, that is, calculate the surface distance along the specified line segment or polyline segment on the three-dimensional surface fitted by the DEM grid.

Note

- The distance measured on the surface lies along the curved surface, so it is larger than the corresponding planar value.
- When the line used for measurement exceeds the range of the DEM grid, the line object is clipped to the dataset's extent, and the surface distance is calculated for the part of the line within that extent.

Parameters:
  • input_data (DatasetGrid or str) – The specified DEM grid of the surface distance to be calculated.
  • line (GeoLine) – The two-dimensional line used to calculate the surface distance.
Returns:

The value of the surface distance. The unit is meters.

Return type:

float
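The idea of surface distance along a line can be sketched as the sum of 3D segment lengths over sampled (x, y, z) vertices. This is a conceptual illustration, not the library's sampling scheme:

```python
import math

def surface_length(points):
    """Surface distance along a polyline of (x, y, z) vertices: the sum
    of 3D segment lengths, always >= the planar 2D length."""
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(points, points[1:]):
        total += math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)
    return total

# A 100 m planar segment that climbs 30 m: sqrt(100^2 + 30^2) ~ 104.403 m
pts = [(0.0, 0.0, 0.0), (100.0, 0.0, 30.0)]
print(round(surface_length(pts), 3))
```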

iobjectspy.analyst.compute_surface_volume(input_data, region, base_value)

Calculate the surface volume, that is, calculate the volume in the space between the three-dimensional surface fitted by the DEM grid in the selected polygon area and a reference plane.

Parameters:
  • input_data (DatasetGrid or str) – DEM grid of the volume to be calculated.
  • region (GeoRegion) – The polygon used to calculate the volume.
  • base_value (float) – The value of the base plane. The unit is the same as the grid value unit of the DEM grid to be calculated.
Returns:

The calculated volume between the fitted surface and the datum plane.

Return type:

float
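The surface volume can be approximated per cell as (grid value - base value) multiplied by the cell area. The sketch below illustrates this idea in plain Python and is not the library's exact algorithm:

```python
def surface_volume(values, cell_size, base_value):
    """Approximate volume between a DEM surface and a horizontal base
    plane: sum over cells of (z - base) * cell area. Cells with no
    value (None) are skipped."""
    cell_area = cell_size ** 2
    return sum((z - base_value) * cell_area
               for row in values for z in row if z is not None)

# Three valid cells of 10 x 10 m, 2 / 1 / 0.5 units above the base plane:
dem = [[12.0, 11.0],
       [10.5, None]]
print(surface_volume(dem, 10.0, 10.0))  # (2 + 1 + 0.5) * 100 = 350.0
```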

iobjectspy.analyst.divide_math_analyst(first_operand, second_operand, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Grid division operation. Divide the raster values of the two input raster datasets pixel by pixel. For the specific use of raster algebraic operations, refer to: py:meth:expression_math_analyst

If both input raster datasets have an integer pixel type (PixelFormat), an integer result dataset is output; otherwise, a floating-point result dataset is output. If the pixel type precision of the two input raster datasets differs, the pixel type of the result dataset matches the higher precision of the two.

Parameters:
  • first_operand (DatasetGrid or str) – The specified first raster dataset.
  • second_operand (DatasetGrid or str) – The specified second raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the dataset involved in the operation are inconsistent, the intersection of the ranges of all dataset will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
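The pixel-by-pixel semantics and the integer/floating-point output rule can be sketched over nested lists. The integer-division rounding here is an assumption made for illustration; the documentation only specifies the result pixel type:

```python
def divide_grids(a, b, no_value=None):
    """Pixel-by-pixel division of two equally sized grids. If every
    valid value in both inputs is an integer, the result is integer;
    otherwise it is floating point. No-value cells propagate, as do
    divisions by zero."""
    all_int = all(isinstance(v, int) for row in a + b for v in row
                  if v is not None)
    out = []
    for ra, rb in zip(a, b):
        row = []
        for va, vb in zip(ra, rb):
            if va is None or vb is None or vb == 0:
                row.append(no_value)
            else:
                row.append(va // vb if all_int else va / vb)
        out.append(row)
    return out

print(divide_grids([[6, 7]], [[2, 2]]))      # [[3, 3]]: integer result
print(divide_grids([[6.0, 7.0]], [[2, 2]]))  # [[3.0, 3.5]]: float result
```

The addition, subtraction, and multiplication operations below follow the same pixel-by-pixel pattern and the same output-type rule.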

iobjectspy.analyst.plus_math_analyst(first_operand, second_operand, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Grid addition operation. Add the raster values of the two input raster datasets pixel by pixel. For the specific use of raster algebraic operations, refer to: py:meth:expression_math_analyst

If both input raster datasets have an integer pixel type (PixelFormat), an integer result dataset is output; otherwise, a floating-point result dataset is output. If the pixel type precision of the two input raster datasets differs, the pixel type of the result dataset matches the higher precision of the two.

Parameters:
  • first_operand (DatasetGrid or str) – The specified first raster dataset.
  • second_operand (DatasetGrid or str) – The specified second raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the datasets involved in the operation are inconsistent, the intersection of the ranges of all datasets will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.minus_math_analyst(first_operand, second_operand, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Raster subtraction operation. Subtract the raster values of the second dataset from those of the first raster dataset, cell by cell. For this operation the order of the input raster datasets matters: different orders usually produce different results. For the specific use of raster algebraic operations, refer to: py:meth:expression_math_analyst

If both input raster datasets have an integer pixel type (PixelFormat), an integer result dataset is output; otherwise, a floating-point result dataset is output. If the pixel type precision of the two input raster datasets differs, the pixel type of the result dataset matches the higher precision of the two.

Parameters:
  • first_operand (DatasetGrid or str) – The specified first raster dataset.
  • second_operand (DatasetGrid or str) – The specified second raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the dataset involved in the operation are inconsistent, the intersection of the ranges of all dataset will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.multiply_math_analyst(first_operand, second_operand, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Raster multiplication operation. Multiplies the raster values of the two input raster datasets cell by cell. For the specific use of raster algebraic operations, refer to: py:meth:expression_math_analyst

If both input raster datasets have an integer pixel type (PixelFormat), an integer result dataset is output; otherwise a floating-point result dataset is output. If the pixel-type precisions of the two input raster datasets differ, the pixel type of the result dataset matches the higher precision of the two.

Parameters:
  • first_operand (DatasetGrid or str) – The specified first raster dataset.
  • second_operand (DatasetGrid or str) – The specified second raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the dataset involved in the operation are inconsistent, the intersection of the ranges of all dataset will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.to_float_math_analyst(input_data, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Raster floating-point conversion. Converts the raster values of the input raster dataset to floating point. If the input raster values are double-precision floating point, the result raster values are converted to single-precision floating point.

Parameters:
  • input_data (DatasetGrid or str) – The specified input raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the dataset involved in the operation are inconsistent, the intersection of the ranges of all dataset will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.to_int_math_analyst(input_data, user_region=None, out_data=None, out_dataset_name=None, progress=None)

Raster truncation operation. Truncates the raster values of the input raster dataset: the decimal part of each raster value is removed and only the integer part is kept. If the input raster values are already of an integer type, the result is identical to the input.

Parameters:
  • input_data (DatasetGrid or str) – The specified input raster dataset.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the dataset involved in the operation are inconsistent, the intersection of the ranges of all dataset will be used as the calculation area.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
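The truncation rule above (drop the decimal part, keep the integer) corresponds to rounding toward zero, which Python's `math.trunc` implements; note it differs from `floor` for negative values. A quick sketch of the per-cell rule, as an illustration only:

```python
import math

# Truncation keeps only the integer part, i.e. rounds toward zero,
# which differs from floor() for negative values: trunc(-3.7) == -3, floor(-3.7) == -4.
values = [3.7, -3.7, 5.0, -0.2]
truncated = [math.trunc(v) for v in values]
print(truncated)  # [3, -3, 5, 0]
```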

iobjectspy.analyst.expression_math_analyst(expression, pixel_format, out_data, is_ingore_no_value=True, user_region=None, out_dataset_name=None, progress=None)

Raster algebra operation. Provides mathematical and functional operations on one or more raster datasets.

The idea of raster algebra operations is to analyze geographic features and phenomena in space from an algebraic viewpoint. In essence, mathematical and functional operations are performed on one or more raster datasets (DatasetGrid): each cell value of the result raster is calculated, according to algebraic rules, from the values of the input cells at the same position.

Many raster analysis functions are built on raster algebra operations. As the core of raster analysis, they are versatile and can help solve many kinds of practical problems. For example, to compute the cut-and-fill volume of a construction project, subtract the DEM rasters from before and after the project to obtain the elevation difference, then multiply each cell value of the result raster by the actual area each cell represents to obtain the fill and excavation volume. As another example, to extract the areas of the country whose average annual rainfall in 2000 was between 20 mm and 50 mm, apply the relational expression “20 < average annual rainfall < 50” to the annual average rainfall raster data.

There are two main ways to perform raster algebra operations through this type of method:

-Use the basic calculation methods provided by this module. Six basic operation methods are provided: plus (addition), minus (subtraction), multiply (multiplication), divide (division), to_int (truncation) and to_float (floating-point conversion). These methods perform arithmetic on the corresponding cell values of one or more raster datasets. Relatively simple calculations, such as (A/B)-(A/C), can be completed by calling these methods multiple times.
-Execute calculation expressions. Expressions can apply not only operators but also functions to one or more raster datasets. Operators include arithmetic operators, relational operators and Boolean operators: arithmetic operators include addition (+), subtraction (-), multiplication (*) and division (/); Boolean operators include and (And), or (Or), exclusive or (Xor) and not (Not); relational operators include =, <, >, <>, >= and <=. Note that Boolean and relational operations have three possible outputs: true = 1, false = 0, and no value (if any input cell has no value, the result is no value).

In addition, 21 commonly used function operations are supported, as shown in the figure below:

../_images/MathAnalyst_Function.png

Raster algebraic operation expressions support custom raster operations. Through custom expressions, arithmetic operations, conditional operations, logical operations, function operations (common functions, trigonometric functions) and compound operations can be performed. A raster algebraic operation expression must follow these rules:

-The operation expression should be a string of the form:

[DatasourceAlias1.Raster1] + [DatasourceAlias2.Raster2] Use “[datasource alias.dataset name]” to specify a raster dataset participating in the operation; note that the name must be enclosed in square brackets.

-Raster algebraic operations support four arithmetic operators (“+”, “-”, “*”, “/”), conditional operators (“>”, “>=”, “<”, “<=”, “<>”, “==”), logical operators (“|”, “&”, “Not()”, “^”) and some common mathematical functions (“abs()”, “acos()”, “asin()”, “atan()”, “acot()”, “cos()”, “cosh()”, “cot()”, “exp()”, “floor()”, “mod(,)”, “ln()”, “log()”, “pow(,)”, “sin()”, “sinh()”, “sqrt()”, “tan()”, “tanh()”, “Isnull()”, “Con(,,)”, “Pick(,,,..)”).
-Functions in an expression can be nested. The raster produced by a conditional operator (such as greater than or less than) contains only binary values: cells that satisfy the condition become 1 and cells that do not become 0. To use other values for the satisfying and non-satisfying cells, use the conditional function Con(,,). For example: “Con(IsNull([SURFACE_ANALYST.Dem3]),100,Con([SURFACE_ANALYST.Dem3]> 100,[SURFACE_ANALYST.Dem3],-9999))” means: in the raster dataset Dem3 in the datasource aliased SURFACE_ANALYST, change no-value cells to 100; among the remaining cells, values greater than 100 stay unchanged and values less than or equal to 100 become -9999.
-If negative values appear in the raster calculation, add parentheses, for example: [DatasourceAlias1.Raster1]-([DatasourceAlias2.Raster2]).
-An operand connected by an operator can be a raster dataset, a number or a mathematical function.
-The argument of a mathematical function can be a numeric value, a dataset, or an operation expression over one or more datasets.
-The expression must be a single line without carriage returns.
-The expression must contain at least one input raster dataset.

Note

-If the pixel types (PixelFormat) of the two datasets involved in the calculation differ, the pixel type of the result dataset matches the higher precision of the two. For example, if one is a 32-bit integer and the other is single-precision floating point, then after an addition the pixel type of the result dataset is single-precision floating point.
-For no-value cells in a raster dataset: if no value is ignored, the result is no value regardless of the operation; if no value is not ignored, the no-value cell participates in the operation. For example, when adding raster datasets A and B, if a cell in A has the no-value value -9999 and the corresponding cell in B has the value 3000, then with no value not ignored the resulting cell value is -6999.

Parameters:
  • expression (str) – Custom raster operation expression.
  • pixel_format (PixelFormat or str) – The pixel format of the specified result dataset. Note that if the accuracy of the specified pixel type is lower than the accuracy of the pixel type of the raster dataset involved in the operation, the operation result may be incorrect.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • is_ingore_no_value (bool) – Whether to ignore no-value raster cells. True means no-value cells are ignored, that is, they do not participate in the operation.
  • user_region (GeoRegion) – The valid calculation region specified by the user. If it is None, it means that all areas are calculated. If the ranges of the datasets involved in the operation are inconsistent, the intersection of the ranges of all datasets will be used as the calculation area.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetGrid or str
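The nested Con(IsNull(...), ...) expression discussed above can be paraphrased per cell in plain Python to make its branching explicit. This is a sketch of the expression semantics only, not the iobjectspy evaluator; using `None` as the no-value marker is an assumption of this illustration:

```python
NO_VALUE = None  # stand-in for a no-value cell in this sketch

def evaluate(cell):
    """Con(IsNull(Dem3), 100, Con(Dem3 > 100, Dem3, -9999)) applied to one cell."""
    if cell is NO_VALUE:          # IsNull([SURFACE_ANALYST.Dem3])
        return 100                # no-value cells become 100
    if cell > 100:                # [SURFACE_ANALYST.Dem3] > 100
        return cell               # values greater than 100 stay unchanged
    return -9999                  # values less than or equal to 100 become -9999

print([evaluate(c) for c in [NO_VALUE, 250, 100, 42]])  # [100, 250, -9999, -9999]
```

Note how nesting one Con(,,) inside another replaces the plain 1/0 output of a bare conditional operator with arbitrary chosen values.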

class iobjectspy.analyst.StatisticsField(source_field=None, stat_type=None, result_field=None)

Bases: object

Statistics on the field. Mainly used for: py:meth:summary_points

Initialization object

Parameters:
  • source_field (str) – the name of the field being counted
  • stat_type (StatisticsFieldType or str) – stat type
  • result_field (str) – result field name
result_field

str – result field name

set_result_field(value)

Set the result field name

Parameters:value (str) – result field name
Returns:self
Return type:StatisticsField
set_source_field(value)

Set the name of the field to be counted

Parameters:value (str) – field name
Returns:self
Return type:StatisticsField
set_stat_type(value)

Set field statistics type

Parameters:value (StatisticsFieldType or str) – field statistics type
Returns:self
Return type:StatisticsField
source_field

str – the name of the field to be counted

stat_type

StatisticsFieldType – field statistics type
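Each set_* method above returns self, so a statistics field can be configured fluently in one chain. A minimal sketch of that builder pattern with a toy stand-in class (not the actual StatisticsField; the field and type names are placeholders):

```python
class FieldStat:
    """Toy stand-in mirroring the chained-setter style of StatisticsField."""
    def __init__(self):
        self.source_field = None
        self.stat_type = None
        self.result_field = None

    def set_source_field(self, value):
        self.source_field = value
        return self  # returning self enables method chaining

    def set_stat_type(self, value):
        self.stat_type = value
        return self

    def set_result_field(self, value):
        self.result_field = value
        return self

# One chained expression configures all three properties.
stat = FieldStat().set_source_field('population').set_stat_type('SUM').set_result_field('pop_sum')
print(stat.source_field, stat.stat_type, stat.result_field)
```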

iobjectspy.analyst.create_line_one_side_multi_buffer(input_data, radius, is_left, unit=None, segment=24, is_save_attributes=True, is_union_result=False, is_ring=True, out_data=None, out_dataset_name='BufferResult', progress=None)

Create single-sided multiple buffers for a vector line dataset. For an introduction to buffers, please refer to: py:meth:create_buffer. Single-sided multiple buffers of a line means that multiple buffers are generated on one side of the line object. The left side is the left side along the direction of the line object's node sequence, and the right side is the right side of that direction.

../_images/LineOneSideMultiBuffer.png
Parameters:
  • input_data (DatasetVector or Recordset) – The specified source vector dataset for creating multiple buffers. Only support line dataset or line record set
  • radius (list[float] or tuple[float] or str) – The specified multiple buffer radius list. The unit is specified by the unit parameter.
  • is_left (bool) – Whether to generate the left buffer. Set to True to generate a buffer on the left side of the line, otherwise generate a buffer on the right side.
  • unit (BufferRadiusUnit) – The specified buffer radius unit.
  • segment (int) – The specified number of segments used to fit arc edges.
  • is_save_attributes (bool) – Whether to preserve the field attributes of the objects in the buffer analysis. This parameter is invalid when the result buffers are merged, that is, it takes effect only when is_union_result is False.
  • is_union_result (bool) – Whether to merge the buffers, that is, whether to merge all the buffer areas generated by each object of the source data and return.
  • is_ring (bool) – Whether to generate a ring buffer. Set to True, when generating multiple buffers, the outer buffer is a ring-shaped area adjacent to the inner data; set to False, the outer buffer is an area containing the inner data.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
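Which side counts as “left” follows the node sequence direction, as described above; for a single directed segment the side of a point can be determined with a 2D cross product. This is an illustration of the convention only, not part of the iobjectspy API:

```python
def side_of_segment(start, end, point):
    """Return 'left', 'right' or 'on' for point relative to the directed segment start -> end."""
    (x1, y1), (x2, y2), (px, py) = start, end, point
    # 2D cross product of (end - start) with (point - start):
    # positive means the point is to the left of the travel direction.
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    if cross > 0:
        return 'left'
    if cross < 0:
        return 'right'
    return 'on'

# Traveling east along the x-axis, a point above the segment is on the left.
print(side_of_segment((0, 0), (10, 0), (5, 3)))   # left
print(side_of_segment((0, 0), (10, 0), (5, -3)))  # right
```

Reversing the node sequence reverses the travel direction, so the same point switches sides, which is why is_left is defined relative to node order.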

iobjectspy.analyst.create_multi_buffer(input_data, radius, unit=None, segment=24, is_save_attributes=True, is_union_result=False, is_ring=True, out_data=None, out_dataset_name='BufferResult', progress=None)

Create multiple buffers for vector datasets. For buffer introduction, please refer to: py:meth:create_buffer

Parameters:
  • input_data (DatasetVector or Recordset) – The specified source vector dataset or record set for creating multiple buffers. Point, line and area datasets and network datasets are supported. For a network dataset, the edges in it are buffered.
  • radius (list[float] or tuple[float]) – The specified multiple buffer radius list. The unit is specified by the unit parameter.
  • unit (BufferRadiusUnit or str) – The specified buffer radius unit.
  • segment (int) – The specified number of segments used to fit arc edges.
  • is_save_attributes (bool) – Whether to preserve the field attributes of the objects in the buffer analysis. This parameter is invalid when the result buffers are merged, that is, it takes effect only when is_union_result is False.
  • is_union_result (bool) – Whether to merge the buffers, that is, whether to merge all the buffer areas generated by each object of the source data and return.
  • is_ring (bool) – Whether to generate a ring buffer. Set to True, when generating multiple buffers, the outer buffer is a ring-shaped area adjacent to the inner data; set to False, the outer buffer is an area containing the inner data.
  • out_data (Datasource) – The datasource to store the result data
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.compute_min_distance(source, reference, min_distance, max_distance, out_data=None, out_dataset_name=None, progress=None)

Closest distance calculation. Calculates, for each object in the “calculated record set”, the minimum distance (the closest distance) to all objects within the query range in the “reference record set”, and saves the closest-distance information to a new attribute table dataset. Each object in the “calculated record set” is called a “computed object”, and each object within the query range in the “reference record set” is called a “reference object”. The result of the calculation is a pure attribute table dataset that records the distance from each “computed object” to its nearest “reference object”, stored in three attribute fields: Source_ID (the SMID of the “computed object”); Point_ID, Line_ID or Region_ID depending on the type of the reference object (the SMID of the “reference object”); and Distance (the distance between the two). If a computed object has multiple reference objects at the same shortest distance, multiple corresponding records are added to the attribute table.

  • Supported data types

    The “calculated record set” only supports two-dimensional point record sets. The “reference record set” can be a record set obtained from a two-dimensional point, line or area dataset, or from a two-dimensional network dataset. From a two-dimensional network dataset you can obtain either a record set of edges or a record set of nodes (obtained from a sub-dataset of the network dataset); both can be used as the “reference record set”, to find the nearest edge or the nearest node respectively.

    The “calculated record set” and the “reference record set” can be the same record set, or different record sets queried from the same dataset. In both cases, the distance from an object to itself is not calculated.

  • Query range

    The query range is composed of a minimum distance and a maximum distance specified by the user. It is used to filter out “reference objects” that do not participate in the calculation: starting from the “computed object”, only “reference objects” whose distance lies between the minimum distance and the maximum distance (inclusive) participate in the calculation. If the query range is set from 0 to -1, the closest distance to all objects in the “reference record set” is calculated.

    As shown in the figure below, the red dot comes from the “calculated record set”, the squares come from the “reference record set”, and the pink area represents the query range. Only the blue squares within the query range participate in the closest-distance calculation; that is, the result in this example contains only the SMIDs and the distance value of the red dot and the closest blue square.

    ../_images/ComputeDistance.png
  • Precautions:

    • The dataset to which “calculated record set” and “reference record set” belong must have the same coordinate system.

    • As shown in the figure below, the distance from a point to a line object is the minimum distance from the computed point to the entire line object, that is, the shortest distance between the computed point and any point on the line; similarly, the distance from a point to an area object is the minimum distance from the computed point to the entire boundary of the area object.

      ../_images/ComputeDistance_1.png
    • When calculating the distance between two objects, the distance is 0 if one contains or (partially) overlaps the other. For example, if a point object lies on a line object, the distance between the two is 0.

Parameters:
  • source (DatasetVector or Recordset or str) – The specified record set to be calculated. Only supports two-dimensional point record set and dataset
  • reference (DatasetVector or Recordset or str) – The specified reference record set. Support two-dimensional point, line, area record set and dataset
  • min_distance (float) – The minimum distance of the specified query range. The value range is greater than or equal to 0. The unit is the same as the unit of the dataset to which the calculated record set belongs.
  • max_distance (float) – The maximum distance of the specified query range. The value must be greater than 0, or -1. When set to -1, the maximum distance is not limited. The unit is the same as that of the dataset to which the calculated record set belongs.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The specified datasource used to store the result attribute table dataset.
  • out_dataset_name (str) – The name of the specified result attribute table dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector
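The query-range filtering described above (minimum/maximum distance bounds, with -1 meaning unlimited) can be sketched for point-to-point distances. This is a pure-Python illustration of the rule; the real function works on record sets and also supports line and area references:

```python
import math

def closest_in_range(computed, references, min_dist, max_dist):
    """For each computed point, the nearest reference point whose distance lies in [min_dist, max_dist]."""
    results = []
    for cx, cy in computed:
        best = None
        for rx, ry in references:
            d = math.hypot(rx - cx, ry - cy)
            if d < min_dist:
                continue            # closer than the query range: filtered out
            if max_dist != -1 and d > max_dist:
                continue            # farther than the query range (-1 = unlimited)
            if best is None or d < best:
                best = d
        results.append(best)        # None when no reference falls in the range
    return results

print(closest_in_range([(0, 0)], [(3, 4), (6, 8)], 0, -1))   # [5.0]
print(closest_in_range([(0, 0)], [(3, 4), (6, 8)], 6, -1))   # [10.0] - the 5.0 neighbor is filtered
```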

iobjectspy.analyst.compute_range_distance(source, reference, min_distance, max_distance, out_data=None, out_dataset_name=None, progress=None)

Range distance calculation. Calculate the distance from each object in the “calculated record set” to each object in the query range in the “reference record set”, and save the distance information to a new attribute table dataset.

This function calculates the distance from each object in record set A to each object within the query range in record set B. Record set A is called the “calculated record set” and its objects are called “computed objects”; record set B is called the “reference record set” and its objects are called “reference objects”. The two can be the same record set, or different record sets queried from the same dataset; in both cases, the distance from an object to itself is not calculated.

The query range is composed of a minimum distance and a maximum distance. It is used to filter out “reference objects” that do not participate in the calculation: starting from the “computed object”, only “reference objects” whose distance lies between the minimum distance and the maximum distance (inclusive) participate in the calculation.

As shown in the figure below, the red dot is the “computed object”, the squares are the “reference objects”, and the pink area represents the query range. Only the blue squares within the query range participate in the distance calculation; that is, the result in this example contains only the SMIDs and distance values of the red dot and the blue squares in the pink area.

../_images/ComputeDistance.png

The result of the range distance calculation is a pure attribute table dataset that records the distance information from “computed objects” to “reference objects”, stored in three attribute fields: Source_ID (the SMID of the “computed object”); Point_ID, Line_ID or Region_ID depending on the type of the reference object (the SMID of the “reference object”); and Distance (the distance between the two).

Precautions:

  • The dataset to which “calculated record set” and “reference record set” belong must have the same coordinate system.

  • As shown in the figure below, the distance from a point to a line object is the minimum distance from the computed point to the entire line object, that is, the shortest distance between the computed point and any point on the line; similarly, the distance from a point to an area object is the minimum distance from the computed point to the entire boundary of the area object.

    ../_images/ComputeDistance_1.png
  • When calculating the distance between two objects, the distance is 0 if one contains or (partially) overlaps the other. For example, if a point object lies on a line object, the distance between the two is 0.

Parameters:
  • source (DatasetVector or Recordset or str) – The specified record set to be calculated. Only supports two-dimensional point record set or dataset
  • reference (DatasetVector or Recordset or str) – The specified reference record set. Only supports 2D point, line, area record set or dataset
  • min_distance (float) – The minimum distance of the specified query range. The value range is greater than or equal to 0. The unit is the same as the unit of the dataset to which the calculated record set belongs.
  • max_distance (float) – The maximum distance of the specified query range. The value range is greater than or equal to 0, and must be greater than or equal to the minimum distance. The unit is the same as the unit of the dataset to which the calculated record set belongs.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The specified datasource used to store the result attribute table dataset.
  • out_dataset_name (str) – The name of the specified result attribute table dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector

iobjectspy.analyst.integrate(source_dataset, tolerance, unit=None, precision_orders=None, progress=None)

Performs data integration on the dataset; the integration process includes node snapping and point insertion. It is similar in function to py:func:.preprocess, which can also handle topological errors in the data; the difference is that data integration iterates repeatedly until no topological errors remain in the data (that is, until no node snapping or point insertion is needed).

>>> ds = open_datasource('E:/data.udb')
>>> integrate(ds['building'], 1.0e-6)
True
>>> integrate(ds['street'], 1.0, 'meter')
True
Parameters:
  • source_dataset (DatasetVector or str) – the data set being processed
  • tolerance (float) – node tolerance
  • unit (Unit or str) – node tolerance unit, when it is None, the data set coordinate system unit is used. If the data set coordinate system is a projected coordinate system, angle units are prohibited.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

Return True if successful, otherwise False

Return type:

bool

iobjectspy.analyst.eliminate(source, region_tolerance, vertex_tolerance, is_delete_single_region=False, progress=None, group_fields=None, priority_fields=None)

Fragmented polygon merging, that is, the polygons in the dataset smaller than the specified area are merged into adjacent polygons. Currently only supports merging broken polygons into the adjacent polygon with the largest area.

In the process of data production and processing, or after overlaying imprecise data, some fragmented, useless polygons may be generated, called broken polygons. You can merge these broken polygons into adjacent polygons with the “merge broken polygons” function, or delete isolated broken polygons (polygons that neither intersect nor touch other polygons) to simplify the data.

Generally, polygons whose area is much smaller than that of other objects in the dataset are considered “broken polygons”, typically on the order of one millionth of the largest area in the same dataset, but the minimum polygon tolerance can be set according to the needs of the actual research.
In the data shown in the figure below, there are many useless broken polygons on the boundaries of larger polygons.
../_images/Eliminate_1.png

The figure below is the result of “merging broken polygons” on the data. Compared with the figure above, it can be seen that the broken polygons have been merged into the adjacent larger polygons.

../_images/Eliminate_2.png

Note

  • This method applies when two polygons share a common boundary; the common boundary is removed after processing.
  • After merging broken polygons, the number of objects in the dataset may be reduced.
Parameters:
  • source (DatasetVector or str) – The specified dataset to be merged by broken polygons. Only supports vector 2D surface datasets. Specifying other types of datasets will throw an exception.
  • region_tolerance (float) – The specified minimum polygon tolerance. The unit is the same as the area calculated by the system (SMAREA field). Compare the value of the SMAREA field with the tolerance value, and polygons smaller than this value will be eliminated. The value range is greater than or equal to 0. Specifying a value less than 0 will throw an exception.
  • vertex_tolerance (float) – The specified node tolerance. The unit is the same as the unit of the dataset for merging broken polygons. If the distance between two nodes is less than this tolerance value, the two nodes will be automatically merged into one node during the merging process. The value range is greater than or equal to 0. Specifying a value less than 0 will throw an exception.
  • is_delete_single_region (bool) – Specify whether to delete isolated small polygons. If true, the isolated small polygon will be deleted, otherwise it will not be deleted.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
  • group_fields (list[str] or tuple[str] or str) – group fields, only polygons with the same field value can be merged
  • priority_fields (list[str] or tuple[str] or str) –

    Priority fields of the merged object, valid when the grouping field is not empty. The user can specify multiple priority fields or none; if priority fields are specified, they are compared in order. When the value of a priority field of the merged polygon equals that of an adjacent polygon, it is merged into that polygon; if not equal, the value of the next priority field is compared. If all priority field values are unequal, it is merged into the adjacent polygon with the largest area by default.

    For example, the user specifies three priority fields of A, B, and C
    -When the value of field A of the merged polygon F1 equals the value of field A of the adjacent object F2, F1 is merged into F2.
    -If the values of field A are not equal, the values of field B are compared. However, if the value of field B of F1 equals that of adjacent object F2 while the value of field A of F1 equals that of F3, F1 is merged into F3, because the first field has higher priority.
    -If two objects F2 and F3 both have field A values equal to that of F1, the polygon with the largest area is used by default; that is, if Area(F2) > Area(F3), F1 is merged into F2, otherwise into F3.

    When the priority field is empty, the maximum area principle is used, that is, the small polygons (the merged polygons) will be merged into the polygon with the largest area.

Returns:

Return True if the integration is successful, False if it fails

Return type:

bool
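The default largest-area merge rule can be sketched with polygons reduced to ids, areas and an adjacency table. This is an illustration of the rule only; the real eliminate works on geometry, tolerances and attribute fields:

```python
def eliminate_small(areas, neighbors, min_area):
    """Merge each polygon smaller than min_area into its largest-area neighbor.

    areas: dict polygon id -> area; neighbors: dict polygon id -> list of adjacent ids.
    Returns a dict mapping each merged small polygon to its target polygon.
    """
    merged_into = {}
    for pid, area in areas.items():
        if area >= min_area:
            continue                      # not a broken polygon
        adjacent = neighbors.get(pid, [])
        if not adjacent:
            continue                      # isolated broken polygon: kept (or deleted on request)
        merged_into[pid] = max(adjacent, key=lambda n: areas[n])
    return merged_into

areas = {1: 1000.0, 2: 400.0, 3: 2.5}     # polygon id -> area (SMAREA stand-in)
neighbors = {3: [1, 2]}                   # polygon 3 touches polygons 1 and 2
print(eliminate_small(areas, neighbors, 10.0))  # {3: 1} - merged into the largest neighbor
```

After the merge, polygon 3 disappears and its geometry joins polygon 1, which is why the object count of the dataset may drop.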

iobjectspy.analyst.eliminate_specified_regions(source, small_region_ids, vertex_tolerance, exclude_region_ids=None, group_fields=None, priority_fields=None, is_max_border=False, progress=None)

Specify the ID of the polygon to be merged, and perform the operation of merging broken polygons. For related introduction about merging broken polygons, please refer to:py:meth:.eliminate for details.

Parameters:
  • source (DatasetVector or str) – The specified dataset to be merged by broken polygons. Only supports vector 2D surface datasets. Specifying other types of datasets will throw an exception.
  • small_region_ids (int or list[int] or tuple[int]) – Specify the IDs of the small polygons to be merged. If a specified object finds a neighboring object that meets the requirements, it is merged into that neighbor and the small polygon is deleted.
  • vertex_tolerance (float) – The specified vertex tolerance, in the same unit as the dataset in which broken polygons are merged. If the distance between two vertices is less than this tolerance, the merge process automatically merges them into one vertex. The value must be greater than or equal to 0; specifying a value less than 0 throws an exception.
  • exclude_region_ids (int or list[int] or tuple[int]) – Specify the IDs of the polygons to be excluded, that is, the IDs of the objects that are not involved in the operation.
  • group_fields (list[str] or tuple[str] or str) – Grouping fields; only polygons with identical field values can be merged.
  • priority_fields (list[str] or tuple[str] or str) –

    Priority fields of the merged object; valid only when the grouping fields are not empty. The user may specify multiple priority fields, or none. If priority fields are specified, they are compared in order: when the value of a priority field of the polygon being merged equals that of an adjacent polygon, it is merged into that polygon; if not equal, the next priority field is compared. If no priority field values match, the polygon is merged by default into the adjacent polygon with the largest area, or the one with the longest common boundary (see is_max_border).

    For example, suppose the user specifies three priority fields A, B, and C:
    - When the value of field A of the merged polygon F1 equals the value of field A of the adjacent object F2, F1 is merged into F2.
    - If the values of field A are not equal, the values of field B are compared. If the value of field B of F1 equals that of the adjacent object F2, but the value of field A of F1 also equals that of F3, then F1 is merged into F3, because an earlier field has higher priority.
    - If two objects F2 and F3 both have field A values equal to F1's, the polygon with the largest area or the one with the longest common boundary is used by default.

    When the priority fields are empty, the maximum-area principle applies, that is, small polygons (the polygons being merged) are merged into the adjacent polygon with the largest area or the one with the longest common boundary.

  • is_max_border (bool) – Sets whether to merge objects using the maximum-boundary method: if True, the specified small polygons are merged into the adjacent polygon with the longest common boundary; if False, they are merged into the adjacent polygon with the largest area.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Returns True if the merge succeeds, False otherwise

Return type:

bool
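
The is_max_border fallback can be sketched as follows. This is an illustration only; `pick_neighbor` is a hypothetical helper, and the shared border lengths are assumed to be precomputed.

```python
def pick_neighbor(neighbors, is_max_border):
    """Select the neighbor to merge into when no priority field matches.

    Each neighbor is a dict with "area" and "shared_border" (the length of
    its common boundary with the small polygon being merged).
    """
    key = "shared_border" if is_max_border else "area"
    return max(neighbors, key=lambda n: n[key])

n1 = {"id": 2, "area": 9.0, "shared_border": 1.0}
n2 = {"id": 3, "area": 4.0, "shared_border": 6.0}
# is_max_border=True favors n2 (longest boundary); False favors n1 (largest area).
by_border = pick_neighbor([n1, n2], True)
by_area = pick_neighbor([n1, n2], False)
```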

iobjectspy.analyst.edge_match(source, target, edge_match_mode, tolerance=None, is_union=False, edge_match_line=None, out_data=None, out_dataset_name=None, progress=None)

Automatically performs edge matching between two two-dimensional line datasets along the map sheet border.

Parameters:
  • source (DatasetVector) – The edge matching source dataset. It can only be a two-dimensional line dataset.
  • target (DatasetVector) – The edge matching target dataset. It can only be a two-dimensional line dataset with the same coordinate system as the source dataset.
  • edge_match_mode (EdgeMatchMode or str) – Edge matching mode.
  • tolerance (float) – The edge matching tolerance. The unit is the same as that of the dataset being matched.
  • is_union (bool) – Whether to merge the edges.
  • edge_match_line (GeoLine) – The edge line used for edge matching. When the edge matching mode is EdgeMatchMode.THE_INTERSECTION, it is used to compute the intersection position; if it is not set, the intersection is computed automatically from the dataset extent. When an edge line is set, the endpoints of the matched objects are moved as close to the edge line as possible.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource in which the edge matching result dataset is stored.
  • out_dataset_name (str) – The name of the edge matching result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

If a result dataset is specified and edge matching succeeds, the result dataset object or dataset name is returned. If no result dataset is specified, no result dataset is generated and a boolean indicating whether the matching succeeded is returned.

Return type:

DatasetVector or str or bool
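
The core idea of tolerance-based edge matching can be shown with a toy sketch: endpoints of lines from the two sheets that lie within the tolerance of each other are snapped to a common point. This is an illustration only, not iobjectspy's implementation (which supports several EdgeMatchMode strategies, not just the midpoint).

```python
from math import hypot

def snap_endpoints(src_end, tgt_end, tolerance):
    """Snap two line endpoints to their midpoint if they are within
    the tolerance of each other; otherwise leave both unchanged."""
    (x1, y1), (x2, y2) = src_end, tgt_end
    if hypot(x2 - x1, y2 - y1) <= tolerance:
        mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        return mid, mid
    return src_end, tgt_end

# Endpoints 0.6 apart with tolerance 1.0 are joined at a common point.
a, b = snap_endpoints((0.0, 0.0), (0.6, 0.0), 1.0)
```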

iobjectspy.analyst.region_to_center_line(region_data, out_data=None, out_dataset_name=None, progress=None)

Extracts the center line of a region dataset or record set; this is generally used to extract the center line of a river.

This method extracts the center line of region objects. If a region contains island holes, the extraction bypasses them along the shortest path, as shown below.

../_images/RegionToCenterLine_1.png

If the region object is not a simple strip but has a bifurcated structure, the extracted center line is the longest branch, as shown below.

../_images/RegionToCenterLine_2.png
Parameters:
  • region_data (Recordset or DatasetVector) – The specified region record set or region dataset of the center line to be extracted
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource information or datasource object
  • out_dataset_name (str) – the name of the result centerline dataset
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset object or result dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.dual_line_to_center_line(source_line, max_width, min_width, out_data=None, out_dataset_name=None, progress=None)

Extracts the center line from a double-line record set or dataset according to the given width. This function is generally used to extract the center line of a double-line road or river. The double lines are required to be continuous and parallel, or nearly so. The extraction effect is shown below.

../_images/DualLineToCenterLine.png

Note

  • Double lines are generally two-line roads or two-line rivers, which can be line data or surface data.
  • The max_width and min_width parameters specify the maximum and minimum width of the double lines in the record set; the center line is extracted only where the double-line width lies between the two. For portions wider than the maximum width, no center line is extracted but the double lines are retained; portions narrower than the minimum width are discarded.
  • For complex intersections of double-line roads or rivers, such as five- or six-way junctions, or where the maximum and minimum double-line widths differ greatly, the extracted result may not be ideal.
Parameters:
  • source_line (DatasetVector or Recordset or str) – The specified double-line record set or dataset. It is required to be a line or region dataset or record set.
  • max_width (float) – The maximum width of the specified double line. Requires a value greater than 0. The unit is the same as the dataset to which the double-line record set belongs.
  • min_width (float) – The minimum width of the specified double line. Requires a value greater than or equal to 0. The unit is the same as the dataset to which the double-line record set belongs.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The specified datasource used to store the result centerline dataset.
  • out_dataset_name (str) – The name of the specified result centerline dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset object or result dataset name

Return type:

DatasetVector or str
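
For two parallel polylines with matched vertices, the center line is simply the vertex-wise average of the two sides. The sketch below is a toy illustration of that idea only; real double-line extraction must also handle the width limits and bifurcations described above.

```python
def center_line(left, right):
    """Average corresponding vertices of two parallel polylines.

    Assumes both lines have the same number of vertices, listed in the
    same direction; vertices are (x, y) tuples.
    """
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(left, right)]

left = [(0.0, 0.0), (10.0, 0.0)]
right = [(0.0, 4.0), (10.0, 4.0)]
# Center line of a straight double line of width 4 runs midway between the sides.
mid = center_line(left, right)
```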

iobjectspy.analyst.grid_extract_isoline(extracted_grid, interval, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

It is used to extract contour lines from a raster dataset and save the result as a dataset.

Contours are smooth curves or polylines connecting a series of points with the same value, such as elevation contours or isotherms. The distribution of contours reflects how values change over the raster surface: the denser the contours, the more drastic the change. For elevation contours, for example, denser lines mean a steeper slope and sparser lines a gentler one. By extracting contours, locations with equal elevation, temperature, precipitation, and so on can be found, and the contour distribution also reveals areas of steep and gentle change.

As shown below, the upper image is DEM raster data of a certain area, and the lower image shows the contours extracted from it. DEM raster data stores elevation in each raster cell, and each cell has a size determined by the resolution of the raster data, that is, the area of actual ground each cell represents. Raster data therefore cannot accurately reflect the elevation at every location, whereas vector data has a clear advantage in this respect. Extracting contours from raster data turns it into vector data, which highlights the details of the data for easier analysis: from contour data one can clearly distinguish steep from gentle terrain, and ridges from valleys.

../_images/SurfaceAnalyst_1.png ../_images/SurfaceAnalyst_2.png

SuperMap provides two methods to extract isolines:

  • Extract equidistant contours by setting datum_value and interval. This method computes which elevation contours to extract in both directions from the reference value, at the given interval. For example, for DEM raster data with an elevation range of 15-165, a reference value of 50 and an interval of 20 give extracted contour elevations of 30, 50, 70, 90, 110, 130, and 150.
  • Specify a set of Z values through the expected_z_values parameter; only the contours/surfaces whose elevation is in the set are extracted. For example, for DEM raster data with an elevation range of 0-1000 and a Z value set of [20, 300, 800], the result contains only the three isolines at 20, 300, and 800, or the isosurfaces they form.
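
The two modes above can be sketched as a small pure-Python helper that enumerates which contour values would be extracted. This is an illustration only, not an iobjectspy API; it mirrors the rule that an explicit Z value set overrides the datum/interval pair.

```python
import math

def contour_levels(z_min, z_max, datum_value, interval, expected_z_values=None):
    """Return the contour values that would be extracted.

    If expected_z_values is given it wins outright; otherwise levels are
    laid out in both directions from datum_value at the given interval,
    keeping those inside the data's value range [z_min, z_max].
    """
    if expected_z_values:
        return sorted(v for v in expected_z_values if z_min <= v <= z_max)
    # Smallest offset (in whole intervals from datum_value) at or above z_min.
    k = math.ceil((z_min - datum_value) / interval)
    levels = []
    while datum_value + k * interval <= z_max:
        levels.append(datum_value + k * interval)
        k += 1
    return levels

# Elevation range 15-165, datum 50, interval 20 -> 30, 50, ..., 150.
levels = contour_levels(15, 165, 50, 20)
```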

Note

  • If both of the above are set, only expected_z_values takes effect, that is, only the contours with the specified values are extracted. Therefore, to extract equidistant contours, do not set expected_z_values.
Parameters:
  • extracted_grid (DatasetGrid or str) – The specified raster dataset to be extracted.
  • interval (float) – The contour interval, that is, the interval value between two isolines; must be greater than 0.
  • datum_value (float) –

    Sets the reference value of the contours. The reference value and interval jointly determine which elevation contours are extracted. The reference value serves as the initial value for generating contours and is counted outwards in both directions at the given interval, so it is not necessarily the minimum contour value. For example, for DEM raster data with an elevation range of 220-1550, a reference value of 500 and an interval of 50 give a minimum contour value of 250 and a maximum of 1550.

    When expected_z_values are set at the same time, only the values set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.

  • expected_z_values (list[float] or str) – The set of Z values of expected analysis results. The Z value collection stores a series of values, which is the value of the contour to be extracted. That is, only the contours whose elevation values are in the Z value set will be extracted. When datum_value is set at the same time, only the value set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.
  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contour lines, the final contour data extracted can be simplified. The resampling method that SuperMap uses when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and the value is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no sampling is performed to ensure the correct result, but by setting reasonable parameters, the execution speed can be accelerated. The larger the tolerance value, the fewer the control points at the boundary of the contour, and the intersection of the contour may occur at this time. Therefore, it is recommended that users first use the default values to extract contours.

  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Set the smoothness of the isoline or isosurface. A smoothness of 0 or 1 means no smoothing is performed. The larger the value, the higher the smoothness. When extracting contour lines, smoothness can be set freely
  • clip_region (GeoRegion) – The specified clip region object. If you do not need to trim the operation result, you can use the None value to replace this parameter.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, it will directly return a list of contour objects.
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or the name of the dataset obtained by extracting the contour, or a list of contour objects.

Return type:

DatasetVector or str or list[GeoLine]

iobjectspy.analyst.grid_extract_isoregion(extracted_grid, interval, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

Used to extract isosurfaces from raster datasets.

SuperMap provides two methods to extract isosurfaces:

  • Extract equidistant isosurfaces by setting datum_value and interval. This method computes which elevation values to extract in both directions from the reference value, at the given interval. For example, for DEM raster data with an elevation range of 15-165, a reference value of 50 and an interval of 20 give extracted values of 30, 50, 70, 90, 110, 130, and 150.
  • Specify a set of Z values through the expected_z_values parameter; only the isosurfaces whose elevation is in the set are extracted. For example, for DEM raster data with an elevation range of 0-1000 and a Z value set of [20, 300, 800], the result contains only the isosurfaces formed by 20, 300, and 800.

Note

  • If both of the above are set, only expected_z_values takes effect, that is, only the isosurfaces with the specified values are extracted. Therefore, to extract equidistant isosurfaces, do not set expected_z_values.
Parameters:
  • extracted_grid (DatasetGrid or str) – The specified raster dataset to be extracted.
  • interval (float) – equivalence interval, equivalence interval is the interval value between two isolines, must be greater than 0
  • datum_value (float) –

    Sets the reference value of the contours. The reference value and interval jointly determine which elevation isosurfaces are extracted. The reference value serves as the initial value for generating the contour lines and is counted outwards in both directions at the given interval, so it is not necessarily the minimum isosurface value. For example, for DEM raster data with an elevation range of 220-1550, a reference value of 500 and an interval of 50 give a minimum contour value of 250 and a maximum of 1550.

    When expected_z_values are set at the same time, only the values set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.

  • expected_z_values (list[float] or str) –

    The set of Z values of expected analysis results. The Z value collection stores a series of values, which is the value of the contour to be extracted. That is, only the contours whose elevation values are in the Z value set will be extracted.

    When datum_value is set at the same time, only the value set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.
  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contour lines, the final contour data extracted can be simplified. The resampling method that SuperMap uses when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and the value is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no sampling is performed to ensure the correct result, but by setting reasonable parameters, the execution speed can be accelerated. The larger the tolerance value, the fewer the control points at the boundary of the contour, and the intersection of the contour may occur at this time. Therefore, it is recommended that users first use the default values to extract contours.

  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Isosurface extraction first extracts isolines and then generates isosurfaces from them. If the smoothness is set to 2, the number of points of the isoline objects in the intermediate result dataset will be 2 times that of the original; as the smoothness increases further, the number of points grows exponentially by a factor of 2, which greatly reduces the efficiency of isosurface extraction and may even cause extraction to fail.
  • clip_region (GeoRegion) – The specified clip region object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, it will directly return the isosurface object list
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or dataset name obtained by extracting the isosurface, or the list of isosurface objects

Return type:

DatasetVector or str or list[GeoRegion]

iobjectspy.analyst.point_extract_isoline(extracted_point, z_value_field, resolution, interval, terrain_interpolate_type=None, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

It is used to extract contours from a point dataset and save the result as a dataset. The principle is similar to extracting isolines from a raster dataset; the difference is that the input here is a point dataset. The implementation therefore first performs IDW interpolation (InterpolationAlgorithmType.IDW) on the point data to obtain a raster dataset (an intermediate result whose raster values are single-precision floating point), and then extracts the contours from that raster dataset.

The points in point data are scattered; point data represents location well, but cannot express the other attribute information of the points. For example, suppose the elevations of a large number of sampling points in a study area have been obtained, as shown below (upper image). From that map we cannot see the trend of the terrain's ups and downs, nor where it is steep or flat. If the information contained in these points is instead expressed as contours, connecting adjacent points with the same elevation to form the contour map in the lower image, the terrain of the area is displayed clearly. Contours extracted from different point data have different meanings, depending on what the point values represent: if the values are temperature, the extracted contours are isotherms; if the values are rainfall, they are isohyets; and so on.

../_images/SurfaceAnalyst_3.png ../_images/SurfaceAnalyst_4.png

Note

  • When extracting contours (isosurfaces) from point data (point dataset/record set/three-dimensional point collection), if the resolution of the intermediate raster obtained by interpolation is set too small, the extraction will fail. A judgment method: divide the length and width of the bounds of the point data by the set resolution to obtain the number of rows and columns of the intermediate raster; if either exceeds 10000, the resolution is considered too small and the system throws an exception.
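
The rule of thumb above can be written down directly. This is an illustration only; `check_resolution` is a hypothetical helper, not an iobjectspy API.

```python
def check_resolution(bounds_width, bounds_height, resolution, limit=10000):
    """Raise if the intermediate raster implied by `resolution` would
    exceed `limit` rows or columns, mirroring the documented check."""
    cols = bounds_width / resolution
    rows = bounds_height / resolution
    if cols > limit or rows > limit:
        raise ValueError(
            "resolution %g is too small: intermediate raster would be "
            "about %d x %d cells" % (resolution, rows, cols))
    return int(rows), int(cols)

# A 5000 x 2000 extent at resolution 1.0 gives a 2000 x 5000 raster: acceptable.
rows, cols = check_resolution(5000.0, 2000.0, 1.0)
```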

Parameters:
  • extracted_point (DatasetVector or str or Recordset) – The specified point dataset or record set to be extracted
  • z_value_field (str) – The specified field name for extraction operation. When extracting contours, the value in this field will be used to perform interpolation analysis on the point dataset.
  • resolution (float) – The resolution of the specified intermediate result (raster dataset).
  • interval (float) – equivalence interval, equivalence interval is the interval value between two isolines, must be greater than 0
  • terrain_interpolate_type (TerrainInterpolateType or str) – terrain interpolation type.
  • datum_value (float) –
    Set the reference value of the contour. The reference value and the interval (interval) jointly determine which elevation contours to extract.
    The reference value is used as an initial starting value to generate the contour, and it is calculated in two directions at the interval of the equivalence distance, so it is not necessarily the value of the minimum contour. For example, for DEM raster data with an elevation range of 220-1550, if the reference value is 500 and the equidistance is 50, the result of extracting the contour is: the minimum contour value is 250, and the maximum contour value is 1550.

    When expected_z_values are set at the same time, only the values set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.

  • expected_z_values (list[float] or str) – The set of Z values of expected analysis results. The Z value collection stores a series of values, which is the value of the contour to be extracted. That is, only the contours whose elevation values are in the Z value set will be extracted. When datum_value is set at the same time, only the value set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.
  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contour lines, the final contour data extracted can be simplified. The resampling method used by SuperMap when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and the value is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no sampling is performed to ensure the correct result, but by setting reasonable parameters, the execution speed can be accelerated.
    The larger the tolerance value, the fewer the control points at the boundary of the contour, and the intersection of the contour may occur at this time. Therefore, it is recommended that users first use the default
    values to extract contours.
  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Set the smoothness of the isoline or isosurface. A smoothness of 0 or 1 means no smoothing is performed. The larger the value, the higher the smoothness. When extracting contour lines, smoothness can be set freely
  • clip_region (GeoRegion) – The specified clip region object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, it will directly return a list of contour objects.
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or the name of the dataset obtained by extracting the contour, or the list of contour objects

Return type:

DatasetVector or str or list[GeoLine]

iobjectspy.analyst.points_extract_isoregion(extracted_point, z_value_field, interval, resolution=None, terrain_interpolate_type=None, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

Used to extract isosurfaces from a point dataset. The principle is to first perform IDW interpolation (InterpolationAlgorithmType.IDW) on the point dataset to obtain a raster dataset (an intermediate result whose raster values are single-precision floating point), then extract isolines from that raster dataset, and finally form the isosurfaces from the isolines.

An isosurface is a surface enclosed by adjacent isolines. Isosurfaces intuitively and effectively express the change between adjacent contours, such as elevation, temperature, precipitation, pollution, or atmospheric pressure. Like the distribution of contour lines, the distribution of isosurfaces reflects changes on the raster surface: the denser and narrower the isosurfaces, the greater the change in raster surface values; conversely, the sparser and wider they are, the smaller the change.

As shown below, the upper image is a point dataset storing elevation information, and the lower image shows the isosurfaces extracted from it. From the isosurface data the undulation of the terrain can be analyzed clearly: the denser the isosurfaces and the narrower the zones, the steeper the terrain; conversely, the sparser the isosurfaces and the wider the zones, the gentler the terrain.

../_images/SurfaceAnalyst_5.png ../_images/SurfaceAnalyst_6.png

Note

  • When extracting isosurfaces from point data (point dataset/record set/three-dimensional point collection), if the resolution of the intermediate raster obtained by interpolation is set too small, the isosurface extraction will fail. A judgment method: divide the length and width of the bounds of the point data by the set resolution to obtain the number of rows and columns of the intermediate raster; if either exceeds 10000, the resolution is considered too small and the system throws an exception.

Parameters:
  • extracted_point (DatasetVector or str or Recordset) – The specified point dataset or record set to be extracted
  • z_value_field (str) – The specified field name for extraction operation. When extracting the isosurface, the value in this field will be used to perform interpolation analysis on the point dataset.
  • interval (float) – equivalence interval, equivalence interval is the interval value between two isolines, must be greater than 0
  • resolution (float) – The resolution of the specified intermediate result (raster dataset).
  • terrain_interpolate_type (TerrainInterpolateType or str) – The specified terrain interpolation type.
  • datum_value (float) –

    Set the reference value of the contour. The reference value and the interval (interval) jointly determine which elevation isosurfaces are to be extracted. The reference value is used as an initial starting value for generating contour lines, and is calculated in two directions before and after the equivalence distance, so it is not necessarily the minimum isosurface value. For example, for DEM raster data with an elevation range of 220-1550, if the reference value is 500 and the equidistance is 50, the result of extracting the contour is: the minimum contour value is 250, and the maximum contour value is 1550.

    When expected_z_values are set at the same time, only the values set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.

  • expected_z_values (list[float] or str) – The set of Z values of expected analysis results. The Z value collection stores a series of values, which is the value of the contour to be extracted. That is, only the contours whose elevation values are in the Z value set will be extracted. When datum_value is set at the same time, only the value set by expected_z_values will be considered, that is, only contour lines whose elevation is these values will be extracted.
  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contours, the final extracted contour data can be simplified. The resampling method that SuperMap uses when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no resampling is performed, to ensure correct results; setting a reasonable value can speed up execution. The larger the tolerance, the fewer the control points on the contour boundaries, and contours may then intersect. Therefore, it is recommended that users first extract contours with the default value.
  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Set the smoothness of the isosurface. A smoothness of 0 or 1 means no smoothing; the larger the value, the higher the smoothness. Isosurface extraction first extracts isolines and then generates isosurfaces from them. If the smoothness is set to 2, the intermediate isoline objects will have 2 times the number of points of the original data; each further increase of the smoothness doubles the number of points again, which greatly reduces the efficiency of isosurface extraction and may even cause extraction to fail.
  • clip_region (GeoRegion) – The specified clip region object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, a list of isosurface objects is returned directly.
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or dataset name obtained by extracting the isosurface, or the list of isosurface objects

Return type:

DatasetVector or str or list[GeoRegion]
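
The interplay of datum_value and interval described above can be sketched in plain Python. This is an illustrative helper, not part of iobjectspy: the contour values are the multiples of the interval, offset from the datum, that fall inside the data's value range.

```python
import math

def contour_values(z_min, z_max, datum_value, interval):
    """Enumerate the contour values implied by datum_value and interval
    for data in [z_min, z_max] (illustrative sketch, not the library code)."""
    # Smallest offset from the datum (in whole intervals) that is >= z_min
    k = math.ceil((z_min - datum_value) / interval)
    values = []
    z = datum_value + k * interval
    while z <= z_max:
        values.append(z)
        z += interval
    return values

# Documented example: DEM elevations 220-1550, datum 500, interval 50
levels = contour_values(220, 1550, 500, 50)
print(levels[0], levels[-1])  # 250 1550
```

This reproduces the documented example: although the datum is 500, contours are generated in both directions, so the minimum contour value is 250 and the maximum is 1550.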

iobjectspy.analyst.point3ds_extract_isoline(extracted_points, resolution, interval, terrain_interpolate_type=None, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

It is used to extract contour lines from a set of 3D points and save the result as a dataset. The method first uses the third-dimension information stored in the point set (elevation, temperature, etc., i.e., data other than the point coordinates) to perform interpolation analysis on the point data, producing a raster dataset (an intermediate result whose raster values are single-precision floating point), and then extracts the contours from that raster dataset.

For an introduction to extracting contour lines from point data, refer to: py:meth:point_extract_isoline

Note

  • When extracting contours (surfaces) from point data (point dataset/recordset/3D point collection), if the resolution of the intermediate raster obtained by interpolation is too small, extracting the contours (surfaces) will fail. A judgment method: divide the width and height of the bounds of the point data by the set resolution to obtain the number of columns and rows of the intermediate raster. If either the number of rows or the number of columns is greater than 10000, the resolution is considered too small, and the system will throw an exception.
Parameters:
  • extracted_points (list[Point3D]) – Specifies the points from which the contours are extracted. The points are three-dimensional: each stores X and Y coordinates plus a single third-dimension value (for example, elevation).
  • resolution (float) – The resolution of the specified intermediate result (raster dataset).
  • interval (float) – Contour interval, the value difference between two adjacent isolines; must be greater than 0.
  • terrain_interpolate_type (TerrainInterpolateType or str) – terrain interpolation type.
  • datum_value (float) –

    Set the reference value of the contour. The reference value and the interval jointly determine which elevation contours are extracted. The reference value serves as the starting value from which contour values are generated in both directions at the given interval, so it is not necessarily the minimum contour value. For example, for DEM raster data with an elevation range of 220-1550, a reference value of 500 and an interval of 50 yield contours from a minimum value of 250 to a maximum value of 1550.

    When expected_z_values is also set, only the values in expected_z_values are considered, that is, only contours whose elevation is one of these values are extracted.

  • expected_z_values (list[float] or str) – The set of Z values expected in the analysis result, i.e., the values of the contours to be extracted; only contours whose elevation is in this set are extracted. When datum_value is also set, only the values in expected_z_values are considered.
  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contour lines, the final contour data extracted can be simplified. The resampling method that SuperMap uses when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and the value is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no sampling is performed to ensure the correct result, but by setting reasonable parameters, the execution speed can be accelerated. The larger the tolerance value, the fewer the control points at the boundary of the contour, and the intersection of the contour may occur at this time. Therefore, it is recommended that users first use the default values to extract contours.

  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Set the smoothness of the isoline or isosurface. A smoothness of 0 or 1 means no smoothing; the larger the value, the higher the smoothness. When extracting contour lines, the smoothness can be set freely.
  • clip_region (GeoRegion) – The specified clip region object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, a list of contour objects is returned directly.
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or the name of the dataset obtained by extracting the contour, or the list of contour objects

Return type:

DatasetVector or str or list[GeoLine]
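
The resolution sanity check described in the note above can be sketched as follows. This is a hypothetical helper, not a library function; it just applies the documented rule that rows and columns of the intermediate raster are the bounds size divided by the resolution, with 10000 as the limit.

```python
def check_resolution(bounds_width, bounds_height, resolution, limit=10000):
    """Documented sanity check: if the intermediate raster would exceed
    `limit` rows or columns, the resolution is considered too small."""
    cols = bounds_width / resolution
    rows = bounds_height / resolution
    if cols > limit or rows > limit:
        raise ValueError('resolution is set too small: '
                         f'{rows:.0f} rows x {cols:.0f} columns')
    return rows, cols

# A 5000 x 3000 extent at resolution 1.0 gives a 3000 x 5000 raster: fine.
print(check_resolution(5000, 3000, 1.0))  # (3000.0, 5000.0)
```

At resolution 0.1 the same extent would need 50000 columns, so the check raises, mirroring the exception the library throws.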

iobjectspy.analyst.point3ds_extract_isoregion(extracted_points, resolution, interval, terrain_interpolate_type=None, datum_value=0.0, expected_z_values=None, resample_tolerance=0.0, smooth_method='BSPLINE', smoothness=0, clip_region=None, out_data=None, out_dataset_name=None, progress=None)

It is used to extract isosurfaces from a set of 3D points and save the result as a dataset. The method first uses the third-dimension information stored in the point set (elevation, temperature, etc., i.e., data other than the point coordinates) and the IDW interpolation method (InterpolationAlgorithmType.IDW) to perform interpolation analysis on the point data, producing a raster dataset (an intermediate result whose raster values are single-precision floating point), and then extracts the isosurfaces from that raster dataset.

For an introduction to extracting isosurfaces from point data, refer to: py:meth:points_extract_isoregion

Parameters:
  • extracted_points (list[Point3D]) – Specifies the points from which the isosurfaces are extracted. The points are three-dimensional: each stores X and Y coordinates plus a single third-dimension value (for example, elevation).
  • resolution (float) – The resolution of the specified intermediate result (raster dataset).
  • interval (float) – Contour interval, the value difference between two adjacent isolines; must be greater than 0.
  • terrain_interpolate_type (TerrainInterpolateType or str) – The specified terrain interpolation type.
  • datum_value (float) –

    Set the reference value of the contour. The reference value and the interval jointly determine which elevation isosurfaces are extracted. The reference value serves as the starting value from which contour values are generated in both directions at the given interval, so it is not necessarily the minimum isosurface value. For example, for DEM raster data with an elevation range of 220-1550, a reference value of 500 and an interval of 50 yield contours from a minimum value of 250 to a maximum value of 1550.

    When expected_z_values is also set, only the values in expected_z_values are considered, that is, only contours whose elevation is one of these values are extracted.

  • expected_z_values (list[float] or str) –

    The set of Z values expected in the analysis result, i.e., the values of the contours to be extracted; only contours whose elevation is in this set are extracted.

    When datum_value is also set, only the values in expected_z_values are considered, that is, only contours whose elevation is one of these values are extracted.

  • resample_tolerance (float) –

    The distance tolerance factor for resampling. By resampling the extracted contour lines, the final contour data extracted can be simplified. The resampling method that SuperMap uses when extracting contours/surfaces is the light barrier method (VectorResampleType.RTBEND), which requires a resampling distance tolerance for sampling control. Its value is obtained by multiplying the resampling distance tolerance coefficient by the source raster resolution, and the value is generally 0 to 1 times the source raster resolution.

    The distance tolerance coefficient for resampling is 0 by default, that is, no sampling is performed, which guarantees a correct result; setting a reasonable value can speed up execution. The larger the tolerance, the fewer control points on the contour boundary, and contours may then intersect. It is therefore recommended to first extract contours with the default value.
  • smooth_method (SmoothMethod or str) – the method used for smoothing
  • smoothness (int) – Set the smoothness of the isosurface. A smoothness of 0 or 1 means no smoothing; the larger the value, the higher the smoothness. Isosurface extraction first extracts isolines and then generates isosurfaces from them. If the smoothness is set to 2, the intermediate isoline objects will have 2 times the number of points of the original data; each further increase of the smoothness doubles the number of points again, which greatly reduces the efficiency of isosurface extraction and may even cause extraction to fail.
  • clip_region (GeoRegion) – The specified clip region object.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset. If it is empty, a list of isosurface objects is returned directly.
  • out_dataset_name (str) – The name of the specified extraction result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The dataset or dataset name obtained by extracting the isosurface, or the list of isosurface objects

Return type:

DatasetVector or str or list[GeoRegion]

iobjectspy.analyst.grid_basic_statistics(grid_data, function_type=None, progress=None)

Performs basic statistical analysis on a raster dataset, including the maximum, minimum, mean, and standard deviation. A transformation function type can be specified.

When the transformation function is specified, the data used for statistics is the value obtained after the function transformation of the original raster value.

Parameters:
  • grid_data (DatasetGrid or str) – grid data to be counted
  • function_type (FunctionType or str) – transformation function type
  • progress (progress information processing function, please refer to:py:class:.StepEvent) – function
Returns:

basic statistical analysis results

Return type:

BasicStatisticsAnalystResult

class iobjectspy.analyst.BasicStatisticsAnalystResult

Bases: object

Raster basic statistical analysis result class

first_quartile

float – the first quartile calculated by the basic statistical analysis of the grid

kurtosis

float – kurtosis calculated by basic statistical analysis of the raster

max

float – the maximum value calculated by basic statistical analysis of the grid

mean

float – the mean value calculated by the basic statistical analysis of the grid

median

float – the median calculated by basic statistical analysis of the grid

min

float – the minimum value calculated by the basic statistical analysis of the grid

skewness

float – the skewness calculated by basic statistical analysis of the raster

std

float – the mean square deviation (standard deviation) calculated by basic statistical analysis of the raster

third_quartile

float – the third quartile calculated by the basic statistical analysis of the grid

to_dict()

Output as dict object

Return type:dict
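
The quantities reported by BasicStatisticsAnalystResult can be illustrated with a pure-Python sketch over a flat list of cell values. This is a hypothetical helper using the standard library, not the iobjectspy call; skewness and kurtosis are omitted because their exact conventions are not stated here.

```python
import statistics

def basic_statistics(values):
    """Compute the core BasicStatisticsAnalystResult quantities
    (illustrative sketch, not the library implementation)."""
    s = sorted(values)
    q = statistics.quantiles(s, n=4)  # [Q1, median, Q3]
    return {
        'min': s[0], 'max': s[-1],
        'mean': statistics.fmean(s),
        'median': statistics.median(s),
        'first_quartile': q[0], 'third_quartile': q[2],
        'std': statistics.pstdev(s),  # population standard deviation
    }

stats = basic_statistics([2, 4, 4, 4, 5, 5, 7, 9])
print(stats['mean'], stats['std'])  # 5.0 2.0
```

In practice the library computes these over all valid cells of the raster (optionally after the transformation function), but the statistical definitions are the same.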
iobjectspy.analyst.grid_common_statistics(grid_data, compare_datasets_or_value, compare_type, is_ignore_no_value, out_data=None, out_dataset_name=None, progress=None)

Common raster statistical analysis compares a raster dataset cell by cell with one (or more) raster datasets or with a fixed value according to a specified comparison method. Pixels for which the comparison is “true” get the value 1 in the result; pixels for which it is “false” get 0.

A note about no value:

  • When a cell of the source dataset to be analyzed has no value: if no value is ignored, the corresponding result cell is also no value; otherwise the no-value participates in the statistics. When a cell of a comparison dataset has no value: if no value is ignored, that comparison (between the dataset to be analyzed and the comparison dataset) is not included in the result; otherwise the no-value is used in the comparison.
  • When no value does not participate in the operation (that is, no value is ignored), the no-value in the statistical result dataset is determined by the pixel format of the result raster and is the maximum pixel value. For example, if the pixel format of the result raster dataset is PixelFormat.UBIT8, each pixel is represented by 8 bits and the no-value is 255. In this method, the pixel format of the result raster is determined by the number of comparison raster datasets. The correspondence among the number of comparison datasets, the pixel format of the result raster, and the no-value in the result raster is as follows:
../_images/CommonStatistics.png
Parameters:
  • grid_data (DatasetGrid or str) – The specified grid data to be counted.
  • compare_datasets_or_value (list[DatasetGrid] or list[str] or float) – The specified dataset collection or fixed value for comparison. When specifying a fixed value, the unit of the fixed value is the same as that of the raster dataset to be counted.
  • compare_type (StatisticsCompareType or str) – the specified comparison type
  • is_ignore_no_value (bool) – Specify whether to ignore no value. If it is true, that is, no value is ignored, the no value in the calculation area will not participate in the calculation, and the result grid value is still no value; if it is false, the no value in the calculation area will participate in the calculation.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

statistical result raster dataset or dataset name

Return type:

DatasetGrid or str
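
The cell-by-cell comparison and the no-value handling described above can be sketched in plain Python. This is an illustrative helper, not the library call; it hard-codes a "greater than" comparison against a fixed value, which is just one of the comparison types.

```python
def common_statistics(grid, compare_value, no_value=None, ignore_no_value=True):
    """Compare each cell with a fixed value: 1 where the comparison holds,
    0 where it fails; ignored no-value cells stay no-value (sketch only)."""
    result = []
    for row in grid:
        out = []
        for cell in row:
            if ignore_no_value and cell == no_value:
                out.append(no_value)   # no-value cell stays no-value
            else:
                out.append(1 if cell > compare_value else 0)
        result.append(out)
    return result

grid = [[3, -9999, 7],
        [1,     5, 2]]
print(common_statistics(grid, 4, no_value=-9999))
# [[0, -9999, 1], [0, 1, 0]]
```

Note that in the real library the no-value of the result is the maximum pixel value for the result's pixel format (e.g. 255 for UBIT8); here the input sentinel is simply carried through for clarity.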

iobjectspy.analyst.grid_neighbour_statistics(grid_data, neighbour_shape, is_ignore_no_value=True, grid_stat_mode='SUM', unit_type='CELL', out_data=None, out_dataset_name=None, progress=None)

Statistical analysis of the grid neighborhood.

Neighborhood statistical analysis is to count the pixels in the specified extended area of each pixel in the input dataset, and use the result of the calculation as the value of the pixel. Statistical methods include: sum, maximum, minimum, mode, minority, median, etc., please refer to the GridStatisticsMode enumeration type. The currently provided neighborhood range types (see NeighbourShapeType enumeration type) are: rectangle, circle, ring, and sector.

The figure below shows the principle of neighborhood statistics. Suppose “sum” is used as the statistical method for rectangular neighborhood statistics with a neighborhood size of 3×3. Then for the cell in the second row and third column in the figure, its value is the sum of all pixel values in the 3×3 rectangle centered on it.

../_images/NeighbourStatistics.png

The application of neighborhood statistics is very extensive. E.g:

  • On a grid representing the distribution of species, count the number of distinct species in each neighborhood (statistical method: variety) to observe the species abundance in the area;

  • Calculate the slope difference within the neighborhood on a slope grid (statistical method: range) to evaluate the terrain undulation in the area;

    ../_images/NeighbourStatistics_1.png
  • Neighborhood statistics are also used in image processing, for example computing the mean in the neighborhood (mean filtering) or the median (median filtering) to achieve a smoothing effect, thereby removing noise or excessive detail.

    ../_images/NeighbourStatistics_2.png
Parameters:
  • grid_data (DatasetGrid or str) – The specified grid data to be counted.
  • neighbour_shape (NeighbourShape) – neighborhood shape
  • is_ignore_no_value (bool) – Specify whether to ignore no value. If it is true, that is, no value is ignored, the no value in the calculation area will not participate in the calculation, and the result grid value is still no value; if it is false, the no value in the calculation area will participate in the calculation.
  • grid_stat_mode (GridStatisticsMode or str) – Statistical method of neighborhood analysis
  • unit_type (NeighbourUnitType or str) – unit type of neighborhood statistics
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

statistical result raster dataset or dataset name

Return type:

DatasetGrid or str
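
The 3×3 "sum" example above can be sketched as a plain nested loop. This is an illustrative helper, not the library implementation; in particular, skipping cells that fall outside the grid is just one possible edge policy, assumed here for simplicity.

```python
def neighbour_sum(grid, size=3):
    """Rectangular neighborhood statistics ('sum' mode): each output cell is
    the sum of input cells in the size x size window centered on it.
    Window cells outside the grid are skipped (assumed edge policy)."""
    rows, cols = len(grid), len(grid[0])
    half = size // 2
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = 0
            for dr in range(-half, half + 1):
                for dc in range(-half, half + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += grid[rr][cc]
            out[r][c] = total
    return out

grid = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]
print(neighbour_sum(grid)[1][1])  # 9
```

The center cell sums all 9 cells of its window; a corner cell only sees the 4 window cells that fall inside the grid.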

iobjectspy.analyst.altitude_statistics(point_data, grid_data, out_data=None, out_dataset_name=None)

Elevation statistics: Count the grid values corresponding to each point in the two-dimensional point dataset, and generate a three-dimensional point dataset. The Z value of the three-dimensional point object is the elevation value of the grid pixel being counted.

Parameters:
  • point_data (DatasetVector or str) – two-dimensional point dataset
  • grid_data (DatasetGrid or str) – grid dataset to be counted
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
Returns:

Statistical 3D dataset or dataset name

Return type:

DatasetVector or str

class iobjectspy.analyst.GridHistogram(source_data, group_count, function_type=None, progress=None)

Bases: object

Create a histogram of the given raster dataset.

A histogram uses a series of rectangular blocks of different heights to represent the distribution of data. Generally, the horizontal axis represents the category and the vertical axis represents the distribution.

The horizontal axis of the grid histogram represents the grouping of grid values: the grid values are divided into N groups (100 by default), each group corresponding to a range of grid values. The vertical axis represents the frequency, that is, the number of cells whose grid value falls within each group's range.

The figure below is a schematic diagram of the grid histogram. The minimum and maximum values of the raster data are 0 and 100 respectively, the number of groups is 10, the frequency of each group is computed, and the histogram below is drawn, with each group's frequency marked above its rectangular block. For example, the grid value range of the sixth group is [50,60), and 3 cells of the raster data have values in this range, so the frequency of this group is 3.

../_images/BuildHistogram.png

Note: The value range of the last group of the histogram is a closed interval; the others are left-closed, right-open intervals.

After obtaining the GridHistogram object of the raster dataset, the frequency of each group can be obtained through the object's get_frequencies method, the number of groups can be re-specified through the set_group_count method, and get_frequencies can then be called again to return the frequency of each group.

The figure below is an example of creating a raster histogram. In this example, the minimum grid value is 250, the maximum grid value is 1243, and the number of groups is 500. Get the frequency of each group and draw the grid histogram as shown on the right. From the raster histogram on the right, you can intuitively understand the distribution of raster values in the raster dataset.

../_images/BuildHistogram_1.png

Construct a raster histogram object

Parameters:
  • source_data (DatasetGrid or str) – the specified raster dataset
  • group_count (int) – The number of groups of the specified histogram. Must be greater than 0.
  • function_type (FunctionType or str) – The specified transformation function type.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
class HistogramSegmentInfo(count, max_value, min_value, range_max, range_min)

Bases: object

Information about each segment interval of the raster histogram.

count

int – the number of values in the segment interval

max

float – the maximum of the values in the segment interval

min

float – the minimum of the values in the segment interval

range_max

float – the upper bound of the segment interval

range_min

float – the lower bound of the segment interval

get_frequencies()

Return the frequency of each group in the grid histogram. Each group of the histogram corresponds to a grid value range, and the number of all cells within this range is the frequency of the group.

Returns:Return the frequency of each group of the raster histogram.
Return type:list[int]
get_group_count()

Return the number of groups on the horizontal axis of the grid histogram.

Returns:Return the number of groups on the horizontal axis of the grid histogram.
Return type:int
get_segments()

Return the interval information of each group of the grid histogram.

Returns:The interval information of each group of the grid histogram.
Return type:list[GridHistogram.HistogramSegmentInfo]
set_group_count(count)

Set the number of groups on the horizontal axis of the grid histogram.

Parameters:count (int) – The number of groups on the horizontal axis of the grid histogram. Must be greater than 0.
Return type:self
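
The grouping rule described above (equal-width groups, all left-closed/right-open except the last, which is closed at both ends) can be sketched in plain Python. This is an illustrative helper, not the GridHistogram implementation.

```python
def grid_histogram(values, group_count):
    """Frequency of each of group_count equal-width groups over the value
    range; every group is [a, b) except the last, which is [a, b]
    (sketch of the documented grouping rule)."""
    v_min, v_max = min(values), max(values)
    width = (v_max - v_min) / group_count
    freq = [0] * group_count
    for v in values:
        if v == v_max:
            idx = group_count - 1   # last group is closed on both ends
        else:
            idx = int((v - v_min) // width)
        freq[idx] += 1
    return freq

# Values 0-100, 10 groups; 52, 55, 59 all fall in the sixth group [50, 60)
freq = grid_histogram([0, 52, 55, 59, 100], 10)
print(freq[5], freq[9])  # 3 1
```

This matches the documented example: the sixth group [50,60) has frequency 3, and the maximum value 100 lands in the last, fully closed group.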
iobjectspy.analyst.thin_raster(source, back_or_no_value, back_or_no_value_tolerance, out_data=None, out_dataset_name=None, progress=None)

Raster refinement (thinning) is usually used before converting a raster to vector line data.

Refining raster data reduces the number of cells used to identify linear features, thereby improving the speed and accuracy of vectorization. It is generally used as preprocessing before converting a raster to line vector data, to make the conversion better. For example, a scanned contour map may use 5 or 6 cells for the width of one contour line; after refinement, the width of a contour line occupies only one cell, which is conducive to better vectorization.

../_images/ThinRaster.png

Explanation about no value/background color and its tolerance:

When performing grid refinement, users are allowed to identify those cells that do not need to be refined. For raster datasets, these values are determined by the no value and its tolerance; for image datasets, it is determined by the background color and its tolerance.

  • When a raster dataset is refined, cells whose raster value equals the value specified by the back_or_no_value parameter are regarded as no value and do not participate in the refinement, while the raster's original no-value cells participate in the refinement as valid values. At the same time, cells within the no-value tolerance specified by the back_or_no_value_tolerance parameter do not participate in the refinement. For example, if the no-value is a and the tolerance is b, cells whose raster value lies in [a-b, a+b] do not participate in the refinement.
  • When an image dataset is refined, cells whose raster value equals the specified value are regarded as the background color and do not participate in the refinement; at the same time, cells within the background-color tolerance specified by the back_or_no_value_tolerance parameter do not participate in the refinement.

Note that a raster value in an image dataset represents a color value. Therefore, to set a certain color as the background color, the value specified for the back_or_no_value parameter should be that color's RGB value converted to a 32-bit integer; the value is converted internally according to the pixel format. The background-color tolerance is also a 32-bit integer, which is converted internally into three tolerance values corresponding to R, G, and B. For example, if the background color is (100,200,60) and the specified tolerance value is 329738, which corresponds to an RGB value of (10,8,5), then colors between (90,192,55) and (110,208,65) will not participate in the refinement.

Note: For raster datasets, if the specified value without value is outside the range of the raster dataset to be refined, the analysis will fail and None will be returned.

Parameters:
  • source (DatasetImage or DatasetGrid or str) – The specified raster dataset to be refined. Support image dataset.
  • back_or_no_value (int or tuple) – Specify the background color of the grid or a value indicating no value. You can use an int or tuple to represent an RGB or RGBA value.
  • back_or_no_value_tolerance (float or tuple) – The tolerance of the grid background color or the tolerance of no value. You can use a float or tuple to represent an RGB or RGBA value.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

Dataset or str
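
The background-color tolerance example above can be checked with a small sketch. These are hypothetical helpers, not library functions; the byte order (R in the low byte) is an assumption inferred from the documented example 329738 → (10,8,5).

```python
def decode_rgb_tolerance(value):
    """Unpack a 32-bit tolerance into per-channel (R, G, B) tolerances,
    assuming R occupies the low byte (inferred from the docs' example)."""
    r = value & 0xFF
    g = (value >> 8) & 0xFF
    b = (value >> 16) & 0xFF
    return r, g, b

def refinement_skip_range(background, tolerance):
    """Per-channel [c - t, c + t] range of colors excluded from refinement."""
    tol = decode_rgb_tolerance(tolerance)
    low = tuple(c - t for c, t in zip(background, tol))
    high = tuple(c + t for c, t in zip(background, tol))
    return low, high

print(decode_rgb_tolerance(329738))                    # (10, 8, 5)
print(refinement_skip_range((100, 200, 60), 329738))
# ((90, 192, 55), (110, 208, 65))
```

This reproduces the documented range: colors between (90,192,55) and (110,208,65) are excluded from the refinement.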

iobjectspy.analyst.build_lake(dem_grid, lake_data, elevation, progress=None)

Dig a lake, that is, modify the elevation values of the DEM dataset within the area of the polygon dataset to a specified value. Lake digging displays lake information on the DEM dataset based on existing lake surface data. As shown in the figure below, after digging the lake, the DEM grid values at the location of the lake surface data become the specified elevation value, and the grid value is the same across the entire lake area.

../_images/BuildLake.png
Parameters:
  • dem_grid (DatasetGrid or str) – The specified DEM raster dataset of the lake to be dug.
  • lake_data (DatasetVector or str) – The specified lake region polygon dataset.
  • elevation (str or float) – The elevation field of the specified lake area or the specified elevation value. If it is str, the field type is required to be numeric. If it is specified as None or an empty string, or the specified field does not exist in the lake area dataset, the lake will be dug according to the minimum elevation on the DEM grid corresponding to the lake area boundary. The unit of the elevation value is the same as that of the DEM raster dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Return True if successful, otherwise False

Return type:

bool
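
The effect of build_lake can be sketched with lists and a boolean mask. This is a simplified illustration, not the library implementation: real polygon rasterization is omitted, and when no elevation is given, the minimum DEM value under the mask stands in for the documented rule of using the minimum elevation along the lake boundary.

```python
def build_lake(dem, lake_mask, elevation=None):
    """Set every DEM cell covered by the lake mask to `elevation`; with
    elevation=None, use the minimum masked DEM value (simplified stand-in
    for the documented boundary-minimum rule)."""
    if elevation is None:
        elevation = min(dem[r][c]
                        for r in range(len(dem))
                        for c in range(len(dem[0]))
                        if lake_mask[r][c])
    return [[elevation if lake_mask[r][c] else dem[r][c]
             for c in range(len(dem[0]))]
            for r in range(len(dem))]

dem  = [[10, 12, 14],
        [11,  9, 13]]
mask = [[False, True, True],
        [False, True, False]]
print(build_lake(dem, mask))  # [[10, 9, 9], [11, 9, 13]]
```

All masked cells end up with the same value, matching the documented behavior that the grid value is uniform across the lake area.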

iobjectspy.analyst.build_terrain(source_datas, lake_dataset=None, lake_altitude_field=None, clip_data=None, erase_data=None, interpolate_type='IDW', resample_len=0.0, z_factor=1.0, is_process_flat_area=False, encode_type='NONE', pixel_format='SINGLE', cell_size=0.0, out_data=None, out_dataset_name=None, progress=None)

Create terrain based on the specified terrain construction parameters. DEM (Digital Elevation Model) mainly describes the spatial distribution of regional landforms; it is a digital terrain model (DTM) whose terrain attribute is elevation. It is usually formed by interpolating measured elevation points (or elevation points sampled from contour lines). This method constructs terrain, that is, generates a DEM raster by interpolating point or line datasets that carry elevation information.

../_images/BuildTerrain_1.png

The source_datas parameter specifies the datasets used to construct the terrain: elevation points only, contour lines only, or both.

Parameters:
  • source_datas (dict[DatasetVector,str] or dict[str,str]) – The point and line datasets used for construction, together with each dataset's elevation field. The datasets are required to have the same coordinate system.
  • lake_dataset (DatasetVector or str) – The lake surface dataset. In the result dataset, the elevation values within the lake surface areas are smaller than those of the neighboring cells.
  • lake_altitude_field (str) – the elevation field of the lake surface dataset
  • clip_data (DatasetVector or str) –

    Set the dataset used for clipping. When constructing the terrain, only the DEM results in the clipping area are retained, and the parts outside the area are given no value.

    ../_images/BuildTerrainParameter_1.png
  • erase_data (DatasetVector or str) –

    The dataset used for erasing. When building the terrain, the resultant DEM raster value in the erased area has no value. Only valid when interpolate_type is set to TIN.

    ../_images/BuildTerrainParameter_2.png
  • interpolate_type (TerrainInterpolateType or str) – Terrain interpolation type. The default value is IDW.
  • resample_len (float) – sampling distance. Only valid for line datasets; the unit is the same as that of the line dataset used to construct the terrain. Only valid when interpolate_type is set to TIN. The line dataset is first resampled to filter out densely spaced nodes, and the TIN model is then generated, which improves generation speed.
  • z_factor (float) – elevation zoom factor
  • is_process_flat_area (bool) – Whether to process flat areas. Generating a DEM from contour lines handles mountain tops and valleys well; generating a DEM from points can also handle flat areas, but not as well, mainly because judging flat areas from points alone is relatively rough.
  • encode_type (EncodeType or str) – encoding method. For raster datasets, the currently supported encoding methods are unencoded (NONE), SGL, and LZW
  • pixel_format (PixelFormat or str) – the pixel format of the result dataset
  • cell_size (float) – The cell size of the result dataset. If specified as 0 or a negative number, the system uses L/500 (where L is the diagonal length of the bounding rectangle of the source datasets) as the cell size.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

Dataset or str
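
A minimal sketch of the inverse-distance-weighted (IDW) interpolation that underlies interpolate_type='IDW', in plain Python. The idw helper is hypothetical and only illustrates the weighting principle, not the library's actual implementation:

```python
import math

def idw(sample_points, x, y, power=2):
    """Inverse-distance-weighted elevation at (x, y) from (px, py, z) samples."""
    num, den = 0.0, 0.0
    for px, py, z in sample_points:
        d = math.hypot(x - px, y - py)
        if d == 0:              # query point coincides with a sample point
            return z
        w = 1.0 / d ** power    # closer samples get larger weights
        num += w * z
        den += w
    return num / den

points = [(0, 0, 100.0), (10, 0, 120.0), (0, 10, 110.0)]
print(idw(points, 5, 5))        # a value between the sample elevations
```

Each interpolated value is a weighted average of the sample elevations, so it always falls between their minimum and maximum.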

iobjectspy.analyst.area_solar_radiation_days(grid_data, latitude, start_day, end_day=160, hour_start=0, hour_end=24, day_interval=5, hour_interval=0.5, transmittance=0.5, z_factor=1.0, out_data=None, out_total_grid_name='TotalGrid', out_direct_grid_name=None, out_diffuse_grid_name=None, out_duration_grid_name=None, progress=None)

Calculate the total solar radiation over a multi-day period for an area, that is, the solar radiation of every cell over the entire DEM extent. The start date, end date, and the start and end time of each day must be specified.

Parameters:
  • grid_data (DatasetGrid or str) – DEM grid data to be calculated for solar radiation
  • latitude (float) – the average latitude of the area to be calculated
  • start_day (datetime.date or str or int) – start date. It can be a string in the format “%Y-%m-%d”; if it is an int, it means the day of the year
  • end_day (datetime.date or str or int) – end date. It can be a string in the format “%Y-%m-%d”; if it is an int, it means the day of the year
  • hour_start (float or str or datetime.datetime) – The starting time point. A float must be in the range [0,24] and means the hour of the day; a datetime.datetime or a string in the format “%H:%M:%S” is also accepted
  • hour_end (float or str or datetime.datetime) – The ending time point. A float must be in the range [0,24] and means the hour of the day; a datetime.datetime or a string in the format “%H:%M:%S” is also accepted
  • day_interval (int) – interval in days, the unit is day
  • hour_interval (float) – hour interval, the unit is hour.
  • transmittance (float) – The transmittance of solar radiation through the atmosphere, the value range is [0,1].
  • z_factor (float) – elevation zoom factor
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_total_grid_name (str) – The name of the total radiation result dataset, the dataset name must be legal
  • out_direct_grid_name (str) – The name of the direct radiation result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • out_diffuse_grid_name (str) – The name of the diffuse radiation result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • out_duration_grid_name (str) – The name of the direct solar radiation duration result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Return a tuple of four elements:

  • The first element is the total radiation result dataset,
  • If the name of the direct radiation result dataset is set, the second element is the direct radiation result dataset, otherwise it is None,
  • If the name of the diffuse radiation result dataset is set, the third element is the diffuse radiation result dataset, otherwise it is None,
  • If the name of the direct solar duration result dataset is set, the fourth element is the direct solar duration result dataset, otherwise it is None

Return type:

tuple[DatasetGrid] or tuple[str]
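
The start_day and end_day parameters accept a datetime.date, a "%Y-%m-%d" string, or an int day-of-year. A sketch of how such flexible input can be normalized (the to_day_of_year helper is hypothetical, not part of the library):

```python
import datetime

def to_day_of_year(value):
    """Normalize a date, '%Y-%m-%d' string, or int into a day-of-year (1-366)."""
    if isinstance(value, int):
        return value                     # already a day-of-year
    if isinstance(value, str):
        value = datetime.datetime.strptime(value, "%Y-%m-%d").date()
    return value.timetuple().tm_yday     # datetime.date -> ordinal day in year

print(to_day_of_year("2021-02-10"))      # 41
print(to_day_of_year(160))               # 160
```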

iobjectspy.analyst.area_solar_radiation_hours(grid_data, latitude, day, hour_start=0, hour_end=24, hour_interval=0.5, transmittance=0.5, z_factor=1.0, out_data=None, out_total_grid_name='TotalGrid', out_direct_grid_name=None, out_diffuse_grid_name=None, out_duration_grid_name=None, progress=None)

Calculate the solar radiation within a single day. The start time, end time, and the date to be calculated must be specified.

Parameters:
  • grid_data (DatasetGrid or str) – DEM grid data to be calculated for solar radiation
  • latitude (float) – the average latitude of the area to be calculated
  • day (datetime.date or str or int) – The specified date to be calculated. It can be a string in the format “%Y-%m-%d”. If it is an int, it means the day of the year.
  • hour_start (float or str or datetime.datetime) – The starting time point. A float must be in the range [0,24] and means the hour of the day; a datetime.datetime or a string in the format “%H:%M:%S” is also accepted
  • hour_end (float or str or datetime.datetime) – The ending time point. A float must be in the range [0,24] and means the hour of the day; a datetime.datetime or a string in the format “%H:%M:%S” is also accepted
  • hour_interval (float) – hour interval, the unit is hour.
  • transmittance (float) – The transmittance of solar radiation through the atmosphere, the value range is [0,1].
  • z_factor (float) – elevation zoom factor
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_total_grid_name (str) – The name of the total radiation result dataset, the dataset name must be legal
  • out_direct_grid_name (str) – The name of the direct radiation result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • out_diffuse_grid_name (str) – The name of the diffuse radiation result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • out_duration_grid_name (str) – The name of the direct solar radiation duration result dataset. The dataset name must be legal; the interface will not automatically generate a valid dataset name
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Return a tuple of four elements:

  • The first element is the total radiation result dataset,
  • If the name of the direct radiation result dataset is set, the second element is the direct radiation result dataset, otherwise it is None,
  • If the name of the diffuse radiation result dataset is set, the third element is the diffuse radiation result dataset, otherwise it is None,
  • If the name of the direct solar duration result dataset is set, the fourth element is the direct solar duration result dataset, otherwise it is None

Return type:

tuple[DatasetGrid] or tuple[str]
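
Similarly, hour_start and hour_end accept a float in [0,24], a "%H:%M:%S" string, or a datetime.datetime. A hypothetical helper that normalizes them to fractional hours:

```python
import datetime

def to_hour(value):
    """Normalize a float, '%H:%M:%S' string, or datetime into a fractional hour."""
    if isinstance(value, (int, float)):
        if not 0 <= value <= 24:
            raise ValueError("hour must be in [0, 24]")
        return float(value)
    if isinstance(value, str):
        value = datetime.datetime.strptime(value, "%H:%M:%S")
    return value.hour + value.minute / 60 + value.second / 3600

print(to_hour("06:30:00"))   # 6.5
print(to_hour(12))           # 12.0
```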

iobjectspy.analyst.raster_mosaic(inputs, back_or_no_value, back_tolerance, join_method, join_pixel_format, cell_size, encode_type='NONE', valid_rect=None, out_data=None, out_dataset_name=None, progress=None)

Raster dataset mosaic. Supports raster datasets and image datasets.

Mosaic of raster data refers to combining two or more raster data into one raster data according to geographic coordinates. Sometimes because the area to be studied and analyzed is very large, or the target objects of interest are widely distributed, and multiple raster datasets or multiple images are involved, mosaicking is required. The figure below shows the mosaic of six adjacent raster data into one data.

../_images/Mosaic_1.png

When performing raster data mosaic, you need to pay attention to the following points:

  • The rasters to be mosaicked must have the same coordinate system. Mosaicking requires that all raster datasets or image datasets share the same coordinate system; otherwise the mosaic result may be wrong. Before mosaicking, projection conversion can be used to unify the coordinate systems of all datasets.

  • Treatment of overlapping areas. When mosaicking, there are often overlapping areas between two or more raster datasets (as shown in the figure below, the areas in the red frames of the two images overlap). A value method must be specified for the cells of the overlapping area. SuperMap provides five value methods for overlapping areas; users can choose the appropriate one according to actual needs. For details, see the RasterJoinType class.

    ../_images/Mosaic_2.png
  • No value, background color, and tolerance. The data to be mosaicked may be raster datasets or image datasets. For raster datasets, this method specifies the no-value and the no-value tolerance; for image datasets, it specifies the background color and its tolerance.

    • The data to be mosaicked is a raster dataset:

      • When the data to be mosaicked is a raster dataset, cells whose raster value equals the value specified by the back_or_no_value parameter, as well as cells within the tolerance range specified by the back_tolerance parameter, are treated as having no value; these cells do not participate in the calculation (the calculation of the overlapping area) during mosaicking. The raster's original no-value cells, however, are no longer treated as no-value data and do participate in the calculation.

      • Note that the no-value tolerance applies to the no-value specified by the user, and has nothing to do with the original no value in the raster.
    • The data to be mosaicked is an image dataset:

      • When the data to be mosaicked is an image dataset, cells whose raster value equals the value specified by the back_or_no_value parameter, as well as cells within the tolerance range specified by the back_tolerance parameter, are regarded as the background color; these cells do not participate in the mosaic calculation. For example, if the background value is specified as a and its tolerance as b, cells whose raster value falls in the range [a-b, a+b] do not participate in the calculation.

      • Note that a raster value in an image dataset represents a color: the raster values of an image dataset correspond to RGB colors. Therefore, to set a certain color as the background color, the value specified for the back_or_no_value parameter should be that color's RGB value converted to a 32-bit integer; the system performs the corresponding conversion according to the pixel format.

      • The tolerance of the background color is specified in the same way as the background color itself: the tolerance is a 32-bit integer, which the system converts internally into three tolerances corresponding to the R, G, and B components of the background color. For example, if the background color is (100,200,60) and the tolerance is 329738, whose corresponding RGB value is (10,8,5), then all colors between (90,192,55) and (110,208,65) are regarded as background colors and are excluded from the calculation.

note:

When two or more high-pixel format rasters are mosaicked into a low-pixel format raster, the resulting raster value may exceed the value range and cause errors. Therefore, this operation is not recommended.

Parameters:
  • inputs (list[DatasetGrid] or list[DatasetImage] or list[str] or str) – The specified datasets to be mosaicked.
  • back_or_no_value (float or tuple) – The specified background color or no-value of the raster. A float, or a tuple representing an RGB or RGBA value, can be used
  • back_tolerance (float or tuple) – The specified tolerance of the background color or no-value. A float, or a tuple representing an RGB or RGBA value, can be used
  • join_method (RasterJoinType or str) – The specified mosaic method, that is, the value method of overlapping area during mosaic.
  • join_pixel_format (RasterJoinPixelFormat or str) – The pixel format of the specified mosaic result raster data.
  • cell_size (float) – The cell size of the specified mosaic result dataset.
  • encode_type (EncodeType or str) – The encoding method of the specified mosaic result dataset.
  • valid_rect (Rectangle) – The valid range of the specified mosaic result dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The specified datasource information used to store the mosaic result dataset
  • out_dataset_name (str) – The name of the specified mosaic result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

mosaic result dataset

Return type:

Dataset
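
The background-color tolerance example above (background (100,200,60), tolerance 329738 decomposing into per-channel tolerances (10,8,5)) can be reproduced with a small sketch. The packing order shown (R in the low byte) is inferred from those numbers and is an assumption about the internal conversion:

```python
def pack_rgb(r, g, b):
    """Pack an RGB triple into a 32-bit integer, R in the low byte (assumed order)."""
    return r | (g << 8) | (b << 16)

def unpack_rgb(value):
    """Split a packed 32-bit integer back into its (r, g, b) components."""
    return value & 0xFF, (value >> 8) & 0xFF, (value >> 16) & 0xFF

# The tolerance 329738 from the example decomposes into per-channel tolerances:
print(unpack_rgb(329738))        # (10, 8, 5)
print(pack_rgb(100, 200, 60))    # packed form of the background color
```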

iobjectspy.analyst.zonal_statistics_on_raster_value(value_data, zonal_data, zonal_field, is_ignore_no_value=True, grid_stat_mode='SUM', out_data=None, out_dataset_name=None, out_table_name=None, progress=None)

Raster zonal statistics. In this method, the value data is a raster dataset, and the zone data can be vector or raster data.

Raster zonal statistics is a statistical method that computes statistics over the cell values within each zone and assigns each zone's statistic to all cells covered by that zone, producing the result raster. It involves two kinds of data: the value data, which is the raster data to be summarized, and the zone data, which identifies the statistical zones and can be raster data or vector polygon data. The following figure illustrates the algorithm using raster zone data, where the gray cells represent no-value data.

../_images/ZonalStatisticsOnRasterValue_1.png

When the zone data is a raster dataset, contiguous cells with the same raster value are regarded as one zone. When the zone data is a vector polygon dataset, its attribute table must contain a field that identifies the zone, whose values distinguish the zones; if two or more area objects (adjacent or not) share the same identifier value, they are counted as one zone. That is, in the result raster, the cell values at the positions covered by those area objects are the statistic of all value-raster cells within their combined extent.

The result of zonal statistics has two parts. The first is the zonal statistics raster, in which all cells of a zone share the same value, namely the value computed by the statistics method. The second is an attribute table recording the statistics of each zone, including the fields ZONALID (zone identifier), PIXELCOUNT (number of cells in the zone), MINIMUM (minimum), MAXIMUM (maximum), RANGE_VALUE (range), SUM_VALUE (sum), MEAN (average), STD (standard deviation), VARIETY (variety), MAJORITY (mode), MINORITY (minority), MEDIAN (median), and so on.

Let’s use an example to understand the application of zonal statistics.

  1. As shown in the figure below, the left picture is the DEM raster, used as the value data, and the right picture is the administrative division of the corresponding area, used as the zone data;
../_images/ZonalStatisticsOnRasterValue_2.png
  2. Using the above data, perform zonal statistics with the maximum value as the statistics method. The result includes the result raster shown in the figure below and the corresponding statistics attribute table (omitted). In the result raster, the cell values within each zone are equal, namely the largest value-raster cell value (that is, the highest elevation) within the zone. This example thus computes the highest elevation in each administrative area of the region.
../_images/ZonalStatisticsOnRasterValue_3.png

Note that the pixel type (PixelFormat) of the zonal statistics result raster depends on the specified statistics type (the grid_stat_mode parameter):

  • When the statistics type is VARIETY, the result raster pixel type is BIT32;
  • When the statistics type is maximum (MAX), minimum (MIN), or range (RANGE), the result raster pixel type is the same as that of the source raster;
  • When the statistics type is average (MEAN), standard deviation (STDEV), sum (SUM), mode (MAJORITY), minority (MINORITY), or median (MEDIAN), the result raster pixel type is DOUBLE.

Parameters:
  • value_data (DatasetGrid or str) – value data to be counted
  • zonal_data (DatasetGrid or DatasetVector or str) – The zone dataset. Only raster datasets with pixel format (PixelFormat) UBIT1, UBIT4, UBIT8, or UBIT16, and vector polygon datasets, are supported.
  • zonal_field (str) – The field identifying the zone in vector zone data. Only 32-bit integer fields are supported.
  • is_ignore_no_value (bool) – Whether to ignore no-value data in the statistics. If True, no-value cells do not participate in the calculation; if False, no-value cells participate in the calculation and the corresponding result is still no value
  • grid_stat_mode (GridStatisticsMode or str) – zoning statistics type
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • out_table_name (str) – The name of the analysis result attribute table
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Return a tuple of two elements: the first is the result dataset or its name, the second is the result attribute table dataset or its name

Return type:

tuple[DatasetGrid, DatasetGrid] or tuple[str,str]
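
The core of zonal statistics can be sketched in plain Python for the SUM mode: every cell of the result takes the statistic of all value cells in its zone, and no-value cells are skipped. The zonal_sum helper is illustrative only, not the library's implementation:

```python
from collections import defaultdict

def zonal_sum(value_grid, zone_grid):
    """SUM-mode zonal statistics: each result cell takes the sum of the
    value-grid cells falling in its zone; None marks no-value cells."""
    sums = defaultdict(float)
    for vrow, zrow in zip(value_grid, zone_grid):
        for v, z in zip(vrow, zrow):
            if v is not None:            # is_ignore_no_value=True behavior
                sums[z] += v
    # Every cell of a zone receives that zone's statistic
    return [[sums[z] for z in zrow] for zrow in zone_grid]

values = [[1, 2], [3, None]]
zones  = [[1, 1], [2, 2]]
print(zonal_sum(values, zones))          # [[3.0, 3.0], [3.0, 3.0]]
```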

iobjectspy.analyst.calculate_profile(input_data, line)

Profile analysis: view the profile of a DEM raster along a given straight line or polyline, returning the profile line and the sampling point coordinates. The result of profile analysis consists of two parts: the profile line and the sampling point collection.

  • Sampling point

Profile analysis selects points along the given line and shows the profile through the elevation and coordinate information of these points; these points are called sampling points. The sampling points are selected according to the following rules, which can be understood with the help of the figure below:

  • Only one sampling point is selected in each cell the given line passes through;
  • All nodes of the given line are taken as sampling points;
  • If the line passes through a cell but no node lies in it, the intersection of the line with the longer of the two center lines of the cell is taken as the sampling point.
../_images/CalculateProfile_1.png
  • Collection of profile line and sampling point coordinates

The profile line is one of the results of the profile analysis. It is a two-dimensional line (GeoLine) whose nodes correspond one-to-one to the sampling points: the X value of each node is the straight-line distance from the current sampling point to the starting point of the given line (which is also the first sampling point), and the Y value is the elevation of the current sampling point. The sampling point collection gives the positions of all sampling points, stored as a two-dimensional line object. Since the profile line and the sampling point collection correspond one-to-one, combining them tells the elevation at a given position and its distance from the start of the analysis.

The following figure shows a profile line drawn in a two-dimensional coordinate system, with the X values of the profile line on the horizontal axis and the Y values on the vertical axis. Through the profile line, you can intuitively understand the elevation and relief of the terrain along the given line.

../_images/CalculateProfile_2.png

Note: The specified line must be within the range of the DEM grid dataset, otherwise the analysis may fail. If a sampling point is located on a no-value cell, the elevation of the corresponding point on the profile line is 0.

Parameters:
  • input_data (DatasetGrid or str) – The specified DEM grid for profile analysis.
  • line (GeoLine) – The specified line is a line segment or polyline. The profile analysis gives the profile along the line.
Returns:

Profile analysis result, profile line and sampling point collection.

Return type:

tuple[GeoLine, GeoLine]
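
The correspondence between sampling points and profile-line nodes described above can be sketched as follows: X is the straight-line distance from each sampling point back to the first one, and Y is its elevation. The profile_line helper is hypothetical, for illustration only:

```python
import math

def profile_line(sample_points):
    """Build (distance-from-start, elevation) node pairs from (x, y, z)
    sampling points: X is the straight-line distance to the first sampling
    point, Y is the elevation."""
    x0, y0, _ = sample_points[0]
    return [(math.hypot(x - x0, y - y0), z) for x, y, z in sample_points]

samples = [(0, 0, 50.0), (3, 4, 60.0), (6, 8, 55.0)]
print(profile_line(samples))   # [(0.0, 50.0), (5.0, 60.0), (10.0, 55.0)]
```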

class iobjectspy.analyst.CutFillResult(cut_area, cut_volume, fill_area, fill_volume, remainder_area, cut_fill_grid_result)

Bases: object

Fill and cut result information class. This object holds the results of a fill and cut calculation on a raster dataset, such as the fill and cut areas and the fill and cut volumes.

Explanation of the area and volume unit of the result of filling and cutting:

The unit of the fill and cut area is square meters; the unit of the volume is square meters multiplied by the elevation unit (that is, the raster value unit of the analyzed grid). Note that if the grid used for the fill and cut calculation is in a geographic coordinate system, the area is a value approximately converted to square meters.

Internal constructor, users do not need to use

Parameters:
  • cut_area (float) – The cut area of the fill and cut analysis result, in square meters. When the analyzed grid is in a geographic coordinate system, the value is an approximate conversion
  • cut_volume (float) – The cut volume of the fill and cut analysis result. The unit is square meters multiplied by the raster value (i.e. elevation) unit of the analyzed grid
  • fill_area (float) – The fill area of the fill and cut analysis result, in square meters. When the analyzed grid is in a geographic coordinate system, the value is an approximate conversion.
  • fill_volume (float) – The fill volume of the fill and cut analysis result. The unit is square meters multiplied by the raster value (i.e. elevation) unit of the analyzed grid.
  • remainder_area (float) – The area not filled or cut in the fill and cut analysis, in square meters. When the analyzed grid is in a geographic coordinate system, the value is an approximate conversion.
  • cut_fill_grid_result (DatasetGrid or str) – The result dataset of the fill and cut analysis. A cell value greater than 0 means the depth to be cut, and less than 0 means the depth to be filled.
iobjectspy.analyst.inverse_cut_fill(input_data, volume, is_fill, region=None, progress=None)

Inverse fill and cut calculation, that is, calculating the elevation after filling or cutting from a given fill or cut volume. The back-calculation solves this kind of practical problem: the raster data before filling or cutting, and the volume to be filled or cut within the data range, are known, and the elevation after filling or cutting is to be derived.

For example, an area of a construction site needs to be filled, and it is known that a nearby area can provide earthwork with a volume of V. Using the inverse fill and cut calculation, you can compute the elevation of the construction area after this batch of soil is filled into it, and then judge whether the construction requirements are met and whether further filling is necessary.
Parameters:
  • input_data (DatasetGrid or str) – The specified raster data to be filled and excavated.
  • volume (float) – The specified fill or cut volume. The value must be greater than 0; if a value less than or equal to 0 is set, an exception is thrown. The unit is square meters multiplied by the raster value unit of the grid being analyzed.
  • is_fill (bool) – Whether to perform a fill calculation. If True, a fill calculation is performed; if False, a cut calculation is performed.
  • region (GeoRegion or Rectangle) – The designated fill and cut region. If it is None, fill and cut calculations are applied to the entire grid area.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

The elevation value after filling and digging. The unit is the same as the grid value unit of the grid to be filled and excavated.

Return type:

float
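
The back-calculation can be pictured as solving for the elevation h at which the fill volume over the terrain equals the given volume. A bisection sketch over a toy list of cell elevations (the helpers and the unit cell area are assumptions for illustration, not the library's algorithm):

```python
def fill_volume(cells, h, cell_area=1.0):
    """Volume needed to raise every cell below elevation h up to h."""
    return sum((h - z) * cell_area for z in cells if z < h)

def elevation_for_fill(cells, volume, tol=1e-9):
    """Bisection: find h such that fill_volume(cells, h) equals volume."""
    lo, hi = min(cells), max(cells) + volume   # generous upper bound
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fill_volume(cells, mid) < volume:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cells = [0.0, 0.0, 0.0, 0.0]   # flat terrain, four unit-area cells
print(round(elevation_for_fill(cells, 8.0), 6))   # 2.0 (8 units over 4 cells)
```

fill_volume is monotonically increasing in h, which is what makes the bisection valid.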

iobjectspy.analyst.cut_fill_grid(before_cut_fill_grid, after_cut_full_grid, out_data=None, out_dataset_name=None, progress=None)

Raster fill and cut calculation computes, cell by cell, the difference between two raster datasets representing the terrain before and after fill and cut. The earth's surface often experiences migration of surface material due to deposition and erosion, which manifests as an increase of surface material in some areas and a decrease in others. In engineering, the reduction of surface material is usually called "cut" (excavation), and the increase is called "fill".

The raster fill and cut calculation requires two input raster datasets: the raster dataset before fill and cut and the raster dataset after fill and cut. Each cell value of the result dataset is the change between the corresponding cell values of the two inputs. A positive cell value means the surface material at that cell decreased; a negative value means it increased. The calculation is shown in the figure below:

../_images/CalculationTerrain_CutFill.png

As the figure shows, result dataset = raster dataset before fill and cut − raster dataset after fill and cut.

There are a few things to note about the two input raster datasets and the result dataset:

  • The two input raster datasets must have the same coordinate and projection system, so that the same location has the same coordinates. If the coordinate systems of the two inputs are inconsistent, incorrect results are likely.

  • In theory, the spatial extents of the two input raster datasets should also be the same. For two raster datasets with inconsistent extents, only the fill and cut results within their overlapping area are calculated.

  • When a cell in either of the input raster datasets is a null value, the corresponding cell value in the result dataset is also a null value.

Parameters:
  • before_cut_fill_grid (DatasetGrid or str) – the specified raster dataset before cut and fill
  • after_cut_full_grid (DatasetGrid or str) – The specified raster dataset after fill and cut.
  • out_data (Datasource or str) – The specified datasource to store the result dataset.
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

fill and excavation result information

Return type:

CutFillResult
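
The per-cell rule described above (result = before − after, with nulls propagated) can be sketched on small nested lists; the cut_fill helper is illustrative only, not the library's implementation:

```python
def cut_fill(before, after):
    """Per-cell difference before - after; None (null) in either input yields
    None. Positive values mean material was removed (cut), negative filled."""
    return [
        [b - a if b is not None and a is not None else None
         for b, a in zip(brow, arow)]
        for brow, arow in zip(before, after)
    ]

before = [[10.0, 8.0], [None, 6.0]]
after  = [[7.0, 9.0], [5.0, 6.0]]
print(cut_fill(before, after))   # [[3.0, -1.0], [None, 0.0]]
```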

iobjectspy.analyst.cut_fill_oblique(input_data, line3d, buffer_radius, is_round_head, out_data=None, out_dataset_name=None, progress=None)

Oblique (slope) fill and cut calculation. This function computes the amount of fill and cut required to create an inclined surface on a terrain surface. The principle is similar to that of region fill and cut.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset to be filled and excavated.
  • line3d (GeoLine3D) – The specified filling and digging route
  • buffer_radius (float) – The buffer radius of the specified fill and cut line. The unit is the same as the coordinate system unit of the raster dataset to be filled and excavated.
  • is_round_head (bool) – Specify whether to use round head buffer to create a buffer for the fill and dig route.
  • out_data (Datasource or str) – The specified datasource to store the result dataset
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

fill and excavation result information

Return type:

CutFillResult

iobjectspy.analyst.cut_fill_region(input_data, region, base_altitude, out_data=None, out_dataset_name=None, progress=None)

Region fill and cut calculation. When an undulating area needs to be flattened to a level ground, the user can specify the area and the elevation of the flattened ground, and use this method to compute the fill area, cut area, fill volume, and cut volume.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset to be filled and excavated.
  • region (GeoRegion or Rectangle) – The designated fill and cut region.
  • base_altitude (float) – The resultant elevation of the specified fill and cut area. The unit is the same as the raster value unit of the raster dataset to be filled and excavated.
  • out_data (Datasource or str) – The specified datasource to store the result dataset.
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

fill and excavation result information

Return type:

CutFillResult
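
The quantities reported for region fill and cut can be sketched on a toy list of cell elevations: cells above the base elevation contribute to the cut area and volume, and cells below it to the fill area and volume. The helper and the unit cell area are assumptions for illustration, not the library's algorithm:

```python
def region_cut_fill(cells, base_altitude, cell_area=1.0):
    """Flatten cells to base_altitude: cells above it are cut, cells below it
    are filled. Returns (cut_area, cut_volume, fill_area, fill_volume)."""
    cut_area = cut_volume = fill_area = fill_volume = 0.0
    for z in cells:
        if z > base_altitude:
            cut_area += cell_area
            cut_volume += (z - base_altitude) * cell_area
        elif z < base_altitude:
            fill_area += cell_area
            fill_volume += (base_altitude - z) * cell_area
    return cut_area, cut_volume, fill_area, fill_volume

# Cells at 12, 9, 10, 7 flattened to elevation 10:
print(region_cut_fill([12.0, 9.0, 10.0, 7.0], 10.0))   # (1.0, 2.0, 2.0, 4.0)
```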

iobjectspy.analyst.cut_fill_region3d(input_data, region, out_data=None, out_dataset_name=None, progress=None)

Three-dimensional region fill and cut calculation. For an undulating area, the fill area, cut area, fill volume, and cut volume can be computed based on the three-dimensional surface that the area takes after filling and cutting.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster dataset to be filled and excavated.
  • region (GeoRegion3D) – The designated fill and cut region.
  • out_data (Datasource or str) – The specified datasource to store the result dataset.
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

fill and excavation result information

Return type:

CutFillResult

iobjectspy.analyst.flood(input_data, height, region=None, progress=None)

Calculate the submerged area of a DEM grid for a specified water level. The calculation is based on DEM raster data: a given water elevation (specified by the height parameter) is compared with the DEM raster values (i.e., the elevation values), and all cells whose elevation is lower than or equal to the given water level are classified as submerged. The submerged area is then converted to a vector region for output; the source DEM data is not changed. The extent and area of submersion are easy to compute from the returned region object. The figure below shows the submerged area when the water level reaches 200, formed by overlaying the original DEM data with the vector dataset of the submerged area (purple).

../_images/Flood.png

Note: The area object returned by this method is the result of merging all submerged areas.

Parameters:
  • input_data (DatasetGrid or str) – The specified DEM data for the submerged area to be calculated.
  • height (float) – The specified elevation value of the water level after submergence. The cells less than or equal to this value in the DEM data will be classified into the submerged area. The unit is the same as the grid value unit of the DEM grid to be analyzed.
  • region (GeoRegion or Rectangle) – Specified effective calculation region. If specified, the submerged area is calculated only within this region.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

The area object after all the submerged areas are merged

Return type:

GeoRegion
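
The classification rule can be sketched in plain Python (an illustration of the principle only, not the library's implementation; the names here are hypothetical): a cell with a value whose elevation is at or below the water level is submerged.

```python
# Illustrative sketch: mark submerged cells of a DEM given a water level.
def flood_mask(dem, height, no_value=-9999):
    # A cell is submerged when it has a value and its elevation <= water level.
    return [[z != no_value and z <= height for z in row] for row in dem]

def flooded_area(dem, height, cell_area=1.0, no_value=-9999):
    # Total submerged area = number of submerged cells * cell area.
    mask = flood_mask(dem, height, no_value)
    return sum(cell for row in mask for cell in row) * cell_area
```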

iobjectspy.analyst.thin_raster_bit(input_data, back_or_no_value, is_save_as_grid=True, out_data=None, out_dataset_name=None, progress=None)

Thin rasterized linear features by reducing their pixel width. This method is a thinning algorithm for binary images; a non-binary image is first converted to a binary image, for which only the background color needs to be specified, and values other than the background color are the values to be thinned. This is the fastest of the thinning methods.

Parameters:
  • input_data (DatasetImage or DatasetGrid or str) – The specified raster dataset to be refined. Support image dataset.
  • back_or_no_value (int or tuple) – Specify the background color of the grid or a value indicating no value. You can use an int or tuple to represent an RGB or RGBA value.
  • is_save_as_grid (bool) – Whether to save the result as a raster dataset. True saves it as a raster dataset; False saves it as the original data type (raster or image). Saving as a raster dataset makes it convenient to vectorize specified values during raster vectorization, so line data can be obtained quickly.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result data.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

Dataset or str

class iobjectspy.analyst.ViewShedType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the viewshed type constants used when analyzing the viewsheds of multiple observation points.

  • ViewShedType.VIEWSHEDINTERSECT – Common viewshed: the intersection of the viewshed ranges of multiple observation points.
  • ViewShedType.VIEWSHEDUNION – Non-common viewshed: the union of the viewshed ranges of multiple observation points.

VIEWSHEDINTERSECT = 0
VIEWSHEDUNION = 1
iobjectspy.analyst.NDVI(input_data, nir_index, red_index, out_data=None, out_dataset_name=None)

The normalized difference vegetation index (NDVI), also called the standardized difference vegetation index or biomass change index, can separate vegetation from water and soil.

Parameters:
  • input_data (DatasetImage or str) – Multi-band image dataset.
  • nir_index (int) – index of near infrared band
  • red_index (int) – the index of the red band
  • out_data (Datasource or str) – result datasource
  • out_dataset_name (str) – result dataset name
Returns:

Result dataset, used to save NDVI value. The range of NDVI value is between -1 and 1.

Return type:

DatasetGrid or str

iobjectspy.analyst.NDWI(input_data, nir_index, green_index, out_data=None, out_dataset_name=None)

Normalized difference water index. NDWI is generally used to extract water information from images and performs well for that purpose.

Parameters:
  • input_data (DatasetImage or str) – Multi-band image dataset.
  • nir_index (int) – index of near infrared band
  • green_index (int) – the index of the green band
  • out_data (Datasource or str) – result datasource
  • out_dataset_name (str) – result dataset name
Returns:

Result dataset, used to save NDWI value.

Return type:

DatasetGrid or str
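
Both indices follow simple band-ratio formulas: NDVI = (NIR - Red) / (NIR + Red) and NDWI = (Green - NIR) / (Green + NIR). A per-pixel sketch (illustrative only, not the library code; the function names are hypothetical):

```python
# Illustrative per-pixel index formulas; a zero denominator is guarded to 0.0.
def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); the result lies in [-1, 1].
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def ndwi(green, nir):
    # NDWI = (Green - NIR) / (Green + NIR); high values indicate water.
    return (green - nir) / (green + nir) if (green + nir) != 0 else 0.0
```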

iobjectspy.analyst.compute_features_envelope(input_data, is_single_part=True, out_data=None, out_dataset_name=None, progress=None)

Calculate the bounding rectangle (envelope) of each geometric object

Parameters:
  • input_data (DatasetVector or str) – The dataset to be analyzed. Only line dataset and surface dataset are supported.
  • is_single_part (bool) – Whether to split sub-objects when there is a combined line or combined surface. The default is True, split sub-objects.
  • out_data (Datasource or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to:py:class:.StepEvent
Returns:

The result dataset, containing the envelope of each object. A new field "ORIG_FID" is added to the result dataset to save the ID of the input object.

Return type:

DatasetVector or str
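
The envelope of one geometry is simply the axis-aligned bounding rectangle of its vertices, which can be sketched as follows (an illustrative helper, not part of the API):

```python
# Illustrative sketch: bounding rectangle of a line/region's vertices,
# returned as (min_x, min_y, max_x, max_y).
def envelope(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```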

iobjectspy.analyst.calculate_view_shed(input_data, view_point, start_angle, view_angle, view_radius, out_data=None, out_dataset_name=None, progress=None)

Single-point viewshed analysis analyzes the visible range of a single observation point. For a given observation point on a raster surface dataset, it finds the area that can be observed within a given range (determined by the observation radius and the observation angles), i.e., the visible area of that point. The result of the analysis is a raster dataset in which the visible area keeps the raster values of the original raster surface and all other areas have no value.

As shown in the figure below, the green point in the figure is the observation point, and the blue area superimposed on the original grid surface is the result of the visual field analysis.

../_images/CalculateViewShed.png

Note: If the elevation of the specified observation point is less than the elevation value of the corresponding position on the current grid surface, the elevation value of the observation point will be automatically set to the elevation of the corresponding position on the current grid surface.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster surface dataset used for visual domain analysis.
  • view_point (Point3D) – The specified view point position.
  • start_angle (float) – The specified starting observation angle, in degrees, with the true north direction being 0 degrees and rotating clockwise. Specify a negative value or greater than 360 degrees, and it will be automatically converted to the range of 0 to 360 degrees.
  • view_angle (float) – The specified viewing angle, the unit is degrees, and the maximum value is 360 degrees. The observation angle is based on the starting angle, that is, the viewing angle range is [starting angle, starting angle + observation angle]. For example, if the starting angle is 90 degrees and the observation angle is 90 degrees, the actual observation angle is from 90 degrees to 180 degrees. But note that when specifying 0 or a negative value, regardless of the value of the starting angle, the viewing angle range is 0 to 360 degrees
  • view_radius (float) – Specified viewing radius. This value limits the size of the field of view. If the observation radius is less than or equal to 0, it means no limit. The unit is meters
  • out_data (Datasource or str) – The specified datasource used to store the result dataset
  • out_dataset_name (str) – the name of the specified result dataset
  • progress (function) – progress information, please refer to:py:class:.StepEvent
Returns:

single-point visual domain analysis result dataset

Return type:

DatasetGrid or str
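
The visibility rule along a single ray from the observer can be sketched in plain Python (an illustration of the principle, not the library's algorithm; names are hypothetical): a cell is visible when its elevation angle from the observer's eye is at least as large as every angle seen before it along the ray.

```python
# Illustrative sketch of per-ray visibility on a 1-D elevation profile.
def visible_along_profile(heights, observer_height):
    # heights[0] is the cell under the observer; the eye level is
    # heights[0] + observer_height. Cell i (at distance i) is visible when
    # its slope from the eye is >= the maximum slope of all closer cells.
    eye = heights[0] + observer_height
    visible = [True]                 # the observer's own cell
    max_slope = float("-inf")
    for i in range(1, len(heights)):
        slope = (heights[i] - eye) / i
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible
```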

iobjectspy.analyst.calculate_view_sheds(input_data, view_points, start_angles, view_angles, view_radiuses, view_shed_type, out_data=None, out_dataset_name=None, progress=None)

Multi-point viewshed analysis analyzes the visible range of multiple observation points, as either a common or a non-common viewshed. The viewshed of each observation point in the given set is computed on the raster surface, and the viewsheds of all observation points are then combined, according to the specified viewshed type, as their intersection (the "common viewshed") or their union (the "non-common viewshed"). The result is output to a raster dataset in which the visible area keeps the raster values of the original raster surface and all other areas have no value.

As shown in the figure below, the green point in the figure is the observation point, and the blue area superimposed on the original grid surface is the result of the visual field analysis. The picture on the left shows the common visual field of three observation points, and the picture on the right shows the non-common visual field of three observation points.

../_images/CalculateViewShed_1.png

Note: If the elevation of the specified observation point is less than the elevation value of the corresponding position on the current grid surface, the elevation value of the observation point will be automatically set to the elevation of the corresponding position on the current grid surface.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster surface dataset used for visual domain analysis.
  • view_points (list[Point3D]) – The set of specified observation points.
  • start_angles (list[float]) – The set of starting observation angles specified, corresponding to the observation point one by one. The unit is degree, and the direction of true north is 0 degree, and the rotation is clockwise. Specify a negative value or greater than 360 degrees, and it will be automatically converted to the range of 0 to 360 degrees.
  • view_angles (list[float]) – The set of specified observation angles, corresponding one-to-one with the observation points and starting angles. The unit is degrees and the maximum value is 360 degrees. Each observation angle is measured from its starting angle, i.e., the viewing range is [starting angle, starting angle + observation angle]. For example, if the starting angle is 90 degrees and the observation angle is 90 degrees, the actual viewing range is from 90 to 180 degrees.
  • view_radiuses (list[float]) – The specified observation radius set, which corresponds to the observation point one by one. This value limits the size of the field of view. If the observation radius is less than or equal to 0, it means no limit. The unit is meters.
  • view_shed_type (ViewShedType or str) – The type of the specified visual field, which can be the intersection of the visual fields of multiple observation points, or the union of the visual fields of multiple observation points.
  • out_data (Datasource or str) – The specified datasource used to store the result dataset
  • out_dataset_name (str) – the name of the specified result dataset
  • progress (function) – progress information, please refer to:py:class:.StepEvent
Returns:

Multi-point visual domain analysis result dataset.

Return type:

DatasetGrid
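
Choosing between the two viewshed types amounts to intersecting or uniting the per-observer visibility grids, as in this hypothetical sketch (illustrative only; the real analysis works on raster datasets):

```python
# Illustrative sketch: combine per-observer boolean visibility grids.
def combine_viewsheds(masks, shed_type="VIEWSHEDINTERSECT"):
    # Intersection for the common viewshed, union for the non-common one.
    combine = all if shed_type == "VIEWSHEDINTERSECT" else any
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[combine(m[r][c] for m in masks) for c in range(cols)]
            for r in range(rows)]
```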

class iobjectspy.analyst.VisibleResult(java_object)

Bases: object

Visibility analysis result class.

This class gives the results of the visual analysis between the observation point and the observed point. If it is not visible, it will also give relevant information about the obstacle point.

barrier_alter_height

float – The recommended maximum height of the obstacle point. If the grid value (ie elevation) of the cell on the grid surface where the obstacle point is located is modified to be less than or equal to this value, the point will no longer obstruct the line of sight, but note that it does not mean that there are no other obstacle points after the point. The grid value can be modified through the set_value() method of the DatasetGrid class

barrier_point

Point3D – The coordinate value of the obstacle point. If the observation point and the observed point are not visible, the return value of this method is the first obstacle point between the observation point and the observed point. If the observation point and the observed point are visible, the obstacle point coordinates will take the default value.

from_point_index

int – The index value of the observation point. If the visibility analysis is performed between two points, the index value of the observation point is 0.

to_point_index

int – The index value of the observed point. If the visibility analysis is performed between two points, the index value of the observed point is 0.

visible

bool – Whether the observation point and the observed point pair are visible or not

iobjectspy.analyst.is_point_visible(input_data, from_point, to_point)

Two-point visibility analysis determines whether two points are mutually visible. Based on the raster surface, judging whether a given observation point and observed point can see each other is called visibility analysis between two points; the result is either visible or invisible. This method returns a VisibleResult object with the result of the analysis. If the two points are not mutually visible, the result also contains the first obstacle point that blocks the line of sight, together with a suggested elevation for that point at which it would no longer block the line of sight.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster surface dataset used for visibility analysis.
  • from_point (Point3D) – The specified starting point for visibility analysis, that is, the observation point
  • to_point (Point3D) – The specified end point for visibility analysis, that is, the observed point.
Returns:

the result of the visibility analysis

Return type:

VisibleResult
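
The principle behind two-point visibility can be sketched by sampling the straight sight line and checking whether the terrain rises above it (illustrative only; `elev` is a hypothetical elevation lookup, and the real analysis works on raster cells):

```python
# Illustrative sketch: sample the sight line between two 3-D points;
# any terrain sample above the line blocks the view.
def two_point_visible(elev, p_from, p_to, samples=100):
    # elev(x, y) -> ground elevation; p_from/p_to are (x, y, z) triples.
    (x0, y0, z0), (x1, y1, z1) = p_from, p_to
    for i in range(1, samples):
        t = i / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        line_z = z0 + t * (z1 - z0)
        if elev(x, y) > line_z:
            return False, (x, y)     # first barrier point along the line
    return True, None
```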

iobjectspy.analyst.are_points_visible(input_data, from_points, to_points)

Multi-point visibility analysis determines whether multiple points are mutually visible in pairs. Based on the raster surface, it calculates whether each observation point and each observed point can see each other. For visibility analysis between two points, please refer to is_point_visible().

If there are m observation points and n observed points, there will be m * n observation combinations. The analysis result is returned through an array of VisibleResult objects, each VisibleResult object includes whether the corresponding two points are visible. If they are not visible, the first obstacle point will be given, and the suggested elevation value of the point so that the point no longer obstructs the line of sight.

Note: If the elevation of the specified observation point is less than the elevation value of the corresponding position on the current grid surface, the elevation value of the observation point will be automatically set to the elevation of the corresponding position on the current grid surface.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster surface dataset used for visibility analysis.
  • from_points (list[Point3D]) – The specified starting point for visibility analysis, that is, the observation point
  • to_points (list[Point3D]) – The specified end point for visibility analysis, that is, the observed point.
Returns:

the result of the visibility analysis

Return type:

list[VisibleResult]

iobjectspy.analyst.line_of_sight(input_data, from_point, to_point)

Calculate the line of sight between two points, that is, calculate the visible and invisible parts of the line of sight from the observation point to the target point based on the terrain. According to the ups and downs of the terrain, calculating which segments of the line of sight from the observation point to the target point are visible or invisible is called calculating the line of sight between two points. The line between the observation point and the target point is called the line of sight. The line of sight can help understand which locations can be seen at a given point, and can serve for tourism route planning, the site selection of radar stations or signal transmission stations, and military activities such as the deployment of positions and observation posts.

../_images/LineOfSight.png

The elevation of the observation point and the target point is determined by their Z value. When the Z value of the observation point or the target point is less than the elevation value of the corresponding cell on the grid surface, the grid value of the cell is used as the elevation of the observation point or the target point to calculate the line of sight.

The result of calculating the line of sight between two points is an array of line objects: element 0 is the visible line object and element 1 is the invisible line object. The length of the array may be 1 or 2, because the invisible line object may not exist; in that case the result contains only the visible line object. Since the visible (or invisible) part may be discontinuous, each object may be a complex line object.

Parameters:
  • input_data (DatasetGrid or str) – The specified raster surface dataset.
  • from_point (Point3D) – The specified observation point is a three-dimensional point object.
  • to_point (Point3D) – The specified target point is a three-dimensional point object.
Returns:

A list of line objects: the visible line object, followed by the invisible line object if it exists

Return type:

list[GeoLine]
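
Splitting the sight line into visible and invisible parts can be sketched with the same rising-angle rule used for visibility (illustrative only; assembling the sample points into line objects is omitted, and `elev` is a hypothetical elevation lookup):

```python
# Illustrative sketch: classify sample points along the sight line as
# visible or hidden. A point is visible when its elevation angle from
# the observer is >= every angle seen at closer points.
def sight_line_segments(elev, p_from, p_to, samples=100):
    (x0, y0, z0), (x1, y1, _z1) = p_from, p_to
    visible_pts, hidden_pts = [], []
    max_slope = float("-inf")
    for i in range(1, samples + 1):
        t = i / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        slope = (elev(x, y) - z0) / t
        if slope >= max_slope:
            max_slope = slope
            visible_pts.append((x, y))
        else:
            hidden_pts.append((x, y))
    return visible_pts, hidden_pts
```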

iobjectspy.analyst.radar_shield_angle(input_data, view_point, start_angle, end_angle, view_radius, interval, out_data=None, out_dataset_name=None, progress=None)

According to the terrain raster and the radar center point, return a point dataset containing the point with the largest radar shielding angle at each azimuth. The azimuth is measured clockwise from true north.

Parameters:
  • input_data (DatasetGrid or str or list[DatasetGrid] or list[str]) – The specified terrain raster (DEM) dataset or list of datasets.
  • view_point (Point3D) – A three-dimensional point object, which represents the coordinates of the radar center point and the height between the radar center and the ground.
  • start_angle (float) – The starting angle of the radar azimuth, the unit is degrees, the true north direction is 0 degrees, and the rotation is clockwise. The range is 0 to 360 degrees. If set to less than 0, the default value Is 0; if the value is greater than 360, the default is 360.
  • end_angle (float) – The end angle of the radar azimuth, the unit is degree, the maximum is 360 degrees. The viewing angle is based on the starting angle, that is, the viewing angle range is [starting angle, ending angle). The value must be greater than the starting angle. If the value is less than or equal to 0, it means [0,360).
  • view_radius (float) – Observation range, in meters. If it is set to less than 0, it means the entire topographic map range.
  • interval (float) – The azimuth interval, that is, how many degrees to return a radar masking point. The value must be greater than 0 and less than 360.
  • out_data (Datasource or str) – target datasource.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to:py:class:.StepEvent
Returns:

The returned 3D point dataset; Z represents the terrain height at each point. The dataset records the point with the largest radar shielding angle in each azimuth, and adds the fields "ShieldAngle", "ShieldPosition", and "RadarDistance", which respectively record the radar shielding angle, the angle between the point and true north, and the distance between the point and the radar center.

Return type:

DatasetVector or str
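
The shielding angle at a sample point is the elevation angle from the radar center to that point; the point with the largest angle along one azimuth can be found as in this hypothetical sketch (illustrative only, not the library's implementation):

```python
import math

# Illustrative sketch: profile is a list of (distance_m, ground_elev)
# samples along one azimuth; radar_height is the radar centre elevation.
def max_shield_point(profile, radar_height):
    # Return the sample with the largest shielding (elevation) angle seen
    # from the radar centre, plus that angle in degrees.
    best, best_angle = None, float("-inf")
    for dist, z in profile:
        angle = math.degrees(math.atan2(z - radar_height, dist))
        if angle > best_angle:
            best, best_angle = (dist, z), angle
    return best, best_angle
```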

iobjectspy.analyst.majority_filter(source_grid, neighbour_number_method, majority_definition, out_data=None, out_dataset_name=None, progress=None)

Majority (mode) filtering; returns the result raster dataset. Each cell value is replaced by the mode of its neighboring cell values. Two conditions must be met before a replacement is performed: the number of neighboring cells sharing the same value must be sufficient (at least half of all neighbors), and those cells must be spatially connected around the filter kernel. The second condition, which concerns the spatial connectivity of the cells, minimizes damage to the spatial pattern of the data.

Special cases:

  • Corner cells: there are 2 neighbors in the 4-neighborhood case and 3 in the 8-neighborhood case; two or more connected cells with the same value allow replacement.
  • Edge cells: in the 4-neighborhood case there are 3 neighbors, and 2 or more connected cells with the same value allow replacement; in the 8-neighborhood case there are 5 neighbors, and 3 or more cells with the same value, at least one of which lies on the edge, are required for replacement.
  • Exactly half: when two values each account for half of the neighbors, the cell keeps its value if one of the tied values matches its own; it is never replaced arbitrarily.

The figure below is a schematic diagram of mode filtering.

../_images/majorityFilter.png
Parameters:
  • source_grid (DatasetGrid or str) – The specified dataset to be processed. The input raster must be of integer type.
  • neighbour_number_method (str or NeighbourNumber) – The number of neighborhood pixels. There are two selection methods: 4 pixels up, down, left, and right as adjacent pixels (FOUR), and 8 adjacent pixels as adjacent pixels (EIGHT).
  • majority_definition (str or MajorityDefinition) – majority definition, that is, specify the number of adjacent (spatially connected) pixels that must have the same value before replacing, refer to:py:class:.MajorityDefinition for details.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset.
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset

Return type:

DatasetGrid
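
The per-cell replacement rule (ignoring the spatial-connectivity condition described above) can be sketched as follows; the names are hypothetical and this is not the library's implementation:

```python
from collections import Counter

# Illustrative sketch: decide the new value of one cell from its neighbours.
def majority_filter_cell(values, center):
    # values: the 4 or 8 neighbouring cell values. Replace the centre only
    # when one value occurs in at least half of the neighbours (the HALF
    # majority definition); spatial connectivity is ignored in this sketch.
    value, count = Counter(values).most_common(1)[0]
    return value if count * 2 >= len(values) else center
```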

iobjectspy.analyst.expand(source_grid, neighbour_number_method, cell_number, zone_values, out_data=None, out_dataset_name=None, progress=None)

Expand, return the result raster dataset. Expand the specified grid area by the specified number of pixels. The specified area value is regarded as the foreground area, and the remaining area values are regarded as the background area. In this way, the foreground area can be extended to the background area. Valueless pixels will always be treated as background pixels, so adjacent pixels of any value can be expanded to non-valued pixels, and non-valued pixels will not be expanded to adjacent pixels.

note:

  • When there is only one zone value, that value is expanded.
  • When there are multiple zone values, the nearest one is expanded first.
  • When distances are equal, the contribution value of each zone value is calculated and the value with the largest total contribution is expanded (the contribution is calculated differently for the 4-neighborhood and 8-neighborhood methods).
  • When both the distance and the contribution value are equal, the smallest value is expanded.

The following figure is an expanded schematic diagram:

../_images/expand.png
Parameters:
  • source_grid (DatasetGrid or str) – The specified dataset to be processed. The input raster must be of integer type.
  • neighbour_number_method (NeighbourNumber or str) – The number of neighborhood pixels, here refers to the method used to expand the selected area. There are two expansion methods based on distance, that is, 4 pixels up, down, left, and right as neighboring pixels (FOUR), and based on mathematical morphology, that is, 8 neighboring pixels are used as neighboring pixels (EIGHT).
  • cell_number (int) – generalization amount. The number of pixels to be expanded in each specified area, similar to the specified number of runs, where the result of the previous run is the input of the subsequent iteration, and the value must be an integer greater than 1.
  • zone_values (list[int]) – zone values. The cell area value to be expanded.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result raster dataset

Return type:

DatasetGrid
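
One 4-neighborhood expansion pass can be sketched as below (illustrative only; the contribution-value tie-breaking described above is omitted, and the names are hypothetical):

```python
# Illustrative sketch: dilate the cells whose values are in zone_values by
# cell_number passes; each pass turns a background cell into foreground when
# any up/down/left/right neighbour is a zone cell.
def expand_zone(grid, zone_values, cell_number=1):
    rows, cols = len(grid), len(grid[0])
    for _ in range(cell_number):
        out = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] in zone_values:
                    continue                      # already foreground
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] in zone_values):
                        out[r][c] = grid[nr][nc]  # take the neighbour's value
                        break
        grid = out
    return grid
```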

iobjectspy.analyst.shrink(source_grid, neighbour_number_method, cell_number, zone_values, out_data=None, out_dataset_name=None, progress=None)

Shrink and return the result raster dataset. Shrink the selected area by the specified number of pixels by replacing the value of the area with the value of the most frequently occurring pixel in the neighborhood. The specified area value is regarded as the foreground area, and the remaining area values are regarded as the background area. In this way, the pixels in the background area can be used to replace the pixels in the foreground area.

note:

  • When there are multiple candidate values for shrinking, the most frequent one is taken; when several values are equally frequent, one of them is chosen at random.
  • When two adjacent zones are both to be shrunk, their shared boundary does not change.
  • No-value is a valid value, i.e., cells adjacent to no-value data may be replaced with no-value.

The following figure is a schematic diagram of shrinkage:

../_images/shrink.png
Parameters:
  • source_grid (DatasetGrid or str) – The specified dataset to be processed. The input raster must be of integer type.
  • neighbour_number_method (NeighbourNumber or str) – The number of neighborhood pixels, here refers to the method used to shrink the selected area. There are two shrinking methods based on distance, that is, four pixels up, down, left, and right are used as neighboring pixels (FOUR), and based on mathematical morphology, that is, eight neighboring pixels are used as neighboring pixels (EIGHT).
  • cell_number (int) – generalization amount. The number of pixels to shrink, similar to the specified number of runs, where the result of the previous run is the input of subsequent iterations, and the value must be an integer greater than 0.
  • zone_values (list[int]) – zone values. The cell area value to be contracted.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset.
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result raster dataset

Return type:

DatasetGrid

iobjectspy.analyst.nibble(source_grid, mask_grid, zone_grid, is_mask_no_value, is_nibble_no_value, out_data=None, out_dataset_name=None, progress=None)

Nibble, return the result raster dataset.

Replace the raster cell value within the mask with the value of the nearest neighbor. Nibble can assign the value of the nearest neighbor to the selected area in the raster, and can be used to edit areas in a raster with known data errors.

Generally speaking, the no-value cells in the mask grid define which cells are nibbled. Any position in the input raster that is not within the mask will not be nibbled.

The figure below is a schematic diagram of nibbling:

../_images/nibble.png
Parameters:
  • source_grid (DatasetGrid or str) – The specified dataset to be processed. The input raster can be integer or floating point.
  • mask_grid (DatasetGrid or str) – The specified raster dataset as the mask.
  • zone_grid (DatasetGrid or str) – The zone grid. If there is an area raster, the pixels in the mask will only be replaced by the nearest pixels (unmasked values) in the same area in the area raster. Area refers to the same value in the raster.
  • is_mask_no_value (bool) – Whether the no-value cells in the mask are the ones to be nibbled. True means the no-value cells are nibbled: cells of the original raster that correspond to no-value cells in the mask are replaced with the value of the nearest neighboring cell, while cells corresponding to valued mask cells keep their original values. False means the valued cells are nibbled: cells of the original raster that correspond to valued cells in the mask are replaced with the value of the nearest neighboring cell, while cells corresponding to no-value mask cells keep their original values. The first case is the more common one.
  • is_nibble_no_value (bool) – Whether to modify the non-valued data in the original grid. True indicates that the valueless pixels in the input raster are still valueless in the output; False indicates that the valueless pixels in the input raster in the mask can be cannibalized into valid output pixel values
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset
  • out_dataset_name (str) – The name of the specified result dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Return type:

DatasetGrid

iobjectspy.analyst.region_group(source_grid, neighbour_number_method, is_save_link_value, is_link_by_neighbour=False, exclude_value=None, out_data=None, out_dataset_name=None, progress=None)

Regional grouping. For each cell of the output, record the identifier of the connected area to which it belongs; the system assigns a unique number to each area. In simple terms, connected cells with the same value are joined into an area and numbered: the first area scanned is assigned the value 1, the second area the value 2, and so on until all areas are assigned. Scanning proceeds from left to right and from top to bottom.

The following figure is a schematic diagram of regional grouping:

../_images/regionGroup.png
Parameters:
  • source_grid (DatasetGird or str) – The specified dataset to be processed. The input raster must be of integer type.
  • neighbour_number_method (str or NeighbourNumber) – The number of neighborhood pixels. There are two selection methods: up, down, left and right 4 pixels as adjacent pixels (FOUR), and adjacent 8 pixels as adjacent pixels (EIGHT).
  • is_save_link_value (bool) – Whether to keep the original value of the corresponding raster. If set to True, a SourceValue field is added to the attribute table to record the original value of each area of the input raster; if the original value of each area is not needed, it can be set to False, which speeds up processing.
  • is_link_by_neighbour (bool) – Whether to connect based on neighborhood. When set to true, connect the pixels to form an area according to the 4-neighborhood or 8-neighborhood method; when set to false, the excluded value must be set. At this time, all connected areas except for the excluded value can form an area
  • exclude_value (int) – Exclude value. The excluded raster value does not participate in the counting. On the output raster, the position of the cell containing the excluded value is assigned a value of 0. If the exclusion value is set, there is no connection information in the result attribute table.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset
  • out_dataset_name (str) – The name of the specified result dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result raster dataset and attribute table

Return type:

tuple[DatasetGrid, DatasetVector]
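The grouping logic can be illustrated with a small pure-Python sketch (an illustration of the idea only, not the iobjectspy implementation; the region_group helper below is hypothetical): cells with equal values are flood-filled into regions and numbered in scan order, left to right and top to bottom.

```python
from collections import deque

def region_group(grid, eight=False):
    """Label connected regions of equal value, scanning left-right, top-bottom."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # FOUR neighborhood
    if eight:                                              # EIGHT adds the diagonals
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    next_id = 1
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                continue
            labels[r][c] = next_id
            queue = deque([(r, c)])
            while queue:                                   # flood-fill equal values
                cr, cc = queue.popleft()
                for dr, dc in offsets:
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not labels[nr][nc]
                            and grid[nr][nc] == grid[r][c]):
                        labels[nr][nc] = next_id
                        queue.append((nr, nc))
            next_id += 1
    return labels
```

With FOUR connectivity the two diagonal cells of `[[1, 2], [2, 1]]` form four separate regions; with EIGHT connectivity the diagonals join into two regions.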

iobjectspy.analyst.boundary_clean(source_grid, sort_type, is_run_two_times, out_data=None, out_dataset_name=None, progress=None)

Boundary cleanup. Returns the result raster dataset. Smooths the boundaries between regions by expanding and contracting them. All regions smaller than three pixels in either the x or y direction will be changed.

The following figure is a schematic diagram of boundary cleaning:

../image/BoundaryClean.png
Parameters:
  • source_grid (DatasetGrid or str) – The specified dataset to be processed. The input raster must be of integer type.
  • sort_type (BoundaryCleanSortType or str) – Sort method. Specifies the sort type to be used in smoothing. Three methods are available: NOSORT, DESCEND, and ASCEND.
  • is_run_two_times (bool) – Whether to perform the smoothing process twice. True means the expansion-contraction process is performed twice: first expand and contract according to the sort type, then perform one more contraction and expansion with the opposite priority. False means expansion and contraction are performed once according to the sort type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The specified datasource for storing the result dataset.
  • out_dataset_name (str) – the name of the result dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result raster dataset

Return type:

DatasetGrid
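The expand-and-contract idea can be sketched in pure Python as a maximum filter (expansion) followed by a minimum filter (contraction) over a binary grid. This is a simplified illustration under that assumption, not the iobjectspy algorithm, which additionally orders the passes by the sort type.

```python
def _apply(grid, pick):
    """Apply a 3x3 neighborhood filter (pick = max to expand, min to contract)."""
    rows, cols = len(grid), len(grid[0])
    return [[pick(grid[nr][nc]
                  for nr in range(max(r - 1, 0), min(r + 2, rows))
                  for nc in range(max(c - 1, 0), min(c + 2, cols)))
             for c in range(cols)]
            for r in range(rows)]

def boundary_clean(grid, run_two_times=False):
    """Expand then contract; optionally repeat with the opposite order."""
    out = _apply(_apply(grid, max), min)
    if run_two_times:                      # second pass with reversed priority
        out = _apply(_apply(out, min), max)
    return out

# a lone pixel keeps its position after one expand/contract pass
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
smoothed = boundary_clean(grid)
```

Running the process twice (contract first on the second pass) removes the isolated pixel entirely, matching the note that regions smaller than three pixels are changed.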

class iobjectspy.analyst.CellularAutomataParameter

Bases: object

Cellular automata parameter setting class. Including setting the starting grid and spatial variable grid data, as well as the display and output configuration of the simulation process (simulation result iterative refresh, simulation result output), etc.

cell_grid

DatasetGrid – starting data grid

flush_file_path

str – file path for interface refresh

flush_frequency

int – refresh frequency of iteration results

is_save

bool – whether to save the intermediate iteration result

iterations

int – number of iterations

output_dataset_name

str – name of the dataset used to save intermediate iteration results

output_datasource

Datasource – datasource for saving intermediate iteration results.

save_frequency

int – Frequency of saving intermediate iteration results

set_cell_grid(cell_grid)

Set the starting data grid.

Parameters:cell_grid (DatasetGrid or str) – starting data grid
Returns:self
Return type:CellularAutomataParameter
set_flush_file_path(value)

Set the file path for interface refresh. The file extension must be '.tif'.

Parameters:value (str) – file path for interface refresh
Returns:self
Return type:CellularAutomataParameter
set_flush_frequency(flush_frequency)

Set the refresh frequency of iteration results. That is, the output information and graphs are refreshed every few iterations.

Parameters:flush_frequency (int) – Iteration result refresh frequency
Returns:self
Return type:CellularAutomataParameter
set_iterations(value)

Set the number of iterations

Parameters:value (int) – number of iterations
Returns:self
Return type:CellularAutomataParameter
set_output_dataset_name(dataset_name)

Set the name of the dataset to save the intermediate iteration results

Parameters:dataset_name (str) – the name of the dataset to save the intermediate iteration results
Returns:self
Return type:CellularAutomataParameter
set_output_datasource(datasource)

Set the datasource for saving intermediate iteration results.

Parameters:datasource (Datasource or str or DatasourceConnectionInfo) – The datasource for saving intermediate iteration results.
Returns:self
Return type:CellularAutomataParameter
set_save(save)

Set whether to save intermediate iteration results. That is, whether to output the result during the simulation.

Parameters:save (bool) – whether to save intermediate iteration results
Returns:self
Return type:CellularAutomataParameter
set_save_frequency(save_frequency)

Set the saving frequency of intermediate iteration results. That is, the result is output every given number of iterations.

Parameters:save_frequency (int) – save frequency of intermediate iteration results
Returns:self
Return type:CellularAutomataParameter
set_simulation_count(simulation_count)

Set the number of conversions. The number of simulated conversions is a parameter required for the simulation process, and refers to the number of cells converted between two different time periods (for example, newly added urban cells).

Parameters:simulation_count (int) – conversion number
Returns:self
Return type:CellularAutomataParameter
set_spatial_variable_grids(spatial_variable_grids)

Set the grid array of spatial variable data.

Parameters:spatial_variable_grids (DatasetGrid or list[DatasetGrid] or tuple[DatasetGrid]) – grid array of spatial variable data
Returns:self
Return type:CellularAutomataParameter
simulation_count

int – conversion number

spatial_variable_grids

list[DatasetGrid] – spatial variable data grid array

class iobjectspy.analyst.PCACellularAutomataParameter

Bases: object

Cellular automata parameter class based on principal component analysis. In a cellular automata process based on principal component analysis, a principal component analysis must first be performed. This requires setting the principal component weight values and the parameters needed by the simulation process (non-linear exponential transformation value, diffusion index), etc.

alpha

int – diffusion parameter

cellular_automata_parameter

CellularAutomataParameter – cellular automata parameters

component_weights

list[float] – Principal component weight array

conversion_rules

dict[int,bool] – Conversion rules

conversion_target

int – conversion target

index_k

float – Non-linear exponential transformation value.

set_alpha(value)

Set the diffusion parameters. Generally 1-10.

Parameters:value (int) – Diffusion parameter.
Returns:self
Return type:PCACellularAutomataParameter
set_cellular_automata_parameter(value)

Set cellular automata parameters.

Parameters:value (CellularAutomataParameter) – Cellular automata parameters
Returns:self
Return type:PCACellularAutomataParameter
set_component_weights(value)

Set the principal component weight array.

Parameters:value (list[float] or tuple[float]) – Principal component weight array
Returns:self
Return type:PCACellularAutomataParameter
set_conversion_rules(value)

Set conversion rules. For example, in the change of land use, water areas are non-convertible land and farmland is convertible land.

Parameters:value (dict[int,bool]) – conversion rules
Returns:self
Return type:PCACellularAutomataParameter
set_conversion_target(value)

Set conversion goals. For example, in the conversion of farmland to urban land, urban land is the conversion target.

Parameters:value (int) – conversion target
Returns:self
Return type:PCACellularAutomataParameter
set_index_k(value)

Set the non-linear exponential transformation value. In this system the value is 4.

Parameters:value (float) – Non-linear exponential transformation value.
Returns:self
Return type:PCACellularAutomataParameter
class iobjectspy.analyst.PCAEigenValue

Bases: object

Principal component analysis eigenvalue result class.

contribution_rate

float – contribution rate

cumulative

float – cumulative contribution rate

eigen_value

float – eigenvalue

spatial_dataset_raster_name

str – spatial variable data name

class iobjectspy.analyst.PCAEigenResult

Bases: object

Principal component analysis result class. Principal component analysis yields different numbers of principal components depending on the number of samples and the principal component ratio. Therefore, the weights need to be set according to the results obtained from the principal component analysis (principal components, contribution rates, etc.). After setting the weights, cellular automata can be used for simulation.

component_count

int – number of principal components

pca_eigen_values

list[PCAEigenValue] – Principal component analysis feature value result array

pca_loadings

list[float] – Principal component contribution rate

spatial_dataset_raster_names

list[str] – spatial variable data name

class iobjectspy.analyst.PCACellularAutomata

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Cellular automata based on principal component analysis.

Cellular automata (CA) is a network dynamics model in which time, space, and state are discrete, and spatial interaction and time causality are local. It has the ability to simulate the spatiotemporal evolution process of complex systems.

When geographic simulations need to use many spatial variables, these variables are often correlated. Principal component analysis can effectively compress many spatial variables into a few principal components and reduce the difficulty of setting weights. Cellular automata based on principal component analysis are applied in the spatial simulation of urban development.

pca(spatial_variable_grids, sample_count, component_radio, progress_func=None)

Sampling and principal component analysis are performed on the cell dataset.

This method is used to set the corresponding weight value using the number of principal components obtained before the cellular automata analysis based on principal component analysis.

Parameters:
  • spatial_variable_grids (list[DatasetGrid] or tuple[DatasetGrid]) – spatial variable grid dataset.
  • sample_count (int) – The number of samples. Randomly sample the specified number of samples from the entire raster data
  • component_radio (float) – Principal component ratio, the value range is [0,1], for example, when the value is 0.8, it means to select the first n principal components whose cumulative contribution rate reaches 80%.
  • progress_func (function) – progress information processing function, please refer to StepEvent
Returns:

Principal component analysis result, including the number of principal components, contribution rate, eigenvalue and eigenvector

Return type:

PCAEigenResult
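The role of the component_radio parameter can be sketched in pure Python: given eigenvalues sorted in descending order, compute each component's contribution rate and keep the first n components whose cumulative contribution reaches the ratio. This is an illustrative sketch, not the library's implementation; select_components is a hypothetical helper.

```python
def select_components(eigen_values, component_ratio):
    """Keep the first n components whose cumulative contribution reaches the ratio."""
    total = sum(eigen_values)
    contribution_rates = [v / total for v in eigen_values]
    cumulative, chosen = 0.0, []
    for rate in contribution_rates:
        cumulative += rate
        chosen.append(rate)
        if cumulative >= component_ratio:   # cumulative contribution reached
            break
    return chosen

# eigenvalues sorted in descending order: a ratio of 0.8 keeps the first 3 of 4
print(select_components([4.0, 3.0, 2.0, 1.0], 0.8))   # → [0.4, 0.3, 0.2]
```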

pca_cellular_automata(parameter, out_data=None, out_dataset_name=None, progress_func=None, flush_func=None)

Cellular automata based on principal component analysis.

Parameters:
  • parameter (PCACellularAutomataParameter) – The parameters of cellular automata based on principal component analysis.
  • out_data (Datasource or str) – The datasource of the output result dataset.
  • out_dataset_name (str) – The name of the output dataset.
  • progress_func (function) – progress information processing function, please refer to StepEvent
  • flush_func (function) – Cellular automata flushing information processing function, please refer to CellularAutomataFlushedEvent
Returns:

result raster dataset

Return type:

DatasetGrid

class iobjectspy.analyst.CellularAutomataFlushedEvent(flush_file_path=None)

Bases: object

Cellular automata refresh event class.

Parameters:flush_file_path (str) – Tif file path for flushing
flush_file_path

str – the path of the tif file used for refreshing

class iobjectspy.analyst.ANNTrainResult(java_object)

Bases: object

Artificial Neural Network (ANN) training results

accuracy

float – training accuracy rate

convert_values

dict[int, float] – training iteration results; the key is the iteration number and the value is the error rate

class iobjectspy.analyst.ANNCellularAutomataParameter

Bases: object

Cellular automata parameter setting based on artificial neural network.

alpha

int – diffusion parameter

cellular_automata_parameter

CellularAutomataParameter – cellular automata parameters

conversion_class_ids

list[int] – The classification ID (ie raster value) array of the cellular automata conversion rule

conversion_rules

list[list[bool]] – Cellular Automata Conversion Rules

end_cell_grid

DatasetGrid – end raster dataset

is_check_result

bool – Whether to check the result or not

set_alpha(value)

Set diffusion parameters

Parameters:value (int) – Diffusion parameter. Generally 1-10.
Returns:self
Return type:ANNCellularAutomataParameter
set_cellular_automata_parameter(value)

Set the parameters of cellular automata

Parameters:value (CellularAutomataParameter) – parameter of cellular automaton
Returns:self
Return type:ANNCellularAutomataParameter
set_check_result(value)

Set whether to check the result

Parameters:value (bool) – whether to check the result
Returns:self
Return type:ANNCellularAutomataParameter
set_conversion_class_ids(value)

Set the classification ID (ie raster value) array of the cellular automata conversion rule

Parameters:value (list[int] or tuple[int]) – The classification ID (ie raster value) array of the cellular automata conversion rule
Returns:self
Return type:ANNCellularAutomataParameter
set_conversion_rules(value)

Set cellular automata conversion rules.

Parameters:value (list[list[bool]]) – Set cellular automata conversion rules
Returns:self
Return type:ANNCellularAutomataParameter
set_end_cell_grid(value)

Set the ending raster dataset. Must be set when is_check_result is True

Parameters:value (DatasetGrid or str) – end raster dataset
Returns:self
Return type:ANNCellularAutomataParameter
set_threshold(value)

Set the threshold of cell transition probability

Parameters:value (float) – Cell transition probability threshold
Returns:self
Return type:ANNCellularAutomataParameter
threshold

float – cell transition probability threshold

class iobjectspy.analyst.ANNCellularAutomata

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Cellular automata based on artificial neural network.

ann_cellular_automata(parameter, out_data=None, out_dataset_name=None, progress_func=None, flush_func=None)

Cellular automata based on artificial neural network.

Parameters:
  • parameter (ANNCellularAutomataParameter) – Cellular automata parameters based on artificial neural network.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource of the output dataset.
  • out_dataset_name (str) – The name of the output dataset.
  • progress_func (function) – progress information processing function, please refer to StepEvent
  • flush_func (function) – Cellular automata flushing information processing function, please refer to CellularAutomataFlushedEvent
Returns:

The result of cellular automata based on artificial neural network, including land type (if any), accuracy (if any), and result raster dataset.

Return type:

ANNCellularAutomataResult

ann_train(error_rate, max_times)

Artificial neural network training.

Parameters:
  • error_rate (float) – artificial neural network training termination condition, expected error value.
  • max_times (int) – artificial neural network training termination condition, the maximum number of iterations.
Returns:

artificial neural network training result

Return type:

ANNTrainResult
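The two termination conditions of ann_train() can be illustrated with a minimal single-neuron training loop in pure Python (a hypothetical sketch, not the library's network): training stops as soon as the per-iteration error rate drops to error_rate, or after max_times iterations, and the per-iteration errors are returned in a dict shaped like ANNTrainResult.convert_values.

```python
def ann_train(samples, labels, error_rate, max_times, lr=0.2):
    """Train a single neuron; stop at the expected error or after max_times iterations."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    errors = {}                       # iteration number -> error rate
    for iteration in range(1, max_times + 1):
        wrong = 0
        for x, target in zip(samples, labels):
            out = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
            delta = target - out
            if delta:                 # misclassified: perceptron update
                wrong += 1
                weights = [w + lr * delta * v for w, v in zip(weights, x)]
                bias += lr * delta
        errors[iteration] = wrong / len(samples)
        if errors[iteration] <= error_rate:   # expected error reached: stop early
            break
    return errors

# AND gate: linearly separable, so the loop terminates well before max_times
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
result = ann_train(samples, labels, error_rate=0.0, max_times=100)
```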

initialize_ann(train_start_cell_grid, train_end_cell_grid, ann_train_values, spatial_variable_grids, ann_parameter)

Initialize cellular automata based on artificial neural network

Parameters:
  • train_start_cell_grid (DatasetGrid or str) – training starting grid dataset
  • train_end_cell_grid (DatasetGrid or str) – training end grid dataset
  • ann_train_values (list[int] or tuple[int]) –
  • spatial_variable_grids (list[DatasetGrid] or list[str]) – spatial variable grid dataset
  • ann_parameter (ANNParameter) – artificial neural network training parameter settings.
Returns:

Whether the initialization is successful

Return type:

bool

class iobjectspy.analyst.ANNCellularAutomataResult(convert_values, accuracies, result_dataset)

Bases: object

Cellular automata result based on artificial neural network

accuracies

list[float] – accuracy rates

convert_values

list[float] – array of conversion values, that is, the raster values of the conversion rules

result_dataset

DatasetGrid – Cellular Automata raster result dataset

class iobjectspy.analyst.ANNParameter(is_custom_neighborhood=False, neighborhood_number=7, custom_neighborhoods=None, learning_rate=0.2, sample_count=1000)

Bases: object

Artificial neural network parameter settings.

Initialization object

Parameters:
  • is_custom_neighborhood (bool) – Whether to customize the neighborhood range
  • neighborhood_number (int) – neighborhood range
  • custom_neighborhoods (list[list[bool]]) – custom neighborhood range.
  • learning_rate (float) – learning rate
  • sample_count (int) – number of samples
custom_neighborhoods

list[list[bool]] – custom neighborhood range

is_custom_neighborhood

bool – Whether to customize the neighborhood range

learning_rate

float – learning rate

neighborhood_number

int – neighborhood range

sample_count

int – number of samples

set_custom_neighborhood(value)

Set whether to customize the neighborhood range

Parameters:value (bool) – Whether to customize the neighborhood range
Returns:self
Return type:ANNParameter
set_custom_neighborhoods(value)

Set the custom neighborhood range

Parameters:value (list[list[bool]]) – custom neighborhood range
Returns:self
Return type:ANNParameter
set_learning_rate(value)

Set the learning rate

Parameters:value (float) – learning rate
Returns:self
Return type:ANNParameter
set_neighborhood_number(value)

Set neighborhood range

Parameters:value (int) – neighborhood range
Returns:self
Return type:ANNParameter
set_sample_count(value)

Set the number of samples

Parameters:value (int) – number of samples
Returns:self
Return type:ANNParameter
class iobjectspy.analyst.MCECellularAutomata

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Cellular automata based on multi-criteria judgment.

get_kappa()

Get the Kappa coefficient.

Returns:kappa coefficient. It is used for consistency checking and can be used to measure the accuracy of cell conversion.
Return type:float
mce_cellular_automata(parameter, out_data=None, out_dataset_name=None, progress_func=None, flush_func=None)

Cellular automata based on multi-criteria judgment.

>>> ds = open_datasource('/home/data/cellular.udbx')
>>> para = CellularAutomataParameter()
>>> para.set_cell_grid(ds['T2001'])
>>> para.set_spatial_variable_grids((ds['x0'], ds['x1'], ds['x2'], ds['x3'], ds['x4']))
>>> para.set_simulation_count(1000).set_iterations(10)
>>> para.set_output_datasource(ds).set_output_dataset_name('result')
>>> ahp_v = [[1, 0.33333343, 0.2, 3], [3, 1, 0.333333343, 3], [5, 3, 1, 5], [0.333333343, 0.333333343, 0.2, 1]]
>>> MCECellularAutomataParameter.check_ahp_consistent(ahp_v)
[0.13598901557207943, 0.24450549432771368, 0.5430402884352277, 0.0764652016649792]
>>>
>>> mce_parameter = MCECellularAutomataParameter().set_cellular_automata_parameter(para)
>>> mce_parameter.set_conversion_rules({2: False, 3: True, 4: False, 5: True})
>>> mce_parameter.set_conversion_target(1).set_check_result(True).set_end_cell_grid(ds['T2006'])
>>> mce_parameter.set_ahp_comparison_matrix(ahp_v).set_alpha(2)
>>> def progress_function(step):
...     print('{}: {}'.format(step.title, step.message))
>>>
>>> result_grid = MCECellularAutomata().mce_cellular_automata(mce_parameter, progress_func=progress_function)
>>>
Parameters:
  • parameter (MCECellularAutomataParameter) – Parameters of cellular automata based on multi-criteria judgment.
  • out_data (Datasource or str) – The datasource where the output result dataset is located.
  • out_dataset_name (str) – The name of the output dataset.
  • progress_func (function) – progress information processing function, please refer to StepEvent
  • flush_func (function) – Cellular automata flushing information processing function, please refer to CellularAutomataFlushedEvent
Returns:

result raster dataset

Return type:

DatasetGrid
class iobjectspy.analyst.MCECellularAutomataParameter

Bases: object

Cellular Automata Parameter Class Based on Multi-criteria Judgment

ahp_comparison_matrix

list[list[float]] – AHP consistency test judgment matrix

alpha

int – diffusion parameter

cellular_automata_parameter

CellularAutomataParameter – cellular automata parameters

static check_ahp_consistent(ahp_comparison_matrix)

Consistency test of the analytic hierarchy process (AHP).

>>> values = [[1, 0.33333343, 0.2, 3], [3, 1, 0.333333343, 3], [5, 3, 1, 5], [0.333333343, 0.333333343, 0.2, 1]]
>>> MCECellularAutomataParameter.check_ahp_consistent(values)

Parameters:ahp_comparison_matrix (list[list[float]] or numpy.ndarray) – judgment matrix. If set as a list, each element must itself be a list and all rows must have the same length. If the numpy library is available, a two-dimensional numpy array can also be passed.
Returns:the weight array on success; None if the matrix fails the consistency test
Return type:list[float]
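The consistency test can be sketched in pure Python. The weights in the example above are the normalized column averages of the judgment matrix; the consistency ratio CR = CI/RI (using Saaty's random index table) then decides whether the weights are returned. The helper below and its 0.1 threshold are illustrative assumptions, not the iobjectspy implementation.

```python
def check_ahp_consistent(matrix, threshold=0.1):
    """Weights as normalized column averages; accept them if CR < threshold."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    weights = [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
               for i in range(n)]
    # estimate the largest eigenvalue lambda_max from A * w
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
    random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                    6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # Saaty's RI table
    consistency_index = (lambda_max - n) / (n - 1)
    consistency_ratio = (consistency_index / random_index[n]
                         if random_index[n] else 0.0)
    return weights if consistency_ratio < threshold else None

values = [[1, 0.33333343, 0.2, 3], [3, 1, 0.333333343, 3],
          [5, 3, 1, 5], [0.333333343, 0.333333343, 0.2, 1]]
weights = check_ahp_consistent(values)
print([round(v, 4) for v in weights])   # → [0.136, 0.2445, 0.543, 0.0765]
```

On the document's example matrix this reproduces the weight array shown above to within floating-point noise.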
conversion_rules

dict[int,bool] – Conversion rules

conversion_target

int – conversion target

end_cell_grid

DatasetGrid – end year raster dataset

global_value

float – global factor influence ratio

is_check_result

bool – Whether to compare and check the output result against the termination data

local_value

float – neighborhood factor influence ratio

set_ahp_comparison_matrix(value)

Set up the judgment matrix for the consistency test of the analytic hierarchy process.

Parameters:value (list[list[float]] or numpy.ndarray) – The judgment matrix for the consistency test of the analytic hierarchy process. If set as a list, each element must itself be a list and all rows must have the same length. If the numpy library is available, a two-dimensional numpy array can also be passed.
Returns:self
Return type:MCECellularAutomataParameter
set_alpha(value)

Set the diffusion parameters. Generally 1-10.

Parameters:value (int) – Diffusion parameter.
Returns:self
Return type:MCECellularAutomataParameter
set_cellular_automata_parameter(value)

Set cellular automata parameters.

Parameters:value (CellularAutomataParameter) – Cellular automata parameters
Returns:self
Return type:MCECellularAutomataParameter
set_check_result(value)

Set whether to compare the output result against the termination data. The default is False.

Parameters:value (bool) – Whether to compare the output result against the termination data
Returns:self
Return type:MCECellularAutomataParameter
set_conversion_rules(value)

Set up conversion rules. For example, in the change of land use, water area is non-convertible land, and farmland is convertible land.

Parameters:value (dict[int,bool]) – conversion rules
Returns:self
Return type:MCECellularAutomataParameter
set_conversion_target(value)

Set conversion goals. For example, in the conversion of farmland to urban land, urban land is the conversion target.

Parameters:value (int) – conversion target
Returns:self
Return type:MCECellularAutomataParameter
set_end_cell_grid(value)

Set the end year raster dataset.

Parameters:value (DatasetGrid or str) – end year raster dataset
Returns:self
Return type:MCECellularAutomataParameter
set_global_value(value)

Set the global factor influence ratio. The default value is 0.5. The sum of the influence ratios of the global factors and the neighborhood factors is 1.

Parameters:value (float) – Global factor influence ratio.
Returns:self
Return type:MCECellularAutomataParameter
set_local_value(value)

Set the influence ratio of neighborhood factors, the default value is 0.5. The sum of the influence ratios of neighborhood factors and global factors is 1

Parameters:value (float) – The influence ratio of neighborhood factors
Returns:self
Return type:MCECellularAutomataParameter
iobjectspy.analyst.basin(direction_grid, out_data=None, out_dataset_name=None, progress=None)

About hydrological analysis:

  • Hydrological analysis is based on the Digital Elevation Model (DEM) raster data to establish a water system model, which is used to study the hydrological characteristics of the watershed and simulate the surface hydrological process, and to make predictions for the future surface hydrological conditions. Hydrological analysis models can help us analyze the scope of floods, locate runoff pollution sources, predict the impact of landform changes on runoff, etc., and are widely used in many industries and fields such as regional planning, agriculture and forestry, disaster prediction, road design, etc.

  • The confluence of surface water is largely determined by the shape of the surface, and DEM data can express the spatial distribution of regional topography, and has outstanding advantages in describing watershed topography, such as watershed boundaries, slope and aspect, and river network extraction. So it is very suitable for hydrological analysis.

  • The main contents of hydrological analysis provided by SuperMap include filling depressions, calculating flow direction, calculating flow length, calculating cumulative water catchment, watershed division, river classification, connecting water system, and water system vectorization.

    • The general process of hydrological analysis is:

      ../_images/HydrologyAnalyst_2.png
    • How to obtain the grid water system?

Many functions in hydrological analysis need to be based on raster water system data, such as vector water system extraction (stream_to_line() method), river classification (stream_order() method), connecting the water system (stream_link() method), etc.

Generally, raster water system data can be extracted from the cumulative water catchment grid. In the cumulative water catchment grid, the larger the value of a cell, the larger the cumulative water catchment in that area. Cells with higher cumulative water catchment can be regarded as river valleys. Therefore, by setting a threshold, cells whose cumulative water catchment is greater than this value can be extracted, and these cells constitute a raster water system. It is worth noting that this value may differ between river valleys of different levels, and between valleys of the same level in different regions. The threshold therefore needs to be determined according to the actual topography of the study area and through repeated experiments.

In SuperMap, the raster water system used for further analysis (extracting the vector water system, river classification, connecting the water system, etc.) is required to be a binary raster. This can be achieved through raster algebraic operations: cells whose cumulative water catchment is greater than or equal to the threshold are set to 1, and all other cells to 0, as shown in the figure below.

      ../_images/HydrologyAnalyst_3.png

      Therefore, the process of extracting raster water systems is as follows:

      1. Obtain the cumulative water catchment grid, which can be achieved by the flow_accumulation() method.

      2. By using the raster algebraic operation expression_math_analyst() method to perform relational calculation on the cumulative water catchment grid, the raster water system data that meets the requirements can be obtained. Assuming that the threshold is set to 1000, the operation expression is: “[Datasource.FlowAccumulationDataset]>1000”. In addition, using the Con(x,y,z) function can also get the desired result, that is, the expression is: “Con([Datasource.FlowAccumulationDataset]>1000,1,0)”.
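The thresholding step can be sketched in pure Python; the con helper below mirrors the Con(x, y, z) expression cell by cell (an illustration of the idea, not the raster-algebra engine):

```python
def con(condition_grid, true_value=1, false_value=0):
    """Cell-wise Con(x, y, z): true_value where the condition holds, else false_value."""
    return [[true_value if cell else false_value for cell in row]
            for row in condition_grid]

# binary water-system grid: 1 where accumulation exceeds the threshold of 1000
accumulation = [[5, 1200, 40],
                [800, 2000, 1500],
                [30, 900, 1100]]
water = con([[v > 1000 for v in row] for row in accumulation])
print(water)   # → [[0, 1, 0], [0, 1, 1], [0, 0, 1]]
```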

Calculate basins based on the flow direction grid. A basin is a catchment area, which is one of the ways to describe a watershed.

Calculating watershed basins is the process of assigning each cell to a basin based on flow direction data, with each basin receiving a unique identifier. As shown in the figure below, watershed basins are one of the ways to describe watersheds, showing all cells that are connected to each other and belong to the same basin.

../_images/Basin.png
Parameters:
  • direction_grid (DatasetGrid or str) – flow direction raster dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

raster dataset or dataset name of the basin

Return type:

DatasetGrid or str
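The idea of assigning every cell to the basin of its outlet can be sketched in pure Python. Here the flow direction is simplified to a mapping from each cell to its downstream cell (None marks an outlet); this is an illustrative sketch, not the D8 raster implementation.

```python
def basins(flow_to):
    """Assign each cell the id of the outlet it eventually drains to."""
    basin_of = {}
    for start in flow_to:
        path, cell = [], start
        # follow the flow direction until an outlet or an already-labeled cell
        while cell not in basin_of and flow_to[cell] is not None:
            path.append(cell)
            cell = flow_to[cell]
        root = basin_of.get(cell, cell)   # known basin id, or the outlet itself
        for visited in path + [cell]:
            basin_of[visited] = root
    return basin_of

# toy flow net: a -> b -> c (outlet); e -> d (outlet)
flow_to = {'a': 'b', 'b': 'c', 'c': None, 'd': None, 'e': 'd'}
print(basins(flow_to))   # → {'a': 'c', 'b': 'c', 'c': 'c', 'd': 'd', 'e': 'd'}
```

Cells draining to the same outlet share one basin identifier, which is the property the figure above illustrates.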

iobjectspy.analyst.build_quad_mesh(quad_mesh_region, left_bottom, left_top, right_bottom, right_top, cols=0, rows=0, out_col_field=None, out_row_field=None, out_data=None, out_dataset_name=None, progress=None)

Mesh a single simple region object. The fluid problem is a continuity problem; to simplify its study and to ease modeling and processing, the study area is discretized by establishing a discrete grid. Grid division sections the continuous physical area into several grids and determines the nodes in each grid, using a single value in each grid to represent the basic situation of that grid area. The grid serves as the carrier for calculation and analysis, and its quality has an important influence on the accuracy and computational efficiency of later numerical simulation.

Steps of meshing:

1. Data preprocessing, including removing duplicate points, etc. Given a reasonable tolerance, removing duplicate points makes the final meshing result more reasonable, and there will be no phenomena that seem to have multiple lines starting from one point (actually duplicate points).

2. Polygon decomposition: For complex polygonal areas, the mesh is constructed block by block and step by step. A complex irregular polygonal area is divided into multiple simple singly-connected areas, the grid division procedure is run on each singly-connected area, and the grids of the sub-regions are then spliced together to form the division of the entire region.

3. Choose four corner points: These 4 corner points correspond to the 4 vertices on the calculation area of the mesh, and their choice will affect the result of the division. The choice should be as close as possible to the four vertices of the quadrilateral in the original area, and the overall flow potential should be considered.

../_images/SelectPoint.png
  1. In order to make the divided mesh present the characteristics of a quadrilateral, the vertex data (not on the same straight line) constituting the polygon need to participate in the network formation.
  2. Perform simple area meshing.

Note: A simple polygon is one in which no edges or line segments cross each other.

../_images/QuadMeshPart.png

Description:

RightTopIndex is the index number of the upper right corner, LeftTopIndex is the index number of the upper left corner, RightBottomIndex is the index number of the lower right corner, LeftBottomIndex
is the index number of the lower left corner. Then nCount1=(RightTopIndex- LeftTopIndex+1) and nCount2=(RightBottomIndex- LeftBottomIndex+1),

If nCount1 is not equal to nCount2, the program does not process the area.

For a related introduction to hydrological analysis, please refer to basin()

Parameters:
  • quad_mesh_region (GeoRegion) – region object for meshing
  • left_bottom (Point2D) – The coordinates of the lower left corner of the polygon of the meshed area. Four corner points selection basis: 4 corner points correspond to the 4 vertices on the calculation area of the mesh, The choice will affect the results of the subdivision. The choice should be as close as possible to the four vertices of the quadrilateral in the original area, while considering the overall flow potential.
  • left_top (Point2D) – The coordinates of the upper left corner of the polygon of the meshed area
  • right_bottom (Point2D) – coordinate of the lower right corner of the polygon of the meshed area
  • right_top (Point2D) – The coordinates of the upper right corner of the polygon of the meshed area
  • cols (int) – The number of nodes in the column direction of the mesh. The default value is 0, which means it does not participate in the processing. If it is not 0 but is less than the maximum number of points in the polygon's column direction minus one, that maximum minus one is used as the number of columns; if it is greater, points are automatically added so that the number of nodes in the column direction equals cols. For example, to divide a rectangular region object into 2*3 (height*width) = 6 small rectangles, the number of columns (cols) is 3.
  • rows (int) – The number of nodes in the row direction of the mesh. The default value is 0, which means it does not participate in the processing. If it is not 0 but is less than the maximum number of points in the polygon's row direction minus one, that maximum minus one is used as the number of rows; if it is greater, points are automatically added so that the number of nodes in the row direction equals rows. For example, to divide a rectangular region object into 2*3 (height*width) = 6 small rectangles, the number of rows (rows) is 2.
  • out_col_field (str) – The column attribute field name of the grid segmentation result object. This field is used to store the column number of the segmentation result object.
  • out_row_field (str) – The row attribute field name of the grid segmentation result object. This field is used to store the row number of the segmentation result object.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource that stores the segmentation result dataset.
  • out_dataset_name (str) – The name of the split result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

The result dataset after splitting. The split faces are returned as sub-objects.

Return type:

DatasetVector or str

iobjectspy.analyst.fill_sink(surface_grid, exclude_area=None, out_data=None, out_dataset_name=None, progress=None)

Fill pseudo depressions with DEM raster data. A depression is an area where the surrounding grids are higher than it, and it is divided into natural depressions and pseudo depressions.

  • Natural depressions are actual depressions that reflect the true shape of the surface, such as glacier or karst landforms, mining areas and potholes. They are generally far fewer than pseudo depressions.
  • Pseudo depressions are mainly caused by errors introduced during data processing and by improper interpolation methods, and are very common in DEM raster data.

When determining the flow direction, because the elevation of the depression is lower than the elevation of the surrounding grid, the flow direction in a certain area will all point to the depression, causing the water flow to gather in the depression and not flow out, causing the interruption of the water catchment network. Therefore, filling depressions is usually a prerequisite for reasonable flow direction calculations.

After filling a certain depression, it is possible to generate new depressions. Therefore, filling depressions is a process of recognizing depressions and filling depressions until all depressions are filled and no new depressions are generated. The figure below is a schematic cross-sectional view of the filled depression.

../_images/FillSink.png

This method can use the exclude_area parameter to specify a point or region dataset indicating real depressions, or depressions to be excluded, and these depressions will not be filled. Using accurate data of this kind yields more realistic terrain without pseudo depressions, making subsequent analysis more reliable.

For the data used to indicate depressions: if it is a point dataset, one or more of the points should be located in each depression, ideally at the sink point of the depression area; if it is a region dataset, each region object should cover one depression area.

If exclude_area is None, all depressions in the DEM grid will be filled, including pseudo depressions and real depressions

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • surface_grid (DatasetGrid or str) – DEM data specified to fill the depression
  • exclude_area (DatasetVector or str) – The specified point or area data used to indicate known natural depressions or depressions to be excluded. If it is a point dataset, the area where one or more points are located is indicated as a depression; If it is a polygon dataset, each polygon object corresponds to a depression area. If it is None, all depressions in the DEM grid will be filled, including pseudo depressions and real depressions
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – The name of the result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

The DEM raster dataset, or dataset name, without pseudo depressions. If filling the pseudo depressions fails, None is returned.

Return type:

DatasetGrid or str
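
The recognize-and-fill loop described above can be illustrated with a priority-flood pass over a small in-memory grid. This is a minimal pure-Python sketch of the depression-filling idea, not the library's implementation; the function name `fill_sinks` and the list-of-lists DEM representation are assumptions of the sketch:

```python
import heapq

def fill_sinks(dem):
    """Priority-flood depression filling on a small DEM (list of lists).
    Interior cells are raised to the lowest level from which water can
    still drain to the grid edge, so no filled cell remains lower than
    all of its neighbours."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    seen = [[False] * cols for _ in range(rows)]
    heap = []
    # Seed the queue with all edge cells: water always drains off the edge.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                seen[r][c] = True
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0),
                       (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr][nc]:
                seen[nr][nc] = True
                # Raise pit cells to the level of the lowest escape route.
                filled[nr][nc] = max(filled[nr][nc], level)
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled
```

Edge cells are never raised, since water can always drain off the grid edge; each interior cell is raised just enough that a non-ascending path to the edge exists, which mirrors the "fill until no new depressions appear" process described above.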

iobjectspy.analyst.flow_accumulation(direction_grid, weight_grid=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the cumulative water catchment volume based on the flow direction grid. The weighted dataset can be used to calculate the weighted cumulative water catchment. Cumulative water catchment refers to the cumulative amount of water flowing to all upstream cells of a cell, which is calculated based on flow direction data.

The value of the cumulative water catchment helps identify river valleys and watersheds. A cell with a higher cumulative catchment indicates lower terrain and can be regarded as part of a river valley; a value of 0 indicates higher terrain that may be a watershed (drainage divide). The cumulative catchment is therefore the basis for extracting various characteristic parameters of a basin (such as basin area, perimeter, drainage density, etc.).

The basic idea of calculating the cumulative water catchment is: assuming there is one unit of water at each cell of the raster data, calculate the cumulative water volume of each cell in turn according to the flow direction grid (excluding the current cell's own water volume).

The figure below shows the process of calculating the cumulative water catchment from the direction of water flow.

../_images/FlowAccumulation_1.png

The following figure shows the flow direction grid and the cumulative water catchment grid generated based on it.

../_images/FlowAccumulation_2.png

In practical applications, the water volume of each cell is not necessarily the same, and it is often necessary to specify weight data to obtain the cumulative water catchment volume that meets the demand. After the weight data is used, in the calculation of the cumulative water catchment volume, the water volume of each cell is no longer a unit, but a value multiplied by the weight (the grid value of the weight dataset). For example, if the average rainfall in a certain period is used as the weight data, the calculated cumulative water catchment is the rainfall flowing through each cell in that period.

Note that the weight grid must have the same range and resolution as the flow direction grid.

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • direction_grid (DatasetGrid or str) – flow direction raster data.
  • weight_grid (DatasetGrid or str) – weight grid data. Setting to None means that the weight dataset is not used.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – The name of the result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Raster dataset, or dataset name, of the cumulative water catchment. If the calculation fails, None is returned.

Return type:

DatasetGrid or str
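
The basic idea above (one unit of water per cell, accumulated downstream along the flow direction and excluding the cell's own unit) can be sketched in pure Python. This is an illustrative routine, not the library's implementation; the name `accumulate_flow` is an assumption, and the direction codes assume the common D8 power-of-two encoding (right = 1, left = 16):

```python
# D8 direction codes mapped to (row, col) steps; assumes the common
# power-of-two encoding (right/east = 1, left/west = 16).
STEP = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
        16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def accumulate_flow(direction):
    """Unweighted accumulation: each cell holds one unit of water, and
    every cell downstream of it (excluding itself) receives that unit.
    Assumes the direction grid is acyclic, as produced from a
    depression-filled DEM."""
    rows, cols = len(direction), len(direction[0])
    acc = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            cr, cc = r, c
            # Walk this cell's unit of water downstream along the flow path.
            while direction[cr][cc] in STEP:
                dr, dc = STEP[direction[cr][cc]]
                cr, cc = cr + dr, cc + dc
                if not (0 <= cr < rows and 0 <= cc < cols):
                    break
                acc[cr][cc] += 1
    return acc
```

A weighted version, as described above, would add the source cell's weight-grid value at each downstream cell instead of 1.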

iobjectspy.analyst.flow_direction(surface_grid, force_flow_at_edge, out_data=None, out_dataset_name=None, out_drop_grid_name=None, progress=None)

Calculate the flow direction of DEM raster data. To ensure the correctness of the flow direction calculation, it is recommended to use the DEM raster data after filling the pseudo depressions.

Flow direction, that is, the direction of water flow on the hydrological surface. Calculating flow direction is one of the key steps in hydrological analysis. Many functions of hydrological analysis need to be based on the flow direction grid, such as calculating cumulative water catchment, calculating flow length and watershed, etc.

SuperMap uses the maximum gradient method (D8, Deterministic Eight-node) to calculate the flow direction. This method takes the steepest descent direction of a cell as its flow direction. The ratio of the elevation difference between the center cell and an adjacent cell to the distance between them is called the elevation gradient. The steepest descent direction is the direction from the central cell to the neighbouring cell with the largest elevation gradient, and this is the flow direction of the central cell.

The flow direction value of a cell is determined by encoding the 8 neighborhood cells around the central cell (as shown in the figure below). For example, if the flow direction of the center cell is to the left, its flow direction is assigned a value of 16; if it flows to the right, it is assigned a value of 1.

../_images/FlowDirection_1.png

When calculating the flow direction, attention needs to be paid to the processing of cells on the grid boundary. The cells located on the border of the grid are special: the force_flow_at_edge parameter can be used to specify whether their flow direction is outward. If it is outward, the flow direction values of the boundary cells are as shown in the figure below (left); otherwise, the cells on the boundary are assigned no value, as shown in the figure below (right).

../_images/FlowDirection_2.png

Calculate the flow direction of each grid of DEM data to get the flow direction grid. The following figure shows the flow direction raster generated based on DEM data without depressions.

../_images/FlowDirection_3.png

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • surface_grid (DatasetGrid or str) – DEM data used to calculate flow direction
  • force_flow_at_edge (bool) – Specifies whether to force the flow direction of the grid at the boundary to be outward. If True, all cells at the edge of the DEM grid flow outward from the grid.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – The name of the result flow direction dataset
  • out_drop_grid_name (str) –
    The name of the result elevation gradient raster dataset. Optional parameter, used to output an intermediate result of the flow direction calculation.
    The ratio of the elevation difference between the center cell and an adjacent cell to the distance between them is called the elevation gradient. The figure below shows an example of flow direction calculation in which an elevation gradient grid is generated
    ../_images/FlowDirection.png
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Return a tuple of 2 elements. The first element is the result flow direction raster dataset or dataset name. If the result elevation gradient raster dataset name is set, the second element is the result elevation gradient raster dataset or dataset name; otherwise it is None

Return type:

tuple[DatasetGrid,DatasetGrid] or tuple[str,str]
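
The D8 rule described above can be sketched in pure Python. This is an illustrative reimplementation, not SuperMap's code; the helper name `d8_flow_direction` and the codes for the six directions beyond left = 16 and right = 1 (here following the common clockwise power-of-two scheme) are assumptions:

```python
import math

# D8 neighbour offsets (drow, dcol) and direction codes. Left = 16 and
# right = 1 follow the text above; the remaining six codes assume the
# common clockwise power-of-two scheme.
D8 = [(0, 1, 1), (1, 1, 2), (1, 0, 4), (1, -1, 8),
      (0, -1, 16), (-1, -1, 32), (-1, 0, 64), (-1, 1, 128)]

def d8_flow_direction(dem):
    """Direction code of the steepest descent for each cell of a filled
    DEM (list of lists); cells with no lower neighbour get 0 (no value)."""
    rows, cols = len(dem), len(dem[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_drop, best_code = 0.0, 0
            for dr, dc, code in D8:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    distance = math.sqrt(2.0) if dr and dc else 1.0
                    # Elevation gradient: height difference over distance.
                    drop = (dem[r][c] - dem[nr][nc]) / distance
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            out[r][c] = best_code
    return out
```

Diagonal neighbours are farther away (sqrt(2) cell sizes), which is why the gradient, not the raw elevation difference, selects the steepest descent.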

iobjectspy.analyst.flow_length(direction_grid, up_stream, weight_grid=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the flow length based on the flow direction grid, that is, calculate the distance from each cell along the flow direction to the start or end point of the flow direction. The weighted dataset can be used to calculate the weighted flow length.

Flow length refers to the distance from each cell along the flow direction to the start or end point of its flow path, including lengths in the upstream and downstream directions. The length of water flow directly affects the speed of surface runoff, which in turn affects the erodibility of the soil; it is therefore of great significance in soil and water conservation and is often used as an evaluation factor for soil erosion.

The calculation of flow length is based on flow direction data, which indicates the direction of water flow. This dataset can be created by flow direction analysis; the weight data defines the flow resistance of each cell. Flow length is generally used for flood calculations, and water flow is often hindered by many factors such as slope, soil saturation, vegetation cover, etc. At this time, modeling these factors requires a weighted dataset.

There are two ways to calculate stream length:

  • Downstream: Calculate the longest distance from each cell along the flow direction to the catchment point of the downstream basin.
  • Upstream: Calculate the longest distance from each cell to the vertex of the upstream watershed along the flow direction.

The following figure shows the flow length grids calculated downstream and upstream respectively:

../_images/FlowLength.png

The weight data defines the flow resistance between each grid unit, and the flow length obtained by applying the weight is the weighted distance (that is, the distance is multiplied by the value of the corresponding weight grid). For example, when flow length analysis is applied to flood calculations, flood flow is often hindered by many factors such as slope, soil saturation, vegetation cover, etc. At this time, modeling these factors requires a weighted dataset.

Note that the weight grid must have the same range and resolution as the flow direction grid.

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • direction_grid (DatasetGrid or str) – The specified flow direction raster data.
  • up_stream (bool) – Specifies whether to calculate upstream or downstream. True means upstream, False means downstream.
  • weight_grid (DatasetGrid or str) – The specified weight grid data. Setting to None means that the weight dataset is not used.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – the name of the result flow length raster dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

result flow length raster dataset or dataset name

Return type:

DatasetGrid or str
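
The downstream case above (distance along the flow path to where the flow stops or leaves the grid) can be sketched in pure Python. This is an illustrative routine, not the library's implementation; `downstream_flow_length` is an assumed name and the direction codes assume the common D8 power-of-two encoding:

```python
import math

# D8 direction codes mapped to (row, col) steps (assumed power-of-two scheme).
STEP = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
        16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def downstream_flow_length(direction, cell_size=1.0):
    """Distance from each cell along its D8 flow path to where the flow
    stops (a cell with no direction code) or leaves the grid. Orthogonal
    steps cost one cell size, diagonal steps sqrt(2) cell sizes."""
    rows, cols = len(direction), len(direction[0])
    length = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dist, cr, cc = 0.0, r, c
            while direction[cr][cc] in STEP:
                dr, dc = STEP[direction[cr][cc]]
                nr, nc = cr + dr, cc + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    break
                dist += cell_size * (math.sqrt(2.0) if dr and dc else 1.0)
                cr, cc = nr, nc
            length[r][c] = dist
    return length
```

A weighted flow length, as described above, would multiply each step by the corresponding weight-grid value, turning the geometric distance into a weighted (resistance) distance.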

iobjectspy.analyst.stream_order(stream_grid, direction_grid, order_type, out_data=None, out_dataset_name=None, progress=None)

Classify rivers and number the raster stream network according to river order.

The rivers in the basin are divided into main streams and tributaries. In hydrology, rivers are classified according to factors such as river flow and shape. In hydrological analysis, certain characteristics of the river can be inferred from the level of the river.

This method classifies rivers based on the raster water system and the flow direction grid. The value of the result grid represents the order of the river: the larger the value, the higher the order. SuperMap provides two river classification methods, the Strahler method and the Shreve method. For an introduction to these two methods, please refer to the :py:class:StreamOrderType enumeration type.

As shown in the figure below, it is an example of river classification. According to the Shreve river classification method, the rivers in this area are divided into 14 levels.

../_images/StreamOrder.png

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • stream_grid (DatasetGrid or str) – raster water system data
  • direction_grid (DatasetGrid or str) – flow direction raster data
  • order_type (StreamOrderType or str) – Basin water system numbering method
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – the name of the result raster dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

The numbered raster stream network. Returns the result raster dataset or dataset name.

Return type:

DatasetGrid or str
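
The two classification rules can be illustrated on a tiny stream network. This is an illustrative sketch of the Strahler and Shreve rules on a network given as an upstream mapping (a hypothetical structure), not the raster implementation:

```python
def strahler_order(upstreams, segment):
    """Strahler order of a stream segment. `upstreams` maps each segment
    to the list of segments that flow directly into it."""
    inflows = upstreams.get(segment, [])
    if not inflows:
        return 1  # headwater stream
    orders = [strahler_order(upstreams, s) for s in inflows]
    top = max(orders)
    # The order rises only where two streams of equal (highest) order meet.
    return top + 1 if orders.count(top) >= 2 else top

def shreve_order(upstreams, segment):
    """Shreve magnitude: headwaters are 1 and magnitudes add at junctions."""
    inflows = upstreams.get(segment, [])
    if not inflows:
        return 1
    return sum(shreve_order(upstreams, s) for s in inflows)
```

On a network where two headwaters join into segment 'a' and 'a' meets headwater 'b' at the outlet, Strahler gives the outlet order 2 (equal orders 1+1 raise 'a' to 2, then 2 meets 1), while Shreve gives magnitude 3 (the number of headwaters), which is why Shreve classifications reach larger values such as the 14 levels mentioned above.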

iobjectspy.analyst.stream_to_line(stream_grid, direction_grid, order_type, out_data=None, out_dataset_name=None, progress=None)

Extract vector water system, that is, convert raster water system into vector water system.

The extraction of the vector water system is the process of converting the raster water system into a vector water system (a vector line dataset) based on the flow direction grid. After obtaining the vector water system, various vector-based processing and spatial analysis can be performed, such as constructing a water system network. The figure below shows DEM data and the corresponding vector water system.
../_images/StreamToLine.png

The vector water system dataset obtained by this method retains the level and flow direction information of the river.

  • While extracting the vector water system, the system calculates the order of each river and automatically adds an attribute field named “StreamOrder” to the result dataset to store the value. The classification method can be set by the order_type parameter.
  • The flow direction information is stored in a field named “Direction” in the result dataset, represented by 0 or 1: 0 means the flow direction is consistent with the geometric direction of the line object, and 1 means it is opposite to the geometric direction of the line object. The flow direction of the vector water system obtained by this method is the same as its geometric direction, that is, the value of the “Direction” field is always 0. After constructing a water system network from the vector water system, this field can be used directly (or modified according to actual needs) as the flow direction field.

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • stream_grid (DatasetGrid or str) – raster water system data
  • direction_grid (DatasetGrid or str) – flow direction raster data
  • order_type (StreamOrderType or str) – river classification method
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result vector water system dataset name
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

Vector water system dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.stream_link(stream_grid, direction_grid, out_data=None, out_dataset_name=None, progress=None)

Connect the water system, that is, assign a unique integer value to each river section based on the raster water system and the flow direction grid. The connected water system records the connection information of water system nodes, reflecting the network structure of the water system.

As shown in the figure below, after connecting the water system, each river section has a unique grid value. The red points in the figure are junctions, that is, the locations where river sections intersect. A river section is a part of a river: it connects two adjacent junctions, or connects a junction and a catchment point. Therefore, the connected water system can be used to determine the catchment points of the basin.

../_images/StreamLink_1.png

The figure below is an example of connecting the water system.

../_images/StreamLink_2.png

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • stream_grid (DatasetGrid or str) – raster water system data
  • direction_grid (DatasetGrid or str) – flow direction raster data
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – the name of the result raster dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

The connected raster water system is a raster dataset. Return the result raster dataset or dataset name

Return type:

DatasetGrid or str

iobjectspy.analyst.watershed(direction_grid, pour_points_or_grid, out_data=None, out_dataset_name=None, progress=None)

Watershed segmentation, that is, generating the watershed basins for the designated catchment points (a catchment point raster dataset or point list).

The process of dividing a watershed into several sub-basins is called watershed segmentation. Larger watersheds can be obtained through the basin() method, but in actual analysis it may be necessary to divide larger watersheds into smaller ones (called sub-basins).

The first step in determining a watershed is to determine its catchment point, so dividing sub-basins likewise first requires determining the catchment points of the sub-basins. Unlike calculating a basin with the basin() method, the catchment point of a sub-basin can be on the boundary of the grid or inside the grid. This method requires a catchment point raster dataset as input, which can be obtained with the catchment point extraction function (the pour_points() method). Alternatively, a list of two-dimensional points representing the catchment points can be passed in to divide the watershed.

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • direction_grid (DatasetGrid or str) – flow direction raster data
  • pour_points_or_grid (DatasetGrid or str or list[Point2D]) – Raster data of catchment points or designated catchment points (two-dimensional point list). The catchment points use geographic coordinate units.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – The name of the result watershed raster dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

the raster dataset or dataset name of the catchment basin

Return type:

DatasetGrid or str
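
The idea of assigning each cell to the catchment point its flow path reaches can be sketched in pure Python. This is an illustrative routine, not the library's implementation; `label_watersheds` is an assumed name, pour points are given here as (row, col) grid cells rather than geographic coordinates, and the direction codes assume the common D8 power-of-two encoding:

```python
# D8 direction codes mapped to (row, col) steps (assumed power-of-two scheme).
STEP = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
        16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def label_watersheds(direction, pour_points):
    """Label each cell with the index of the pour point its D8 flow path
    reaches, or -1 if the flow stops or leaves the grid without passing
    a pour point."""
    rows, cols = len(direction), len(direction[0])
    targets = {p: i for i, p in enumerate(pour_points)}
    labels = [[-1] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            cr, cc = r, c
            while True:
                if (cr, cc) in targets:          # reached a pour point
                    labels[r][c] = targets[(cr, cc)]
                    break
                code = direction[cr][cc]
                if code not in STEP:
                    break                        # flow stops here
                dr, dc = STEP[code]
                cr, cc = cr + dr, cc + dc
                if not (0 <= cr < rows and 0 <= cc < cols):
                    break                        # flow leaves the grid
    return labels
```

All cells sharing a label form the sub-basin draining through that catchment point.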

iobjectspy.analyst.pour_points(direction_grid, accumulation_grid, area_limit, out_data=None, out_dataset_name=None, progress=None)

Generate a catchment point grid based on the flow direction grid and the cumulative water catchment grid.

The catchment point is located on the boundary of the basin, usually the lowest point on the boundary. The water in the basin flows out from the catchment point, so the catchment point must have a higher cumulative water catchment volume. According to this feature, the catchment point can be extracted based on the cumulative catchment volume and flow direction grid.

The determination of catchment points requires a cumulative catchment threshold. Positions in the cumulative catchment grid that are greater than or equal to the threshold are taken as potential catchment points, and the final locations are then determined according to the flow direction. The choice of this threshold is critical: it affects the number and location of catchment points, as well as the size and scope of the sub-basins. A reasonable threshold requires considering various factors such as soil characteristics, slope characteristics and climatic conditions within the basin, and must be determined according to actual research needs, so choosing it is relatively difficult.

After the catchment point grid is obtained, it can be combined with the flow direction grid to divide the watershed (watershed() method).

For related introduction of hydrological analysis, please refer to :py:func:`basin`

Parameters:
  • direction_grid (DatasetGrid or str) – flow direction raster data
  • accumulation_grid (DatasetGrid or str) – Accumulated water catchment grid data
  • area_limit (int) – The cumulative catchment threshold used to determine catchment points
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – the name of the result raster dataset
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

result raster dataset or dataset name

Return type:

DatasetGrid or str
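
The thresholding step described above can be sketched in a few lines. This is an illustrative sketch with an assumed name; the subsequent correction of locations according to the flow direction grid is omitted:

```python
def candidate_pour_points(accumulation, threshold):
    """Cells whose cumulative catchment is greater than or equal to the
    threshold. In the full extraction, the flow direction grid then fixes
    the final pour-point locations (omitted in this sketch)."""
    return [(r, c)
            for r, row in enumerate(accumulation)
            for c, value in enumerate(row)
            if value >= threshold]
```

Raising the threshold yields fewer, larger sub-basins; lowering it yields more, smaller ones, which is why the choice of threshold matters as discussed above.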

iobjectspy.analyst.snap_pour_point(pour_point_or_grid, accumulation_grid, snap_distance, pour_point_field=None, out_data=None, out_dataset_name=None, progress=None)

Snap catchment points, that is, capture each catchment point to the cell with the largest cumulative catchment within the specified range; this is used to correct catchment point data onto the river.

Catchment points are generally used for siting water conservancy facilities such as bridges and culverts. In practical applications, however, catchment point data is not always calculated with the pour_points() method; it may come from other sources, for example from converting positions in a vector map to raster data. In such cases the catchment points need to be corrected so that they lie on the cells with the maximum cumulative catchment.

As with a catchment point grid, the snapped catchment points can be further combined with the flow direction grid to divide the watershed (watershed() method).

Parameters:
  • pour_point_or_grid (DatasetGrid or DatasetVector or str) – Catchment point dataset; only point datasets and raster datasets are supported
  • accumulation_grid (DatasetGrid or str) – Cumulative water catchment grid data. It can be obtained by flow_accumulation().
  • snap_distance (float) – Snap distance. Each catchment point is captured to the cell with the maximum cumulative catchment within this range. The unit is consistent with that of the specified catchment point dataset.
  • pour_point_field (str) – The field used to assign values to catchment point locations. When the catchment point dataset is a point dataset, the catchment point raster value field needs to be specified. Only integer field types are supported; a non-integer field will be forced to integer.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – The name of the result dataset.
  • progress (function) – progress information processing function, please refer to :py:class:.StepEvent
Returns:

the result raster dataset or dataset name

Return type:

DatasetGrid or str
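
The capture step can be illustrated on a small accumulation grid. This is an illustrative sketch with an assumed name, and the snap distance is expressed here in whole cells rather than dataset units, an assumption of the sketch:

```python
def snap_to_max_accumulation(point, accumulation, snap_cells):
    """Move a pour point (row, col) to the cell with the largest cumulative
    catchment within `snap_cells` cells of its current position."""
    rows, cols = len(accumulation), len(accumulation[0])
    r0, c0 = point
    best = (accumulation[r0][c0], r0, c0)
    # Scan the square window around the point, clipped to the grid.
    for r in range(max(0, r0 - snap_cells), min(rows, r0 + snap_cells + 1)):
        for c in range(max(0, c0 - snap_cells), min(cols, c0 + snap_cells + 1)):
            if accumulation[r][c] > best[0]:
                best = (accumulation[r][c], r, c)
    return best[1], best[2]
```

A point that already sits on the local maximum is left where it is, which matches the intent of correcting only misplaced catchment points onto the river.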
class iobjectspy.analyst.ProcessingOptions(pseudo_nodes_cleaned=False, overshoots_cleaned=False, redundant_vertices_cleaned=False, undershoots_extended=False, duplicated_lines_cleaned=False, lines_intersected=False, adjacent_endpoints_merged=False, overshoots_tolerance=1e-10, undershoots_tolerance=1e-10, vertex_tolerance=1e-10, filter_vertex_recordset=None, arc_filter_string=None, filter_mode=None)

Bases: object

Topology processing parameter class. This class provides setting information about topology processing.

If the node tolerance, short overshoot tolerance and long undershoot tolerance are not set through the set_vertex_tolerance, set_overshoots_tolerance and set_undershoots_tolerance methods, or are set to 0, the system will use the corresponding tolerance values in the tolerance of the dataset for processing

Construct topology processing parameter class

Parameters:
  • pseudo_nodes_cleaned (bool) – Whether to remove false nodes
  • overshoots_cleaned (bool) – Whether to remove short overshoots.
  • redundant_vertices_cleaned (bool) – Whether to remove redundant points
  • undershoots_extended (bool) – Whether to extend long undershoots.
  • duplicated_lines_cleaned (bool) – Whether to remove duplicate lines
  • lines_intersected (bool) – Whether to intersect edges.
  • adjacent_endpoints_merged (bool) – Whether to merge adjacent endpoints.
  • overshoots_tolerance (float) – Short overshoot tolerance, which is used to determine whether the overshoot is a short overshoot when removing the short overshoot.
  • undershoots_tolerance (float) – Long undershoot tolerance, used to determine whether an undershoot should be extended when extending long undershoots. The unit is the same as that of the dataset being topologically processed.
  • vertex_tolerance (float) – node tolerance. The tolerance is used to merge adjacent endpoints, intersect edges, remove false nodes and remove redundant points. The unit is the same as the dataset unit for topological processing.
  • filter_vertex_recordset (Recordset) – The filter point record set for arc intersection; line segments at the point positions in this record set will not be interrupted by intersection.
  • arc_filter_string (str) – The filter line expression for arc intersection. When intersecting edges, you can specify a field expression through this property, and the line objects that match the expression will not be interrupted. Whether the expression is valid is related to the filter_mode edge intersection filter mode
  • filter_mode (ArcAndVertexFilterMode or str) – The filtering mode of edge intersection.
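
As a configuration sketch (assuming an iobjectspy environment is available), the options can be built either through the constructor or through the chained set_* methods, since each setter returns self. The field expression 'Type = 1' is hypothetical:

```python
from iobjectspy.analyst import ProcessingOptions

# Constructor form: enable arc intersection and duplicate-line removal,
# with an explicit node tolerance (0 would fall back to the dataset tolerance).
options = ProcessingOptions(lines_intersected=True,
                            duplicated_lines_cleaned=True,
                            vertex_tolerance=1e-6)

# Chained form; 'Type = 1' is a hypothetical filter expression — line
# objects matching it are not interrupted during arc intersection.
options = (ProcessingOptions()
           .set_duplicated_lines_cleaned(True)
           .set_adjacent_endpoints_merged(True)
           .set_arc_filter_string('Type = 1'))
```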
adjacent_endpoints_merged

bool – Whether to merge adjacent endpoints

arc_filter_string

str – The filter line expression for arc intersection. When intersecting arcs, a field expression can be specified through this attribute, and line objects matching the expression will not be interrupted. Whether the expression takes effect is related to the filter_mode arc intersection filter mode

duplicated_lines_cleaned

bool – Whether to remove duplicate lines

filter_mode

ArcAndVertexFilterMode – The filtering mode of arc intersection

filter_vertex_recordset

Recordset – The filter point record set for arc intersection; line segments at the point positions in this record set will not be interrupted by intersection

lines_intersected

bool – whether to intersect arcs

overshoots_cleaned

bool – whether to remove short overshoots

overshoots_tolerance

float – Short overshoot tolerance, used to determine whether an overshoot is a short overshoot when removing short overshoots

pseudo_nodes_cleaned

bool – whether to remove false nodes

redundant_vertices_cleaned

bool – Whether to remove redundant points

set_adjacent_endpoints_merged(value)

Set whether to merge adjacent endpoints.

If the distance between the endpoints of multiple arcs is less than the node tolerance, these points will be merged into one node whose position is the geometric average of the original points (that is, its X and Y are respectively the averages of the X and Y values of all original points).

The node tolerance used to determine adjacent endpoints can be set by set_vertex_tolerance(); if it is not set or is set to 0, the node tolerance in the dataset tolerance will be used.

It should be noted that if there are two adjacent endpoints, the result of the merge will be a false node, and the operation of removing the false node is also required.

Parameters:value (bool) – Whether to merge adjacent endpoints
Returns:self
Return type:ProcessingOptions
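The averaging rule described above can be illustrated with a minimal pure-Python sketch. This is not part of the iobjectspy API; `merge_adjacent_endpoints` is a hypothetical helper that merges endpoints within tolerance into a node at their arithmetic mean.

```python
def merge_adjacent_endpoints(points, tolerance):
    """Merge endpoints that lie within `tolerance` of each other into a
    single node located at the arithmetic mean of the merged points."""
    clusters = []  # each cluster is a list of points to be merged
    for p in points:
        for cluster in clusters:
            # join the first cluster whose existing members are all close enough
            if all(abs(p[0] - q[0]) <= tolerance and abs(p[1] - q[1]) <= tolerance
                   for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    # the merged node's X and Y are the averages over each cluster
    return [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters]
```

For example, two endpoints 0.1 apart merge into one node halfway between them, while a distant endpoint is left untouched.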
set_arc_filter_string(value)

Set the filter line expression for arc intersection.

When intersecting arcs, you can specify a field expression through this property, and the line objects that match the expression will not be interrupted. For details, please refer to the set_lines_intersected() method.

Parameters:value (str) – filter line expression for arc intersection
Returns:self
Return type:ProcessingOptions
set_duplicated_lines_cleaned(value)

Set whether to remove duplicate lines

Duplicate line: if all the vertices of two arcs coincide pairwise, they are considered duplicate lines. Direction is not considered when judging duplicate lines.

The purpose of removing duplicate lines is to avoid polygon objects with zero area when creating topological polygons. Therefore, only one of the duplicate line objects should be kept, and the redundant ones should be deleted.

Usually, repeated lines are mostly caused by the intersection of arcs.

Parameters:value (bool) – Whether to remove duplicate lines
Returns:self
Return type:ProcessingOptions
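The direction-independent duplicate test described above can be sketched in pure Python. This is an illustration only, not the library's implementation; `remove_duplicated_lines` is a hypothetical helper operating on vertex lists.

```python
def remove_duplicated_lines(lines):
    """Keep only one copy of each line whose vertex sequence matches another
    line's vertices exactly, forwards or backwards (direction is ignored)."""
    kept, seen = [], set()
    for line in lines:
        forward = tuple(line)
        backward = tuple(reversed(line))
        # a line and its reverse share the same key, so direction does not matter
        key = min(forward, backward)
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept
```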
set_filter_mode(value)

Set the filtering mode of arc intersection

Parameters:value (ArcAndVertexFilterMode) – The filtering mode of arc intersection
Returns:self
Return type:ProcessingOptions
set_filter_vertex_recordset(value)

Set the filter point record set for arc intersection; line segments will not be interrupted at the positions of the points in this record set.

If the filter point is on the line object or the distance to the line object is within the tolerance range, the line object will not be interrupted at the foot position of the filter point to the line object. For details, please refer to the set_lines_intersected() method.

Note: Whether the filter point record set is valid is related to the arc intersection filtering mode set by the set_filter_mode() method. See also: py:class:.ArcAndVertexFilterMode class.

Parameters:value (Recordset) – The record set of filtering points for arc intersection
Returns:self
Return type:ProcessingOptions
set_lines_intersected(value)

Set whether to perform arc intersection.

Before topological relationships can be established for line data, arc intersection must first be performed: intersecting arcs are broken at their intersection points and decomposed into several line objects. Generally, in a two-dimensional coordinate system, every line object that intersects another line needs to be interrupted at the intersection point, as at a crossroad, and this method is the basis of the subsequent error-handling methods. In practice, however, completely interrupting every intersecting segment often does not meet the needs of the analysis. For example, where an elevated railway crosses a highway, the two look like intersecting line objects in two-dimensional coordinates but do not actually intersect; interrupting them may affect further analysis. There are many similar scenarios in the transportation field, such as rivers crossing traffic lines or the intricate overpasses in a city. Whether a given intersection should be interrupted must be decided flexibly according to the actual application, rather than interrupting uniformly just because the lines intersect on the two-dimensional plane.

Which line objects and which intersections are not interrupted can be controlled by setting the filter line expression (set_arc_filter_string()) and the filter point record set (set_filter_vertex_recordset()):

  • Filter line expressions are used to query line objects that do not need to be interrupted
  • Line objects are not interrupted at the locations of the point objects in the filter point record set

Used alone or in combination, these two parameters form the filtering modes for arc intersection; there is also a mode that performs no filtering. The filter mode is set by the set_filter_mode() method. For the example above, different filtering modes produce different arc intersection results. For a detailed introduction to the filtering modes, please refer to the ArcAndVertexFilterMode class.

Note: When performing arc intersection, you can use the set_vertex_tolerance() method to set the node tolerance (if not set, the node tolerance of the dataset is used) to determine whether a filter point is valid. If the distance from a filter point to a line object is within the tolerance, the line object will not be interrupted at the foot of the perpendicular from the filter point.

Parameters:value (bool) – Whether to perform arc intersection
Returns:self
Return type:ProcessingOptions
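The core of arc intersection, breaking two crossing segments at their intersection point, can be sketched in pure Python. This is an illustration of the concept, not the library's implementation; `split_at_intersection` is a hypothetical helper for straight two-point segments.

```python
def split_at_intersection(seg1, seg2):
    """Break two straight segments at their crossing point, returning the
    resulting pieces; returns the originals unchanged if they do not cross."""
    (x1, y1), (x2, y2) = seg1
    (x3, y3), (x4, y4) = seg2
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:                      # parallel: nothing to split
        return [seg1, seg2]
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if not (0 < t < 1 and 0 < u < 1):   # crossing point not interior to both
        return [seg1, seg2]
    p = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    # each original segment is decomposed into two pieces at the crossing point
    return [(seg1[0], p), (p, seg1[1]), (seg2[0], p), (p, seg2[1])]
```

Two diagonals of a square, for instance, are decomposed into four pieces meeting at the square's center.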
set_overshoots_cleaned(value)

Set whether to remove short overshoots (short dangling lines). If the length of a dangling line is less than the overshoot tolerance, the dangling line is deleted when short overshoots are removed. The short overshoot tolerance can be specified through the set_overshoots_tolerance() method; if not specified, the short overshoot tolerance of the dataset is used.

Dangling line: if an endpoint of a line object is not connected to an endpoint of any other line object, that endpoint is called a dangling point, and a line object with a dangling point is called a dangling line.

Parameters:value (bool) – Whether to remove short overshoots, True means to remove, False means not to remove.
Returns:self
Return type:ProcessingOptions
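The rule above, delete dangling arcs shorter than the tolerance, can be sketched in pure Python. This is only an illustration under simplifying assumptions (arcs are vertex lists, connectivity is exact endpoint equality); `remove_short_overshoots` is a hypothetical helper.

```python
from collections import Counter
from math import hypot

def remove_short_overshoots(arcs, tolerance):
    """Delete dangling arcs shorter than `tolerance`. An arc is dangling when
    one of its endpoints touches no other arc's endpoint."""
    degree = Counter(p for arc in arcs for p in (arc[0], arc[-1]))

    def is_short_dangle(arc):
        dangling = degree[arc[0]] == 1 or degree[arc[-1]] == 1
        length = sum(hypot(x2 - x1, y2 - y1)
                     for (x1, y1), (x2, y2) in zip(arc, arc[1:]))
        return dangling and length < tolerance

    return [arc for arc in arcs if not is_short_dangle(arc)]
```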
set_overshoots_tolerance(value)

Set the short overshoot tolerance, which is used to determine whether a dangling line is a short overshoot when short overshoots are removed. The unit is the same as that of the dataset being topologically processed.

Dangling line: if an endpoint of a line object is not connected to an endpoint of any other line object, that endpoint is called a dangling point, and a line object with a dangling point is called a dangling line.

Parameters:value (float) – short overshoot tolerance
Returns:self
Return type:ProcessingOptions
set_pseudo_nodes_cleaned(value)

Set whether to remove pseudo (false) nodes. A node, also called an arc connection point, must connect at least three arcs. If a connection point connects only one arc (the case of an island) or exactly two arcs (that is, it is the common endpoint of the two arcs), it is called a pseudo node.

Parameters:value (bool) – Whether to remove pseudo nodes, True means to remove, False means not to remove.
Returns:self
Return type:ProcessingOptions
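The "at most two arcs" definition of a pseudo node can be sketched in pure Python. This is an illustration only (arcs as vertex lists, connectivity by exact endpoint equality); `find_pseudo_nodes` is a hypothetical helper.

```python
from collections import Counter

def find_pseudo_nodes(arcs):
    """A connection point shared by exactly two arcs is a pseudo (false) node:
    the two arcs could be merged into one without losing topology."""
    degree = Counter(p for arc in arcs for p in (arc[0], arc[-1]))
    return [p for p, n in degree.items() if n == 2]
```

In a T-junction with an extra mid-chain vertex, only the degree-2 point is reported; the degree-3 junction is a real node.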
set_redundant_vertices_cleaned(value)

Set whether to remove redundant vertices. When the distance between two adjacent vertices on an arc is less than the node tolerance, one of them is considered a redundant vertex and can be removed during topology processing.

Parameters:value (bool) – Whether to remove redundant vertices, True means to remove, False means not to remove.
Returns:self
Return type:ProcessingOptions
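The redundant-vertex rule can be sketched in pure Python: drop a vertex when it is closer than the tolerance to the previously kept one. An illustration only; `remove_redundant_vertices` is a hypothetical helper that always preserves the arc's endpoints.

```python
from math import hypot

def remove_redundant_vertices(arc, tolerance):
    """Drop a vertex whenever it lies closer than `tolerance` to the last
    vertex that was kept; the arc's endpoints are always preserved."""
    kept = [arc[0]]
    for p in arc[1:-1]:
        if hypot(p[0] - kept[-1][0], p[1] - kept[-1][1]) >= tolerance:
            kept.append(p)
    kept.append(arc[-1])
    return kept
```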
set_undershoots_extended(value)

Set whether to extend long undershoots (long dangling lines). If a dangling line, extended along its own direction by a specified length (the undershoot tolerance), would intersect an arc, the dangling line is automatically extended to that arc during topology processing; this is called long undershoot extension. The long undershoot tolerance can be specified by the set_undershoots_tolerance() method; if not specified, the long undershoot tolerance of the dataset is used.

Parameters:value (bool) – Whether to extend long undershoots
Returns:self
Return type:ProcessingOptions
set_undershoots_tolerance(value)

Set the long undershoot tolerance, which is used to determine whether a dangling line should be extended when long undershoots are extended. The unit is the same as that of the dataset being topologically processed.

Parameters:value (float) – long undershoot tolerance
Returns:self
Return type:ProcessingOptions
set_vertex_tolerance(value)

Set node tolerance. The tolerance is used to merge adjacent endpoints, intersect arcs, remove false nodes and remove redundant points. The unit is the same as the dataset unit for topological processing.

Parameters:value (float) – node tolerance
Returns:self
Return type:ProcessingOptions
undershoots_extended

bool – Whether to extend long undershoots (long dangling lines)

undershoots_tolerance

float – Long undershoot tolerance, used to determine whether a dangling line should be extended when long undershoots are extended. The unit is the same as that of the dataset being topologically processed

vertex_tolerance

float – Node tolerance. This tolerance is used for merging adjacent endpoints, intersecting arcs, removing pseudo nodes and removing redundant vertices. The unit is the same as that of the dataset being topologically processed

iobjectspy.analyst.topology_processing(input_data, pseudo_nodes_cleaned=True, overshoots_cleaned=True, redundant_vertices_cleaned=True, undershoots_extended=True, duplicated_lines_cleaned=True, lines_intersected=True, adjacent_endpoints_merged=True, overshoots_tolerance=1e-10, undershoots_tolerance=1e-10, vertex_tolerance=1e-10, filter_vertex_recordset=None, arc_filter_string=None, filter_mode=None, options=None, progress=None)

Perform topological processing on the given dataset according to the topological processing options. The original data will be directly modified.

Parameters:
  • input_data (DatasetVector or str) – The dataset processed by the specified topology.
  • pseudo_nodes_cleaned (bool) – Whether to remove false nodes
  • overshoots_cleaned (bool) – Whether to remove short overshoots.
  • redundant_vertices_cleaned (bool) – Whether to remove redundant points
  • undershoots_extended (bool) – Whether to extend long undershoots.
  • duplicated_lines_cleaned (bool) – Whether to remove duplicate lines
  • lines_intersected (bool) – Whether to intersect arcs.
  • adjacent_endpoints_merged (bool) – Whether to merge adjacent endpoints.
  • overshoots_tolerance (float) – Short overshoot tolerance, used to determine whether a dangling line is a short overshoot when short overshoots are removed.
  • undershoots_tolerance (float) – Long undershoot tolerance, used to determine whether a dangling line should be extended when long undershoots are extended. The unit is the same as that of the dataset being topologically processed.
  • vertex_tolerance (float) – Node tolerance, used for merging adjacent endpoints, intersecting arcs, removing pseudo nodes and removing redundant vertices. The unit is the same as that of the dataset being topologically processed.
  • filter_vertex_recordset (Recordset) – The filter point record set for arc intersection; line segments will not be interrupted at the positions of the points in this record set.
  • arc_filter_string (str) – The filter line expression for arc intersection. When intersecting arcs, you can specify a field expression through this parameter, and the line objects that match the expression will not be interrupted. Whether the expression is valid is related to the filter_mode arc intersection filter mode.
  • filter_mode (ArcAndVertexFilterMode or str) – The filtering mode of arc intersection.
  • options (ProcessingOptions or None) – Topology processing parameter class. If options is not empty, topology processing will use the value set by this parameter.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Whether the topology processing is successful

Return type:

bool

iobjectspy.analyst.topology_build_regions(input_data, out_data=None, out_dataset_name=None, progress=None)

Construct a region dataset from a line dataset or network dataset through topological processing. Before building topological regions, it is best to first perform topology processing on the dataset with topology_processing().

Parameters:
  • input_data (DatasetVector or str) – The source dataset used to build topological regions; it can only be a line dataset or a network dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.pickup_border(input_data, is_preprocess=True, extract_ids=None, out_data=None, out_dataset_name=None, progress=None)

Extract the boundaries of regions (or lines) and save them as a line dataset. If multiple regions (or lines) share a boundary (line segment), that boundary (line segment) is extracted only once.

Extracting boundaries from overlapping regions is not supported.

Parameters:
  • input_data (DatasetVector or str) – The specified polygon or line dataset.
  • is_preprocess (bool) – Whether to perform topology preprocess
  • extract_ids (list[int] or str) – An optional array of region IDs; only the boundaries of the region objects with the given IDs are extracted.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset.
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

class iobjectspy.analyst.PreprocessOptions(arcs_inserted=False, vertex_arc_inserted=False, vertexes_snapped=False, polygons_checked=False, vertex_adjusted=False)

Bases: object

Topology preprocessing parameter class

Construct topology preprocessing parameter class object

Parameters:
  • arcs_inserted (bool) – Whether to insert nodes where line segments intersect
  • vertex_arc_inserted (bool) – Whether to insert nodes on line segments at vertex positions
  • vertexes_snapped (bool) – Whether to perform node snapping
  • polygons_checked (bool) – Whether to adjust the direction of polygons
  • vertex_adjusted (bool) – Whether to adjust node positions
arcs_inserted

bool – Whether to intersect between line segments to insert nodes

polygons_checked

bool – Whether to adjust the orientation of the polygons

set_arcs_inserted(value)

Set whether to intersect between line segments to insert nodes

Parameters:value (bool) – Whether to intersect between line segments to insert nodes
Returns:self
Return type:PreprocessOptions
set_polygons_checked(value)

Set whether to adjust the direction of the polygon

Parameters:value (bool) – Whether to adjust the direction of the polygon
Returns:self
Return type:PreprocessOptions
set_vertex_adjusted(value)

Set whether to adjust the node position

Parameters:value (bool) – Whether to adjust the node position
Returns:self
Return type:PreprocessOptions
set_vertex_arc_inserted(value)

Set whether to insert nodes between nodes and line segments

Parameters:value (bool) – Whether to insert nodes between nodes and line segments
Returns:self
Return type:PreprocessOptions
set_vertexes_snapped(value)

Set whether to perform node capture

Parameters:value (bool) – Whether to perform node capture
Returns:self
Return type:PreprocessOptions
vertex_adjusted

bool – whether to adjust the node position

vertex_arc_inserted

bool – Whether to insert nodes between nodes and line segments

vertexes_snapped

bool – Whether to perform node capture

iobjectspy.analyst.preprocess(inputs, arcs_inserted=True, vertex_arc_inserted=True, vertexes_snapped=True, polygons_checked=True, vertex_adjusted=True, precisions=None, tolerance=1e-10, options=None, progress=None)

Perform topology preprocessing on the given topology dataset.

Parameters:
  • inputs (DatasetVector or list[DatasetVector] or str or list[str] or Recordset or list[Recordset]) – Input dataset or record set. If it is a dataset, it cannot be read-only.
  • arcs_inserted (bool) – Whether to insert nodes where line segments intersect
  • vertex_arc_inserted (bool) – Whether to insert nodes on line segments at vertex positions
  • vertexes_snapped (bool) – Whether to perform node snapping
  • polygons_checked (bool) – Whether to adjust the direction of polygons
  • vertex_adjusted (bool) – Whether to adjust node positions
  • precisions (list[int]) – The specified array of precision levels. The smaller the value, the higher the precision of the corresponding record set and the better the data quality. When node snapping is performed, points in a lower-precision record set are snapped to the positions of points in a higher-precision record set. The number of elements in the array must equal the number of input record sets, corresponding one-to-one.
  • tolerance (float) – The tolerance control required for the specified processing. The unit is the same as the record set unit for topology preprocessing.
  • options (PreprocessOptions) – Topology preprocessing parameter object; if this parameter is not None, it is used in preference as the topology preprocessing parameters
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Whether the topology preprocessing is successful

Return type:

bool

iobjectspy.analyst.topology_validate(source_data, validating_data, rule, tolerance, validate_region=None, out_data=None, out_dataset_name=None, progress=None)

Perform topological error check on the dataset or record set, and return the result dataset containing topological errors.

The tolerance parameter of this method is used to specify the tolerance involved when using the topology rule specified by the rule parameter to check the dataset. For example, when using the “TopologyRule.LINE_NO_SHARP_ANGLE” rule check, the tolerance parameter is set to the sharp angle tolerance (an angle value).

For the following topology rules, it is recommended to perform topology preprocessing on the corresponding data (that is, call the preprocess() method) before calling this method to check the data; otherwise the check result may be incorrect:

  • REGION_NO_OVERLAP_WITH
  • REGION_COVERED_BY_REGION_CLASS
  • REGION_COVERED_BY_REGION
  • REGION_BOUNDARY_COVERED_BY_LINE
  • REGION_BOUNDARY_COVERED_BY_REGION_BOUNDARY
  • REGION_NO_OVERLAP_ON_BOUNDARY
  • REGION_CONTAIN_POINT
  • LINE_NO_OVERLAP_WITH
  • LINE_BE_COVERED_BY_LINE_CLASS
  • LINE_END_POINT_COVERED_BY_POINT
  • POINT_NO_CONTAINED_BY_REGION
  • POINT_COVERED_BY_LINE
  • POINT_COVERED_BY_REGION_BOUNDARY
  • POINT_CONTAINED_BY_REGION
  • POINT_BECOVERED_BY_LINE_END_POINT

For the following topology rules, a reference dataset or record set must be set:

  • REGION_NO_OVERLAP_WITH
  • REGION_COVERED_BY_REGION_CLASS
  • REGION_COVERED_BY_REGION
  • REGION_BOUNDARY_COVERED_BY_LINE
  • REGION_BOUNDARY_COVERED_BY_REGION_BOUNDARY
  • REGION_CONTAIN_POINT
  • REGION_NO_OVERLAP_ON_BOUNDARY
  • POINT_BECOVERED_BY_LINE_END_POINT
  • POINT_NO_CONTAINED_BY_REGION
  • POINT_CONTAINED_BY_REGION
  • POINT_COVERED_BY_LINE
  • POINT_COVERED_BY_REGION_BOUNDARY
  • LINE_NO_OVERLAP_WITH
  • LINE_NO_INTERSECT_OR_INTERIOR_TOUCH
  • LINE_BE_COVERED_BY_LINE_CLASS
  • LINE_NO_INTERSECTION_WITH
  • LINE_NO_INTERSECTION_WITH_REGION
  • LINE_EXIST_INTERSECT_VERTEX
  • VERTEX_DISTANCE_GREATER_THAN_TOLERANCE
  • VERTEX_MATCH_WITH_EACH_OTHER
Parameters:
  • source_data (DatasetVector or str or Recordset) – the dataset or record set to be checked
  • validating_data (DatasetVector or str or Recordset) – The reference record set for checking. If the topology rule used does not require a reference record set, set it to None
  • rule (TopologyRule or str) – topology check type
  • tolerance (float) – The specified tolerance used for topology error checking. The unit is the same as the dataset unit for topological error checking.
  • validate_region (GeoRegion) – The region to be checked. If None, the whole dataset is checked by default; otherwise only the validate_region area is checked.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.split_lines_by_regions(line_input, region_input, progress=None)

Use region objects to split line objects. Before extracting the left and right regions of line objects (that is, the pickupLeftRightRegions() method), this method needs to be called to split the line objects; otherwise one line object may correspond to multiple left (or right) regions. As shown in the figure below: for line object AB, without splitting by region objects, AB has two left regions, namely 1 and 3, and also two right regions, namely 1 and 3. After the split operation, line object AB is divided into AC and CB, and each of AC and CB has exactly one left region and one right region.

../_images/SplitLinesByRegions.png
Parameters:
  • line_input (DatasetVector or Recordset) – The specified line record set or dataset to be divided.
  • region_input (DatasetVector or Recordset) – The specified region record set or dataset used to divide the line record set.
  • progress (function) – a function for processing progress information
Returns:

Return True on success, False on failure.

Return type:

bool

iobjectspy.analyst.integrate(source_dataset, tolerance, unit=None, precision_orders=None, progress=None)

Perform data integration on the dataset; the integration process includes node snapping and node insertion. It is similar in function to preprocess(), which can also handle topological errors in the data. The difference from preprocess() is that data integration iterates multiple times until there are no topological errors left in the data (no node snapping or node insertion is required).

>>> ds = open_datasource('E:/data.udb')
>>> integrate(ds['building'], 1.0e-6)
True
>>> integrate(ds['street'], 1.0, 'meter')
True
Parameters:
  • source_dataset (DatasetVector or str) – the data set being processed
  • tolerance (float) – node tolerance
  • unit (Unit or str) – node tolerance unit, when it is None, the data set coordinate system unit is used. If the data set coordinate system is a projected coordinate system, angle units are prohibited.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

Return True if successful, otherwise False

Return type:

bool

iobjectspy.analyst.measure_central_element(source, group_field=None, weight_field=None, self_weight_field=None, distance_method='EUCLIDEAN', stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

About spatial measurement:

The data used to calculate the spatial metric can be points, lines, and areas. For point, line and area objects, the centroid of the object is used in the distance calculation. The centroid of the object is the weighted average center of all sub-objects. The weighting term of the point object is 1 (that is, the centroid is itself), the weighting term of the line object is the length, and the weighting term of the area object is the area.

Users can solve the following problems through spatial measurement calculations:

  1. Where is the center of the data?
  2. What is the shape and direction of the data distribution?
  3. How is the data distributed?

Spatial measurement includes central element (measure_central_element() ), direction distribution (measure_directional() ), Standard distance (measure_standard_distance() ), direction average (measure_linear_directional_mean() ), Mean center (measure_mean_center() ), median center (measure_median_center() ), etc.

Calculate the central element of the vector data and return the result vector dataset.

  • The central element is the object with the smallest cumulative distance to the centroids of all other objects, that is, the most centrally located object.
  • If the group field is set, the result vector dataset will contain the “group field name_Group” field.
  • In fact, there may be multiple central elements with the smallest cumulative distance from the centroid of all other objects, but the central element method will only output the object with the smallest SmID field value.
Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • group_field (str) – the name of the grouping field
  • weight_field (str) – the name of the weight field
  • self_weight_field (str) – the name of its own weight field
  • distance_method (DistanceMethod or str) – distance calculation method type
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The statistical fields, a list of 2-element tuples. The first element of each tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result vector dataset or dataset name

Return type:

DatasetVector or str
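The central-element rule, the object minimizing the cumulative distance to all other centroids, with ties broken toward the smallest ID, can be sketched in pure Python. This is an illustration only, not the iobjectspy implementation; `central_element` is a hypothetical helper over centroid coordinates.

```python
from math import hypot

def central_element(centroids):
    """Return the index of the object whose cumulative Euclidean distance to
    every other centroid is smallest; ties go to the lowest index (mirroring
    the smallest-SmID behaviour described above)."""
    def total_distance(i):
        return sum(hypot(centroids[i][0] - x, centroids[i][1] - y)
                   for j, (x, y) in enumerate(centroids) if j != i)
    # min() returns the first (lowest-index) minimiser on ties
    return min(range(len(centroids)), key=total_distance)
```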

iobjectspy.analyst.measure_directional(source, group_field=None, ellipse_size='SINGLE', stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the direction distribution of the vector data and return the result vector dataset.

  • The direction distribution is the standard deviational ellipse computed from the average center of the centroids of all objects (weighted when weights are set), with the standard deviations of the x and y coordinates as its axes.
  • The x and y coordinates of the ellipse center, the two standard distances (semi-major axis and semi-minor axis), and the direction of the ellipse are stored in the CircleCenterX, CircleCenterY, SemiMajorAxis, SemiMinorAxis and RotationAngle fields. If the grouping field is set, the result vector dataset will contain the “group field name_Group” field.
  • Direction of the ellipse: a positive value in the RotationAngle field indicates that the upright ellipse (semi-major axis along the X axis, semi-minor axis along the Y axis) is rotated counterclockwise; a negative value indicates clockwise rotation.
  • The output ellipse size has three levels: Single (one standard deviation), Twice (two standard deviations) and Triple (three standard deviations). For details, please refer to the EllipseSize class.
  • The standard deviational ellipse algorithm used to calculate the direction distribution was proposed by D. Welty Lefever in 1926 to measure the direction and distribution of data. First the center of the ellipse is determined, that is, the average center (weighted when weights are set); then the direction of the ellipse; and finally the lengths of the major axis and minor axis.

../_images/MeasureDirection.png

For an introduction to spatial measurement, please refer to:py:func:measure_central_element

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • group_field (str) – group field name
  • ellipse_size (EllipseSize or str) – ellipse size type
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The type of statistical fields, which is a list type. The list stores a tuple of 2 elements. The first element of the tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result vector dataset or dataset name

Return type:

DatasetVector or str
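The first two steps of the standard deviational ellipse, locating the center and finding the orientation, can be sketched in pure Python using the classic coordinate-deviation formula. This is an unweighted illustration, not the library's implementation; `directional_distribution` is a hypothetical helper.

```python
from math import atan2, degrees

def directional_distribution(points):
    """Compute the centre and rotation angle (degrees) of an unweighted
    standard deviational ellipse."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    dx = [x - cx for x, _ in points]
    dy = [y - cy for _, y in points]
    sxx = sum(d * d for d in dx)          # sum of squared x deviations
    syy = sum(d * d for d in dy)          # sum of squared y deviations
    sxy = sum(a * b for a, b in zip(dx, dy))  # cross deviations
    # orientation of the major axis from the classic SDE angle formula
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    return (cx, cy), degrees(theta)
```

For points spread along the x axis the rotation angle is 0, i.e. the major axis lies along x.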

iobjectspy.analyst.measure_linear_directional_mean(source, group_field=None, weight_field=None, is_orientation=False, stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the directional average of the line dataset and return the result vector dataset.

  • The linear directional mean is a line object centered on the average center of the centroids of all input line objects, with length equal to their average length, oriented along the calculated mean bearing or mean direction of all input line objects (the start point and end point determine a line's direction).
  • The average center x and y coordinates, average length, compass angle, directional mean and circular variance of the line objects are stored in the AverageX, AverageY, AverageLength, CompassAngle, DirectionalMean and CircleVariance fields of the result vector dataset. If the grouping field is set, the result vector dataset will contain the “group field name_Group” field.
  • The compass angle (CompassAngle field) is measured clockwise from true north; the DirectionalMean field is measured counterclockwise from true east. CircleVariance (circular variance) measures how much the directions or bearings deviate from the mean: if the input line objects have very similar (or identical) directions, the value is very small, and vice versa.

../_images/MeasureLinearDirectionalMean.png

For an introduction to spatial measurement, please refer to:py:func:measure_central_element

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated; it must be a line dataset.
  • group_field (str) – group field name
  • weight_field (str) – weight field name
  • is_orientation (bool) – Whether to ignore the direction of the start and end points. When it is False, the order of the start point and the end point will be used when calculating the directional average; when it is True, the order of the start point and the end point will be ignored.
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The type of statistical fields, which is a list type. The list stores a tuple of 2 elements. The first element of the tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
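The directional mean can be sketched with the standard vector-sum method for circular data; ignoring start/end order corresponds to the usual angle-doubling trick for axial data. This is an illustration only, not the library's implementation; `linear_directional_mean` is a hypothetical helper over two-point segments.

```python
from math import atan2, cos, sin, degrees

def linear_directional_mean(segments, orientation_only=False):
    """Average direction of line segments via the vector-sum method; with
    orientation_only=True the start/end order is ignored by doubling angles."""
    angles = [atan2(y2 - y1, x2 - x1) for (x1, y1), (x2, y2) in segments]
    if orientation_only:
        # axial data: double each angle, average, then halve the result
        s = sum(sin(2 * a) for a in angles)
        c = sum(cos(2 * a) for a in angles)
        return degrees(atan2(s, c) / 2) % 180
    s = sum(sin(a) for a in angles)
    c = sum(cos(a) for a in angles)
    return degrees(atan2(s, c)) % 360
```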

iobjectspy.analyst.measure_mean_center(source, group_field=None, weight_field=None, stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the average center of the vector data and return the result vector dataset.

  • The average center is a point constructed based on the average x and y coordinates of the entered centroids of all objects.
  • The x and y coordinates of the average center are respectively stored in the SmX and SmY fields in the result vector dataset. If the group field is set, the result vector dataset will contain the “group field name_Group” field.
../_images/MeasureMeanCenter.png

For an introduction to spatial measurement, please refer to:py:func:measure_central_element

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • group_field (str) – group field
  • weight_field (str) – weight field
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The type of statistical fields, which is a list type. The list stores a tuple of 2 elements. The first element of the tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
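The computation behind the mean center is simple enough to illustrate outside iobjectspy. A minimal plain-Python sketch of the weighted average of x and y coordinates (the helper name and sample coordinates are illustrative, not part of the library):

```python
def mean_center(points, weights=None):
    """Weighted average of the x and y coordinates of a set of points."""
    if weights is None:
        weights = [1.0] * len(points)
    total = sum(weights)
    x = sum(w * px for (px, py), w in zip(points, weights)) / total
    y = sum(w * py for (px, py), w in zip(points, weights)) / total
    return x, y

# Unweighted case: the plain coordinate average.
print(mean_center([(0, 0), (2, 0), (2, 2), (0, 2)]))  # (1.0, 1.0)
```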

iobjectspy.analyst.measure_median_center(source, group_field, weight_field, stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the median center of the vector data and return the result vector dataset.

  • The median center is computed from the centroids of all input objects, using an iterative algorithm to find the point that minimizes the total Euclidean distance to those centroids.
  • The x and y coordinates of the median center are stored in the SmX and SmY fields of the result vector dataset, respectively. If the grouping field is set, the result vector dataset will also contain a “group field name_Group” field.
  • In fact, there may be multiple points that minimize the total distance to the centroids of all objects, but the median center method only returns one of them.
  • The algorithm used to calculate the median center is the iteratively weighted least-squares method (the Weiszfeld algorithm) studied by Harold W. Kuhn and Robert E. Kuenne in 1962, and later generalized by James E. Burt and Gerald M. Barber. The (weighted or unweighted) mean center is used as the starting point, and a candidate point is obtained by weighted least squares; the candidate point is then used as the new starting point and substituted back into the calculation to obtain the next candidate. Iteration continues until the candidate point minimizes the total Euclidean distance to the centroids of all objects.

../_images/MeasureMedianCenter.png

For an introduction to spatial measurement, please refer to :py:func:`measure_central_element`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • group_field (str) – group field
  • weight_field (str) – weight field
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The type of statistical fields, which is a list type. The list stores a tuple of 2 elements. The first element of the tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
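The Weiszfeld iteration described above can be sketched in plain Python, independent of iobjectspy (the tolerance, iteration cap, and unweighted form are illustrative choices):

```python
def median_center(points, tol=1e-9, max_iter=1000):
    """Weiszfeld iteration: start at the mean center, then repeatedly
    re-weight each point by the inverse of its distance to the current
    candidate until the candidate stops moving."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(max_iter):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < tol:            # candidate coincides with a data point
                return px, py
            num_x += px / d
            num_y += py / d
            denom += 1.0 / d
        nx, ny = num_x / denom, num_y / denom
        if abs(nx - x) < tol and abs(ny - y) < tol:
            return nx, ny
        x, y = nx, ny
    return x, y
```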

iobjectspy.analyst.measure_standard_distance(source, group_field, weight_field, ellipse_size='SINGLE', stats_fields=None, out_data=None, out_dataset_name=None, progress=None)

Calculate the standard distance of vector data and return the result vector dataset.

  • The standard distance is a circle whose center is the (weighted or unweighted) mean center of the centroids of all objects and whose radius is the standard distance computed from the x and y coordinates.
  • The x and y coordinates of the center of the circle and the standard distance (radius of the circle) are respectively stored in the CircleCenterX, CircleCenterY, and StandardDistance fields in the result vector dataset.

If the group field is set, the result vector dataset will contain the “group field name_Group” field.

  • The output circle size has three levels: Single (one standard deviation), Twice (two standard deviations) and Triple (three standard deviations). For details, please refer to the :py:class:`.EllipseSize` enumeration type.
../_images/MeasureStandardDistance.png

For an introduction to spatial measurement, please refer to :py:func:`measure_central_element`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It must be a line dataset.
  • group_field (str) – group field
  • weight_field (str) – weight field
  • ellipse_size (EllipseSize or str) – ellipse size type
  • stats_fields (list[tuple[str,SpatialStatisticsType]] or list[tuple[str,str]] or str) – The type of statistical fields, which is a list type. The list stores a tuple of 2 elements. The first element of the tuple is the field to be counted, and the second element is the statistical type.
  • out_data (DatasourceConnectionInfo or Datasource or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
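A plain-Python sketch of the standard distance computation, with the mean center as the circle center and the pooled standard deviation of x and y as the radius (helper name and sample data are illustrative, not the library API):

```python
def standard_distance(points, weights=None):
    """Return the mean center and the standard distance (circle radius)."""
    if weights is None:
        weights = [1.0] * len(points)
    total = sum(weights)
    cx = sum(w * p[0] for p, w in zip(points, weights)) / total
    cy = sum(w * p[1] for p, w in zip(points, weights)) / total
    # Pooled variance of the x and y deviations from the mean center.
    var = sum(w * ((p[0] - cx) ** 2 + (p[1] - cy) ** 2)
              for p, w in zip(points, weights)) / total
    return (cx, cy), var ** 0.5
```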

class iobjectspy.analyst.AnalyzingPatternsResult

Bases: object

Analysis mode result class. This class is used to obtain the results of the analysis mode calculation, including the result index, expectation, variance, Z score and P value.

expectation

float – the expected value in the analysis mode result

index

float – Moran index or GeneralG index in the analysis mode result

p_value

float – P value in the analysis mode result

variance

float – the variance value in the analysis mode result

z_score

float – Z score in the analysis mode result

iobjectspy.analyst.auto_correlation(source, assessment_field, concept_model='INVERSEDISTANCE', distance_method='EUCLIDEAN', distance_tolerance=-1.0, exponent=1.0, k_neighbors=1, is_standardization=False, weight_file_path=None, progress=None)

Analysis mode introduction:

The analysis mode can assess whether a set of data forms a discrete spatial pattern, a clustered spatial pattern, or a random spatial pattern.

  • The data used for calculation in the analysis mode can be points, lines, and areas. For point, line and area objects, the centroid of the object is used in the distance calculation. The centroid of the object is the weighted average center of all sub-objects. The weighting term of the point object is 1 (that is, the centroid is itself), the weighting term of the line object is the length, and the weighting term of the area object is the area.

  • The analysis mode methods use inferential statistics: the statistical test establishes a “null hypothesis”, which assumes that the features, or the values associated with the features, exhibit a random spatial pattern.

  • The analysis result includes a P value, which indicates the probability that the observed spatial pattern was produced by a random process; it is used to decide whether to accept or reject the “null hypothesis”.

  • The analysis result also includes a Z score, expressed as a multiple of the standard deviation, which is used to judge whether the data is clustered, dispersed, or random.

  • To reject the “null hypothesis”, one must bear the risk of making the wrong choice (i.e., wrongly rejecting the “null hypothesis”).

    The following table shows the uncorrected critical P value and critical Z score under different confidence levels:

    ../_images/AnalyzingPatterns.png
  • Users can solve the following problems through analysis mode:

    • Do the features in the dataset, or the values associated with them, exhibit spatial clustering?
    • Will the degree of clustering of the dataset change over time?

Analysis mode methods include spatial autocorrelation analysis (auto_correlation()), average nearest neighbor analysis (average_nearest_neighbor()), high/low value cluster analysis (high_or_low_clustering()), incremental spatial autocorrelation analysis (incremental_auto_correlation()), etc.
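The critical-value table referenced above uses the standard uncorrected thresholds (|z| ≥ 1.65 / 1.96 / 2.58 with p < 0.10 / 0.05 / 0.01 for 90% / 95% / 99% confidence). A plain-Python sketch of reading an analysis result against those thresholds (the helper name is illustrative, not part of the library):

```python
def significance(z_score, p_value):
    """Return the confidence level (in percent) at which the null
    hypothesis of a random pattern can be rejected, or 0 if it cannot.
    Thresholds are the standard uncorrected critical values."""
    for critical_z, critical_p, confidence in ((2.58, 0.01, 99),
                                               (1.96, 0.05, 95),
                                               (1.65, 0.10, 90)):
        if abs(z_score) >= critical_z and p_value < critical_p:
            return confidence
    return 0
```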

Perform spatial autocorrelation analysis on the vector dataset and return the result. The spatial autocorrelation results include the Moran index, expectation, variance, z score and P value; see the :py:class:`.AnalyzingPatternsResult` class.

../_images/AnalyzingPatterns_autoCorrelation.png
Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • assessment_field (str) – The name of the assessment field. Only numeric fields are valid.
  • concept_model (ConceptualizationModel or str) – Conceptual model of spatial relationship. Default value: :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`.
  • distance_method (DistanceMethod or str) – distance calculation method type
  • distance_tolerance (float) –

    Cut-off distance tolerance. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED, ConceptualizationModel.FIXEDDISTANCEBAND or ConceptualizationModel.ZONEOFINDIFFERENCE.

    Specifies the cut-off distance for the “Inverse Distance” and “Fixed Distance” models. “-1” means a default distance is calculated and applied; the default ensures that every feature has at least one neighboring feature. “0” means no cut-off distance is applied, and every feature is a neighbor of every other feature.

  • exponent (float) – Inverse distance power exponent. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED or ConceptualizationModel.ZONEOFINDIFFERENCE.
  • k_neighbors (int) – The number of neighbors. The K features nearest to the target feature are its neighbors. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.KNEARESTNEIGHBORS`.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by the sum of its row.
  • weight_file_path (str) – file path of spatial weight matrix
  • progress (function) – progress information processing function, please refer to :py:class:`.StepEvent`
Returns:

spatial autocorrelation result

Return type:

AnalyzingPatternsResult
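The effect of is_standardization can be shown directly: row standardization divides each weight by the sum of its row, so every row of the weight matrix sums to 1. A plain-Python sketch (illustrative only, not the library's internal code):

```python
def row_standardize(matrix):
    """Row-standardize a weight matrix given as a list of rows.
    Rows that sum to zero are returned unchanged."""
    result = []
    for row in matrix:
        total = sum(row)
        result.append([w / total for w in row] if total else list(row))
    return result
```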

iobjectspy.analyst.high_or_low_clustering(source, assessment_field, concept_model='INVERSEDISTANCE', distance_method='EUCLIDEAN', distance_tolerance=-1.0, exponent=1.0, k_neighbors=1, is_standardization=False, weight_file_path=None, progress=None)

Perform high/low value cluster analysis on the vector dataset and return the result. The high/low clustering results include the GeneralG index, expectation, variance, z score and P value; see the :py:class:`.AnalyzingPatternsResult` class.

../_images/AnalyzingPatterns_highOrLowClustering.png

For an introduction to the analysis mode, please refer to :py:func:`auto_correlation`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • assessment_field (str) – The name of the assessment field. Only numeric fields are valid.
  • concept_model (ConceptualizationModel or str) – Conceptual model of spatial relationship. Default value: :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`.
  • distance_method (DistanceMethod or str) – distance calculation method type
  • distance_tolerance (float) –

    Cut-off distance tolerance. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED, ConceptualizationModel.FIXEDDISTANCEBAND or ConceptualizationModel.ZONEOFINDIFFERENCE.

    Specifies the cut-off distance for the “Inverse Distance” and “Fixed Distance” models. “-1” means a default distance is calculated and applied; the default ensures that every feature has at least one neighboring feature. “0” means no cut-off distance is applied, and every feature is a neighbor of every other feature.

  • exponent (float) – Inverse distance power exponent. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED or ConceptualizationModel.ZONEOFINDIFFERENCE.
  • k_neighbors (int) – The number of neighbors. The K features nearest to the target feature are its neighbors. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.KNEARESTNEIGHBORS`.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by the sum of its row.
  • weight_file_path (str) – file path of spatial weight matrix
  • progress (function) – progress information processing function, please refer to :py:class:`.StepEvent`
Returns:

high and low value clustering results

Return type:

AnalyzingPatternsResult

iobjectspy.analyst.average_nearest_neighbor(source, study_area, distance_method='EUCLIDEAN', progress=None)

Perform average nearest neighbor analysis on the vector dataset and return the average nearest neighbor analysis result array.

  • The results returned by average nearest neighbor include the nearest neighbor index, expected mean distance, observed mean distance, z score and P value; please refer to the :py:class:`.AnalyzingPatternsResult` class.
  • The area of the given study area must be greater than or equal to 0. If the area is 0, the minimum-area bounding rectangle of the input dataset is generated automatically and its area is used for the calculation. The default value is 0.
  • The distance calculation method type specifies how the distance between neighboring features is calculated (see :py:class:`.DistanceMethod`). If the input dataset uses a geographic coordinate system, the chord measurement method is used to calculate the distance: for any two points on the surface of the earth, the chord distance is the length of the straight line connecting them through the earth.
../_images/AnalyzingPatterns_AverageNearestNeighbor.png

For an introduction to the analysis mode, please refer to :py:func:`auto_correlation`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • study_area (float) – the area of the study region
  • distance_method (DistanceMethod or str) – distance calculation method
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

average nearest neighbor analysis result

Return type:

AnalyzingPatternsResult
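The average nearest neighbor statistic follows the classic Clark-Evans formulas: the index is the ratio of the observed mean nearest-neighbor distance to the distance expected for a random pattern of n points in the study area. A plain-Python sketch (the formulas are the standard ones, not taken from iobjectspy internals; the helper name is illustrative):

```python
def nearest_neighbor_index(observed_mean_dist, n, area):
    """Return (nearest neighbor index, z score) for n points in the
    given study area, from the observed mean nearest-neighbor distance."""
    expected = 0.5 / (n / area) ** 0.5          # expected mean distance
    se = 0.26136 / (n * n / area) ** 0.5        # standard error
    index = observed_mean_dist / expected
    z = (observed_mean_dist - expected) / se
    return index, z
```

An index below 1 suggests clustering (negative z), above 1 dispersion (positive z).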

iobjectspy.analyst.incremental_auto_correlation(source, assessment_field, begin_distance=0.0, distance_method='EUCLIDEAN', incremental_distance=0.0, incremental_number=10, is_standardization=False, progress=None)

Perform incremental spatial autocorrelation analysis on the vector dataset and return the list of analysis results. The incremental spatial autocorrelation results include the incremental distance, Moran index, expectation, variance, z score and P value; see the :py:class:`.IncrementalResult` class.

Incremental spatial autocorrelation runs the spatial autocorrelation method (refer to :py:func:`auto_correlation`) for a series of increasing distances; the spatial relationship conceptualization model defaults to the fixed distance band model (see :py:attr:`.ConceptualizationModel.FIXEDDISTANCEBAND`).

For an introduction to the analysis mode, please refer to :py:func:`auto_correlation`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • assessment_field (str) – The name of the assessment field. Only numeric fields are valid.
  • begin_distance (float) – The starting distance of incremental spatial autocorrelation analysis.
  • distance_method (DistanceMethod or str) – distance calculation method type
  • incremental_distance (float) – distance increment, the distance between each analysis of incremental spatial autocorrelation.
  • incremental_number (int) – The number of incremental distance segments. Specify the number of times to analyze the dataset for incremental spatial autocorrelation. The value range is 2 ~ 30.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by the sum of its row.
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

Incremental spatial autocorrelation analysis result list.

Return type:

list[IncrementalResult]
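The distance bands swept by an incremental analysis, and a common way to pick a scale of analysis from the results (the distance whose z score peaks), can be sketched in plain Python (helper names and the (distance, z_score) pair representation are illustrative, not the IncrementalResult API):

```python
def incremental_distances(begin_distance, incremental_distance, number):
    """The fixed-distance bands the incremental analysis sweeps through."""
    return [begin_distance + i * incremental_distance for i in range(number)]

def peak_distance(results):
    """Pick the distance with the highest z score from (distance, z) pairs."""
    return max(results, key=lambda r: r[1])[0]
```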

class iobjectspy.analyst.IncrementalResult

Bases: iobjectspy._jsuperpy.analyst.ss.AnalyzingPatternsResult

Incremental spatial autocorrelation result class. This class is used to obtain the results of incremental spatial autocorrelation calculations, including the resultant incremental distance, Moran index, expectation, variance, Z score, and P value.

distance

float – incremental distance in incremental spatial autocorrelation results

iobjectspy.analyst.cluster_outlier_analyst(source, assessment_field, concept_model='INVERSEDISTANCE', distance_method='EUCLIDEAN', distance_tolerance=-1.0, exponent=1.0, is_FDR_adjusted=False, k_neighbors=1, is_standardization=False, weight_file_path=None, out_data=None, out_dataset_name=None, progress=None)

Introduction to cluster distribution:

The cluster distribution can identify a set of statistically significant hotspots, cold spots, or spatial outliers.

The data used to calculate the cluster distribution can be points, lines, and areas. For point, line and area objects, the centroid of the object is used in the distance calculation. The centroid of the object is the weighted average center of all sub-objects. The weighting term of the point object is 1 (that is, the centroid is itself), the weighting term of the line object is the length, and the weighting term of the area object is the area.

Users can solve the following problems through clustering distribution calculation:

  1. Where do clusters or cold spots and hot spots appear?
  2. Where do the spatial outliers appear?
  3. Which elements are very similar?

Cluster distribution methods include cluster and outlier analysis (cluster_outlier_analyst()), hot spot analysis (hot_spot_analyst()), optimized hot spot analysis (optimized_hot_spot_analyst()), etc.

Perform cluster and outlier analysis and return the result vector dataset.

  • The result dataset includes the local Moran index (ALMI_MoranI), z score (ALMI_Zscore), P value (ALMI_Pvalue) and cluster/outlier type (ALMI_Type).
  • Both the z score and the P value are measures of statistical significance, used to judge feature by feature whether to reject the “null hypothesis”. The cluster/outlier type field identifies statistically significant clusters and outliers.

  • If a feature's z score is a high positive value, the surrounding features have similar values (high or low); the cluster/outlier type field marks a statistically significant high-value cluster as “HH” and a statistically significant low-value cluster as “LL”. If a feature's z score is a low negative value, the feature is a statistically significant spatial outlier; the type field marks a high-value feature surrounded by low-value features as “HL” and a low-value feature surrounded by high-value features as “LH”.
  • When is_FDR_adjusted is not set, statistical significance is based on the P value and z score fields. Otherwise, the critical P value used to determine the confidence level is reduced to account for multiple testing and spatial dependence.

../_images/ClusteringDistributions_clusterOutlierAnalyst.png
Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • assessment_field (str) – The name of the assessment field. Only numeric fields are valid.
  • concept_model (ConceptualizationModel or str) – Conceptual model of spatial relationship. Default value: :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`.
  • distance_method (DistanceMethod or str) – distance calculation method type
  • distance_tolerance (float) –

    Cut-off distance tolerance. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED, ConceptualizationModel.FIXEDDISTANCEBAND or ConceptualizationModel.ZONEOFINDIFFERENCE.

    Specifies the cut-off distance for the “Inverse Distance” and “Fixed Distance” models. “-1” means a default distance is calculated and applied; the default ensures that every feature has at least one neighboring feature. “0” means no cut-off distance is applied, and every feature is a neighbor of every other feature.

  • exponent (float) – Inverse distance power exponent. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED or ConceptualizationModel.ZONEOFINDIFFERENCE.
  • is_FDR_adjusted (bool) – Whether to apply FDR (false discovery rate) correction. If applied, statistical significance is based on the false-discovery-rate correction; otherwise it is based on the P value and z score fields.
  • k_neighbors (int) – The number of neighbors. The K features nearest to the target feature are its neighbors. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.KNEARESTNEIGHBORS`.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by the sum of its row.
  • weight_file_path (str) – file path of spatial weight matrix
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
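The ALMI_Type labels can be summarized with a small decision rule. The sketch below is illustrative only (the library's exact rule is internal); it labels a feature from its z score and how its value compares with the mean of its neighborhood, using the 95% critical z value by default:

```python
def cluster_outlier_type(z_score, value, neighborhood_mean, critical_z=1.96):
    """Significant positive z -> cluster (HH/LL); significant negative
    z -> spatial outlier (HL/LH); otherwise no significant pattern."""
    if z_score >= critical_z:
        return 'HH' if value >= neighborhood_mean else 'LL'
    if z_score <= -critical_z:
        return 'HL' if value >= neighborhood_mean else 'LH'
    return ''
```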

iobjectspy.analyst.hot_spot_analyst(source, assessment_field, concept_model='INVERSEDISTANCE', distance_method='EUCLIDEAN', distance_tolerance=-1.0, exponent=1.0, is_FDR_adjusted=False, k_neighbors=1, is_standardization=False, self_weight_field=None, weight_file_path=None, out_data=None, out_dataset_name=None, progress=None)

Perform hot spot analysis and return the result vector dataset.

  • The result dataset includes the z score (Gi_Zscore), P value (Gi_Pvalue) and confidence interval (Gi_ConfInvl).
  • Both the z score and the P value are measures of statistical significance, used to judge feature by feature whether to reject the “null hypothesis”. The confidence interval field identifies statistically significant hot spots and cold spots.

Features with a confidence interval of +3 or -3 are statistically significant at the 99% confidence level, features with +2 or -2 at the 95% confidence level, and features with +1 or -1 at the 90% confidence level; the clustering of features with a confidence interval of 0 is not statistically significant.

  • When is_FDR_adjusted is not set, statistical significance is based on the P value and z score fields. Otherwise, the critical P value used to determine the confidence level is reduced to account for multiple testing and spatial dependence.
../_images/ClusteringDistributions_hotSpotAnalyst.png

For an introduction to cluster distribution, please refer to :py:func:`cluster_outlier_analyst`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • assessment_field (str) – The name of the assessment field. Only numeric fields are valid.
  • concept_model (ConceptualizationModel or str) – Conceptual model of spatial relationship. Default value: :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`.
  • distance_method (DistanceMethod or str) – distance calculation method type
  • distance_tolerance (float) –

    Cut-off distance tolerance. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED, ConceptualizationModel.FIXEDDISTANCEBAND or ConceptualizationModel.ZONEOFINDIFFERENCE.

    Specifies the cut-off distance for the “Inverse Distance” and “Fixed Distance” models. “-1” means a default distance is calculated and applied; the default ensures that every feature has at least one neighboring feature. “0” means no cut-off distance is applied, and every feature is a neighbor of every other feature.

  • exponent (float) – Inverse distance power exponent. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.INVERSEDISTANCE`, ConceptualizationModel.INVERSEDISTANCESQUARED or ConceptualizationModel.ZONEOFINDIFFERENCE.
  • is_FDR_adjusted (bool) – Whether to apply FDR (false discovery rate) correction. If applied, statistical significance is based on the false-discovery-rate correction; otherwise it is based on the P value and z score fields.
  • k_neighbors (int) – The number of neighbors. The K features nearest to the target feature are its neighbors. Valid only when the conceptualization model is set to :py:attr:`.ConceptualizationModel.KNEARESTNEIGHBORS`.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by the sum of its row.
  • self_weight_field (str) – The name of its own weight field. Only numeric fields are valid.
  • weight_file_path (str) – file path of spatial weight matrix
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
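The Gi_ConfInvl bins described above map directly onto the standard critical z values. A plain-Python sketch of the mapping (illustrative only, not the library's internal code):

```python
def gi_confidence_bin(z_score):
    """Map a z score to a signed confidence bin:
    +/-3 -> 99%, +/-2 -> 95%, +/-1 -> 90%, 0 -> not significant."""
    sign = 1 if z_score > 0 else -1
    if abs(z_score) >= 2.58:
        return 3 * sign
    if abs(z_score) >= 1.96:
        return 2 * sign
    if abs(z_score) >= 1.65:
        return 1 * sign
    return 0
```

Positive bins mark hot spots, negative bins cold spots.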

iobjectspy.analyst.optimized_hot_spot_analyst(source, assessment_field=None, aggregation_method='NETWORKPOLYGONS', aggregating_polygons=None, bounding_polygons=None, out_data=None, out_dataset_name=None, progress=None)

Perform optimized hot spot analysis and return the result vector dataset.

  • The result dataset includes the z score (Gi_Zscore), P value (Gi_Pvalue) and confidence interval (Gi_ConfInvl). For details, please refer to the results of the :py:func:`hot_spot_analyst` method.
  • Both the z score and the P value are measures of statistical significance, used to judge feature by feature whether to reject the “null hypothesis”.

The confidence interval field identifies statistically significant hot spots and cold spots. Features with a confidence interval of +3 or -3 are statistically significant at the 99% confidence level, features with +2 or -2 at the 95% confidence level, and features with +1 or -1 at the 90% confidence level; the clustering of features with a confidence interval of 0 is not statistically significant.

  • If the analysis field is provided, hot spot analysis is performed directly. If it is not provided, the specified aggregation method (see :py:class:`.AggregationMethod`) is used to aggregate all input event points into counts, which are then used as the analysis field for hot spot analysis.

  • When hot spot analysis is performed, the default conceptualization model is :py:attr:`.ConceptualizationModel.FIXEDDISTANCEBAND` and false discovery rate (FDR) correction defaults to True; statistical significance uses the FDR correction method to automatically balance multiple testing and spatial dependence.
../_images/ClusteringDistributions_OptimizedHotSpotAnalyst.png

For an introduction to cluster distribution, please refer to :py:func:`cluster_outlier_analyst`

Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. If the evaluation field is set, it can be a point, line, or area dataset; otherwise, it must be a point dataset.
  • assessment_field (str) – The name of the assessment field.
  • aggregation_method (AggregationMethod or str) –

    Aggregation method. Required for optimized hot spot analysis when the analysis field is not set.

    • If set to :py:attr:`.AggregationMethod.AGGREGATIONPOLYGONS`, aggregating_polygons must be set.
    • If set to :py:attr:`.AggregationMethod.NETWORKPOLYGONS`, bounding_polygons is used for aggregation when it is set; otherwise the geographic extent of the point dataset is used.
    • If set to :py:attr:`.AggregationMethod.SNAPNEARBYPOINTS`, aggregating_polygons and bounding_polygons are ignored.
  • aggregating_polygons (DatasetVector or str) – A polygon dataset used to aggregate event points and obtain event counts. Required when the analysis field (assessment_field) is not provided and aggregation_method is set to :py:attr:`.AggregationMethod.AGGREGATIONPOLYGONS`. If the assessment field is set, this parameter is ignored.
  • bounding_polygons (DatasetVector or str) – The boundary polygon dataset of the area where the event points occur. Must be a polygon dataset. Used when the analysis field (assessment_field) is not provided and aggregation_method is set to :py:attr:`.AggregationMethod.NETWORKPOLYGONS`.
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource information
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
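When no assessment field is given, event points are aggregated into counts before the hot spot analysis runs. A simplified plain-Python stand-in using a regular grid (purely illustrative; the real aggregation uses the polygon datasets or snapping described above):

```python
def aggregate_to_grid(points, cell_size):
    """Count event points per grid cell, keyed by (column, row)."""
    counts = {}
    for x, y in points:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] = counts.get(cell, 0) + 1
    return counts
```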

iobjectspy.analyst.collect_events(source, out_data=None, out_dataset_name=None, progress=None)

Collect events and convert event data into weighted data.

  • The result point dataset contains a Counts field, which stores the number of coincident centroids at each unique position.
  • Collecting events only processes objects whose centroid coordinates are exactly the same: one centroid is retained and the remaining duplicate points are removed.
  • For point, line and area objects, the centroid of the object will be used in the distance calculation. The centroid of the object is the weighted average center of all sub-objects. The weighting term of the point object is 1 (that is, the centroid is itself), The weighting term for line objects is length, and the weighting term for area objects is area.
Parameters:
  • source (DatasetVector or str) – The dataset to be collected. It can be a point, line, or area dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result point dataset.
  • out_dataset_name (str) – The name of the result point dataset.
  • progress (function) – progress information, please refer to :py:class:`.StepEvent`
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
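The core of collecting events is collapsing coincident points into counts. A plain-Python sketch (illustrative; the real method works on dataset centroids and writes a Counts field, and the helper name shadows the library function only for this example):

```python
def collect_events(points):
    """Collapse points with identical coordinates into one entry per
    unique position, mapped to its occurrence count."""
    counts = {}
    for p in points:
        counts[p] = counts.get(p, 0) + 1
    return counts
```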

iobjectspy.analyst.build_weight_matrix(source, unique_id_field, file_path, concept_model='INVERSEDISTANCE', distance_method='EUCLIDEAN', distance_tolerance=-1.0, exponent=1.0, k_neighbors=1, is_standardization=False, progress=None)

Construct the spatial weight matrix.

  • A spatial weight matrix file is designed to generate, store, reuse and share a conceptual model of the spatial relationships within a set of features. The file is created in a binary format, and the feature relationships are stored as a sparse matrix.
  • This method generates a spatial weight matrix file with the extension ‘*.swmb’. The generated file can be used in analysis by setting the spatial relationship conceptualization model to ConceptualizationModel.SPATIALWEIGHTMATRIXFILE and passing the full path of the file in the weight_file_path parameter.

Parameters:
  • source (DatasetVector or str) – The dataset for which to construct the spatial weight matrix. Point, line, and area datasets are supported.
  • unique_id_field (str) – Unique ID field name, must be a numeric field.
  • file_path (str) – The path to save the spatial weight matrix file.
  • concept_model (ConceptualizationModel or str) – conceptual model
  • distance_method (DistanceMethod or str) – distance calculation method type
  • distance_tolerance (float) –

    Cutoff distance tolerance. Only valid when the conceptualization model is set to ConceptualizationModel.INVERSEDISTANCE, ConceptualizationModel.INVERSEDISTANCESQUARED, ConceptualizationModel.FIXEDDISTANCEBAND, or ConceptualizationModel.ZONEOFINDIFFERENCE.

    Specifies the cutoff distance for the “inverse distance” and “fixed distance” models. -1 means a default distance is computed and applied; the default ensures that every feature has at least one neighboring feature. 0 means no cutoff distance is applied and every feature is a neighbor of every other feature.
  • exponent (float) – Inverse distance power exponent. Only valid when the conceptualization model is set to ConceptualizationModel.INVERSEDISTANCE, ConceptualizationModel.INVERSEDISTANCESQUARED, or ConceptualizationModel.ZONEOFINDIFFERENCE.
  • k_neighbors (int) – The number of neighbors. Only valid when the conceptualization model is set to ConceptualizationModel.KNEARESTNEIGHBORS.
  • is_standardization (bool) – Whether to standardize the spatial weight matrix. If standardized, each weight is divided by its row sum.
  • progress (function) – progress information, please refer to StepEvent
Returns:

If the spatial weight matrix is constructed, return True, otherwise return False

Return type:

bool
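
The inverse-distance conceptual model with row standardization can be sketched in plain Python. This is an illustrative dense-matrix version of the computation only; the actual method writes a sparse binary *.swmb file:

```python
import math

def inverse_distance_weights(points, standardize=True):
    """Dense inverse-distance spatial weight matrix (illustrative sketch;
    w[i][j] = 1/d(i, j) for i != j, optionally row-standardized)."""
    n = len(points)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = math.dist(points[i], points[j])   # Euclidean distance
                w[i][j] = 1.0 / d
        if standardize:                               # divide each weight by its row sum
            row_sum = sum(w[i])
            if row_sum > 0:
                w[i] = [v / row_sum for v in w[i]]
    return w

w = inverse_distance_weights([(0, 0), (1, 0), (3, 0)])
# each row sums to 1; closer neighbors receive larger weights
```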

iobjectspy.analyst.weight_matrix_file_to_table(file_path, out_data, out_dataset_name=None, progress=None)

Converts a spatial weight matrix file into an attribute table.

The result attribute table contains the source unique ID field (UniqueID), neighbors unique ID field (NeighborsID), and weight field (Weight).

Parameters:
  • file_path (str) – The path of the spatial weight matrix file.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result attribute table
  • out_dataset_name (str) – name of result attribute table
  • progress (function) – progress information, please refer to StepEvent
Returns:

The result attribute table dataset or dataset name.

Return type:

DatasetVector or str
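
The conversion amounts to flattening the sparse matrix into one row per nonzero weight. A minimal sketch using the field names described above (UniqueID, NeighborsID, Weight); this is not the iobjectspy implementation:

```python
def matrix_to_table(ids, w):
    """Flatten a weight matrix into (UniqueID, NeighborsID, Weight) rows,
    mirroring the fields of the result attribute table."""
    rows = []
    for i, src in enumerate(ids):
        for j, nbr in enumerate(ids):
            if w[i][j] != 0.0:            # sparse: only store nonzero weights
                rows.append((src, nbr, w[i][j]))
    return rows

table = matrix_to_table([101, 102], [[0.0, 1.0], [1.0, 0.0]])
```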

iobjectspy.analyst.GWR(source, explanatory_fields, model_field, kernel_function='GAUSSIAN', band_width_type='AICC', distance_tolerance=0.0, kernel_type='FIXED', neighbors=2, out_data=None, out_dataset_name=None, progress=None, prediction_dataset=None, explanatory_fields_matching=None, out_predicted_name=None)

Introduction to spatial relationship modeling:

  • Users can solve the following problems through spatial relationship modeling:
    • Why does a certain phenomenon continue to occur, and what factors cause this situation?
    • What are the factors that cause a certain accident rate to be higher than expected? Is there any way to reduce the accident rate in the entire city or in a specific area?
    • How can a phenomenon be modeled to predict values at other locations or at other times?
  • Through regression analysis, you can model, examine, and study spatial relationships, which helps explain the factors behind observed spatial patterns. For example, a linear relationship can be positive or negative: a positive correlation means one variable increases as another increases; a negative correlation means one variable decreases as another increases; or the two variables may have no relationship.

Geographically weighted regression analysis.

  • Geographically weighted regression analysis result information includes a result dataset and a summary of geographically weighted regression results (see the GWRSummary class).
  • The result dataset includes cross-validation (CVScore), predicted values (Predicted), regression coefficients (Intercept, and C1_<explanatory field name> for each explanatory field), residuals (Residual), standard errors (StdError), coefficient standard errors (SE_Intercept, SE1_<explanatory field name>), pseudo t values (TV_Intercept, TV1_<explanatory field name>), Studentised residuals (StdResidual), etc.

Description:

  • Geographically weighted regression analysis is a local form of linear regression for spatially varying relationships; it can be used to study spatially varying relationships between dependent and independent variables. By modeling the relationships among variables associated with geographic features, you can predict unknown values or better understand the key factors that may affect the variable being modeled. Regression methods allow you to verify spatial relationships and measure their stability.
  • Cross-validation (CVScore): Cross-validation excludes the regression point itself when estimating the regression coefficients, i.e., the regression is computed only from the data points around the regression point. For each regression point, this value is the difference between the estimated value and the actual value obtained in cross-validation; the sum of their squares is the CV value, which serves as a model performance indicator.
  • Predicted: These values are the estimated (fitted) values obtained by geographically weighted regression.
  • Regression coefficient (Intercept): The intercept of the geographically weighted regression model, representing the predicted value of the dependent variable when all explanatory variables are zero.
  • Regression coefficient (C1_<explanatory field name>): The regression coefficient of the explanatory field, indicating the strength and type of the relationship between the explanatory variable and the dependent variable. If the regression coefficient is positive, the relationship between the explanatory variable and the dependent variable is positive; otherwise, it is negative. A strong relationship gives a relatively large coefficient; a weak relationship gives a coefficient close to 0.
  • Residual: The unexplained parts of the dependent variable, i.e., the difference between the estimated value and the actual value. The average value of the standardized residual is 0 and the standard deviation is 1. Residuals can be used to determine the goodness of fit of the model: small residuals indicate that the model fits well and can explain most of the predicted values, i.e., the regression equation is effective.
  • Standard Error (StdError): The standard error of the estimate, used to measure the reliability of each estimate. A smaller standard error indicates a smaller difference between the fitted value and the actual value, i.e., a better model fit.
  • Coefficient standard errors (SE_Intercept, SE1_<explanatory field name>): These values measure the reliability of each regression coefficient estimate. If the standard error of a coefficient is small compared with the coefficient itself, the estimate is more reliable. A larger standard error may indicate a local multicollinearity problem.
  • Pseudo t value (TV_Intercept, TV1_<explanatory field name>): The significance test of each regression coefficient. When the t value is greater than the critical value, the null hypothesis is rejected and the regression coefficient is significant, i.e., the estimated regression coefficient is reliable; when the t value is less than the critical value, the null hypothesis is accepted and the regression coefficient is not significant.
  • Studentised residual (StdResidual): The ratio of the residual to the standard error. This value can be used to judge whether the data is abnormal. If the data all fall within the interval (-2, 2), the data exhibit normality and homogeneity of variance; data outside the interval (-2, 2) are abnormal and lack homogeneity of variance and normality.
Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • explanatory_fields (list[str] or str) – a collection of explanatory field names
  • model_field (str) – the name of the modeling field
  • kernel_function (KernelFunction or str) – kernel function type
  • band_width_type (BandWidthType or str) – Bandwidth determination method
  • distance_tolerance (float) – bandwidth range
  • kernel_type (KernelType or str) – bandwidth type
  • neighbors (int) – The number of neighbors. It is valid only when the bandwidth type is set to KernelType.ADAPTIVE and the bandwidth determination method is set to BandWidthType.BANDWIDTH.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to StepEvent
  • prediction_dataset (DatasetVector or str) – Prediction dataset
  • explanatory_fields_matching (dict[str,str]) – Prediction dataset field mapping: the correspondence between the model’s explanatory field names and the prediction dataset’s field names. Each explanatory field should have a corresponding field in the prediction dataset. If no correspondence is set, all fields in the explanatory field array must exist in the prediction dataset.
  • out_predicted_name (str) – The name of the prediction result dataset
Returns:

Return a three-element tuple: the first element is the GWRSummary, the second element is the geographically weighted regression result dataset, and the third element is the geographically weighted regression prediction result dataset.

Return type:

tuple[GWRSummary, DatasetVector, DatasetVector]
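
The local fitting step that GWR repeats at every feature can be sketched in plain Python. This is an illustrative, simplified one-variable version with a Gaussian kernel, not the iobjectspy implementation (which returns a result dataset and a GWRSummary):

```python
import math

def gwr_local_fit(points, xs, ys, loc, bandwidth):
    """Weighted least squares at one regression point using a Gaussian
    kernel weight w_i = exp(-(d_i / bandwidth)**2), where d_i is the
    distance from observation i to the regression location.
    Returns (intercept, slope) of the local model y = a + b*x."""
    w = [math.exp(-(math.dist(p, loc) / bandwidth) ** 2) for p in points]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw    # weighted means
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, xs, ys))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
a, b = gwr_local_fit(pts, xs=[0, 1, 2, 3], ys=[1, 3, 5, 7], loc=(0, 0), bandwidth=2.0)
```

Repeating this fit at every feature location yields the spatially varying coefficients stored in the result dataset.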

iobjectspy.analyst.GTWR(source, explanatory_fields, model_field, time_field, time_distance_unit='DAYS', kernel_function='GAUSSIAN', band_width_type='AICC', distance_tolerance=0.0, kernel_type='FIXED', neighbors=2, out_data=None, out_dataset_name=None, progress=None, prediction_dataset=None, prediction_time_field=None, explanatory_fields_matching=None, out_predicted_name=None)

Spatio-temporal geographic weighted regression.

Spatio-temporal geographically weighted regression is an extension and improvement of geographically weighted regression that can analyze spatial points with time attributes and address the overall spatio-temporal non-stationarity of the model. Application scenarios:

  • Study the spatio-temporal trends of urban housing
  • Study the factors of provincial economic development and their spatio-temporal patterns

>>> result = GTWR(ds['data'], 'FLOORSZ', 'PURCHASE', 'time_field', kernel_type='FIXED',
...               kernel_function='GAUSSIAN', band_width_type='CV', distance_tolerance=2000)

Simultaneous prediction:

>>> result = GTWR(ds['data'], 'FLOORSZ', 'PURCHASE', 'time_field', kernel_type='FIXED',
...               kernel_function='GAUSSIAN', band_width_type='CV', distance_tolerance=2000,
...               prediction_dataset=ds['predict'], explanatory_fields_matching={'FLOORSZ': 'FLOORSZ'},
...               prediction_time_field='time_field')
Parameters:
  • source (DatasetVector or str) – The dataset to be calculated. It can be a point, line, or area dataset.
  • explanatory_fields (list[str] or str) – a collection of explanatory field names
  • model_field (str) – the name of the modeling field
  • time_field (str) – the name of the time field
  • time_distance_unit (TimeDistanceUnit or str) – time distance unit
  • kernel_function (KernelFunction or str) – kernel function type
  • band_width_type (BandWidthType or str) – Bandwidth determination method
  • distance_tolerance (float) – bandwidth range
  • kernel_type (KernelType or str) – bandwidth type
  • neighbors (int) – The number of neighbors. It is valid only when the bandwidth type is set to KernelType.ADAPTIVE and the bandwidth determination method is set to BandWidthType.BANDWIDTH.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource used to store the result dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information, please refer to StepEvent
  • prediction_dataset (DatasetVector or str) – prediction dataset
  • prediction_time_field (str) – The name of the prediction dataset time field. Only valid when a valid prediction dataset is set.
  • explanatory_fields_matching (dict[str,str]) – Prediction dataset field mapping: the correspondence between the model’s explanatory field names and the prediction dataset’s field names. Each explanatory field should have a corresponding field in the prediction dataset. If no correspondence is set, all fields in the explanatory field array must exist in the prediction dataset.
  • out_predicted_name (str) – The name of the prediction result dataset. Only valid when a valid prediction dataset is set.
Returns:

Return a three-element tuple: the first element is the GWRSummary, the second element is the geographically weighted regression result dataset, and the third element is the geographically weighted regression prediction result dataset.

Return type:

tuple[GWRSummary, DatasetVector, DatasetVector]

class iobjectspy.analyst.GWRSummary

Bases: object

Summary class of geographically weighted regression results. This class gives a summary of the results of geographically weighted regression analysis, such as bandwidth, number of neighbors, residual sum of squares, AICc, and coefficient of determination.

AIC

float – AIC in the summary of geographically weighted regression results. Similar to AICc, it is a standard for measuring the goodness of model fit, weighing the complexity of the estimated model against how well the model fits the data; both simplicity and accuracy are considered when evaluating a model. Increasing the number of free parameters improves the goodness of fit, so AIC rewards goodness of fit while penalizing the number of parameters to discourage overfitting. Models with a smaller AIC value are therefore preferred: the goal is the model that best explains the data with the fewest free parameters.

AICc

float – AICc in the summary of geographically weighted regression results. As the amount of data increases, AICc converges to AIC. It is also a measure of model performance and is helpful for comparing different regression models. Taking model complexity into account, a model with a lower AICc value fits the observed data better. AICc is not an absolute measure of goodness of fit, but it is very useful for comparing models that use the same dependent variable and different explanatory variables. If the difference between the AICc values of two models is greater than 3, the model with the lower AICc value is regarded as the better model.
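
One common least-squares formulation of AICc, in terms of sample size n, parameter count k, and residual sum of squares rss, can be computed directly. The exact formula used internally by the library is an assumption here; this sketch only illustrates how AICc trades fit against complexity:

```python
import math

def aicc(n, k, rss):
    """Corrected AIC for a least-squares model (one common form:
    AIC = n*ln(rss/n) + 2k, plus the small-sample correction term).
    The library's internal formula may differ."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# With equal fit, the model with fewer parameters scores lower (better):
simple, complex_ = aicc(100, 3, 50.0), aicc(100, 6, 50.0)
```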

Edf

float – The effective degrees of freedom in the summary of geographically weighted regression results. This is the difference between the number of observations and the effective number of parameters (EffectiveNumber); it is not necessarily an integer and can be used to calculate multiple diagnostic measures. A model with larger degrees of freedom fits less closely but better reflects the true situation of the data, and its statistics become more reliable; conversely, the fit is better but reflects the true situation of the data less well, the independence of the model data is weakened, and the degree of association increases.

R2

float – The coefficient of determination (R2) in the summary of geographically weighted regression results. The coefficient of determination is a measure of goodness of fit; its value varies within the range 0.0 to 1.0, and larger values indicate a better model. It can be interpreted as the proportion of the variance of the dependent variable covered by the regression model. The denominator of R2 is the sum of squares of the dependent variable values; adding an explanatory variable does not change the denominator but changes the numerator, which improves the model fit but may do so spuriously.

R2_adjusted

float – The adjusted coefficient of determination in the summary of geographically weighted regression results. The calculation of the adjusted coefficient of determination normalizes the numerator and denominator by their degrees of freedom. This compensates for the number of variables in the model, so the adjusted R2 value is usually smaller than the R2 value. However, after this adjustment the value can no longer be interpreted as the proportion of variance explained. The effective degrees of freedom is a function of bandwidth, so AICc is the preferred way to compare models.

band_width

float – The bandwidth range in the summary of geographically weighted regression results.

  • The bandwidth range used for each local estimation, which controls the degree of smoothing in the model. Usually, the default bandwidth range can be chosen by setting the bandwidth determination method (band_width_type) to BandWidthType.AICC or BandWidthType.CV; both options try to identify the optimal bandwidth range.
  • Since the “optimal” conditions differ for AICc and CV, each yields its own relative optimum, so the two usually produce different optimal values.
  • An exact bandwidth range can be provided by setting the bandwidth type (kernel_type).
effective_number

float – The number of effective parameters in the summary of geographically weighted regression results. It reflects the trade-off between the variance of the fitted values and the bias of the coefficient estimates; it is related to the choice of bandwidth and can be used to calculate multiple diagnostic measures. For larger bandwidths, the effective number of coefficients approaches the actual number of parameters, and local coefficient estimates have smaller variance but larger bias; for smaller bandwidths, the effective number of coefficients approaches the number of observations, and local coefficient estimates have larger variance but smaller bias.

neighbours

int – the number of neighbors in the summary of geographically weighted regression results.

  • The number of neighbors used for each local estimation, which controls the degree of smoothing in the model. Usually, the default number of neighbors can be chosen by setting the bandwidth determination method (band_width_type) to BandWidthType.AICC or BandWidthType.CV; both options try to identify the optimal adaptive number of neighbors.
  • Since the “optimal” conditions differ for AICc and CV, each yields its own relative optimum, so the two usually produce different optimal values.
  • An exact adaptive number of neighbors can be provided by setting the bandwidth type (kernel_type).
residual_squares

float – The residual sum of squares in the summary of geographically weighted regression results. The residual sum of squares is the sum of the squared differences between the actual values and the estimated (fitted) values. The smaller this value, the better the model fits the observed data.

sigma

float – The estimated standard deviation of the residuals in the summary of geographically weighted regression results. It is the square root of the residual sum of squares divided by the effective degrees of freedom of the residuals. The smaller this statistic, the better the model fit.

class iobjectspy.analyst.OLSSummary(java_object)

Bases: object

The ordinary least squares result summary class. This class gives a summary of the results of ordinary least squares analysis, such as distribution statistics, statistical probabilities, AICc, and the coefficient of determination.

AIC

float – AIC in the summary of ordinary least squares results. Similar to AICc, it is a standard for measuring the goodness of model fit, weighing the complexity of the estimated model against how well the model fits the data; both simplicity and accuracy are considered when evaluating a model. Increasing the number of free parameters improves the goodness of fit, so AIC rewards goodness of fit while penalizing the number of parameters to discourage overfitting. Models with a smaller AIC value are therefore preferred: the goal is the model that best explains the data with the fewest free parameters.

AICc

float – AICc in the summary of ordinary least squares results. As the amount of data increases, AICc converges to AIC. It is also a measure of model performance and is helpful for comparing different regression models. Taking model complexity into account, a model with a lower AICc value fits the observed data better. AICc is not an absolute measure of goodness of fit, but it is very useful for comparing models that use the same dependent variable and different explanatory variables. If the difference between the AICc values of two models is greater than 3, the model with the lower AICc value is regarded as the better model.

F_dof

int – The degree of freedom of the joint F statistic in the summary of ordinary least squares results.

F_probability

float – The probability of the joint F statistic in the summary of ordinary least squares results.

JB_dof

int – The degree of freedom of the Jarque-Bera statistic in the summary of ordinary least squares results.

JB_probability

float – the probability of the Jarque-Bera statistic in the summary of ordinary least squares results

JB_statistic

float – The Jarque-Bera statistic in the summary of ordinary least squares results. The Jarque-Bera statistic assesses model bias and indicates whether the residuals are normally distributed. The null hypothesis tested is that the residuals are normally distributed. At a 95% confidence level, a probability of the Jarque-Bera statistic below 0.05 indicates that the residuals are not normally distributed, i.e., the model is biased.

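
The Jarque-Bera statistic is computed from the sample skewness and kurtosis of the residuals. A self-contained sketch of the textbook formula, JB = n/6 · (S² + (K − 3)²/4), for illustration only:

```python
def jarque_bera(residuals):
    """Jarque-Bera statistic from sample skewness S and kurtosis K:
    JB = n/6 * (S**2 + (K - 3)**2 / 4). Illustrative sketch only."""
    n = len(residuals)
    mean = sum(residuals) / n
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    m3 = sum((r - mean) ** 3 for r in residuals) / n
    m4 = sum((r - mean) ** 4 for r in residuals) / n
    s = m3 / m2 ** 1.5                      # skewness
    k = m4 / m2 ** 2                        # kurtosis
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

# Symmetric residuals: the skewness term vanishes, only kurtosis contributes
jb = jarque_bera([-1.0, -0.5, 0.0, 0.5, 1.0])
```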
KBP_dof

int – Koenker (Breusch-Pagan) statistic degrees of freedom in the summary of ordinary least squares results.

KBP_probability

float – the probability of the Koenker (Breusch-Pagan) statistic in the summary of ordinary least squares results

KBP_statistic

float – The Koenker (Breusch-Pagan) statistic in the summary of ordinary least squares results. The Koenker (Breusch-Pagan) statistic assesses the stationarity of the model and is used to determine whether the explanatory variables of the model have a consistent relationship with the dependent variable in both geographic space and data space. The null hypothesis tested is that the model is stationary. At a 95% confidence level, a probability of the Koenker (Breusch-Pagan) statistic below 0.05 indicates that the model has statistically significant heteroscedasticity or non-stationarity. When the test result is significant, refer to the robust coefficient standard deviations and probabilities to evaluate the effect of each explanatory variable.

R2

float – The coefficient of determination (R2) in the summary of ordinary least squares results.

R2_adjusted

float – the coefficient of determination for correction in the summary of ordinary least squares results

VIF

list[float] – The variance expansion factor in the summary of ordinary least squares results

coefficient

list[float] – The coefficients in the summary of ordinary least squares results. Each coefficient represents the strength and type of the relationship between an explanatory variable and the dependent variable.

coefficient_std

list[float] – Standard deviation of coefficients in the summary of ordinary least squares results

f_statistic

float – The joint F statistic in the summary of ordinary least squares results. The joint F statistic is used to test the statistical significance of the entire model. Only when the Koenker (Breusch-Pagan) statistic is not statistically significant, the joint F statistic is credible. The null hypothesis tested is that the explanatory variables in the model do not work. For a confidence level of 95%, the probability of the joint F statistic is less than 0.05, which indicates that the model is statistically significant.

probability

list[float] – probability of t distribution statistic in the summary of ordinary least squares results

robust_Pr

list[float] – The probability of the robustness coefficient in the summary of ordinary least squares results.

robust_SE

list[float] – Get the standard deviation of the robustness coefficient in the summary of ordinary least squares results.

robust_t

list[float] – The distribution statistic of robustness coefficient t in the summary of ordinary least squares results.

sigma2

float – The residual variance in the summary of ordinary least squares results.

std_error

list[float] – Standard error in the summary of ordinary least squares results.

t_statistic

list[float] – t distribution statistics in the summary of ordinary least squares results.

variable

list[float] – The variable array in the summary of ordinary least squares results

wald_dof

int – the degree of freedom of the joint chi-square statistic in the summary of ordinary least squares results

wald_probability

float – the probability of the joint chi-square statistic in the summary of ordinary least squares results

wald_statistic

float – The joint chi-square (Wald) statistic in the summary of ordinary least squares results. The joint chi-square statistic is used to test the statistical significance of the entire model. Only when the Koenker (Breusch-Pagan) statistic is statistically significant is the joint chi-square statistic credible. The null hypothesis tested is that the explanatory variables in the model do not work. For a 95% confidence level, a probability of the joint chi-square statistic below 0.05 indicates that the model is statistically significant.

iobjectspy.analyst.ordinary_least_squares(input_data, explanatory_fields, model_field, out_data=None, out_dataset_name=None, progress=None)

Ordinary least squares method. The ordinary least squares analysis result information includes a result dataset and a summary of ordinary least squares results. The result dataset includes predicted value (Estimated), residual (Residual), standardized residual (StdResid), etc.

Description:

  • Estimated: These values are the estimated (fitted) values obtained by ordinary least squares.
  • Residual: These are the unexplained parts of the dependent variable, i.e., the difference between the estimated value and the actual value. The average value of the standardized residual is 0 and the standard deviation is 1. Residuals can be used to determine the goodness of fit of the model: small residuals indicate that the model fits well and can explain most of the predicted values, i.e., the regression equation is effective.
  • Standardized residual (StdResid): the ratio of the residual to the standard error. This value can be used to determine whether the data is abnormal. If the data all fall within the interval (-2, 2), the data exhibit normality and homogeneity of variance; data outside the interval (-2, 2) are abnormal and lack homogeneity of variance and normality.

Parameters:
  • input_data (DatasetVector or str) – The specified dataset to be calculated. It can be a point, line, or area dataset.
  • explanatory_fields (list[str] or str) – a collection of explanatory field names
  • model_field (str) – the name of the modeling field
  • out_data (Datasource or str) – The specified datasource used to store the result dataset.
  • out_dataset_name (str) – The specified result dataset name
  • progress (function) – progress information, please refer to StepEvent
Returns:

Return a tuple, the first element of the tuple is the least squares result dataset or dataset name, the second element is the least squares result summary

Return type:

tuple[DatasetVector, OLSSummary] or tuple[str, OLSSummary]
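
The Estimated and Residual result fields described above can be illustrated with a minimal one-variable OLS fit in plain Python. Field names follow the description; this is not the iobjectspy implementation:

```python
def ols_fit(xs, ys):
    """Simple one-variable ordinary least squares: returns the
    Estimated (fitted) values and the Residuals (actual - fitted)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    estimated = [intercept + slope * x for x in xs]
    residual = [y - e for y, e in zip(ys, estimated)]
    return estimated, residual

est, res = ols_fit([0, 1, 2, 3], [1.1, 2.9, 5.0, 7.0])
# With an intercept in the model, the residuals sum to zero
```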

class iobjectspy.analyst.InteractionDetectorResult(java_object)

Bases: object

The interaction detector analysis result class, used to obtain the results of the interaction detector on the data, including descriptions of the interactions between different explanatory variables and the analysis result matrix. Users cannot create this object.

descriptions

list[str] – Descriptions of the interaction detector results. They evaluate whether different explanatory variables, acting together, increase or decrease the explanatory power of the dependent variable, or whether the effects of these factors on the dependent variable are independent of each other. The types of interaction between two explanatory variables on the dependent variable include: nonlinear weakening, single-factor nonlinear weakening, two-factor enhancement, independence, and nonlinear enhancement.

interaction_values

pandas.DataFrame – Interaction detector analysis result values.

class iobjectspy.analyst.RiskDetectorMean(java_object)

Bases: object

The risk detector result mean class is used to obtain the mean value of the results of different explanatory variable fields obtained by the risk area detector on the data.

means

list[float] – mean value of risk detector analysis results

unique_values

list[str] – Unique value of risk detector explanatory variable field

variable

str – risk detector explanatory variable name

class iobjectspy.analyst.RiskDetectorResult(java_object)

Bases: object

The risk area detector analysis result class, used to obtain the analysis results obtained by the risk area detector on the data, including the average result and the result matrix

means

list[iobjectspy.RiskDetectorMean] – Mean value of detector results in risk area

values

list[pandas.DataFrame] – Risk detector analysis result value

class iobjectspy.analyst.GeographicalDetectorResult(java_object)

Bases: object

Geographic detector result class, used to obtain the results of geographic detector calculation, including factor detector, ecological detector, interaction detector, risk detector analysis results.

ecological_detector_result

pandas.DataFrame – Ecological detector analysis result. The ecological detector is used to compare whether the influences of two factors X1 and X2 on the spatial distribution of attribute Y are significantly different, measured by the F statistic.

../_images/GeographicalDetectorFformula.png
factor_detector_result

pandas.DataFrame – Factor detector analysis result. The factor detector detects the spatial differentiation of Y and how much a factor X explains the spatial differentiation of attribute Y, measured by the q value.

../_images/GeographicalDetectorQformula.png

The value range of q is [0,1]. The larger the value, the more obvious the spatial differentiation of Y. If the stratification is generated by the independent variable X, a larger q value means the spatial distributions of X and Y are more consistent and the explanatory power of the independent variable X for attribute Y is stronger; conversely, it is weaker. In the extreme case, a q value of 1 indicates that within each stratum of X the variance of Y is 0, i.e., the factor X completely controls the spatial distribution of Y; a q value of 0 means that the variance of Y after stratification by X equals the variance of Y without stratification, i.e., Y has no differentiation with respect to X and the factors X and Y have no relationship. The q value means that X explains 100×q% of Y.

interaction_detector_result

InteractionDetectorResult – Interaction detector analysis result. Identifies the interaction between different risk factors Xs, that is, whether factors X1 and X2 together increase or decrease the explanatory power for the dependent variable Y, or whether their effects on Y are independent of each other. The evaluation method is to first calculate the q values of the two factors X1 and X2 with respect to Y: q(Y|X1) and q(Y|X2), then overlay the two layers of variables X1 and X2 into a new layer and calculate the q value of X1∩X2 with respect to Y: q(Y|X1∩X2).

Finally, compare the values of q(Y|X1), q(Y|X2) and q(Y|X1∩X2) to determine the interaction.

-q(X1∩X2) < Min(q(X1), q(X2)): nonlinearity reduction
-Min(q(X1), q(X2)) < q(X1∩X2) < Max(q(X1), q(X2)): single-factor nonlinearity reduction
-q(X1∩X2) > Max(q(X1), q(X2)): two-factor enhancement
-q(X1∩X2) = q(X1) + q(X2): independent
-q(X1∩X2) > q(X1) + q(X2): nonlinear enhancement

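
The comparison rules above can be written as a small helper. The category strings follow the list above; the tolerance parameter is an illustrative addition for float comparison:

```python
def classify_interaction(q1, q2, q12, tol=1e-9):
    """Classify the interaction of X1 and X2 on Y from q(Y|X1), q(Y|X2)
    and q(Y|X1 n X2), per the standard geodetector comparison rules.
    A sketch only, not the iobjectspy implementation."""
    if q12 > q1 + q2 + tol:
        return 'nonlinear enhancement'
    if abs(q12 - (q1 + q2)) <= tol:
        return 'independent'
    if q12 > max(q1, q2):
        return 'two-factor enhancement'
    if q12 < min(q1, q2):
        return 'nonlinearity reduction'
    return 'single-factor nonlinearity reduction'

classify_interaction(0.3, 0.4, 0.8)   # joint q exceeds the sum of single q values
```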
risk_detector_result

RiskDetectorResult – Risk detector analysis result. Used to determine whether there is a significant difference in the attribute means between two sub-regions, tested with the t statistic.

[Formula image: GeographicalDetectorTformula.png]

variables

list[str] – Geographical detector explanatory variable

iobjectspy.analyst.geographical_detector(input_data, model_field, explanatory_fields, is_factor_detector=True, is_ecological_detector=True, is_interaction_detector=True, is_risk_detector=True, progress=None)

Perform geographic detector analysis on the data and return the results of the geographic detector. The results returned by the geographic detector include the analysis results of factor detectors, ecological detectors, interaction detectors, and risk detectors

Geodetector is a set of statistical methods for detecting spatial differentiation and revealing the driving forces behind it. Its core idea is the assumption that if an independent variable has an important influence on a dependent variable, then the spatial distributions of the independent variable and the dependent variable should be similar. Spatial differentiation can be expressed by classification algorithms, such as environmental remote sensing classification, or determined based on experience, such as the Hu Huanyong line. Geographic detectors are good at analyzing categorical variables; ordinal, ratio or interval variables can also be analyzed as long as they are appropriately discretized. Geographic detectors can therefore handle both numerical and qualitative data, which is one of their major advantages. Another unique advantage of the geodetector is detecting the interaction of two factors on the dependent variable. The usual way to identify an interaction is to add the product of two factors to a regression model and test its statistical significance; however, the interaction of two factors is not necessarily multiplicative. By calculating and comparing the q value of each single factor with the q value of the two factors superimposed, the geographic detector can judge whether an interaction exists between the two factors, whether it is strong or weak, its direction, and whether it is linear or nonlinear. The superposition of two factors includes the multiplicative relationship as well as other relationships; as long as a relationship exists, it can be detected.

Parameters:
  • input_data (DatasetVector or str) – vector dataset to be calculated
  • model_field (str) – modeling field
  • explanatory_fields (list[str] or str) – explanatory variable array
  • is_factor_detector (bool) – Whether to calculate factor detector
  • is_ecological_detector (bool) – Whether to calculate the ecological detector
  • is_interaction_detector (bool) – Whether to calculate the interaction detector
  • is_risk_detector (bool) – whether to perform risk detection
  • progress (function) – progress information, please refer to:py:class:.StepEvent
Returns:

Geodetector result

Return type:

GeographicalDetectorResult

iobjectspy.analyst.density_based_clustering(input_data, min_pile_point_count, search_distance, unit, out_data=None, out_dataset_name=None, progress=None)

DBSCAN implementation of density clustering

According to the given search radius (search_distance) and the minimum number of points to be included in the range (min_pile_point_count), this method connects the areas of the spatial point data that are dense enough and similar in space, and eliminates noise interference to achieve better clustering effect.

Parameters:
  • input_data (DatasetVector or str) – The specified vector dataset to be clustered, supporting point dataset.
  • min_pile_point_count (int) – The minimum number of points contained in each category
  • search_distance (int) – the distance to search for the neighborhood
  • unit (Unit) – unit of search distance
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
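
The interplay of search_distance and min_pile_point_count can be seen in a plain-Python sketch of the DBSCAN idea (not the iobjectspy implementation), where -1 marks noise:

```python
def dbscan(points, search_distance, min_pile_point_count):
    """Sketch of DBSCAN over 2D points: connect regions that are dense
    enough (at least min_pile_point_count points within search_distance).
    Returns one cluster label per point; -1 marks noise."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= search_distance ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pile_point_count:
            labels[i] = -1                       # noise (a cluster may still claim it)
            continue
        cluster += 1                             # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster              # border point, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pile_point_count:  # j is also a core point: expand
                queue.extend(nj)
    return labels

# two dense groups plus one isolated noise point
dbscan([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)], 2, 3)
```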

iobjectspy.analyst.hierarchical_density_based_clustering(input_data, min_pile_point_count, out_data=None, out_dataset_name=None, progress=None)

HDBSCAN implementation of density clustering

This method is an improvement of the DBSCAN method; only the minimum number of points (min_pile_point_count) in the spatial neighborhood needs to be given. On the basis of DBSCAN, it evaluates different search radii and selects the most stable spatial clustering distribution as the density clustering result.

Parameters:
  • input_data (DatasetVector or str) – The specified vector dataset to be clustered, supporting point dataset.
  • min_pile_point_count (int) – The minimum number of points contained in each category
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.ordering_density_based_clustering(input_data, min_pile_point_count, search_distance, unit, cluster_sensitivity, out_data=None, out_dataset_name=None, progress=None)

OPTICS implementation of density clustering

Based on DBSCAN, this method additionally calculates the reachable distance of each point and obtains the clustering result from the ordering information and the clustering coefficient (cluster_sensitivity). The method is not very sensitive to the search radius (search_distance) or the minimum number of points in the neighborhood (min_pile_point_count); the result is mainly determined by the clustering coefficient (cluster_sensitivity).

Concept definitions:

-Reachable distance: the maximum of the core distance of a core point and the distance from that point to its neighboring point.
-Core point: a point within whose search radius the number of points is not less than the minimum number of points per category (min_pile_point_count).
-Core distance: the minimum distance at which a point becomes a core point.
-Clustering coefficient: an integer ranging from 1 to 100 that quantifies the number of clustering categories; 1 yields the fewest categories and 100 the most.
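
The core distance and reachable distance defined above can be written down directly. A sketch of these standard OPTICS quantities, not the iobjectspy implementation:

```python
from math import dist

def core_distance(points, i, search_distance, min_pile_point_count):
    """Core distance of point i: the smallest radius that makes i a core
    point, i.e. the distance to its min_pile_point_count-th nearest point
    (counting itself). Returns None if i is not a core point at the given
    search radius. Illustrative sketch only."""
    d = sorted(dist(points[i], p) for p in points)   # includes i itself (0.0)
    if sum(1 for x in d if x <= search_distance) < min_pile_point_count:
        return None
    return d[min_pile_point_count - 1]

def reachable_distance(points, i, j, search_distance, min_pile_point_count):
    """Reachable distance of j from core point i:
    max(core_distance(i), distance(i, j))."""
    cd = core_distance(points, i, search_distance, min_pile_point_count)
    if cd is None:
        return None
    return max(cd, dist(points[i], points[j]))
```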

Parameters:
  • input_data (DatasetVector or str) – The specified vector dataset to be clustered, supporting point dataset.
  • min_pile_point_count (int) – The minimum number of points contained in each category
  • search_distance (int) – the distance to search for the neighborhood
  • unit (Unit) – unit of search distance
  • cluster_sensitivity (int) – clustering coefficient
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.analyst.spa_estimation(source_dataset, reference_dataset, source_unique_id_field, source_data_field, reference_unique_id_field, reference_data_fields, out_data=None, out_dataset_name=None, progress=None)

Single point geographic estimation (SPA)

Parameters:
  • source_dataset (DatasetVector or str) – source dataset
  • reference_dataset (DatasetVector or str) – reference dataset
  • source_unique_id_field (str) – unique ID field name of the source dataset
  • source_data_field (str) – source dataset data field name
  • reference_unique_id_field (str) – unique ID field name of the reference dataset
  • reference_data_fields (list[str] or tuple[str] or str) – reference dataset data field name collection
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource of the result dataset
  • out_dataset_name (str) – The name of the output dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result dataset

Return type:

DatasetVector

iobjectspy.analyst.bshade_estimation(source_dataset, historical_dataset, source_data_fields, historical_fields, estimate_method='TOTAL', out_data=None, out_dataset_name=None, progress=None)

BShade estimation

Parameters:
  • source_dataset (DatasetVector or str) – source dataset
  • historical_dataset (DatasetVector or str) – historical dataset
  • source_data_fields (list[str] or tuple[str] or str) – source dataset data field name collection
  • historical_fields (list[str] or tuple[str] or str) – Historical dataset data field name collection
  • estimate_method (BShadeEstimateMethod or str) – Estimate method. It includes two methods: total and average.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource of the result dataset
  • out_dataset_name (str) – The name of the output dataset.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

analysis result

Return type:

BShadeEstimationResult

iobjectspy.analyst.bshade_sampling(historical_dataset, historical_fields, parameter, progress=None)

BShade sampling.

Parameters:
  • historical_dataset (DatasetVector or str) – historical dataset.
  • historical_fields (list[str] or tuple[str] or str) – Historical dataset data field name collection.
  • parameter (BShadeSamplingParameter) – Parameter setting.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

Analysis result.

Return type:

list[BShadeSamplingResult]

class iobjectspy.analyst.BShadeSampleNumberMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

Variables:
  • BShadeSampleNumberMethod.FIXED – Sample with a fixed number of fields.
  • BShadeSampleNumberMethod.RANGE – Sample with a range of field numbers.
FIXED = 1
RANGE = 2
class iobjectspy.analyst.BShadeEstimateMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

Variables:
  • BShadeEstimateMethod.TOTAL – Estimate by total value.
  • BShadeEstimateMethod.MEAN – Estimate by mean value.
MEAN = 2
TOTAL = 1
class iobjectspy.analyst.BShadeSamplingParameter

Bases: object

BShade sampling parameters.

The simulated annealing algorithm is used in the sampling process, so this class contains multiple parameters of the simulated annealing algorithm. Simulated annealing is used to find the minimum value of a function.
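
The role of the temperature and stop parameters below can be seen in a generic simulated-annealing loop. This is a sketch of the algorithm named after these parameters, not the iobjectspy implementation:

```python
import math
import random

def simulated_annealing(energy, neighbor, state,
                        initial_temperature=1.0, cool_rate=0.95,
                        min_temperature=1e-3, min_energy=0.0,
                        max_try=100, max_success=20,
                        max_consecutive_rejection=1000):
    """Minimize 'energy' by simulated annealing. 'neighbor' proposes a
    new candidate state from the current one."""
    current_e = energy(state)
    temperature = initial_temperature
    rejections = 0
    while temperature > min_temperature and current_e > min_energy:
        successes = 0
        for _ in range(max_try):                  # attempts at this temperature
            candidate = neighbor(state)
            delta = energy(candidate) - current_e
            # accept improvements always, worse states with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                state, current_e = candidate, current_e + delta
                successes += 1
                rejections = 0
                if successes >= max_success:      # enough successes: cool down early
                    break
            else:
                rejections += 1
                if rejections >= max_consecutive_rejection:
                    return state, current_e       # stuck: stop
        temperature *= cool_rate                  # annealing (cooling) rate
    return state, current_e

random.seed(0)
best, best_e = simulated_annealing(lambda x: x * x,
                                   lambda x: x + random.uniform(-1, 1), 10.0)
```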

bshade_estimate_method

BShadeEstimateMethod – BShade estimation method

bshade_sample_number_method

BShadeSampleNumberMethod – BShade sampling number method

cool_rate

float – annealing rate

initial_temperature

float – initial temperature

max_consecutive_rejection

int – Maximum number of consecutive rejections

max_full_combination

int – Maximum number of field combinations

max_success

int – the maximum number of successes in a temperature

max_try

int – Maximum number of attempts

min_energy

float – minimum energy, ie stop energy

min_temperature

float – minimum temperature, that is, stop temperature

select_sample_number

int – select the number of samples

select_sample_range_lower

int – lower limit of range sampling number

select_sample_range_step

int – range sampling step

select_sample_range_upper

int – upper limit of range sampling number

set_bshade_estimate_method(value)

Set the BShade estimation method. That is to calculate the sample according to the total or average value

Parameters:value (BShadeEstimateMethod or str) – BShade estimation method, the default value is TOTAL
Returns:self
Return type:BShadeSamplingParameter
set_bshade_sample_number_method(value)

Set the BShade sampling number method. The default value is FIXED

Parameters:value (BShadeSampleNumberMethod or str) – BShade sampling number method
Returns:self
Return type:BShadeSamplingParameter
set_cool_rate(value)

Set annealing rate

Parameters:value (float) – annealing rate
Returns:self
Return type:BShadeSamplingParameter
set_initial_temperature(value)

Set the starting temperature

Parameters:value (float) – starting temperature
Returns:self
Return type:BShadeSamplingParameter
set_max_consecutive_rejection(value)

Set the maximum number of consecutive rejections

Parameters:value (int) – Maximum number of consecutive rejections
Returns:self
Return type:BShadeSamplingParameter
set_max_full_combination(value)

Set the maximum number of field combinations

Parameters:value (int) – Maximum number of field combinations
Returns:self
Return type:BShadeSamplingParameter
set_max_success(value)

Set the maximum number of successes within a temperature

Parameters:value (int) – Maximum number of successes
Returns:self
Return type:BShadeSamplingParameter
set_max_try(value)

Set the maximum number of attempts

Parameters:value (int) – Maximum number of attempts
Returns:self
Return type:BShadeSamplingParameter
set_min_energy(value)

Set minimum energy, that is, stop energy

Parameters:value (float) – minimum energy, ie stop energy
Returns:self
Return type:BShadeSamplingParameter
set_min_temperature(value)

Set minimum temperature, that is, stop temperature

Parameters:value (float) – minimum temperature, ie stop temperature
Returns:self
Return type:BShadeSamplingParameter
set_select_sample_number(value)

Set the number of selected samples

Parameters:value (int) – select the number of samples
Returns:self
Return type:BShadeSamplingParameter
set_select_sample_range_l(value)

Set the lower limit of the range sampling number

Parameters:value (int) – lower limit of range sampling number
Returns:self
Return type:BShadeSamplingParameter
set_select_sample_range_step(value)

Set range sampling step

Parameters:value (int) – Range sampling step
Returns:self
Return type:BShadeSamplingParameter
set_select_sample_range_u(value)

Set the upper limit of the number of range samples

Parameters:value (int) – The upper limit of the range sampling number
Returns:self
Return type:BShadeSamplingParameter
class iobjectspy.analyst.BShadeSamplingResult

Bases: object

BShade sampling result class.

estimate_variance

float – estimated variance

sample_number

int – number of sampling fields

solution_names

list[str] – field name array

to_dict()

Convert to dict.

Returns:A dictionary object used to describe the results of BShade sampling
Return type:dict[str, object]
weights

list[float] – weight array

class iobjectspy.analyst.BShadeEstimationResult

Bases: object

dataset

DatasetVector – BShade estimation result dataset

to_dict()

Convert to dict.

Returns:A dictionary object used to describe the estimation result of BShade.
Return type:dict[str, object]
variance

float – estimated variance

weights

list[float] – weight array

class iobjectspy.analyst.WeightFieldInfo(weight_name, ft_weight_field, tf_weight_field)

Bases: object

Weight field information class.

Stores the relevant information of the weight field in the network analysis, including the forward weight field and the reverse weight field. The weight field is a field indicating the weight value of the cost. The value of the forward weight field indicates the cost required from the start to the end of the edge. The value of the reverse weight field indicates the cost required from the end point to the start point of the edge.
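
The forward/reverse split can be pictured with a tiny lookup. The dict record and the field names 'W_FT' and 'W_TF' below are purely illustrative stand-ins for a network-dataset record, not actual iobjectspy fields:

```python
def edge_cost(edge, from_start_to_end):
    """Return the directed traversal cost of an edge: the forward weight
    field when travelling start -> end, the reverse weight field when
    travelling end -> start. Field names here are hypothetical."""
    return edge['W_FT'] if from_start_to_end else edge['W_TF']

# e.g. a road that is cheaper to traverse in the forward direction
edge = {'W_FT': 120.0, 'W_TF': 150.0}
edge_cost(edge, True)    # cost from the edge's start to its end
edge_cost(edge, False)   # cost from the edge's end back to its start
```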

Initialization object

Parameters:
  • weight_name (str) – the name of the weight field information
  • ft_weight_field (str) – forward weight field or field expression
  • tf_weight_field (str) – reverse weight field or field expression
ft_weight_field

str – Forward weight field or field expression

set_ft_weight_field(value)

Set forward weight field or field expression

Parameters:value (str) – forward weight field or field expression
Returns:self
Return type:WeightFieldInfo
set_tf_weight_field(value)

Set reverse weight field or field expression

Parameters:value (str) – reverse weight field or field expression
Returns:self
Return type:WeightFieldInfo
set_weight_name(value)

Set the name of the weight field information

Parameters:value (str) – the name of the weight field information
Returns:self
Return type:WeightFieldInfo
tf_weight_field

str – reverse weight field or field expression

weight_name

str – the name of the weight field information

class iobjectspy.analyst.PathAnalystSetting

Bases: object

Best path analysis environment setting. This is an abstract base class; users can choose to use SSCPathAnalystSetting or TransportationPathAnalystSetting

network_dataset

DatasetVector – network dataset

set_network_dataset(dataset)

Set up a network dataset for optimal path analysis

Parameters:dataset (DatasetVector or str) – network dataset
Returns:current object
Return type:PathAnalystSetting
class iobjectspy.analyst.SSCPathAnalystSetting(network_dt=None, ssc_file_path=None)

Bases: iobjectspy._jsuperpy.analyst.na.PathAnalystSetting

Optimal path analysis environment setting based on SSC file

Parameters:
  • network_dt (DatasetVector) – network dataset
  • ssc_file_path (str) – SSC file path
set_ssc_file_path(value)

Set SSC file path

Parameters:value (str) – SSC file path
Returns:current object
Return type:SSCPathAnalystSetting
set_tolerance(value)

Set node tolerance

Parameters:value (float) – node tolerance
Returns:current object
Return type:SSCPathAnalystSetting
ssc_file_path

str – SSC file path

tolerance

float – node tolerance

class iobjectspy.analyst.TransportationPathAnalystSetting(network_dataset=None)

Bases: iobjectspy._jsuperpy.analyst.na.PathAnalystSetting

The best path analysis environment for traffic network analysis.

Initialization object

Parameters:network_dataset (DatasetVector or str) – network dataset
barrier_edge_ids

list[int] – ID list of barrier edge segments

barrier_node_ids

list[int] – Barrier node ID list

bounds

Rectangle – The analysis range of the best path analysis

edge_filter

str – edge filtering expression in traffic network analysis

edge_id_field

str – The field that marks the edge ID in the network dataset

edge_name_field

str – Road name field

f_node_id_field

str – The field that marks the starting node ID of an edge in the network dataset

ft_single_way_rule_values

list[str] – an array of strings used to represent forward one-way lines

node_id_field

str – The field that identifies the node ID in the network dataset

prohibited_way_rule_values

list[str] – an array of strings representing prohibited lines

rule_field

str – The field in the network dataset representing the traffic rules of the network edge

set_barrier_edge_ids(value)

Set the ID list of barrier edges

Parameters:value (str or list[int]) – ID list of barrier edge
Returns:self
Return type:TransportationPathAnalystSetting
set_barrier_node_ids(value)

Set the ID list of barrier nodes

Parameters:value (str or list[int]) – ID list of barrier node
Returns:self
Return type:TransportationPathAnalystSetting
set_bounds(value)

Set the analysis scope of the best path analysis

Parameters:value (Rectangle or str) – The analysis range of the best path analysis
Returns:self
Return type:TransportationPathAnalystSetting
set_edge_filter(value)

Set the edge filter expression in traffic network analysis

Parameters:value – edge filtering expression in traffic network analysis
Returns:self
Return type:TransportationPathAnalystSetting
set_edge_id_field(value)

Set the field that marks the edge ID in the network dataset

Parameters:value (str) – The field that marks the edge segment ID in the network dataset
Returns:self
Return type:TransportationPathAnalystSetting
set_edge_name_field(value)

Set road name field

Parameters:value (str) – Road name field
Returns:self
Return type:TransportationPathAnalystSetting
set_f_node_id_field(value)

Set the field to mark the starting node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the starting node ID of the edge in the network dataset
Returns:self
Return type:TransportationPathAnalystSetting
set_ft_single_way_rule_values(value)

Set the array of strings used to represent the forward one-way line

Parameters:value (str or list[str]) – An array of strings used to represent the forward one-way line
Returns:self
Return type:TransportationPathAnalystSetting
set_node_id_field(value)

Set the field of the network dataset to identify the node ID

Parameters:value (str) – The field that identifies the node ID in the network dataset
Returns:self
Return type:TransportationPathAnalystSetting
set_prohibited_way_rule_values(value)

Set up an array of strings representing forbidden lines

Parameters:value (str or list[str]) – an array of strings representing the forbidden line
Returns:self
Return type:TransportationPathAnalystSetting
set_rule_field(value)

Set the fields in the network dataset that represent the traffic rules of the network edge

Parameters:value (str) – A field in the network dataset representing the traffic rules of the network edge
Returns:self
Return type:TransportationPathAnalystSetting
set_t_node_id_field(value)

Set the field that marks the ending node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the ending node ID of the edge in the network dataset
Returns:self
Return type:TransportationPathAnalystSetting
set_tf_single_way_rule_values(value)

Set up an array of strings representing reverse one-way lines

Parameters:value (str or list[str]) – an array of strings representing the reverse one-way line
Returns:self
Return type:TransportationPathAnalystSetting
set_tolerance(value)

Set node tolerance

Parameters:value (float) – node tolerance
Returns:current object
Return type:TransportationPathAnalystSetting
set_two_way_rule_values(value)

Set an array of strings representing two-way traffic lines

Parameters:value (str or list[str]) – An array of strings representing two-way traffic lines
Returns:self
Return type:TransportationPathAnalystSetting
set_weight_fields(value)

Set weight field

Parameters:value (list[WeightFieldInfo] or tuple[WeightFieldInfo]) – weight field
Returns:self
Return type:TransportationPathAnalystSetting
t_node_id_field

str – the field that marks the ending node ID of the edge in the network dataset

tf_single_way_rule_values

list[str] – an array of strings representing reverse one-way lines

tolerance

float – node tolerance

two_way_rule_values

list[str] – an array of strings representing two-way traffic lines

weight_fields

list[WeightFieldInfo] – weight field

class iobjectspy.analyst.TransportationAnalystSetting(network_dataset=None)

Bases: iobjectspy._jsuperpy.analyst.na.TransportationPathAnalystSetting

Traffic network analysis environment setting class. This class is used to provide all the parameter information needed for traffic network analysis. The setting of each parameter of the traffic network analysis environment setting category directly affects the result of the analysis.

When using the traffic network analysis class (TransportationAnalyst) to perform various traffic network analyses, you must first set the traffic network analysis environment, which is done through the TransportationAnalyst.set_analyst_setting method of the TransportationAnalyst class object.

Initialization object

Parameters:network_dataset (DatasetVector or str) – network dataset
class iobjectspy.analyst.PathInfo(path_info_items)

Bases: object

Guidance information class. Through this class, you can obtain guidance information for the route after SSC path analysis

__getitem__(item)

Get the driving guidance item of the specified location

Parameters:item (int) – Specified driving guide index subscript
Returns:driving guide sub-item
Return type:PathInfoItem
__len__()

Return the number of driving guide items

Returns:number of driving guide items
Return type:int
class iobjectspy.analyst.PathInfoItem(java_object)

Bases: object

Guidance information item

direction_to_swerve

int – Return the turning direction to the next road, where 0 means going straight, 1 means turning left, 2 means turning right, 3 means turning left, 4 means turning right, 5 means turning left, 6 means a right back turn, 7 means a U-turn, 8 means a right turn and detour to the left, 9 means a right-angled hypotenuse right turn, and 10 means a roundabout.

junction

Point2D – Return the coordinates of the intersection point with the next road

length

float – Return the length of the current road.

route_name

str – Return the name of the current road. When the road name is “PathPoint”, it indicates an arrival point.

class iobjectspy.analyst.PathGuide(items)

Bases: object

Driving guidance

Driving guidance records how to drive step by step from the starting point to the end of a path, and each key element on the path corresponds to a driving guidance sub-item. These key elements include sites (points input by the user for analysis, which can be ordinary points or nodes), edges passed by and network nodes. Through the driving guidance sub-item object, you can obtain the ID, name, serial number, weight, and length of the key elements in the route, and can also determine whether it is an edge or a stop, as well as information such as driving direction, turning direction, and cost. By extracting and organizing the stored key element information according to the sequence number of the driving guidance sub-items, it is possible to describe how to reach the end of the route from the beginning of the route.

The figure below is an example of nearest facility search analysis. The analysis gives three preferred paths, and the information of each route is recorded by one driving guidance object. For example, the second route is composed of key elements such as stops (the start and end points, which can be ordinary coordinate points or network nodes), road sections (edges), and intersections (network nodes); the information of these key elements can be obtained from the driving guidance sub-items of the corresponding driving guidance, so we can describe exactly how the route travels from start to end, such as which road to drive along and how far before turning.

[Image: PathGuide.png]

__getitem__(item)

Get the driving guidance item of the specified location

Parameters:item (int) – Specified driving guide index subscript
Returns:driving guide sub-item
Return type:PathGuideItem
__len__()

Return the number of driving guide items

Returns:number of driving guide items
Return type:int
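
Classes like PathInfo and PathGuide expose __getitem__ and __len__, which is Python's sequence protocol: once both are defined, indexing, len() and for-loops work on the object directly. A minimal stand-in to illustrate (not the iobjectspy class):

```python
class SequenceLikeGuide:
    """Minimal stand-in showing the protocol PathGuide / PathInfo expose:
    defining __getitem__ and __len__ makes indexing, len() and iteration
    work directly on the object."""
    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)

guide = SequenceLikeGuide(['stop', 'edge', 'node', 'edge', 'stop'])
kinds = [item for item in guide]   # iteration works via __getitem__ alone
```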
class iobjectspy.analyst.PathGuideItem(java_object)

Bases: object

In traffic network analysis, the sub-items of driving guidance can be summarized into the following categories:

-Station: The point selected by the user for analysis, such as the points to be passed when the best path analysis is performed.

-Line segment from site to network: When the site is a common coordinate point, the site needs to be attributed to the network before analysis can be performed based on the network. Please refer to the introduction of TransportationPathAnalystSetting.tolerance method.

As shown in the figure below, the red dotted line is the shortest straight line distance from the site to the network. Note that when the site is near the edge of the network edge, as shown on the right, this distance refers to the distance between the site and the end of the edge.

[Image: PathGuideItem_4.png]

-The corresponding point of the station on the network: corresponding to the “line segment from the station to the network”, this point is the corresponding point on the network when the station (ordinary coordinate point) is attributed to the network. In the case shown in the left picture above, this point is the vertical foot point of the station on the corresponding edge; in the case shown in the right picture above, this point is the end point of the edge.

-Road section: a section of road along which you drive. In the traffic network, edges are used to simulate roads, so driving sections are all located on edges. Note that multiple edges may be merged into one driving guidance sub-item; the conditions for merging are that their edge names are the same and the turning angle between adjacent edges is less than 30 degrees. It should be emphasized that the last driving section before arriving at a station and the first driving section after leaving a station contain only one edge or part of an edge; even if the above conditions are met, they will not be merged with adjacent edges into one driving guidance sub-item.

As shown in the figure below, the driving sections between the two stations are marked with different colors. The first road segment (red) after station 1 is not merged, even though its edge name matches those of the following edges and the steering angle is less than 30 degrees, because it is the first road segment after the station. The three edges covered by the blue section have the same name and turning angles less than 30 degrees, so they are merged into one driving guidance sub-item; the pink section has a different edge name from the previous section, so it becomes another driving guidance sub-item; and because the green section is the last road section before reaching the stop, it is also a separate driving guidance sub-item.

[Image: PathGuideItem_5.png]

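
The merge rule (same edge name, turning angle under 30 degrees) can be sketched as below. This simplification deliberately ignores the extra rule that sections adjacent to a station are never merged:

```python
def merge_sections(edges, max_turn=30.0):
    """Group consecutive edges into driving sections. Each edge is a
    (road_name, turn_angle_from_previous_edge) pair; returns a list of
    (road_name, edge_count) sections. A sketch of the merge rule only,
    not the iobjectspy implementation."""
    sections = []
    for name, angle in edges:
        if sections and sections[-1][0] == name and angle < max_turn:
            sections[-1][1] += 1          # merge into the previous section
        else:
            sections.append([name, 1])    # start a new section
    return [tuple(s) for s in sections]

# same-name edges with a sharp (>= 30 degree) turn are split into two sections
merge_sections([('A', 0), ('A', 10), ('A', 40), ('B', 5)])
```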
-Turning point: the intersection between two adjacent driving sections. An intersection refers to an intersection of an actual road (such as a crossroads or a T-junction) where the direction may change. The driving direction can change at a turning point. As shown in the figure above, nodes 2783, 2786 and 2691 are all turning points. A turning point must be a network node.

The value returned by each method of PathGuideItem can be used to determine which type a driving guidance sub-item belongs to. The following table compares the return values of each method for the five types of driving guidance sub-items, to help users understand and use driving guidance.

[Table image: PathGuideItem_6.png]

The following examples can help users understand the content and functions of the driving guidance and driving guidance sub-items. The blue dotted line in the figure below is a path in the results of the nearest facility search and analysis. In the returned results of the facility search, the driving guidance corresponding to this route can be obtained.

[Image: PathGuideItem_1.png]

The driving guidance describing this path contains 7 sub-items: 2 stops (the start and end points, serial numbers 0 and 6), 3 edges (road sections, serial numbers 1, 3 and 5), and 2 network nodes (turning points, serial numbers 2 and 4). The following table lists the information of these 7 driving guidance sub-items, including whether each is a stop (is_stop), whether it is an edge (is_edge), its serial number (index), driving direction (direction_type), turning direction (turn_type), edge name (name), and other information.

(figure: PathGuideItem_2.png)

Organizing the information recorded by the driving guidance sub-items yields the guidance description of the route shown in the following table.

(figure: PathGuideItem_3.png)
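The assembly of such a guidance table can be sketched in plain Python. The dicts below merely stand in for PathGuideItem objects (the field names mirror the attributes documented below; the routine and sample values are illustrative, not part of the library):

```python
def describe_guide_items(items):
    """Turn a list of driving-guide sub-items (dicts standing in for
    PathGuideItem objects) into human-readable instructions."""
    lines = []
    for item in items:
        if item["is_stop"]:
            lines.append("Stop %d" % item["index"])
        elif item["is_edge"]:
            lines.append("Drive along %s for %.0f m heading %s"
                         % (item["name"], item["length"], item["direction"]))
        else:  # point type that is not a stop: a turning point (network node)
            lines.append("Turn %s at node %d" % (item["turn"], item["node_id"]))
    return lines

items = [
    {"is_stop": True, "is_edge": False, "index": 1},
    {"is_stop": False, "is_edge": True, "name": "Main St",
     "length": 500.0, "direction": "EAST"},
    {"is_stop": False, "is_edge": False, "turn": "LEFT", "node_id": 2783},
    {"is_stop": False, "is_edge": True, "name": "2nd Ave",
     "length": 300.0, "direction": "NORTH"},
    {"is_stop": True, "is_edge": False, "index": 2},
]
for line in describe_guide_items(items):
    print(line)
```

With real analysis results, the same loop would read the is_stop, is_edge, index, name, length, direction_type and turn_type members documented below.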
bounds

Rectangle – The bounds of the driving guide item. When the item is line type (i.e., is_edge returns True), this is the minimum bounding rectangle of the line; when it is point type (is_edge returns False), it is the point itself

direction_type

DirectionType – The direction of the travel guide item. Only meaningful when the item is line type (i.e., is_edge returns True); can be east, south, west or north

distance

float – The distance from the station to the network; only valid when the driving guide item is a station. The station may not lie on the network (neither on an edge nor on a node), and it must be attributed to the network before network-based analysis can be performed. The distance is from the station to the nearest edge. In the figure below, orange represents network nodes, blue represents edges, the gray dots represent stations, and the red line segments represent the distance.

(figure: PathGuideItemDistance.png)

When the driving guide item is of any type other than a station, the value is 0.0.

guide_line

GeoLineM – Returns the travel guide line segment when the item is line type (i.e., is_edge returns True). When is_edge returns False, this returns None.

index

int – The serial number of the travel guide item. Except in the following two cases, this returns -1:

-When the driving guide item is a station, the value is the serial number of the station among all stations, starting from 1. For example, if a station is the second station passed by the driving route, its index value is 2;
-When the driving guide item is a turning point, the value is the number of intersections between that point and the previous turning point or stop. For example, if the nearest stop is two intersections before a turning point, the index value of that turning point is 2. When a point is both a stop and a turning point, the index is the position of the stop among all the stops of the whole trip.
is_edge

bool – Returns whether the driving guide item is line type or point type. True means line type, such as a road segment or the line segment from a station to the network; False means point type, such as a station, a turning point, or the point on the network that a station is attributed to.

is_stop

bool – Returns whether the travel guide item is a stop, or the point on the network that a stop is attributed to. When is_stop returns True, the item may be the station itself or, when the station was given as a coordinate point, the corresponding point it is attributed to on the network.

length

float – Returns the length of the corresponding line segment when the travel guide item is line type (i.e., is_edge returns True). The unit is meters.

name

str – The name of the travel guide item. Except in the following two cases, this returns an empty string:

-When the driving guide item is a stop (node mode) or a turning point, the value comes from the node name field specified in the traffic network analysis environment; an empty string if not set;
-When the driving guide item is a road segment or the line segment from a station to the network, the value comes from the edge name field specified in the traffic network analysis environment; an empty string if not set.
node_edge_id

int – The ID of the travel guide item. Except in the following three cases, this returns -1:

-When the driving guide item is a stop in node mode, the stop is a node and its node ID is returned;
-When the driving guide item is a turning point, the turning point is a node and its node ID is returned;
-When the driving guide item is a road segment, the edge ID of the corresponding edge is returned. If the segment was merged from multiple edges, the ID of the first edge is returned.
side_type

SideType – When the driving guide item is a stop, whether the stop is on the left or right side of the road or on the road itself. When the item is of any type other than a stop, this returns None

turn_angle

float – When the driving guide item is point type, the turning angle of the next step at this point, in degrees, with a precision of 0.1 degree. When is_edge returns True, this returns -1

turn_type

TurnType – When the driving guide item is point type (i.e., is_edge returns False), the turning direction of the next step at this point. When is_edge returns True, this returns None

weight

float – Returns the weight of the driving guide item, that is, the cost of using the item. The unit is the same as that of the weight field information (WeightFieldInfo) object specified by the transportation network analysis parameter (TransportationAnalystParameter). The cost is meaningful when the driving guide item is a road section, a turning point, or a stop in node mode; otherwise it is 0.0.

-When the driving guide sub-item is a road segment, the cost is calculated from the edge weight and the turning weight. If no turn table is set, the turning weight is 0;
-When the driving guide item is a turning point or a stop in node mode (both are nodes), the cost is the corresponding turning weight. If no turn table is set, it is 0.0.
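A sketch of how these cost rules combine over a route, using plain dicts with hypothetical weights in place of real guide items:

```python
def guide_item_cost(item):
    """Cost of one driving-guide item per the rules above:
    road segment -> edge weight + turning weight (0 if no turn table);
    turning point / node-mode stop -> turning weight; anything else -> 0.0."""
    if item.get("is_road_segment"):
        return item.get("edge_weight", 0.0) + item.get("turn_weight", 0.0)
    if item.get("is_turn_node"):  # turning point or stop in node mode
        return item.get("turn_weight", 0.0)
    return 0.0

# Hypothetical route: two segments and one turning point.
route = [
    {"is_road_segment": True, "edge_weight": 120.0, "turn_weight": 5.0},
    {"is_turn_node": True, "turn_weight": 10.0},
    {"is_road_segment": True, "edge_weight": 80.0},  # no turn table -> 0
]
total = sum(guide_item_cost(i) for i in route)
print(total)  # 215.0
```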

class iobjectspy.analyst.SSCPathAnalyst(path_analyst_setting)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Path analysis class based on SSC files. Users can compile SSC files with compile_ssc_data(). Path analysis using SSC files generally performs better than traffic network path analysis based on a network dataset.

Initialization object

Parameters:path_analyst_setting (SSCPathAnalystSetting) – SSC-based path analysis environment parameter object.
find_path(start_point, end_point, midpoints=None, route_type='RECOMMEND', is_alternative=False)

Best path analysis

Parameters:
  • start_point (Point2D) – start point
  • end_point (Point2D) – end point
  • midpoints (list[Point2D] or tuple[Point2D] or Point2D) – midpoints
  • route_type (RouteType or str) – The analysis mode of the best route analysis; the default value is 'RECOMMEND'
  • is_alternative (bool) – Whether to return an alternative route. If True, an alternative route is returned in addition to the best route; otherwise only the best route is returned
Returns:

Returns True if the analysis succeeds, False otherwise

Return type:

bool

get_alternative_path_infos()

Return the guidance information of the alternative analysis result.

Returns:Guidance information for alternative analysis results
Return type:PathInfo
get_alternative_path_length()

Return the total length of the alternative analysis result.

Returns:The total length of the alternative analysis result.
Return type:float
get_alternative_path_points()

Return the collection of waypoints of the alternative analysis result.

Returns:The set of passing points of the alternative analysis result.
Return type:list[Point2D]
get_alternative_path_time()

Return the travel time of the alternative analysis result, in seconds. To obtain the travel time, the correct speed field must be specified when compiling the SSC file.

Returns:travel time of alternative analysis results
Return type:float
get_path_infos()

Return the guide information collection of the analysis result. Please ensure that the path analysis is successful before calling this interface.

Returns:guide information collection of analysis results
Return type:PathInfo
get_path_length()

Return the total length of the analysis result. Please ensure that the path analysis is successful before calling this interface.

Returns:The total length of the analysis result.
Return type:float
get_path_points()

Return the collection of waypoints of the analysis result. Please ensure that the path analysis is successful before calling this interface.

Returns:The collection of the coordinates of the passing point of the analysis result
Return type:list[Point2D]
get_path_time()

Return the travel time of the analysis result, in seconds. To obtain the travel time, the correct speed field must be specified when compiling the SSC file.

Returns:driving time of the analysis result
Return type:float
set_analyst_setting(path_analyst_setting)

Set path analysis environment parameters

Parameters:path_analyst_setting (SSCPathAnalystSetting) – SSC-based path analysis environment parameter object.
Returns:self
Return type:SSCPathAnalyst
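The typical call sequence is: run find_path(), check its boolean result, and only then call the result getters. A minimal sketch of that sequence; FakeAnalyst is a hypothetical stand-in that merely mimics the interface documented above, so the flow can be shown without an SSC file:

```python
def run_best_path(analyst, start, end, midpoints=None):
    """Run best-path analysis on an SSCPathAnalyst-like object and
    collect the results, following the documented call sequence."""
    ok = analyst.find_path(start, end, midpoints=midpoints,
                           route_type='RECOMMEND', is_alternative=False)
    if not ok:
        return None  # analysis failed; result getters must not be called
    return {
        "length": analyst.get_path_length(),
        "time_s": analyst.get_path_time(),
        "points": analyst.get_path_points(),
    }

class FakeAnalyst:
    """Hypothetical stand-in mimicking the documented SSCPathAnalyst API."""
    def find_path(self, start, end, midpoints=None,
                  route_type='RECOMMEND', is_alternative=False):
        return True                      # pretend the analysis succeeded
    def get_path_length(self):
        return 1234.5                    # meters (made-up value)
    def get_path_time(self):
        return 300.0                     # seconds (made-up value)
    def get_path_points(self):
        return [(0.0, 0.0), (1.0, 1.0)]  # made-up waypoints

result = run_best_path(FakeAnalyst(), (0.0, 0.0), (1.0, 1.0))
print(result["length"])
```

With the real library, the analyst would be `SSCPathAnalyst(setting)` and start/end would be Point2D objects; the control flow is the same.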
class iobjectspy.analyst.SSCCompilerParameter

Bases: object

Parameters for compiling the SSC file

edge_id_field

str – The field that marks the edge ID in the network dataset

edge_name_field

str – the name field of the edge

f_node_id_field

str – The field that marks the starting node ID of the edge in the network dataset

file_path

str – Path of SSC file

ft_single_way_rule_values

list[str] – an array of strings used to represent forward one-way lines

level_field

str – Road level field

network_dataset

DatasetVector – network dataset

node_id_field

str – The field that identifies the node ID in the network dataset

prohibited_way_rule_values

list[str] – an array of strings representing prohibited lines

rule_field

str – The field in the network dataset representing the traffic rules of the network edge

set_edge_id_field(value)

Set the field that identifies the edge ID in the network dataset

Parameters:value (str) – The field that marks the edge segment ID in the network dataset
Returns:self
Return type:SSCCompilerParameter
set_edge_name_field(value)

Set the field name of the edge

Parameters:value (str) – The name field of the edge segment.
Returns:self
Return type:SSCCompilerParameter
set_f_node_id_field(value)

Set the field to mark the starting node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the starting node ID of the edge in the network dataset
Returns:self
Return type:SSCCompilerParameter
set_file_path(value)

Set the path of the SSC file

Parameters:value (str) – The path of the SSC file.
Returns:self
Return type:SSCCompilerParameter
set_ft_single_way_rule_values(value)

Set the array of strings used to represent the forward one-way line

Parameters:value (str or list[str]) – An array of strings used to represent the forward one-way line
Returns:self
Return type:SSCCompilerParameter
set_level_field(value)

Set the road grade field. It is a required field whose value range is 1-3, where 3 is the highest road grade (highways, etc.) and 1 the lowest (country roads, etc.).

Parameters:value (str) – Road grade field
Returns:self
Return type:SSCCompilerParameter
set_network_dataset(dataset)

Set up network dataset

Parameters:dataset (DatasetVector or str) – network dataset
Returns:current object
Return type:SSCCompilerParameter
set_node_id_field(value)

Set the field of the network dataset to identify the node ID

Parameters:value (str) – The field that identifies the node ID in the network dataset
Returns:self
Return type:SSCCompilerParameter
set_prohibited_way_rule_values(value)

Set up an array of strings representing forbidden lines

Parameters:value (str or list[str]) – an array of strings representing the forbidden line
Returns:self
Return type:SSCCompilerParameter
set_rule_field(value)

Set the fields in the network dataset that represent the traffic rules of the network edge

Parameters:value (str) – A field in the network dataset representing the traffic rules of the network edge
Returns:self
Return type:SSCCompilerParameter
set_speed_field(value)

Set the road speed field. Not mandatory. An integer field where 1 is the highest road speed (150 km/h), 2 is 130 km/h, 3 is 100 km/h, 4 is 90 km/h, 5 is 70 km/h, 6 is 50 km/h, 7 is 30 km/h, and all other values are treated as 10 km/h.

Parameters:value (str) – road speed field
Returns:self
Return type:SSCCompilerParameter
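The documented code-to-speed mapping, and a rough edge travel time derived from it, can be sketched as follows (the helper names are illustrative, not library API):

```python
# Speed codes as documented for set_speed_field(): 1 -> 150 km/h, 2 -> 130,
# 3 -> 100, 4 -> 90, 5 -> 70, 6 -> 50, 7 -> 30, anything else -> 10.
SPEED_KMH = {1: 150, 2: 130, 3: 100, 4: 90, 5: 70, 6: 50, 7: 30}

def speed_for_code(code):
    """Speed in km/h for a speed-field code, defaulting to 10 km/h."""
    return SPEED_KMH.get(code, 10)

def travel_time_seconds(length_m, code):
    """Rough travel time for an edge of length_m metres at the coded speed."""
    return length_m / (speed_for_code(code) * 1000 / 3600.0)

print(speed_for_code(3))                    # 100
print(round(travel_time_seconds(1000, 6)))  # 1 km at 50 km/h -> 72 s
```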
set_t_node_id_field(value)

Set the field that marks the ending node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the ending node ID of the edge in the network dataset
Returns:self
Return type:SSCCompilerParameter
set_tf_single_way_rule_values(value)

Set up an array of strings representing reverse one-way lines

Parameters:value (str or list[str]) – an array of strings representing the reverse one-way line
Returns:self
Return type:SSCCompilerParameter
set_two_way_rule_values(value)

Set an array of strings representing two-way traffic lines

Parameters:value (str or list[str]) – An array of strings representing two-way traffic lines
Returns:self
Return type:SSCCompilerParameter
set_weight_field(value)

Set weight field

Parameters:value (str) – weight field
Returns:self
Return type:SSCCompilerParameter
speed_field

str – Road speed field

t_node_id_field

str – The field that marks the ending node ID of the edge in the network dataset

tf_single_way_rule_values

list[str] – an array of strings representing reverse one-way lines

two_way_rule_values

list[str] – an array of strings representing two-way traffic lines

weight_field

str – weight field

class iobjectspy.analyst.TrackPoint(point, t, key=0)

Bases: object

Track coordinate point with time.

Parameters:
  • point (Point2D) – two-dimensional point coordinates
  • t (datetime.datetime) – Time value, indicating the time of the coordinate position.
  • key (int) – key value, used to identify point uniqueness
key

int – key value, used to identify the uniqueness of the point

point

Point2D – Two-dimensional point coordinates

set_key(value)

Set the key value to identify the uniqueness of the point

Parameters:value (int) – key value
Returns:self
Return type:TrackPoint
set_point(value)

Set location point

Parameters:value (Point2D or str) – position point
Returns:self
Return type:TrackPoint
set_time(value)

Set time value

Parameters:value (datetime.datetime or str) – time value, representing the time of the coordinate position
Returns:self
Return type:TrackPoint
time

datetime.datetime – time value, representing the time of the coordinate location point

class iobjectspy.analyst.TrajectoryPreprocessing

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Track preprocessing class. Used to handle abnormal points in trajectory data, including trajectory segmentation and the processing of offset points, duplicate points, sharp corners and other anomalies.

is_remove_redundant_points

bool – Whether to remove duplicate points with equal spatial positions

measurement_error

float – trajectory point error value

prj_coordsys

PrjCoordSys – coordinate system of the point to be processed

rectify(points)

Preprocess the track points and return the result.

Parameters:points (list[TrackPoint] or tuple[TrackPoint]) – Track point data to be processed.
Returns:The trajectory preprocessing result.
Return type:TrajectoryPreprocessingResult
rectify_dataset(source_dataset, id_field, time_field, split_time_milliseconds, out_data=None, out_dataset_name=None, result_track_index_field='TrackIndex')

Preprocess the trajectory of the dataset, and save the result as point data

Parameters:
  • source_dataset (DatasetVector or str) – original track point dataset
  • id_field (str) – ID field of the track. Track points with the same ID value belong to one track, for example a mobile phone number or license plate number. When no ID field is specified, all points in the dataset are treated as one track.
  • time_field (str) – The time field of the track points; must be a time or timestamp type field
  • split_time_milliseconds (float) – The time interval for splitting the track. If the time interval between two temporally adjacent points is greater than this value, the trajectory is split between those two points.
  • out_data (Datasource or str) – The datasource in which to save the result dataset
  • out_dataset_name (str) – result dataset name
  • result_track_index_field (str) – The field that stores the sub-track index. After splitting, one track may be divided into multiple sub-tracks; this field stores the index value of each sub-track, starting from 1. Because the result dataset keeps all the fields of the source track point dataset, ensure that the result_track_index_field name is not already occupied in the source dataset.
Returns:

The preprocessed result point dataset.

Return type:

DatasetVector
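The splitting rule can be sketched in plain Python: a track is cut wherever the gap between adjacent points exceeds split_time_milliseconds, and sub-track indexes start from 1. The helper below is illustrative, not library code:

```python
from datetime import datetime, timedelta

def split_track(points, split_ms):
    """Split a time-sorted list of (point, datetime) pairs into sub-tracks
    wherever the gap between adjacent points exceeds split_ms milliseconds.
    Mirrors rectify_dataset(): sub-track indexes start from 1."""
    gap = timedelta(milliseconds=split_ms)
    tracks, current, index = [], [], 1
    for pt, t in points:
        if current and t - current[-1][1] > gap:
            tracks.append((index, current))  # close the current sub-track
            index += 1
            current = []
        current.append((pt, t))
    if current:
        tracks.append((index, current))
    return tracks

t0 = datetime(2023, 1, 1, 8, 0, 0)
pts = [((0, 0), t0),
       ((0, 1), t0 + timedelta(seconds=10)),
       ((5, 5), t0 + timedelta(minutes=30)),   # big gap -> new sub-track
       ((5, 6), t0 + timedelta(minutes=30, seconds=10))]
tracks = split_track(pts, split_ms=60_000)     # split at gaps > 1 minute
print([(i, len(p)) for i, p in tracks])        # [(1, 2), (2, 2)]
```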

set_measurement_error(value)

Set the track point error value, such as a GPS error value, in meters. An appropriate error value should be specified according to the quality of the data. Track points that deviate beyond the error value will be processed.

(figure: MeasurementError.png)
Parameters:value (float) – track point error value
Returns:self
Return type:TrajectoryPreprocessing
set_prj_coordsys(value)

Set the coordinate system of the point to be processed

Parameters:value (PrjCoordSys) – the coordinate system of the point to be processed
Returns:self
Return type:TrajectoryPreprocessing
set_remove_redundant_points(value)

Set whether to remove duplicate points with equal spatial positions

(figure: RemoveRedundantPoints.png)
Parameters:value (bool) – Whether to remove duplicate points with equal spatial positions
Returns:self
Return type:TrajectoryPreprocessing
set_sharp_angle(value)

Set the sharp angle value, in degrees. When the angle formed by three unequal points within a continuous period of time is less than the specified sharp angle value, the middle point is corrected to the midpoint of the first and last points. When the value is less than or equal to 0, sharp corners are not processed.

(figure: SharpAngle.png)
Parameters:value (float) – sharp angle value
Returns:self
Return type:TrajectoryPreprocessing
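The correction rule can be sketched in plain Python (an illustrative helper, not library code): compute the angle at the middle point and, if it is below the threshold, replace that point with the midpoint of its neighbours:

```python
import math

def correct_sharp_angle(p1, p2, p3, sharp_angle_deg):
    """If the angle at p2 formed by p1-p2-p3 is smaller than
    sharp_angle_deg, replace p2 with the midpoint of p1 and p3,
    mirroring set_sharp_angle(). Returns the (possibly corrected) p2.
    A value <= 0 disables the correction, as documented."""
    if sharp_angle_deg <= 0:
        return p2
    v1 = (p1[0] - p2[0], p1[1] - p2[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to acos domain to guard against floating-point drift.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle < sharp_angle_deg:
        return ((p1[0] + p3[0]) / 2.0, (p1[1] + p3[1]) / 2.0)
    return p2

# A spike: the track doubles back almost on itself at (5, 5).
print(correct_sharp_angle((0, 0), (5, 5), (0.5, 0), 30))  # (0.25, 0.0)
```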
set_valid_region_dataset(value)

Set the effective surface. Only the points that fall within the effective surface are effective points.

Parameters:value (DatasetVector or str) – valid surface dataset
Returns:self
Return type:TrajectoryPreprocessing
sharp_angle

float – sharp angle value

valid_region_dataset

DatasetVector – valid surface dataset.

class iobjectspy.analyst.TransportationAnalystParameter

Bases: object

Traffic network analysis parameter setting class.

This class is mainly used to set the parameters of traffic network analysis. Through it you can set obstacle edges, obstacle points, the name of the weight field information, and the path points or nodes to analyze; you can also control whether the analysis result includes the following: the node collection, the edge collection, the route object collection, and the stop collection.
barrier_edges

list[int] – Barrier edge ID list

barrier_nodes

list[int] – Barrier node ID list

barrier_points

list[Point2D] – List of barrier points

is_edges_return

bool – Whether the passing edge is included in the analysis result

is_nodes_return

bool – Does the analysis result include passing nodes

is_path_guides_return

bool – Whether the analysis result contains driving guide

is_routes_return

bool – Whether the analysis result contains route (GeoLineM) objects

is_stop_indexes_return

bool – Whether to include the site index in the analysis result

nodes

list[int] – Analysis path point

points

list[Point2D] – Pass points during analysis

set_barrier_edges(edges)

Set the barrier edge ID list. Optional. The obstacle edge specified here and the obstacle edge specified in the traffic network analysis environment (TransportationAnalystSetting) work together to analyze the traffic network.

Parameters:edges (list[int] or tuple[int]) – list of obstacle edge IDs
Returns:self
Return type:TransportationAnalystParameter
set_barrier_nodes(nodes)

Set up a list of barrier node IDs. Optional. The obstacle node specified here and the obstacle node specified in the transportation network analysis environment (TransportationAnalystSetting) work together to analyze the traffic network.

Parameters:nodes (list[int] or tuple[int]) – Barrier node ID list
Returns:self
Return type:TransportationAnalystParameter
set_barrier_points(points)

Set the coordinate list of obstacle points. Optional. The specified obstacle points need not be on the network (neither on an edge nor on a node); the analysis will attribute each obstacle point to the nearest network location according to the distance tolerance (TransportationPathAnalystSetting.tolerance). Currently supported in best route analysis, closest facility search, traveling salesman analysis, and logistics distribution analysis.

Parameters:points (list[Point2D] or tuple[Point2D]) – list of coordinates of barrier nodes
Returns:self
Return type:TransportationAnalystParameter
set_edges_return(value=True)

Set whether the passing edge is included in the analysis result

Parameters:value (bool) – Specify whether the passed edges are included in the analysis result. If True, after the analysis succeeds the passed edges can be obtained from the TransportationAnalystResult.edges method of the TransportationAnalystResult object; if False, it returns None
Returns:self
Return type:TransportationAnalystParameter
set_nodes(nodes)

Set the analysis route nodes. Required, but mutually exclusive with the set_points() method. If both are set, only the last one set before the analysis takes effect. For example, if the node set is specified first and the coordinate point set afterwards, only the coordinate points are analyzed.

Parameters:nodes (list[int] or tuple[int]) – ID of passing node
Returns:self
Return type:TransportationAnalystParameter
set_nodes_return(value=True)

Set whether to include nodes in the analysis results

Parameters:value (bool) – Specify whether the passed nodes are included in the analysis result. If True, after the analysis succeeds the passed nodes can be obtained from the TransportationAnalystResult.nodes method of the TransportationAnalystResult object; if False, it returns None
Returns:self
Return type:TransportationAnalystParameter
set_path_guides_return(value=True)

Set whether to include the driving guide set in the analysis result.

Only when this method is set to True and the edge name field has been set via the TransportationPathAnalystSetting.set_edge_name_field() method will the driving guide set be included in the analysis result; otherwise the driving guide is not returned. This does not affect obtaining the other content of the analysis result.

Parameters:value (bool) – Whether to include driving guidance in the analysis result. If True, after the analysis succeeds the driving guide can be obtained from the TransportationAnalystResult.path_guides method of the TransportationAnalystResult object; if False, it returns None
Returns:self
Return type:TransportationAnalystParameter
set_points(points)

Set the collection of points the analysis passes through. Required, but mutually exclusive with the set_nodes() method. If both are set, only the last one set before the analysis takes effect. For example, if the node set is specified first and the coordinate point set afterwards, only the coordinate points are analyzed.

If a point in the set of waypoints is not within the range of the network dataset, that point does not participate in the analysis

Parameters:points (list[Point2D] or tuple[Point2D]) – passing points
Returns:self
Return type:TransportationAnalystParameter
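The mutual exclusion of set_nodes() and set_points() amounts to "the last setter called wins". A pure-Python mini class sketching that documented behavior (the class itself is hypothetical, not the library's implementation):

```python
class RoutingParams:
    """Mini stand-in for TransportationAnalystParameter illustrating the
    documented rule: set_nodes() and set_points() are mutually exclusive,
    and only the last one called before analysis takes effect."""
    def __init__(self):
        self._mode = None      # 'nodes' or 'points'
        self._values = None
    def set_nodes(self, node_ids):
        self._mode, self._values = 'nodes', list(node_ids)
        return self            # setters return self, enabling chaining
    def set_points(self, points):
        self._mode, self._values = 'points', list(points)
        return self
    def effective_waypoints(self):
        return self._mode, self._values

# set_nodes() first, then set_points(): only the coordinate points count.
p = RoutingParams().set_nodes([3, 7, 12]).set_points([(0, 0), (9, 9)])
print(p.effective_waypoints())  # ('points', [(0, 0), (9, 9)])
```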
set_routes_return(value=True)

Set whether the analysis result contains route (GeoLineM) objects

Parameters:value (bool) – Specify whether to include route objects. If True, after the analysis succeeds the route object can be obtained from the TransportationAnalystResult.route method of the TransportationAnalystResult object; if False, it returns None
Returns:self
Return type:TransportationAnalystParameter
set_stop_indexes_return(value=True)

Set whether to include the site index in the analysis results

Parameters:value (bool) – Specify whether to include the stop indexes in the analysis result. If True, after the analysis succeeds the stop indexes can be obtained from the TransportationAnalystResult.stop_indexes method of the TransportationAnalystResult object; if False, it returns None
Returns:self
Return type:TransportationAnalystParameter
set_weight_name(name)

Set the name of the weight field information. If not set, the name of the first weight field information object in the weight field information set will be used by default

Parameters:name (str) – the name identifier of the weight field information
Returns:self
Return type:TransportationAnalystParameter
weight_name

str – the name of the weight field information

class iobjectspy.analyst.TransportationAnalyst(analyst_setting)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Traffic network analysis class. This class is used to provide transportation network analysis functions such as path analysis, traveling salesman analysis, service area analysis, multiple traveling salesman (logistics distribution) analysis, nearest facility search and location analysis.

Traffic network analysis is an important part of network analysis and is based on the traffic network model. Unlike the facility network model, the transportation network is undirected: even if a direction is specified for a network edge, the medium travelling on it (a pedestrian or a transported resource) can decide its own direction, speed and destination.

Initialization object

Parameters:analyst_setting (TransportationAnalystSetting) – Traffic network analysis environment setting object
allocate(supply_centers, demand_type=AllocationDemandType.BOTH, is_connected=True, is_from_center=True, edge_demand_field='EdgeDemand', node_demand_field='NodeDemand', weight_name=None)

Resource allocation analysis. Resource allocation analysis simulates the supply-demand relationship of resources in a real-world network: based on the network resistance values, resources are gradually allocated from supply points to demand points (edges or nodes) so that each supply point serves its demand points in the most economical way. The demand point (edge or node) with the smallest resistance from the center point receives resources first, then the remaining resources are allocated to the demand point with the next smallest resistance, and so on, until the resources of the center point are exhausted and allocation stops.

Parameters:
  • supply_centers (list[SupplyCenter] or tuple[SupplyCenter]) – collection of resource supply centers
  • demand_type (AllocationDemandType or str) – resource allocation mode
  • is_connected (bool) – Whether the routes generated during the analysis must be connected. During resource allocation, the resources of one center point may be allowed to pass through the service range of other centers that have already finished allocating and continue on to further demand objects; setting this to False allows that, so the resulting routes may be disconnected. If set to True, when a center's allocation reaches an area already allocated to another center, allocation stops there, so surplus resources may accumulate at the center.

    For example, power transmission from a power grid does not allow crossover: the routes must be connected and cannot be broken. Allocating students to schools, by contrast, can be set to allow crossover distribution.

  • is_from_center (bool) – Whether to allocate resources starting from the resource supply center. Because an edge in the network data has both a forward and a reverse resistance value, which may differ, allocating from the supply center to the demand points and allocating from the demand points toward the supply center can give different analysis results.

    The following two application scenarios help illustrate the difference; assume the forward and reverse resistance values of the edges differ.

    -Allocating from the supply center to the demand points: if the resource centers are storage centers and the demand points are supermarkets, goods are transported from each storage center to the supermarkets it serves. This is allocation from the supply center, so set is_from_center to True during analysis.

    -Not allocating from the supply center: if the resource centers are schools and the demand points are residential areas, students travel from the residential areas to the schools. This is not allocation from the supply center, so set is_from_center to False during analysis.
Parameters:
  • edge_demand_field (str) – edge demand field. This field is the name of the field in the network dataset used to indicate the amount of resources required by the network edge as a demand site.
  • node_demand_field (str) – Node demand field. This field is the name of the field in the network dataset used to indicate the amount of resources required by the network node as the demand site.
  • weight_name (str) – the name of the weight field information
Returns:

resource allocation analysis result object

Return type:

list[AllocationAnalystResult]

analyst_setting

TransportationAnalystSetting – Traffic network analysis environment setting object

find_closest_facility(parameter, event_id_or_point, facility_count, is_from_event, max_weight)

Closest facility search and analysis according to the specified parameters; the event point is a node ID or coordinates.

Closest facility analysis finds, for a given event point and a group of facilities on the network, the one or several facilities that can be reached at the least cost; the result is the best path from the event to the facility (or from the facility to the event).

Facilities and events are the basic elements of closest facility search. Facilities provide services, such as schools, supermarkets and gas stations; event points are the locations of events that require the services of facilities.

For example, after a traffic accident at a certain location, find the 3 hospitals that can be reached within 10 minutes; hospitals that cannot be reached within 10 minutes are not considered. In this example the accident location is the event and the surrounding hospitals are the facilities.

(figure: FindClosestFacility.png)

There are two ways to specify event points: by coordinate point, or by node ID in the network dataset (i.e., treating a network node as the event point)

The facilities are specified in the parameter parameter of the TransportationAnalystParameter type. Through the TransportationAnalystParameter object There are two ways to specify passing points:

-Use the TransportationAnalystParameter.set_nodes() method of this object to specify facilities in the form of an array of node IDs in the network dataset,
Therefore, the facilities used in the analysis process are the corresponding network nodes;
-Use the object’s TransportationAnalystParameter.set_points() method to specify facilities in the form of coordinate point strings, so during the analysis process
The facilities used are the corresponding coordinate points.

Note

Incidents and facilities must be specified in the same form: both as coordinate points, or both as node IDs. When the event point is given as coordinates, the facilities must also be coordinate points, set through the TransportationAnalystParameter.set_points() method of the TransportationAnalystParameter object; when it is given as a node ID, set the facilities through TransportationAnalystParameter.set_nodes().

Parameters:
  • parameter (TransportationAnalystParameter) – Transportation network analysis parameter object.
  • event_id_or_point (int or Point2D) – event point coordinates or node ID
  • facility_count (int) – the number of facilities to find
  • is_from_event (bool) – whether to search from the event point to the facilities
  • max_weight (float) – search radius. The unit is the same as that of the resistance field set in the network analysis environment. To search the entire network, set the value to 0.
Returns:

Analysis results. The number of results equals the number of closest facilities found.

Return type:

list[TransportationAnalystResult]
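
The idea behind closest-facility search can be sketched in plain Python (this is a conceptual illustration, not the iobjectspy API): run a single-source shortest-path search from the event point, then keep the `facility_count` cheapest facilities whose cost stays within `max_weight`. The graph, node IDs and weights below are made-up illustration data.

```python
import heapq

def shortest_costs(graph, source):
    """Dijkstra: cost from `source` to every reachable node."""
    costs = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > costs.get(node, float("inf")):
            continue
        for nbr, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < costs.get(nbr, float("inf")):
                costs[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr))
    return costs

def find_closest_facilities(graph, event_node, facilities, facility_count, max_weight):
    costs = shortest_costs(graph, event_node)
    # Filter out facilities beyond the search radius (0 means unlimited).
    reachable = [(costs[f], f) for f in facilities
                 if f in costs and (max_weight == 0 or costs[f] <= max_weight)]
    reachable.sort()
    return reachable[:facility_count]

# Event at node 1; hospitals (facilities) at nodes 4, 5 and 6.
graph = {
    1: [(2, 3.0), (3, 5.0)],
    2: [(4, 4.0), (5, 2.0)],
    3: [(6, 1.0)],
}
closest = find_closest_facilities(graph, 1, [4, 5, 6], facility_count=2, max_weight=6.0)
```

Node 4 costs 7.0 and falls outside the 6.0 radius, so only nodes 5 and 6 are returned, cheapest first.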

find_critical_edges(start_node, end_node)

Critical edge query.

A critical edge is an edge that every path between the two given points must pass through.

Calling this interface returns the array of IDs of the edges that must be passed between the two points. If the return value is empty, there is no critical edge between them.

Parameters:
  • start_node (int) – analysis start node
  • end_node (int) – analysis end node
Returns:

Critical edge ID array.

Return type:

list[int]

find_critical_nodes(start_node, end_node)

Critical node query. A critical node is a node that every path between the two given points must pass through.

Calling this interface returns the array of IDs of the nodes that must be passed between the two points. If the return value is empty, there is no critical node between them.

Parameters:
  • start_node (int) – analysis start node
  • end_node (int) – analysis end node
Returns:

Critical node ID array.

Return type:

list[int]
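
The definition of critical nodes and edges can be illustrated with a small pure-Python sketch (not the iobjectspy API): an element is "critical" when every path between the two analysis points passes through it, so intersecting the node sets and edge sets of all simple paths yields the critical elements. The graph below is made-up illustration data.

```python
def all_simple_paths(graph, start, end, path=None):
    """Enumerate every simple (loop-free) path from start to end."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nbr in graph.get(start, []):
        if nbr not in path:
            yield from all_simple_paths(graph, nbr, end, path)

def critical_elements(graph, start, end):
    paths = list(all_simple_paths(graph, start, end))
    if not paths:
        return set(), set()
    node_sets = [set(p) - {start, end} for p in paths]
    edge_sets = [{(p[i], p[i + 1]) for i in range(len(p) - 1)} for p in paths]
    return set.intersection(*node_sets), set.intersection(*edge_sets)

# Two alternatives 1 -> 2 -> 4 and 1 -> 3 -> 4, then both merge through 4 -> 5.
graph = {1: [2, 3], 2: [4], 3: [4], 4: [5]}
nodes, edges = critical_elements(graph, 1, 5)
```

Node 4 and edge (4, 5) are unavoidable, so they are the critical node and critical edge; nodes 2 and 3 can each be bypassed.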

find_group(terminal_points, center_points, max_cost, max_load, weight_name=None, barrier_nodes=None, barrier_edges=None, barrier_points=None, is_along_road=False)

Group analysis.

Group analysis is based on the network analysis model. Each distribution point (TerminalPoint) is assigned to a center point according to two rules: the cost from the distribution point to the center point cannot exceed the maximum cost max_cost, and the load of each center point cannot exceed the maximum load max_load. Whenever a distribution point is assigned to a center point, the load of that center point's group increases by the load of the distribution point.

Parameters:
  • terminal_points (list[TerminalPoint]) – distribution point collection
  • center_points (list[Point2D]) – center point coordinate collection
  • max_cost (float) – maximum cost
  • max_load (float) – maximum load
  • weight_name (str) – the name of the weight field information
  • barrier_nodes (list[int] or tuple[int]) – barrier node ID list
  • barrier_edges (list[int] or tuple[int]) – barrier edge ID list
  • barrier_points (list[Point2D] or tuple[Point2D]) – barrier coordinate point list
  • is_along_road (bool) – whether to proceed along the road. If True, each distribution point first finds the nearest point on a road (possibly a projection point or an edge node) and searches from there for a suitable center point to cluster with. If False, each distribution point directly finds the nearest center point, and the small clusters formed around each center point are then merged.
Returns:

group analysis result

Return type:

GroupAnalystResult
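
The assignment rule described above can be sketched in plain Python (a conceptual illustration, not the iobjectspy API): each distribution point goes to the nearest center whose distance does not exceed max_cost and whose accumulated load stays within max_load. Euclidean distance stands in for network cost, and all coordinates and loads below are made-up illustration data.

```python
import math

def find_group(terminal_points, center_points, max_cost, max_load):
    """terminal_points: list of ((x, y), load); center_points: list of (x, y)."""
    loads = [0.0] * len(center_points)
    groups = {i: [] for i in range(len(center_points))}
    for t_idx, ((tx, ty), load) in enumerate(terminal_points):
        # Try centers from nearest to farthest.
        candidates = sorted(
            (math.hypot(tx - cx, ty - cy), c_idx)
            for c_idx, (cx, cy) in enumerate(center_points)
        )
        for dist, c_idx in candidates:
            if dist <= max_cost and loads[c_idx] + load <= max_load:
                loads[c_idx] += load
                groups[c_idx].append(t_idx)
                break  # assigned; unassignable points are simply skipped
    return groups, loads

terminals = [((0, 1), 2.0), ((0, 2), 2.0), ((9, 1), 3.0)]
centers = [(0, 0), (10, 0)]
groups, loads = find_group(terminals, centers, max_cost=5.0, max_load=4.0)
```

The first two points fill center 0 up to its maximum load of 4.0, and the third point joins center 1.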

find_location(supply_centers, expected_supply_center_number=0, is_from_center=True, weight_name=None)

Site selection and zoning analysis according to the given parameters. Location analysis determines the best locations for one or more facilities to be built, so that the facilities can provide services or goods to consumers in the most economical and effective way. It is not just a site selection process: the needs of the demand points are also allocated to the service areas of the corresponding facilities, which is why it is called site selection and zoning.

-Resource supply center and demand point

Resource supply center: the center point is a facility that provides resources and services. It corresponds to a network node. The relevant information of the resource supply center includes the maximum resistance value, the type of resource supply center, and the ID of the node where the resource supply center is located in the network.

Demand point: usually refers to the location where the services and resources provided by the resource supply center are needed, and it also corresponds to the network node.

The maximum resistance value is used to limit the cost of the demand point to the resource supply center. If the cost from the demand point to the resource supply center is greater than the maximum resistance value, the demand point is filtered out, that is, the resource supply center cannot serve the demand point.

There are three types of resource supply centers: non-center points, fixed center points and optional center points. A fixed center point is a service facility (acting as a resource supplier) that already exists in the network, has been built, or is certain to be established. An optional center point is a location where a service facility could be established; the facilities to be built are selected from among these optional center points. Non-center points do not participate in the analysis; in practice, these are locations where establishing a facility is not permitted or where other facilities already exist.

In addition, the demand points used in the analysis are all network nodes: except for the nodes corresponding to the various types of center points, every network node participates in the site selection and zoning analysis as a resource demand point. To exclude certain nodes, set them as barrier points.

-Whether to allocate resources from the resource supply center

The location zone can choose to allocate resources from the resource supply center or not from the resource supply center:

-Example of distribution (supply to demand) starting from the central point:

Electricity is generated at power stations and transmitted to customers through the grid. Here the power station is the center in the network model because it provides the power supply, and the customers are distributed along the lines of the grid (the edges in the network model). In this case, resources are transmitted through the network from the supplier to those who need them, achieving resource allocation.

-Example of not starting from the central point (demand to supply):

The relationship between schools and students also constitutes supply and demand in a network. The school is the resource supplier, providing places for school-age children; the children are the demand side, requiring admission. The school-age children, distributed along the street network, generate demand for the resource supplied by the school: student places.
  • Applications

    There are currently 3 primary schools in a certain area. According to demand, 3 more primary schools are planned in this area. Nine locations are shortlisted as candidates, and the 3 best will be chosen from them for the new schools. As shown in Figure 1, the 3 existing primary schools are fixed center points and the candidate locations are optional center points. The condition to be met by a new school is that residents of the residential areas must be able to walk to it within 30 minutes. The site selection and zoning analysis gives the best site locations under this condition and delimits the service area of each school, including those of the three existing schools. As shown in Figure 2, the optional center points numbered 5, 6 and 8 were finally selected as the best places to build new schools.

    Note: All the network nodes in the network dataset in the following two pictures are regarded as the residential areas in this area. All of them participate in the analysis of location selection. The number of residents in a residential area is the number of services required by the residential area.

    [Figure 1: FindLocation_1.png] [Figure 2: FindLocation_2.png]
Parameters:
  • supply_centers (list[SupplyCenter] or tuple[SupplyCenter]) – collection of resource supply centers
  • expected_supply_center_number (int) – the number of resource supply centers expected to be used in the final facility locations. When the value is 0, it defaults to the minimum number of supply centers required to cover the analysis area.
  • is_from_center (bool) –

    Whether to allocate resources starting from the resource supply centers. Edges in network data have forward and reverse resistance, and the two values may differ, so allocating resources from the supply centers to the demand points and allocating them from the demand points to the supply centers can produce different analysis results. The two application scenarios below illustrate the difference; assume that the forward and reverse resistance values of the edges in the network dataset differ.

    -Allocating resources from the resource supply centers to the demand points:

    When choosing locations for storage centers whose demand points are supermarkets, the goods in the warehouses are transported to the supermarkets they serve. Allocation runs from the resource supply center to the demand points, so set is_from_center to True in the analysis.

    -Not allocating resources from the resource supply centers:

    When choosing a location for a service organization such as a post office, bank or school, with residential areas as the demand points, the residents actively travel to the organization that serves them. Allocation does not run from the resource supply center, so set is_from_center to False in the analysis.
  • weight_name (str) – the name of the weight field information
Returns:

Location analysis result object

Return type:

LocationAnalystResult
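
The selection logic can be sketched with a small brute-force example in plain Python (a conceptual illustration, not the iobjectspy API): choose the combination of optional centers that, together with the fixed centers, serves every demand point within the maximum resistance, using the fewest optional centers. The cost matrix and names below are made-up illustration data.

```python
from itertools import combinations

def select_centers(cost, fixed, optional, demand, max_resistance):
    """cost[(center, demand_point)] -> travel cost; returns chosen optional centers."""
    def covered(centers):
        return all(
            any(cost[(c, d)] <= max_resistance for c in centers) for d in demand
        )
    # Try ever-larger subsets of optional centers; the first covering subset wins.
    for k in range(len(optional) + 1):
        for combo in combinations(optional, k):
            centers = list(fixed) + list(combo)
            if covered(centers):
                return sorted(combo)  # smallest k: fewest new facilities
    return None  # no combination can serve every demand point

cost = {
    ("F1", "d1"): 10, ("F1", "d2"): 40, ("F1", "d3"): 50,
    ("C1", "d1"): 35, ("C1", "d2"): 15, ("C1", "d3"): 45,
    ("C2", "d1"): 60, ("C2", "d2"): 50, ("C2", "d3"): 20,
}
chosen = select_centers(cost, fixed=["F1"], optional=["C1", "C2"],
                        demand=["d1", "d2", "d3"], max_resistance=30)
```

The fixed center F1 alone covers only d1, so both optional centers C1 and C2 must be built to serve all three demand points within a resistance of 30.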

find_mtsp_path(parameter, center_nodes_or_points, is_least_total_cost=False)

Multiple traveling salesmen (logistics distribution) analysis; the distribution centers are given as a coordinate point string or an array of node IDs.

Multiple-traveling-salesmen analysis, also called logistics distribution, means that given M distribution centers and N delivery destinations (M and N are positive integers) in the network dataset, the analysis finds cost-effective delivery routes and returns the corresponding paths. How to allocate the delivery orders and routes so that the total cost of distribution, or the cost of each distribution center, is minimized is the problem that logistics distribution solves.

There are two ways to specify the distribution center points: as a collection of coordinate points, or as an array of network node IDs.

The delivery destination is specified in the parameter parameter of the TransportationAnalystParameter type. There are two ways to specify the delivery destination through the TransportationAnalystParameter object:

-Use the object's TransportationAnalystParameter.set_nodes() method to specify the delivery destinations as an array of node IDs in the network dataset, so the destinations used during the analysis are the corresponding network nodes;

-Use the object's TransportationAnalystParameter.set_points() method to specify the delivery destinations as a string of coordinate points, so the destinations used during the analysis are the corresponding coordinate points.

Note

The distribution center points and the delivery destinations must be specified in the same form: both as coordinate points, or both as node IDs. If the centers are given as coordinate points, the destinations must also be set as coordinate points (and likewise for node IDs).

The result of the analysis gives, for each distribution center, the delivery destinations it is responsible for, the order in which to visit them, and the corresponding route, so that either the cost of each distribution center or the total cost of all centers is minimized. Each distribution center returns to its starting point after completing the deliveries it is responsible for.

Application example: there are 50 newspaper retail locations (delivery destinations) and 4 newspaper supply locations (distribution centers). Finding the optimal routes for the 4 supply locations to deliver newspapers to the retail locations is a logistics distribution problem.

The figure below shows the analysis results of newspaper distribution. The larger red dots represent the 4 newspaper supply locations (distribution centers), while the other smaller dots represent the newspaper retail locations (delivery destinations). The distribution plan of each distribution center is marked with different colors, including the distribution destination, distribution sequence and distribution route it is responsible for.

[Figure: MTSPPath_result1.png]

The figure below shows the delivery plan of distribution center No. 2, circled by the rectangle in the figure above. The small blue dots marked with numbers are the 18 delivery destinations that center No. 2 is responsible for. Center No. 2 delivers newspapers in the order of the marked numbers: first to retail location No. 1, then to retail location No. 2, and so on, following the blue route obtained from the analysis, and finally returns to the distribution center.

[Figure: MTSPPath_result2.png]

It should be noted that since the purpose of logistics distribution is to find a solution that minimizes the total cost of distribution or the cost of each distribution center, it is possible that some logistics distribution centers may not participate in the distribution in the analysis results.

Parameters:
  • parameter (TransportationAnalystParameter) – Transportation network analysis parameter object.
  • center_nodes_or_points (list[Point2D] or list[int]) – distribution center point coordinate string or node ID array
  • is_least_total_cost (bool) – whether to use the least-total-cost scheme. If True, delivery is arranged so that the total cost is minimized; in this case some distribution centers may bear much higher costs than others. If False, a locally optimal scheme is used: the cost of each distribution center is kept relatively even, and the total cost may not be the smallest.
Returns:

Multi-travel salesman analysis results, the number of results is the number of central points participating in the distribution.

Return type:

list[TransportationAnalystResult]
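
The overall shape of the problem can be sketched in plain Python with a simple heuristic (a conceptual illustration, not the iobjectspy API or its actual algorithm): assign each destination to its nearest distribution center, then serve each center's destinations with a nearest-neighbor tour that returns home. Plain Euclidean distance and the coordinates below are made-up illustration data.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def find_mtsp_routes(centers, destinations):
    # 1. Assign each destination to the cheapest (nearest) center.
    assigned = {i: [] for i in range(len(centers))}
    for d in destinations:
        best = min(range(len(centers)), key=lambda i: dist(centers[i], d))
        assigned[best].append(d)
    # 2. Nearest-neighbor tour per center, ending back at the center.
    routes = {}
    for i, stops in assigned.items():
        route, pos, todo = [centers[i]], centers[i], list(stops)
        while todo:
            nxt = min(todo, key=lambda p: dist(pos, p))
            todo.remove(nxt)
            route.append(nxt)
            pos = nxt
        route.append(centers[i])  # return to the distribution center
        routes[i] = route
    return routes

centers = [(0, 0), (10, 0)]
dests = [(1, 0), (2, 0), (9, 0)]
routes = find_mtsp_routes(centers, dests)
```

Center 0 serves the two nearby destinations in order and returns; center 1 serves the remaining one. A center with no assigned destinations would simply get the trivial route, mirroring the note above that some centers may not participate.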

find_path(parameter, is_least_edges=False)

Best path analysis. The problem it solves is: given N points (N is greater than or equal to 2) in the network dataset, find the least-cost path that passes through the N points in the given order. "Least cost" has many interpretations, such as the shortest time, the lowest cost, the best scenery, the best road conditions, the fewest bridges, the fewest toll stations, or passing through the most villages.

[Figure: FindPath.png]

The passing points of the best path analysis are specified in the parameter argument of the TransportationAnalystParameter type. The TransportationAnalystParameter object provides two ways to specify them:

-Use the object's TransportationAnalystParameter.set_nodes() method to specify the passing points as an array of node IDs in the network dataset, so the points passed during the analysis are the corresponding network nodes;

-Use the object's TransportationAnalystParameter.set_points() method to specify the passing points as a string of coordinate points, so the points passed during the analysis are the corresponding coordinate points.

In addition, through the TransportationAnalystParameter object, you can also specify other information required for optimal path analysis, such as obstacle points (edges), whether the analysis results include routes, Driving guidance, passing edges or nodes, etc. For details, see the TransportationAnalystParameter class.

It should be noted that traveling salesman analysis (the find_tsp_path() method) is similar to best path analysis: both find the least-cost path through all passing points in the network. The clear difference between the two lies in the order in which the passing points are visited:

-Best path analysis: the points must be visited in the given order;

-Traveling salesman analysis: the optimal visiting order is determined by the analysis itself, not necessarily the given order.
Parameters:
  • parameter (TransportationAnalystParameter) – Transportation network analysis parameter object
  • is_least_edges (bool) –

    Whether to use the fewest number of edges. True means the query favors the path with the fewest edges. Since fewer edges does not imply shorter edges, the result may not be the shortest path. As shown in the figure below, if the green path connecting A and B has fewer edges than the yellow path, the query returns the green path when this parameter is True and the yellow path when it is False.

    [Figure: hasLeastEdgeCount.png]
Returns:

Best path analysis result

Return type:

TransportationAnalystResult
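
The is_least_edges trade-off described above can be shown with a tiny pure-Python sketch (a conceptual illustration, not the iobjectspy API): between the same two nodes, the path with the fewest edges is not necessarily the shortest one, so the flag decides which criterion wins. The two candidate paths below are made-up illustration data matching the green/yellow figure.

```python
# Two candidate paths between A and B: few long edges vs many short edges.
paths = {
    "green": {"edges": 2, "length": 12.0},   # fewest edges, longer overall
    "yellow": {"edges": 5, "length": 8.0},   # more edges, shorter overall
}

def pick_path(paths, is_least_edges):
    """Select by edge count when is_least_edges, otherwise by total length."""
    key = (lambda p: paths[p]["edges"]) if is_least_edges else (lambda p: paths[p]["length"])
    return min(paths, key=key)

fewest = pick_path(paths, is_least_edges=True)
shortest = pick_path(paths, is_least_edges=False)
```

With is_least_edges the green path wins on edge count even though the yellow path is shorter.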

find_service_area(parameter, weights, is_from_center, is_center_mutually_exclusive=False, service_area_type=ServiceAreaType.SIMPLEAREA)

Service area analysis. A service area is a region centered on a designated point that covers all edges and nodes reachable within a given resistance range. Service area analysis finds, for each location on the network that provides a service (the center point), the region it can serve given a resistance value (the service radius). Resistance can be travel time, distance, or any other cost. For example, the 30-minute service area of a point on the network is the region within which any location can reach that point in no more than 30 minutes.

The result of service area analysis includes the routes and the area each service center can serve. The routes are the paths extending from the service center whose resistance does not exceed the specified service radius; the service area is the region formed by enclosing those routes according to a certain algorithm. In the figure below, the red dots are the service centers providing services or resources; the colored areas are the service areas centered on the corresponding centers within the given resistance range, and the routes served by each center are drawn in the matching color.

[Figure: FindServiceArea_1.png]

-Service center

There are two ways to specify the service center points through the TransportationAnalystParameter object:

-Use the TransportationAnalystParameter.set_nodes() method to specify the service center points as an array of node IDs in the network dataset, so the center points used during the analysis are the corresponding network nodes.

-Use the TransportationAnalystParameter.set_points() method to specify the service center points as a string of coordinate points, so the center points used during the analysis are the corresponding coordinate points.

-Whether to analyze from the central point

Whether to start the analysis from the center points reflects the relationship between the service center and the places that need its service. Analyzing from the center point means the service center delivers services outward; not analyzing from the center point means the demand side actively travels to the service center to obtain the service. For example, a milk station delivers milk to residential areas; to analyze the range the station can serve under given conditions, use the analyze-from-center mode. By contrast, to analyze the area a school can serve, note that in reality students travel to the school to receive its services, so the mode that does not start from the center point should be used.

-Mutually exclusive service areas

If two or more adjacent service areas intersect, they can be made mutually exclusive, after which the service areas no longer overlap. In the figure below, the left image shows the result without mutual exclusion and the right image shows the result with it.

[Figure: FindServiceArea_2.png]
Parameters:
  • parameter (TransportationAnalystParameter) – Transportation network analysis parameter object.
  • weights (list[float]) – array of service radii. The length of the array should equal the number of service center points, and the elements correspond to the center points one-to-one, in order. The unit of the service radius is the same as that of the resistance (weight) field specified in the analysis parameters.
  • is_from_center (bool) – Whether to start the analysis from the center point.
  • is_center_mutually_exclusive (bool) – whether to perform mutual exclusion processing on the service areas. True performs mutual exclusion; False does not.
  • service_area_type (ServiceAreaType or str) – service area type
Returns:

Service area analysis result

Return type:

list[ServiceAreaResult]
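
The core of service-area analysis can be sketched in plain Python (a conceptual illustration, not the iobjectspy API): starting from a service center, collect every node whose accumulated network cost stays within the service radius; those nodes and the edges reaching them form the served routes. The graph and weights below are made-up illustration data.

```python
import heapq

def service_nodes(graph, center, radius):
    """Dijkstra from `center`, pruned at `radius`: node -> cost within range."""
    costs = {center: 0.0}
    heap = [(0.0, center)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > costs.get(node, float("inf")):
            continue
        for nbr, w in graph.get(node, []):
            c = cost + w
            # Only expand neighbors still inside the service radius.
            if c <= radius and c < costs.get(nbr, float("inf")):
                costs[nbr] = c
                heapq.heappush(heap, (c, nbr))
    return costs

graph = {1: [(2, 10.0), (3, 25.0)], 2: [(4, 10.0)], 4: [(5, 15.0)]}
area = service_nodes(graph, center=1, radius=30.0)
```

Node 5 costs 35.0 from the center and falls outside the 30.0 radius, so the service area covers nodes 1 through 4 only.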

find_tsp_path(parameter, is_end_node_assigned=False)

Traveling salesman analysis.

Traveling salesman analysis finds a path through a specified series of points, but it is an unordered path analysis: the traveling salesman decides the visiting order of the points, and the goal is to minimize (or come close to minimizing) the total impedance of the route.

The passing point of the traveling salesman analysis is specified in the parameter parameter of the TransportationAnalystParameter type. There are two ways to specify passing points through the TransportationAnalystParameter object:

-Use the object's TransportationAnalystParameter.set_nodes() method to specify the passing points as an array of node IDs in the network dataset, so the points passed during the analysis are the corresponding network nodes;

-Use the object's TransportationAnalystParameter.set_points() method to specify the passing points as a string of coordinate points, so the points passed during the analysis are the corresponding coordinate points.

Note that this method always treats the first point (node or coordinate point) in the given set of passing points as the traveling salesman's starting point. The user may also specify the end point (via the is_end_node_assigned parameter); in that case the last point in the set is the end point, the salesman starts from the first point and finishes at the specified end, and the visiting order of the remaining points is determined by the analysis.

[Figure: FindTSPPath.png]

If the end point is specified, it may coincide with the starting point, that is, the last point in the set may equal the first. The result of the traveling salesman analysis is then a closed path that starts from the starting point and finally returns to it.

[Figure: FindTSPPath_1.png]

NOTE: when the end point is specified (the is_end_node_assigned parameter is True), the first and last points of the passing-point set may be the same or different, but no other duplicate points are allowed, otherwise the analysis fails. When the end point is not specified, no duplicate points are allowed at all; if duplicates exist, the analysis fails.

It should be noted that best path analysis (the find_path() method) is similar to traveling salesman analysis: both find the least-cost path through all passing points in the network. The clear difference between the two lies in the order in which the passing points are visited:

-Best path analysis: the points must be visited in the given order;

-Traveling salesman analysis: the optimal visiting order is determined by the analysis itself, not necessarily the given order.
Parameters:
  • parameter (TransportationAnalystParameter) – Transportation network analysis parameter object.
  • is_end_node_assigned (bool) – whether to specify the end point. True means the end point is specified and the last point in the given set of passing points is the end point; otherwise no end point is specified.
Returns:

Traveling salesman analysis result

Return type:

TransportationAnalystResult
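
The fixed-start/optional-fixed-end behavior can be sketched with a brute-force example in plain Python (a conceptual illustration of the problem, not the iobjectspy API or its algorithm): the first point is always the start; if the end is assigned, the last point is pinned too and only the middle points are permuted. The symmetric cost table `leg` below is made-up illustration data.

```python
from itertools import permutations

def tsp_order(points, leg, is_end_node_assigned=False):
    """Return the cheapest visiting order; leg maps frozenset({a, b}) -> cost."""
    start = points[0]
    if is_end_node_assigned:
        middle, tail = points[1:-1], [points[-1]]  # last point is pinned
    else:
        middle, tail = points[1:], []

    def cost(order):
        route = [start] + list(order) + tail
        return sum(leg[frozenset(route[i:i + 2])] for i in range(len(route) - 1))

    best = min(permutations(middle), key=cost)
    return [start] + list(best) + tail

leg = {frozenset(p): c for p, c in
       [(("A", "B"), 1.0), (("A", "C"), 4.0), (("B", "C"), 2.0)]}
route = tsp_order(["A", "C", "B"], leg)                             # free end
fixed = tsp_order(["A", "C", "B"], leg, is_end_node_assigned=True)  # must end at B
```

With a free end the cheaper order A-B-C (cost 3.0) is chosen; when the end is pinned to B, the route must be A-C-B (cost 6.0) even though it is more expensive.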

find_vrp(parameter, vehicles, center_nodes_or_points, demand_points)
Logistics distribution (vehicle routing) analysis. Compared with the earlier logistics distribution interface find_mtsp_path(), this interface adds settings for vehicle information, demand and so on, which more fully covers different situations.

The logistics distribution analysis parameter object VRPAnalystParameter can set barrier edges, barrier points, weight field information and the name of the turn weight field, and can control which content the analysis results include: the node collection, edge collection, route collection and stop collection.

The vehicle information object VehicleInfo can set each vehicle's load, maximum cost and other constraints.

The center point information center_nodes_or_points contains the coordinates or node IDs of the centers; the demand point information demand_points contains the coordinates or node IDs of the demand points, together with their respective demands.

By setting relevant information about vehicles, demand points and central points, the interface can reasonably divide routes according to these conditions and complete corresponding assignment tasks.

Parameters:
  • parameter (VRPAnalystParameter) – logistics distribution analysis parameter object
  • vehicles (list[VehicleInfo] or tuple[VehicleInfo]) – array of vehicle information
  • center_nodes_or_points (list[int] or list[Point2D] or tuple[int] or tuple[Point2D]) – center point information array
  • demand_points (list[DemandPointInfo] or tuple[DemandPointInfo]) – demand point information array
Returns:

logistics delivery result

Return type:

list[VRPAnalystResult]
load()

Load the network model according to the environment parameters in the TransportationAnalystSetting object. After the traffic network analysis environment parameters are set or modified, this method must be called for the settings to take effect in subsequent traffic network analyses.

Returns: True if loading is successful, otherwise False
Return type: bool
set_analyst_setting(analyst_setting)

Set the traffic network analysis environment setting object. Before performing any traffic network analysis, the traffic network analysis environment must be set first.

Parameters:analyst_setting (TransportationAnalystSetting) – Traffic network analysis environment setting object
Returns:self
Return type:TransportationAnalyst
update_edge_weight(edge_id, from_node_id, to_node_id, weight_name, weight)
Update the weight of an edge. This method modifies the edge weights of the network model loaded in memory; it does not modify the network dataset.

This method can update either the forward weight or the reverse weight of an edge. The forward weight is the cost from the edge's start node to its end node; the reverse weight is the cost from the end node to the start node. To update the forward weight, pass the edge's start node ID in the network dataset as from_node_id and its end node ID as to_node_id; to update the reverse weight, pass the end node ID as from_node_id and the start node ID as to_node_id.

Note that a negative weight means that the edge is prohibited from passing in this direction.

param int edge_id:
 The ID of the edge being updated
param int from_node_id:
 The starting node ID of the edge to be updated.
param int to_node_id:
 End node ID of the edge to be updated.
param str weight_name:
 The name of the weight field information object to which the weight field to be updated belongs
param float weight:
 weight, that is, use this value to update the old value. The unit is the same as the unit of the weight field in the weight information field object specified by weight_name.
return:The weight before the update on success; -1.7976931348623157e+308 on failure
rtype:float
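The forward/reverse convention above can be illustrated with a plain-Python sketch. This is hypothetical in-memory data, not the iobjectspy API: a weight table keyed by (edge_id, from_node_id, to_node_id), where an update returns the old weight, or the documented failure value when no such directed weight exists.

```python
# Hypothetical in-memory edge-weight table; not the iobjectspy API.
# Key: (edge_id, from_node_id, to_node_id); value: weight.
FAILURE = -1.7976931348623157e+308  # value update_edge_weight returns on failure

weights = {
    (1, 10, 11): 5.0,   # forward weight of edge 1 (digitized 10 -> 11)
    (1, 11, 10): 7.0,   # reverse weight of edge 1
}

def update_edge_weight(edge_id, from_node_id, to_node_id, weight):
    """Update one directed weight; return the old weight, or FAILURE."""
    key = (edge_id, from_node_id, to_node_id)
    if key not in weights:
        return FAILURE
    old = weights[key]
    weights[key] = weight
    return old

old = update_edge_weight(1, 10, 11, 6.5)     # forward direction: returns 5.0
blocked = update_edge_weight(1, 11, 10, -1)  # negative weight forbids travel 11 -> 10
```

Note how a negative new weight models the documented "prohibited direction" rule while the return value still reports the previous weight.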
class iobjectspy.analyst.TransportationAnalystResult(java_object, index)

Bases: object

Traffic network analysis result class.

This class is used to return the route set of the analysis result, the sets of nodes and edges passed, the set of driving guidance, the set of stations and weights, and the cost of each station. Through this class, the results of analyses such as optimal route analysis, traveling salesman analysis, logistics distribution, and closest facility search can be obtained flexibly.

edges

list[int] – Return the set of edges of the analysis result. Note that the TransportationAnalystParameter.set_edges_return() method of the TransportationAnalystParameter object must be set to True for the analysis result to include the set of passing edges; otherwise None is returned.

nodes

list[int] – Return the set of path nodes of the analysis result. Note that the TransportationAnalystParameter.set_nodes_return() method of the TransportationAnalystParameter object must be set to True for the analysis result to include the set of passing nodes; otherwise it will be an empty array.

path_guides

PathGuide – Return the driving guide. Note that the TransportationAnalystParameter.set_path_guides_return() method of the TransportationAnalystParameter object must be set to True for the analysis result to include the driving guidance; otherwise it will be an empty array.

route

GeoLineM – Return the route object of the analysis result. Note that the TransportationAnalystParameter.set_routes_return() method of the TransportationAnalystParameter object must be set to True for the analysis result to include the route object; otherwise None is returned.

stop_indexes

list[int] – Return the station indexes; this array reflects the order of the stations after the analysis. Note that the TransportationAnalystParameter.set_stop_indexes_return() method of the TransportationAnalystParameter object must be set to True for the analysis result to include the station indexes; otherwise it will be an empty array.

In different analyses, the meaning of the return value of this method is different:

-Best path analysis (TransportationAnalyst.find_path() method):

-Node mode: For example, if three analysis nodes with IDs 1, 3, 5 are set, the order of the result path must be 1, 3, 5, so the element values are 0, 1, 2, that is, the index of each station in the result path within the initially set node sequence.

-Coordinate point mode: If the analysis coordinate points are set as Pnt1, Pnt2, Pnt3, the order of the result path must be Pnt1, Pnt2, Pnt3, so the element values are 0, 1, 2, that is, the index of each coordinate point in the result path within the initially set coordinate point sequence.

-Traveling salesman analysis (TransportationAnalyst.find_tsp_path() method):

-Node mode: If three analysis nodes with IDs 1, 3, 5 are set, and the result path order is 3, 5, 1, then the element values are 1, 2, 0 in turn, that is, the index of each station in the result path within the initially set node sequence.

-Coordinate point mode: If the analysis coordinate points are set as Pnt1, Pnt2, Pnt3, and the result path order is Pnt2, Pnt3, Pnt1, the element values are 1, 2, 0 in turn, that is, the index of each coordinate point in the result path within the initially set coordinate point sequence.

-Multiple traveling salesman analysis (TransportationAnalyst.find_mtsp_path() method):

The meaning of the elements is the same as in traveling salesman analysis: they represent the order in which the delivery route of the corresponding center point passes through the stations. Note that in the locally optimal distribution mode, all center points participate in the distribution; in the least-total-cost mode, the number of center points participating in the distribution may be less than the specified number.

-For the nearest facility search analysis (TransportationAnalyst.find_closest_facility() method), this method is invalid.
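The index semantics above can be checked with a small plain-Python sketch. This is illustrative only, not the iobjectspy API: given the initially set stops and the visiting order produced by the analysis, each element is the position of the visited stop in the initial sequence.

```python
# Illustrative only: reproduce the stop_indexes convention described above.
def stop_indexes(initial_stops, visit_order):
    """For each stop in visit order, return its index in the initial sequence."""
    return [initial_stops.index(stop) for stop in visit_order]

# Best path analysis: visiting order equals the initial order.
best_path = stop_indexes([1, 3, 5], [1, 3, 5])  # [0, 1, 2]

# Traveling salesman analysis: nodes 1, 3, 5 are visited as 3, 5, 1.
tsp = stop_indexes([1, 3, 5], [3, 5, 1])        # [1, 2, 0]
```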

stop_weights

list[float] – Return the cost (weight) between stations after sorting the stations by station index. This method returns the cost between stations; a station here refers to an analysis node or coordinate point, not every node or coordinate point that the path passes through. The order of the stations associated with the returned weights is the same as the order of the station index values returned by stop_indexes, but note the subtle differences between analysis functions. For example:

-Best path analysis (TransportationAnalyst.find_path() method): Suppose passing points 1, 2, and 3 are specified; then the elements are: cost from 1 to 2, cost from 2 to 3;

-Traveling salesman analysis (TransportationAnalyst.find_tsp_path() method): Suppose passing points 1, 2, and 3 are specified and the station index in the analysis result is 1, 0, 2; then the elements are: cost from 2 to 1, cost from 1 to 3;

-Multiple traveling salesman analysis (TransportationAnalyst.find_mtsp_path() method): that is, logistics distribution. The elements are the costs between the stations that the route passes through. Note that the stations passed by a multiple traveling salesman route include the center point, and the start and end points of the route are center points. For example, if a result route starts from center point 1 and passes through stations 2, 3, and 4, with corresponding station indexes 1, 2, 0, the station weights are: cost from 1 to 3, cost from 3 to 4, cost from 4 to 2, and cost from 2 to 1.

-For the nearest facility search analysis (TransportationAnalyst.find_closest_facility() method), this method is invalid.

weight

float – the weight spent.

class iobjectspy.analyst.MapMatching(path_analyst_setting=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Map matching based on an HMM (Hidden Markov Model). Trajectory points are grouped by the identification field, sorted and split by the time field, and the most likely route of each trajectory is found. The purpose is to restore the real path from the track points.

Initialization object

Parameters:path_analyst_setting (PathAnalystSetting) – Best path analysis parameters
batch_match(points)

Batch map matching: input a series of points for map matching. Note that this method assumes all input points belong to the same track line.

Parameters:points (list[TrackPoint] or tuple[TrackPoint]) – track points to be matched
Returns:Map matching result
Return type:MapMatchingResult
batch_match_dataset(source_dataset, id_field, time_field, split_time_milliseconds, out_data=None, out_dataset_name=None, result_track_index_field='TrackIndex')

Map the dataset and save the result as point data

Parameters:
  • source_dataset (DatasetVector or str) – original track point dataset
  • id_field (str) – ID field of the track. Track points with the same ID value belong to a track, such as mobile phone number, license plate number, etc. When no ID field is specified, all points in the dataset will be classified as a track.
  • time_field (str) – The time field of the track point, must be a time or timestamp type field
  • split_time_milliseconds (float) – The time interval for splitting the track. If the time interval between two adjacent points in time is greater than the specified time interval for splitting the track, the track will be split between the two points.
  • out_data (Datasource or str) – The datasource to save the result dataset
  • out_dataset_name (str) – result dataset name
  • result_track_index_field (str) – The field that saves the track index. After splitting, a track may be divided into multiple sub-tracks; result_track_index_field stores the index value of the sub-track, starting from 1. Because the result dataset keeps all fields of the source track point dataset, make sure the field name given by result_track_index_field is not already used in the source track point dataset.
Returns:

The result track point dataset, correctly matched to the track point on the road.

Return type:

DatasetVector

match(point, is_new_track=False)

Real-time map matching. Real-time map matching inputs only one track point at a time, but the information from matching previous track points is retained and used for the current match. Real-time map matching may return multiple results; the number of results is determined by the roads that the current point can be matched to, and the results are arranged in descending order of likelihood. Before the whole trajectory has been matched, the returned results only reflect the possible matches of the current track point and the previous points.

When is_new_track is True, it means that a new track will be opened, the previous matching records and information will be cleared, and the current point will be the first point of the track.

Parameters:
  • point (TrackPoint) – track point to be matched
  • is_new_track (bool) – Whether to open a new track
Returns:

real-time map matching

Return type:

list[MapMatchingLikelyResult]

max_limited_speed

float – maximum speed limit

measurement_error

float – track point error value

path_analyst_setting

PathAnalystSetting – Optimal path analysis parameter

set_max_limited_speed(value)

Set the maximum speed limit. The unit is km/h. When the calculated speed value of two adjacent points is greater than the specified speed limit, the two points are considered unreachable, that is, there is no effective road connecting. The default value is 150 km/h.

Parameters:value (float) –
Returns:self
Return type:MapMatching
set_measurement_error(value)

Set the track point error value. For example, the GPS error value, in meters. If the distance from the track point to the nearest road exceeds the error value, the track point is considered illegal. Setting a reasonable error value has a direct impact on the result of map matching. If the accuracy of the obtained track points is high, setting a smaller value can effectively improve the performance, for example, 15 meters. The default value is 30 meters.

Parameters:value (float) – track point error value
Returns:self
Return type:MapMatching
set_path_analyst_setting(path_analyst_setting)

Set optimal path analysis parameters

Parameters:path_analyst_setting (PathAnalystSetting) – Best path analysis parameters
Returns:self
Return type:MapMatching
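As a rough illustration of the HMM idea behind MapMatching (not the library's implementation), a minimal Viterbi pass chooses, for each track point, the candidate road whose combined emission probability (closeness of the point to the road) and transition probability (road connectivity) is highest. All roads, points, and probabilities below are made up for the sketch.

```python
# Minimal Viterbi sketch of the HMM idea behind map matching; illustrative
# only, not the iobjectspy implementation. States are candidate roads,
# observations are track points.
def viterbi(points, roads, emission, transition):
    """Return the most likely road sequence for a list of track points.

    emission(point, road) -> P(point observed | vehicle on road)
    transition(r1, r2)    -> P(moving from road r1 to road r2)
    """
    prob = {r: emission(points[0], r) for r in roads}  # best prob ending on r
    back = []                                          # backpointers per step
    for p in points[1:]:
        new_prob, pointers = {}, {}
        for r in roads:
            best_prev = max(roads, key=lambda q: prob[q] * transition(q, r))
            new_prob[r] = prob[best_prev] * transition(best_prev, r) * emission(p, r)
            pointers[r] = best_prev
        back.append(pointers)
        prob = new_prob
    last = max(prob, key=prob.get)    # most likely final road
    path = [last]
    for pointers in reversed(back):   # trace back to the start
        path.append(pointers[path[-1]])
    return list(reversed(path))

# Toy example: two parallel roads; the points drift from road A to road B.
emis = {("p1", "A"): 0.9, ("p1", "B"): 0.1,
        ("p2", "A"): 0.6, ("p2", "B"): 0.4,
        ("p3", "A"): 0.1, ("p3", "B"): 0.9}
route = viterbi(["p1", "p2", "p3"], ["A", "B"],
                emission=lambda p, r: emis[(p, r)],
                transition=lambda r1, r2: 0.8 if r1 == r2 else 0.2)
# route == ["A", "A", "B"]
```

The "sticky" transition probability (staying on the same road is more likely than switching) is what makes the matched route smoother than assigning each point to its nearest road independently.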
class iobjectspy.analyst.MapMatchingResult(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The map matching result class. Including the trajectory point, trajectory line, edge id, matching accuracy rate, error rate, etc. obtained after matching.

edges

list[int] – ID of the edge that each matching path passes

evaluate_error(truth_edges)

Evaluate the error rate of the current map matching results. Input the real road line objects and calculate the error rate of the matching result.

Parameters:truth_edges (list[GeoLine] or tuple[GeoLine]) – real road line objects
Returns:the error rate of the current result
Return type:float
evaluate_truth(truth_edges)

Evaluate the correctness of the current map matching results. Input the real road line object and calculate the correctness of the matching result.

The formula for calculating correctness is:

[figure: evaluationTruth.png]
Parameters:truth_edges (list[GeoLine] or tuple[GeoLine]) – real road line objects
Returns:the correctness of the current result
Return type:float
rectified_points

list[Point2D] – The trajectory points after map matching, which corresponds to the points processed by each input point. The size of the array is equal to the number of input points.

track_line

GeoLine – The result track line object. The line object constructed by track_points.

[figure: MapMatchingResult.png]
track_points

list[Point2D] – The track point string obtained after map matching, the track point string removes duplicate points and some points that failed to match

class iobjectspy.analyst.MapMatchingLikelyResult(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Real-time map matching result class

distance_to_road

float – the closest distance from the original track point to the current track path.

edges

list[int] – ID of the edge that the matching path passes through

evaluate_error(truth_edges)

Evaluate the error rate of the current map matching results. Input the real road line objects and calculate the error rate of the matching result.

Parameters:truth_edges (list[GeoLine] or tuple[GeoLine]) – real road line objects
Returns:the error rate of the current result
Return type:float
evaluate_truth(truth_edges)

Evaluate the correctness of the current map matching results. Input the real road line object and calculate the correctness of the matching result.

The formula for calculating correctness is:

[figure: evaluationTruth.png]
Parameters:truth_edges (list[GeoLine] or tuple[GeoLine]) – real road line objects
Returns:the correctness of the current result
Return type:float
probability

float – During real-time map matching, when determining which road the current point belongs to, the algorithm generates a matching probability value from this point to every possible nearby road and selects the road with the highest probability as the matched road for the point.

rectified_point

Point2D – The track point after map matching, corresponding to the current input point.

track_line

GeoLine – The result track line object. The line object constructed by track_points.

track_points

list[Point2D] – The track point string obtained after map matching, the track point string removes duplicate points and some points that failed to match

class iobjectspy.analyst.TrajectoryPreprocessingResult(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Trajectory preprocessing result class

rectified_points

list[Point2D] – The processed trajectory points, which correspond to the points processed by each input point. The size of the array is equal to the number of input points.

track_line

GeoLine – The trajectory line generated by the processed trajectory points

track_points

list[Point2D] – The track points obtained after processing. For example, the remaining track points after removing all duplicate points

iobjectspy.analyst.split_track(track_points, split_time_milliseconds)

The trajectory is divided, and the trajectory is divided into segments according to the time interval.

[figure: splitTrack.png]
Parameters:
  • track_points (list[TrackPoint] or tuple[TrackPoint]) – track point string
  • split_time_milliseconds (float) – Time interval value, in milliseconds. When the time interval between two consecutive points is greater than the specified time interval value, the trajectory will be divided from the two points
Returns:

result track segment

Return type:

list[list[TrackPoint]]
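The splitting rule can be sketched in plain Python. This is illustrative only, not the iobjectspy implementation; the sketch uses bare millisecond timestamps in place of TrackPoint objects.

```python
# Illustrative sketch of the split rule used by split_track: start a new
# segment whenever the gap between consecutive timestamps exceeds
# split_time_milliseconds.
def split_track(timestamps, split_time_milliseconds):
    segments = [[timestamps[0]]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > split_time_milliseconds:
            segments.append([])   # gap too large: split between the two points
        segments[-1].append(cur)
    return segments

# Points at 0 s, 10 s, 20 s, 95 s, 100 s with a 60 s (60000 ms) threshold:
parts = split_track([0, 10000, 20000, 95000, 100000], 60000)
# parts == [[0, 10000, 20000], [95000, 100000]]
```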

iobjectspy.analyst.build_facility_network_directions(network_dataset, source_ids, sink_ids, ft_weight_field='SmLength', tf_weight_field='SmLength', direction_field='Direction', node_type_field='NodeType', progress=None)

Create flow directions for the network dataset based on the locations of the sources and sinks in the specified network dataset. A network dataset with flow directions created can be used for various facility network analyses. A facility network is a directed network; therefore, after creating a network dataset, flow directions must be created for it before it can be used for facility network path analysis, connectivity analysis, upstream and downstream tracing, etc.

Flow direction refers to the direction of resource flow in the network. The flow direction in the network is determined by the source and sink: resources always flow from the source to the sink. This method creates a flow direction for the network dataset through the given source and sink, as well as the facility network analysis parameter settings. After the flow direction is successfully created, two aspects of information will be written in the network dataset: flow direction and node type.

  • Flow direction

    The flow direction information will be written into the flow direction field of the subline dataset of the network dataset, and the field will be created if it does not exist.

    There are four values in the flow direction field: 0,1,2,3, the meaning of which is shown in the figure below. Take line AB as an example:

    0 means the flow direction is the same as the digitization direction. The digitization direction of the line segment AB is A–>B, and A is the source point, so the flow direction of AB is from A to B, which is the same as its digitization direction.

    1 means the flow direction is opposite to the digitization direction. The digitization direction of line segment AB is A–>B, and A is the sink, so the flow direction of AB is from B to A, which is opposite to its digitization direction.

    2 stands for invalid direction, also called uncertain flow direction. Both A and B are source points, so resources can flow from A to B, and from B to A, which constitutes an invalid flow.

    3 stands for disconnected edges, also called uninitialized direction. If the line segment AB is not connected to any node where a source or sink is located, it is called a disconnected edge.

    [figure: BuildFacilityNetworkDirections_1.png]
  • Node type

    After establishing the flow direction, the system will also write the node type information into the node type field of the sub-point dataset of the specified network dataset. Node types are divided into source, sink, and ordinary nodes. The following table lists the value and meaning of the node type field:

    [figure: BuildFacilityNetworkDirections_2.png]
Parameters:
  • network_dataset (DatasetVector or str) – The network dataset of the flow direction to be created. The network dataset must be modifiable.
  • source_ids (list[int] or tuple[int]) – The network node ID array corresponding to the source. Both sources and sinks are used to establish the flow of network dataset. The flow direction of the network dataset is determined by the location of the source and sink.
  • sink_ids (list[int] or tuple[int]) – sink ID array. The ID array of the network node corresponding to the sink. Both sources and sinks are used to establish the flow of network dataset. The flow direction of the network dataset is determined by the location of the source and sink.
  • ft_weight_field (str) – forward weight field or field expression
  • tf_weight_field (str) – reverse weight field or field expression
  • direction_field (str) – flow direction field, used to save the flow direction information of the network dataset
  • node_type_field (str) – The name of the node type field. Node types are divided into source node, sink node, and ordinary node. This field is a field in the network node dataset; if it does not exist, the field is created.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

return true if created successfully, otherwise false

Return type:

bool
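The real algorithm propagates flow through the whole network from sources to sinks; as a toy illustration of the four direction codes only (not the iobjectspy implementation), the sketch below classifies a single edge from its own endpoints, with hypothetical node labels.

```python
# Illustrative only: derive the flow-direction code (0-3) described above
# for a single edge digitized from node `a` to node `b`, judging only by
# the edge's own endpoints. The real algorithm traverses the whole network.
def direction_code(a, b, sources, sinks, connected=True):
    if not connected:
        return 3            # disconnected edge: uninitialized direction
    a_src, b_src = a in sources, b in sources
    if a_src and b_src:
        return 2            # both ends are sources: invalid direction
    if a_src or b in sinks:
        return 0            # flow a -> b, same as the digitization direction
    if b_src or a in sinks:
        return 1            # flow b -> a, opposite to the digitization direction
    return 2                # cannot be determined from the endpoints alone

same = direction_code("A", "B", sources={"A"}, sinks=set())          # 0
opposite = direction_code("A", "B", sources=set(), sinks={"A"})      # 1
invalid = direction_code("A", "B", sources={"A", "B"}, sinks=set())  # 2
```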

iobjectspy.analyst.build_network_dataset(lines, points=None, split_mode='NO_SPLIT', tolerance=0.0, line_saved_fields=None, point_saved_fields=None, out_data=None, out_dataset_name=None, progress=None)

The network dataset is the data basis for network analysis. The network dataset consists of two sub-dataset (a line dataset and a point dataset), which store the edges and nodes of the network model respectively. It also describes the spatial topological relationship between edges and edges, edges and nodes, and nodes and nodes.

This method builds a network dataset from a single line dataset, or from multiple line and point datasets. If the user's data already has the correct network relationships, build_network_dataset_known_relation() can be used directly to quickly build a network dataset.

For the constructed network dataset, you can use validate_network_dataset() to check whether the network topology is correct.

Parameters:
  • lines (DatasetVector or list[DatasetVector]) – The line dataset used to construct the network dataset, there must be at least one line dataset.
  • points (DatasetVector or list[DatasetVector]) – The point dataset used to construct the network dataset.
  • split_mode (NetworkSplitMode) – break mode, default is no break
  • tolerance (float) – node tolerance
  • line_saved_fields (str or list[str]) – Fields that need to be reserved in the line dataset
  • point_saved_fields (str or list[str]) – The fields that need to be reserved in the point dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource object that holds the result network dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result network dataset

Return type:

DatasetVector

iobjectspy.analyst.fix_ring_edge_network_errors(network_dataset, error_ids, edge_id_field=None, f_node_id_field=None, t_node_id_field=None, node_id_field=None)

Fix the topology error in which the start and end nodes of an edge are the same. For the topology error check of the network dataset, refer to validate_network_dataset() for details.

For edges whose start and end nodes are the same, the midpoint of the edge is automatically taken to break the edge into two.

Parameters:
  • network_dataset (str or DatasetVector) – the network dataset to be processed
  • error_ids (list[int] or tuple[int] or str) – SmIDs of the edges whose start and end nodes are the same.
  • edge_id_field (str) – The edge ID field of the network dataset. If it is empty, the edge ID field stored in the network dataset will be used by default.
  • f_node_id_field (str) – The starting node ID field of the network dataset. If it is empty, the starting node ID field stored in the network dataset will be used by default.
  • t_node_id_field (str) – The end node ID field of the network dataset. If it is empty, the end node ID field stored in the network dataset will be used by default.
  • node_id_field (str) – The node ID field of the network dataset. If it is empty, the node ID field stored in the network dataset will be used by default.
Returns:

Return True for success, False for failure

Return type:

bool

iobjectspy.analyst.build_facility_network_hierarchies(network_dataset, source_ids, sink_ids, direction_field, is_loop_valid, ft_weight_field='SmLength', tf_weight_field='SmLength', hierarchy_field='Hierarchy', progress=None)

Create a level for the network dataset with flow direction, and write the level information of the network dataset in the specified level field.

To establish a hierarchy for a network dataset, first, the network dataset must have established a flow direction, that is, the network dataset operated by the method for establishing a hierarchy must have flow direction information.

The hierarchy field records the level as an integer. Values start from 1, and the higher the level, the smaller the value. For example, after a river network is graded, first-level rivers are recorded as 1, second-level rivers as 2, and so on. Note that a value of 0 means the level cannot be determined, usually because the edge is not connected.

Parameters:
  • network_dataset (DatasetVector or str) – The network dataset of the level to be created. The network dataset must be modifiable.
  • source_ids (list[int]) – source ID array
  • sink_ids (list[int]) – sink ID array
  • direction_field (str) – flow direction field
  • is_loop_valid (bool) – Specify whether the loop is valid. When the parameter is true, the loop is valid; when the parameter is false, the loop is invalid.
  • ft_weight_field (str) – forward weight field or field expression
  • tf_weight_field (str) – reverse weight field or field expression
  • hierarchy_field (str) – The given hierarchy field name, used to store hierarchy information.
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

return true if created successfully, otherwise false

Return type:

bool

iobjectspy.analyst.build_network_dataset_known_relation(line, point, edge_id_field, from_node_id_field, to_node_id_field, node_id_field, out_data=None, out_dataset_name=None, progress=None)

Construct a network dataset from point and line data using existing fields that express the topological relationship between edges and nodes. When the line and point objects in the existing datasets correspond to the edges and nodes of the network to be constructed, and carry information describing the spatial topological relationship between them (that is, the line dataset contains edge ID, start node ID, and end node ID fields, and the point dataset contains a node ID field), this method can be used to construct the network dataset.

After successfully constructing a network dataset with this method, the number of result objects is consistent with the number of objects in the source data: each line object is written as an edge and each point object as a node, and all non-system fields of the point and line datasets are retained in the result dataset.

For example, consider pipeline and pipe point data collected for building a pipe network, where pipelines and pipe points are each identified by a unique fixed code. One characteristic of a pipe network is that pipe points are located only at the two ends of pipelines, so the pipe points correspond to all the nodes of the pipe network to be constructed, the pipelines correspond to all its edges, and there is no need to break pipelines at their intersections. The pipeline data records the pipe point information at both ends of each pipeline object, that is, the start pipe point code and the end pipe point code. This means the pipeline and pipe point data already contain the spatial topological relationship between the two, so it is suitable to use this method to build a network dataset.

Note that the edge ID, edge start node ID, edge end node ID, and node ID fields of a network dataset constructed in this way are the fields specified when calling this method, not system fields such as SmEdgeID, SmFNode, SmTNode, and SmNodeID. The corresponding fields can be obtained through the DatasetVector.get_field_name_by_sign() method of DatasetVector.

Parameters:
  • line (str or DatasetVector) – the line dataset used to construct the network dataset
  • point (str or DatasetVector) – point dataset used to construct the network dataset
  • edge_id_field (str) – The field representing the edge ID in the specified line dataset. If it is specified as a null or empty string, or the specified field does not exist, SMID is automatically used as the edge ID. Only 16-bit integer and 32-bit integer fields are supported.
  • from_node_id_field (str) – The field representing the starting node ID of the edge in the specified line dataset. Only 16-bit integer and 32-bit integer fields are supported.
  • to_node_id_field (str) – The field in the specified line dataset that represents the end node ID of the edge. Only 16-bit integer and 32-bit integer fields are supported.
  • node_id_field (str) – The field representing the node ID in the specified point dataset. Only 16-bit integer and 32-bit integer fields are supported.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource object that holds the result network dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to:py:class:.StepEvent
Returns:

result network dataset

Return type:

DatasetVector
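Before calling build_network_dataset_known_relation(), it can be useful to verify that the topology fields are consistent: every start/end node ID referenced by a line must exist among the point node IDs. A plain-Python sketch of that check follows, with a hypothetical record layout rather than the iobjectspy API:

```python
# Hypothetical records mirroring the required fields:
# lines: (edge_id, from_node_id, to_node_id) tuples; node_ids: set of node IDs.
lines = [(1, 101, 102), (2, 102, 103)]
node_ids = {101, 102, 103}

def check_known_relation(lines, node_ids):
    """Return edge IDs whose from/to node IDs are missing from the point data."""
    return [edge_id for edge_id, f, t in lines
            if f not in node_ids or t not in node_ids]

bad = check_known_relation(lines, node_ids)             # [] -> topology consistent
bad2 = check_known_relation([(3, 103, 999)], node_ids)  # [3] -> node 999 missing
```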

iobjectspy.analyst.append_to_network_dataset(network_dataset, appended_datasets, progress=None)

To add data to an existing network dataset, you can add points, lines or networks. Network datasets are generally constructed from line data (and point data). Once the data used to construct the network changes, the original network will become outdated. If the network is not updated in time, the correctness of the analysis results may be affected. By adding new data to the original network, a newer network can be obtained without rebuilding the network. As shown in the figure below, several roads (red lines) have been newly built in a certain area. These roads are abstracted as line data and added to the network constructed before the expansion, thereby updating the road network.

[figure: AppendToNetwork.png]

This method supports adding points, lines, and network dataset to an existing network, and can add multiple dataset of the same or different types at the same time, for example, adding one point data and two line data at the same time. Note that if the data to be added has multiple types, the system will add the network first, then the line, and finally the point. The methods and rules for adding points, lines and networks to the network are introduced below.

  • To add points to an existing network:

    After a point is added to an existing network, it will become a new node in the network. When adding points to an existing network, you need to pay attention to the following points:

    1. The point to be added must be on an edge of the existing network. After appending, a new node is added at that position on the edge, and the edge is automatically broken into two edges at the new node, as with points a and d in the figure below. If the point to be added is not on the network, that is, neither on an edge nor overlapping a node, it is ignored and not added to the network, because an isolated node has no practical significance in the network. This is the case for point b in the figure below.
    2. If the point to be added overlaps the node of the existing network, merge the point to be added with the overlapping node, as shown in point c in the figure below.
    [figure: AppendPointsToNetwork.png]
  • Add line to existing network

    After a line is added to an existing network, it becomes a new edge in the network; the line is broken at its endpoints and at its intersections with other lines (or edges), and new nodes are added there. When adding lines to an existing network, note the following:

    1. The line to be added cannot overlap or partially overlap with the existing network edge, otherwise it will cause errors in the added network.
  • To add another network to an existing network:

    After a network is appended to an existing network, the two become one network, as shown in the figure below. Note that, as with appending lines, when appending a network you need to ensure that no edges of the two networks fully or partially overlap, otherwise the appended network will contain errors.

    (figure: AppendChildNet_1.png)

    Where the network to be appended intersects the existing network, new nodes are added at the intersections to establish the new topological relationships.

    (figure: AppendChildNet_3.png)

    The connectivity of the networks does not affect appending. In the following example, after the network to be appended is added to the original network, the result is a network dataset containing two subnets, and the two subnets are disconnected.

    (figure: AppendChildNet_2.png)
  • Note:

    1. This method directly modifies the network dataset being appended to, and does not generate a new network dataset.
    2. The point, line or network dataset to be appended must have the same coordinate system as the network dataset being appended to.

    3. In the point, line or network dataset to be appended, attribute fields identical to those of the target network dataset (both name and type must be the same) are automatically retained in the appended network dataset; fields without an identical counterpart are not retained. The attributes of point datasets, and of the node dataset of an appended network, are kept in the node attribute table of the target network; the attributes of line datasets are kept in the edge attribute table.

Parameters:
  • network_dataset (DatasetVector or str) – the network dataset to be appended to
  • appended_datasets (list[DatasetVector]) – the specified data to be appended, which can be point, line or network datasets
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

Whether the append is successful. If successful, return True, otherwise return False.

Return type:

bool

iobjectspy.analyst.validate_network_dataset(network_dataset, edge_id_field=None, f_node_id_field=None, t_node_id_field=None, node_id_field=None)

This method checks the network dataset and reports error information, so that users can correct the data accordingly and avoid network analysis errors caused by data errors.

The error types of the results of checking the network datasets are shown in the following table:

(figure: TransportationNetwork_Check.png)
Parameters:
  • network_dataset (DatasetVector or str) – the network dataset or 3D network dataset to be checked
  • edge_id_field (str) – The edge ID field of the network dataset. If it is empty, the edge ID field stored in the network dataset will be used by default.
  • f_node_id_field (str) – The starting node ID field of the network dataset. If it is empty, the starting node ID field stored in the network dataset will be used by default.
  • t_node_id_field (str) – The end node ID field of the network dataset. If it is empty, the end node ID field stored in the network dataset will be used by default.
  • node_id_field (str) – The node ID field of the network dataset. If it is empty, the node ID field stored in the network dataset will be used by default.
Returns:

Error result information of the network dataset.

Return type:

NetworkDatasetErrors

iobjectspy.analyst.compile_ssc_data(parameter, progress=None)

Compile network data into an SSC file containing shortcut information.

Parameters:
  • parameter (SSCCompilerParameter) – SSC file compilation parameter class.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

Return True for success, False for failure

Return type:

bool

class iobjectspy.analyst.AllocationDemandType

Bases: iobjectspy._jsuperpy.enums.JEnum

Resource allocation demand model

Variables:
  • AllocationDemandType.NODE – Node demand mode. In this mode, only the resource demand of nodes is considered in the analysis, and the demand of edges is excluded. For example, at Christmas, Santa Claus gives gifts to children: Santa's location is the fixed center point, the number of gifts is the amount of resources, each child's address is a demand node, and a child's demand for gifts is the value of that node's demand field. Since only the distribution of gifts is considered, this is a node demand event.
  • AllocationDemandType.EDGE – Edge demand mode. In this mode, only the resource demand of edges is considered in the analysis, and the demand of nodes is excluded. In the same example, Santa's location is the fixed center point, the gasoline in Santa's car is the resource, and the fuel consumed driving from the fixed center point to a child's address, or between adjacent children's addresses, is the value of the edge demand field. Since only the distribution of driving fuel consumption is considered, this is an edge demand event.
  • AllocationDemandType.BOTH – Node and edge demand mode. The demand of nodes and the demand of edges are both considered. In the same example, both the distribution of gifts and the driving fuel consumption are considered, so this is a node and edge demand event.
BOTH = 3
EDGE = 2
NODE = 1
class iobjectspy.analyst.AllocationAnalystResult(nodes, edges, demands, routes, node_id)

Bases: object

The analysis result class of resource allocation.

demand_results

DemandResult – Demand result object

edges

list[int] – IDs of the edges traversed in the analysis result

nodes

list[int] – IDs of the nodes traversed in the analysis result

static parse(java_object, supply_centers)
routes

GeoLineM – The routing result of the allocation path analyzed by resource allocation.

supply_center_node_id

int – ID of the resource supply center to which it belongs

class iobjectspy.analyst.BurstAnalystResult(java_object)

Bases: object

Burst analysis result class. Burst analysis returns critical facilities, normal facilities and edges.

critical_nodes

list[int] – The critical facilities in the burst analysis that affect the upstream and downstream of the burst location. Critical facilities include two types:

  1. All upstream facilities that directly affect the burst location.
  2. Downstream facilities that are directly affected by the burst location and have outflow (that is, an out-degree greater than 0).
edges

list[int] – The edges that affect the burst location and the edges affected by the burst location, that is, the edges traversed when searching in both directions from the burst location until the critical and normal facilities are reached.

normal_nodes

list[int] – The normal facilities affected by the burst location in the burst analysis. Normal facilities include three types:

  1. Facilities that are directly affected by the burst location and have no outflow (an out-degree of 0).
  2. Each facility A directly affected by the outflow edges of an upstream critical facility (excluding all critical facilities), where the influencing edges from that upstream critical facility to facility A share a common part with the influencing edges of the upstream and downstream critical facilities.
  3. For each facility A downstream of the burst location that is directly affected by it (critical facilities of type 2 and normal facilities of type 1), the facilities B upstream of facility A that directly affect it (excluding all critical facilities), where the influencing edges from facility B to facility A share a common part with the influencing edges of the upstream and downstream critical facilities.
class iobjectspy.analyst.DemandPointInfo

Bases: object

Demand point information class. The coordinates or node ID of the demand point and the demand quantity of the demand point are stored.

demand_node

int – Demand point ID

demand_point

Point2D – Demand point coordinates

demands

list[float] – Demand quantity of demand point.

end_time

datetime.datetime – The latest arrival time, that is, the latest time point at which the vehicle may arrive at this point

set_demand_node(value)

Set demand point ID

Parameters:value (int) – demand point ID
Returns:self
Return type:DemandPointInfo
set_demand_point(value)

Set the coordinates of demand point

Parameters:value (Point2D) – the coordinates of demand point
Returns:self
Return type:DemandPointInfo
set_demands(value)

Set the demand of the demand point. The demand can be multi-dimensional, and its dimensions must match the vehicle's load dimensions in number and meaning. If the demand of a point exceeds the maximum load of the vehicles, the point is discarded in the analysis.

Parameters:value (list[float] or tuple[float]) – The demand quantity of the demand point.
Returns:self
Return type:DemandPointInfo
set_end_time(value)

Set the latest arrival time

Parameters:value (datetime.datetime) – the latest arrival time, which means the latest time when the vehicle arrives at that point
Returns:self
Return type:DemandPointInfo
set_start_time(value)

Set the earliest arrival time.

Parameters:value (datetime.datetime or str) – the earliest time of arrival, which means the earliest time when the vehicle arrives at that point.
Returns:self
Return type:DemandPointInfo
set_unload_time(value)

Set the unloading time.

Parameters:value (int) – Time to unload the cargo, which means the time the vehicle needs to stay at this point. The default unit is minutes.
Returns:self
Return type:DemandPointInfo
start_time

datetime.datetime – Earliest time of arrival, which means the earliest time point when the vehicle arrives at that point

unload_time

int – Unloading time, indicating the time the vehicle needs to stay at that point. The default unit is minutes.
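As a library-independent sketch of the time-window semantics described above, the helper below computes when a vehicle leaves a demand point given its arrival time, the earliest/latest arrival times, and the unloading time in minutes. The function name is illustrative, not part of the iobjectspy API, and the assumption that a vehicle waits when it arrives before start_time is ours:

```python
from datetime import datetime, timedelta

def departure_time(arrival, start_time, end_time, unload_minutes):
    """Return the departure time for a feasible visit, or None if the
    vehicle arrives after the latest arrival time."""
    if arrival > end_time:
        return None                               # too late: point infeasible
    service_start = max(arrival, start_time)      # wait if the vehicle is early
    return service_start + timedelta(minutes=unload_minutes)

# arriving at 08:00 with a 09:00-12:00 window and 30 min unloading
# -> service starts at 09:00, departure at 09:30
leave = departure_time(datetime(2024, 1, 1, 8, 0),
                       datetime(2024, 1, 1, 9, 0),
                       datetime(2024, 1, 1, 12, 0), 30)
```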

class iobjectspy.analyst.DirectionType

Bases: iobjectspy._jsuperpy.enums.JEnum

Direction, used for driving guidance

Variables:
EAST = 0
NONE = 4
NORTH = 3
SOUTH = 1
WEST = 2
class iobjectspy.analyst.VRPAnalystType

Bases: iobjectspy._jsuperpy.enums.JEnum

Analysis Mode in Logistics Analysis

Variables:
AREAANALYST = 2
AVERAGECOST = 1
LEASTCOST = 0
class iobjectspy.analyst.VRPDirectionType

Bases: iobjectspy._jsuperpy.enums.JEnum

Types of VRP analysis routes

Variables:
ENDBYCENTER = 2
ROUNDROUTE = 0
STARTBYCENTER = 1
class iobjectspy.analyst.VRPAnalystParameter

Bases: object

Logistic distribution analysis parameter setting class.

This class is mainly used to set the parameters of logistics distribution (VRP) analysis. Through it you can set barrier edges, barrier nodes and weight field information, and specify what the analysis result contains for each analysis path: the node collection, edge collection, route object collection and stop collection.

analyst_type

VRPAnalystType – Analysis mode in logistics analysis

barrier_edges

list[int] – Barrier edge ID list

barrier_nodes

list[int] – Barrier node ID list

barrier_points

list[Point2D] – The coordinate list of barrier nodes

is_edges_return

bool – Whether the passing edge is included in the analysis result

is_nodes_return

bool – Whether the analysis result contains nodes

is_path_guides_return

bool – Whether the analysis result contains driving guide

is_routes_return

bool – Whether the analysis result contains routing objects

is_stop_indexes_return

bool – whether to include site index

route_count

int – The value of the number of vehicles dispatched in one analysis

set_analyst_type(value=True)

Set the logistics analysis mode, including LEASTCOST minimum cost mode (default value), AVERAGECOST average cost mode, AREAANALYST regional analysis mode.

Parameters:value (VRPAnalystType or str) – logistics analysis mode
Returns:self
Return type:VRPAnalystParameter
set_barrier_edges(value=True)

Set obstacle edge ID list

Parameters:value (list[int]) – Barrier edge ID list
Returns:self
Return type:VRPAnalystParameter
set_barrier_nodes(value=True)

Set barrier node ID list

Parameters:value (list[int]) – Barrier node ID list
Returns:self
Return type:VRPAnalystParameter
set_barrier_points(points=True)

Set the coordinate list of obstacle nodes

Parameters:points (list[Point2D] or tuple[Point2D]) – list of coordinates of barrier nodes
Returns:self
Return type:VRPAnalystParameter
set_edges_return(value=True)

Set whether the passing edge is included in the analysis result

Parameters:value (bool) – Whether the passing edge is included in the analysis result
Returns:self
Return type:VRPAnalystParameter
set_nodes_return(value=True)

Set whether to include nodes in the analysis results

Parameters:value (bool) – Whether the analysis result contains nodes
Returns:self
Return type:VRPAnalystParameter
set_path_guides_return(value=True)

Set whether to include driving guidance in the analysis result.

Parameters:value (bool) – Whether the analysis result includes driving guidance. The driving guidance collection is included in the analysis result only if this method is set to True and the edge name field has been set through the set_edge_name_field method of TransportationAnalystSetting; otherwise driving guidance is not returned, which does not affect obtaining the other contents of the analysis result.
Returns:self
Return type:VRPAnalystParameter
set_route_count(value)

Set the number of vehicles dispatched in one analysis, as required. In the analysis, the number of routes is determined by the number of vehicles and equals the number of vehicles actually dispatched. If this parameter is not set, the number of dispatched vehicles will by default not exceed the total number of vehicles that can be provided, N (vehicleInfo[N]).

Parameters:value (int) – Number of dispatched vehicles
Returns:self
Return type:VRPAnalystParameter
set_routes_return(value=True)

Set whether the analysis result contains a collection of routing objects

Parameters:value (bool) – Whether the analysis result contains a collection of routing objects
Returns:self
Return type:VRPAnalystParameter
set_stop_indexes_return(value=True)

Set whether to include site index

Parameters:value (bool) – whether to include the site index
Returns:self
Return type:VRPAnalystParameter
set_time_weight_field(value)

Set the name of the time field information.

Parameters:value (str) – The name of the time field information, the value set is the name of the weight field information in TransportationAnalystSetting.
Returns:self
Return type:VRPAnalystParameter
set_vrp_direction_type(value)

Set the type of logistics analysis route

Parameters:value (VRPDirectionType or str) – Type of logistics analysis route
Returns:self
Return type:VRPAnalystParameter
set_weight_name(value)

Set the name of the weight field information

Parameters:value (str) – the name of the weight field information
Returns:self
Return type:VRPAnalystParameter
time_weight_field

str – the name of the time field information

vrp_direction_type

VRPDirectionType – Type of logistics analysis route

weight_name

str – the name of the weight field information

class iobjectspy.analyst.VRPAnalystResult(java_object, index)

Bases: object

VRP analysis result class.

This class is used to obtain, for the analysis result, the route collection, the node and edge collections, the driving guidance collection, the stop collection, the costs (weights) between stops, and the time consumption and total load of each VRP route.

edges

list[int] – The set of edges passed by the analysis result

nodes

list[int] – a collection of nodes passing through the analysis result

path_guides

list[PathGuideItem] – Return travel guide

route

GeoLineM – The route object of the analysis result

stop_indexes

list[int] – Site index, reflecting the order of the sites after analysis.

According to the analysis route type (VRPDirectionType), the meaning of the values in this array differs:

  • ROUNDROUTE: the first and last elements are the center point index, and the other elements are demand point indexes.
  • STARTBYCENTER: the first element is the center point index, and the other elements are demand point indexes.
  • ENDBYCENTER: the last element is the center point index, and the other elements are demand point indexes.
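As a library-independent sketch, the conventions above can be expressed as a small helper that splits a route's stop_indexes into the center-point index(es) and the ordered demand-point indexes. The function name and the use of plain strings for the route type are illustrative, not part of the iobjectspy API:

```python
def split_stop_indexes(stop_indexes, direction_type):
    """Return (center_indexes, demand_indexes) for a single VRP route."""
    if direction_type == 'ROUNDROUTE':
        # first and last elements are both the center point
        return [stop_indexes[0], stop_indexes[-1]], stop_indexes[1:-1]
    if direction_type == 'STARTBYCENTER':
        # route starts at the center point
        return [stop_indexes[0]], stop_indexes[1:]
    if direction_type == 'ENDBYCENTER':
        # route ends at the center point
        return [stop_indexes[-1]], stop_indexes[:-1]
    raise ValueError('unknown direction type: %s' % direction_type)

centers, demands = split_stop_indexes([0, 3, 1, 2, 0], 'ROUNDROUTE')
# centers == [0, 0], demands == [3, 1, 2]
```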
stop_weights

list[float] – After sorting the sites according to the site index, the cost (weight) between sites

times

list[datetime.datetime] – The departure time at each distribution point on each route of the logistics distribution (for the last point, the value represents the arrival time)

vehicle_index

int – vehicle index of each route in logistics distribution

vrp_demands

list[int] – The load of each line in the logistics distribution

weight

float – the total cost of each delivery route.

class iobjectspy.analyst.FacilityAnalyst(analyst_setting)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Facility network analysis class. It is one of the network analysis functions, mainly used for various connectivity analyses and tracing analyses.

The facility network is a network with directions. That is, the medium (water flow, current, etc.) will flow in the network according to the rules of the network itself.

The premise of facility network analysis is that a dataset for facility network analysis has been established. The basis is a network dataset: calling the build_facility_network_directions() method on the network dataset establishes the flow direction, giving the dataset the information unique to facility network analysis and the most basic conditions for performing it, after which the various facility network analyses can be carried out. If your facility network carries hierarchy information, you can further use the build_facility_network_hierarchies() method to add hierarchy information.

Parameters:analyst_setting (FacilityAnalystSetting) – Set the network analysis environment.
analyst_setting

FacilityAnalystSetting – Facility network analysis environment

burst_analyse(source_nodes, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Two-way burst (pipe-burst) analysis: by specifying the edge or node where the pipe bursts, find the upstream and downstream nodes that directly affect the burst location and the nodes directly affected by the burst location.

Parameters:
  • source_nodes (list[int] or tuple[int]) – The specified array of facility node IDs. Cannot be empty.
  • edge_or_node_id (int) – The specified edge ID or node ID, i.e. the location of the pipe burst.
  • is_uncertain_direction_valid (bool) – Whether an uncertain flow direction is valid. True means the analysis continues through edges whose flow direction is uncertain; False means the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – Whether edge_or_node_id represents an edge ID; True for an edge ID, False for a node ID.
Returns:

Burst analysis result

Return type:

BurstAnalystResult

check_loops()

Check the network loop and return the ID array of the edges that make up the loop.

In the facility network, a loop refers to a closed path formed by two or more edges whose flow direction value is 2 (that is, an uncertain flow direction). A loop must therefore satisfy both of the following conditions:

  1. It is a closed path composed of at least two edges;
  2. The flow direction of every edge forming the loop is 2, that is, the flow direction is uncertain.

For the flow direction, please refer to the build_facility_network_directions() method.

The figure below shows part of a facility network, using different symbols for the flow direction of each edge. A loop check on this network finds two loops, the red closed paths in the figure. At the upper right there is an edge with flow direction 2; since it does not form a closed path with other flow-direction-2 edges, it is not a loop:

(figure: CheckLoops.png)
Returns:The array of edge IDs that make up the loops.
Return type:list[int]
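The two loop conditions can be illustrated with a small, self-contained sketch: within the subgraph of flow-direction-2 edges, an edge lies on a loop exactly when it is not a bridge of that subgraph. This is an illustration of the semantics with hypothetical data structures, not the iobjectspy implementation:

```python
def loop_edges(edges):
    """edges: dict of edge_id -> (from_node, to_node, flow_direction).
    Return the sorted ids of edges that form loops among direction-2 edges."""
    # build the undirected subgraph of uncertain-direction (value 2) edges
    adj = {}
    for eid, (u, v, d) in edges.items():
        if d != 2:
            continue
        adj.setdefault(u, []).append((v, eid))
        adj.setdefault(v, []).append((u, eid))

    # Tarjan's bridge-finding: an edge lies on a cycle iff it is not a bridge
    index, low, counter = {}, {}, [0]
    bridges = set()

    def dfs(u, parent_edge):
        index[u] = low[u] = counter[0]
        counter[0] += 1
        for v, eid in adj[u]:
            if eid == parent_edge:
                continue
            if v not in index:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > index[u]:
                    bridges.add(eid)      # removing eid disconnects v's subtree
            else:
                low[u] = min(low[u], index[v])

    for node in adj:
        if node not in index:
            dfs(node, None)

    direction2 = {eid for eid, (u, v, d) in edges.items() if d == 2}
    return sorted(direction2 - bridges)

# edges 1 and 2 form a closed path of two uncertain-direction edges (a loop);
# edge 3 is uncertain but dangling, edge 4 has a known flow direction
edges = {1: ('a', 'b', 2), 2: ('b', 'a', 2), 3: ('b', 'c', 2), 4: ('c', 'd', 1)}
```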
find_common_ancestors(edge_or_node_ids, is_uncertain_direction_valid=False, is_edge_ids=True)

According to the given array of edge IDs or node IDs, find the common upstream edges of these edges (or nodes) and return the edge ID array. The common upstream is the upstream network shared by multiple nodes (or edges). This method takes the intersection of the upstream edge sets of the given edges (or nodes) and returns the IDs of the resulting edges.

As shown in the figure below, flow runs in the direction indicated by the arrows. The first two figures show the results of upstream tracing of node 1 and node 2, finding their respective upstream edges (green). The third figure finds the common upstream edges (orange) of node 1 and node 2. It is easy to see that the common upstream edges of node 1 and node 2 are the intersection of their respective upstream edges.

(figure: CommonAncestors.png)
Parameters:
  • edge_or_node_ids (list[int] or tuple[int]) – the specified array of edge IDs or node IDs
  • is_uncertain_direction_valid (bool) – Whether an uncertain flow direction is valid. True means the analysis continues through edges whose flow direction is uncertain; False means the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_ids (bool) – Whether edge_or_node_ids represents edge IDs; True for edge IDs, False for node IDs.
Returns:

The array of edge ID

Return type:

list[int]
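The intersection semantics described above can be sketched in plain Python: walk against the flow direction from each node to collect its upstream edge set, then intersect the sets. The graph representation and function names here are hypothetical, purely for illustration:

```python
def upstream_edges(edges, node):
    """edges: dict edge_id -> (from_node, to_node); flow runs from -> to.
    Return the set of edge ids upstream of the given node."""
    incoming = {}
    for eid, (u, v) in edges.items():
        incoming.setdefault(v, []).append((u, eid))
    result, stack, seen = set(), [node], {node}
    while stack:
        n = stack.pop()
        for u, eid in incoming.get(n, []):   # walk against the flow
            result.add(eid)
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return result

def common_ancestors(edges, nodes):
    """Intersection of the upstream edge sets of the given nodes."""
    sets = [upstream_edges(edges, n) for n in nodes]
    return set.intersection(*sets) if sets else set()

# edge 1 feeds a junction 'm' that branches to nodes 'x' and 'y';
# the common upstream of x and y is edge 1 only
edges = {1: ('s', 'm'), 2: ('m', 'x'), 3: ('m', 'y')}
```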

find_common_catchments(edge_or_node_ids, is_uncertain_direction_valid=False, is_edge_ids=True)

According to the specified node ID array or edge ID array, find the common downstream edge of these nodes, and return the edge ID array.

Common downstream refers to a common downstream network of multiple nodes (or edges). This method is used to find the common downstream edges of multiple nodes, that is, take the intersection of the respective downstream edges of these nodes, and return the edge IDs of these edges as a result.

As shown in the figure below, flow runs in the direction indicated by the arrows. The first two figures show the results of downstream tracing of node 1 and node 2, finding their respective downstream edges (green). The third figure finds the common downstream edges (orange) of node 1 and node 2. It is easy to see that the common downstream edges of node 1 and node 2 are the intersection of their respective downstream edges.

(figure: CommonCatchments.png)
Parameters:
  • edge_or_node_ids (list[int]) – the specified node ID array or edge ID array
  • is_uncertain_direction_valid (bool) – Whether an uncertain flow direction is valid. True means the analysis continues through edges whose flow direction is uncertain; False means the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_ids (bool) – Whether edge_or_node_ids represents edge IDs; True for edge IDs, False for node IDs.
Returns:

The array of edge IDs that are common downstream of the given node

Return type:

list[int]

find_connected_edges(edge_or_node_ids, is_edge_ids=True)

According to the given node ID array or edge ID array, find the edges connected with these edges (or nodes), and return the edge ID array. After the connected edges are found, the corresponding connected nodes, that is, the start node and end node of each edge, can be obtained from the network topology.

Parameters:
  • edge_or_node_ids (list[int] or tuple[int]) – node ID array or edge ID array
  • is_edge_ids (bool) – Whether edge_or_node_ids represents the edge ID, True represents the edge ID, and False represents the node ID.
Returns:

The array of IDs of the edges connected with the given edges (or nodes)

Return type:

list[int]

find_critical_facilities_down(source_node_ids, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Downstream critical facility search: find the critical downstream facility nodes of a given edge, and return the ID array of the critical facility nodes together with the array of downstream edge IDs affected by the given edge. In downstream critical facility search analysis, the nodes of a facility network are divided into two types: normal nodes and facility nodes. Facility nodes are nodes considered able to affect network connectivity, for example valves in a water supply network; normal nodes are nodes that do not affect network connectivity, such as fire hydrants or tee junctions in a water supply network.

The downstream critical facility search filters the critical nodes out of the given facility nodes. These critical nodes are the most basic nodes for maintaining connectivity between the analysis edge and its downstream; that is, after these critical nodes are closed, the analysis edge no longer communicates with its downstream. The analysis result also includes the union of the downstream edges affected by the given edge.

The search for critical facility nodes can be summarized as: starting from the analysis edge and searching downstream, the first facility node encountered in each direction is a critical facility node.

Parameters:
  • source_node_ids (list[int] or tuple[int]) – The specified facility node ID array. Cannot be empty.
  • edge_or_node_id (int) – The specified analysis edge ID or node ID.
  • is_uncertain_direction_valid (bool) – Whether an uncertain flow direction is valid. True means the analysis continues through edges whose flow direction is uncertain; False means the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – Whether edge_or_node_id represents an edge ID; True for an edge ID, False for a node ID.
Returns:

facility network analysis result

Return type:

FacilityAnalystResult

find_critical_facilities_up(source_node_ids, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Upstream critical facility search: find the critical facility nodes upstream of a given edge, and return the critical node ID array together with the array of their downstream edge IDs. In upstream critical facility search analysis, the nodes of a facility network are divided into two types: normal nodes and facility nodes. Facility nodes are nodes considered able to affect network connectivity, for example valves in a water supply network; normal nodes are nodes that do not affect network connectivity, such as fire hydrants or tee junctions in a water supply network. The upstream critical facility search requires facility nodes and an analysis node to be specified, where the analysis node can be a facility node or a normal node.

The upstream critical facility search filters the critical nodes out of the given facility nodes. These critical nodes are the most basic nodes for maintaining connectivity between the analysis edge and its upstream; that is, after these critical nodes are closed, the analysis edge no longer communicates with its upstream. The analysis result also includes the union of the downstream edges of the critical nodes found.

The search for critical facility nodes can be summarized as: starting from the analysis edge and tracing upstream, the first facility node encountered in each direction is a critical facility node. As shown in the figure below, starting from the analysis edge (red), the critical facility nodes found are 2, 8, 9 and 7. Nodes 4 and 11 are not the first facility nodes encountered in their tracing directions and are therefore not critical facility nodes. For illustration, only the upstream part of the analysis edge is shown, but note that the analysis result also gives the downstream edges of critical facility nodes 2, 8, 9 and 7.

(figure: findCriticalFacilitiesUp.png)
  • Applications

After a pipe burst occurs in a water supply network, all valves can be used as facility nodes and the burst pipe section or pipe point as the analysis edge or analysis node to perform an upstream critical facility search. This quickly finds the minimum number of upstream valves that need to be closed; after these valves are closed, the burst pipe section or pipe point no longer communicates with its upstream, preventing the outflow of water and avoiding aggravation of the disaster and waste of resources. At the same time, the analysis gives the union of the downstream edges of the valves to be closed, that is, the affected range after the valves are closed, which is used to determine the water-stoppage area so that notification and emergency measures can be carried out in time.

(figure: FindClosestFacilityUp.png)
Parameters:
  • source_node_ids (list[int] or tuple[int]) – The specified facility node ID array. Cannot be empty.
  • edge_or_node_id (int) – The analysis edge ID or node ID.
  • is_uncertain_direction_valid (bool) – Whether an uncertain flow direction is valid. True means the analysis continues through edges whose flow direction is uncertain; False means the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – Whether edge_or_node_id represents an edge ID; True for an edge ID, False for a node ID.
Returns:

facility network analysis result

Return type:

FacilityAnalystResult

find_loops(edge_or_node_ids, is_edge_ids=True)

According to the given node ID array or edge ID array, find the loops connected with these nodes (edges), and return the edge ID array that forms the loop.

Parameters:
  • edge_or_node_ids (list[int] or tuple[int]) – The specified node or edge ID array.
  • is_edge_ids (bool) – whether edge_or_node_ids represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

The edge ID array of the loop connected with the given edge.

Return type:

list[int]

find_path(start_id, end_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Facility network path analysis, that is, according to the given start and end node IDs, find the path with the least cost, and return the edges, nodes and costs included in the path.

The search process of the least costly path between two nodes is: starting from a given starting node, according to the flow direction, find all paths to a given end node, and then find the least costly one and return.

The figure below is a schematic diagram of the minimum cost path between two nodes. Starting from the starting node B, along the network flow direction, there are three paths to the ending node P, namely B-D-L-P, B-C-G-I-J-K-P and B-E-F-H-M-N-O-P. The path B-C-G-I-J-K-P has the least cost, 105, so it is the least costly path from node B to P.

[Figure: FacilityFindPath.png]
Parameters:
  • start_id (int) – Starting node ID or edge ID.
  • end_id (int) – End node ID or edge ID. The start ID and end ID must both be node IDs or both be edge IDs.
  • weight_name (str) – The name of the specified weight field information object.
  • is_uncertain_direction_valid (bool) – Whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether start_id and end_id represent edge IDs: True for edge IDs, False for node IDs
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult
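As a sketch of the search described above, the following is a plain Dijkstra search over directed edges (the flow direction). This is an illustrative stand-in, not the library's implementation; the node names and weights are invented so that the B-C-G-I-J-K-P path comes out cheapest at cost 105, matching the example.

```python
import heapq

def least_cost_path(edges, start, end):
    """edges: list of (from_node, to_node, weight) directed by flow.
    Returns (path, cost) of the least costly path from start to end."""
    graph = {}
    for f, t, w in edges:
        graph.setdefault(f, []).append((t, w))
    heap = [(0.0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == end:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None, float('inf')

# Invented weights: B-D-L-P costs 120, B-C-G-I-J-K-P costs 105.
edges = [('B', 'D', 40), ('D', 'L', 40), ('L', 'P', 40),
         ('B', 'C', 20), ('C', 'G', 20), ('G', 'I', 20),
         ('I', 'J', 15), ('J', 'K', 15), ('K', 'P', 15)]
path, cost = least_cost_path(edges, 'B', 'P')
```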

find_path_down(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Facility network downstream path analysis: according to a given node ID or edge ID, find the least costly downstream path of that node (edge), and return the edges, nodes and cost included in the path.

The search process of the downstream least costly path can be understood as: starting from a given node (or edge), find all the downstream paths of the node (or edge) according to the flow direction, and then return the least costly one among them. This method is used to find the minimum cost path downstream of a given node.

The figure below is a simple facility network. Arrows mark the flow direction of the network, and the weights are marked next to the edges. For the analysis node H, perform the downstream least cost path analysis. Starting from node H and searching downward along the flow direction, there are 4 downstream paths of node H in total: H-L-G, H-L-K, H-M-S and H-M-Q-R. Calculating the cost of these paths according to the network resistance (i.e. the weights), the cost of the H-L-K path is the smallest, 11.1. Therefore, the least-cost downstream path of node H is H-L-K.

[Figure: PathDown.png]
Parameters:
  • edge_or_node_id (int) – the specified node ID or edge ID
  • weight_name (str) – The name of the specified weight field information object, i.e. the WeightFieldInfo.weight_name of one of the WeightFieldInfo objects specified by FacilityAnalystSetting.weight_fields in the network analysis environment settings.
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult

find_path_up(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Analysis of the upstream path of the facility network, according to the given node ID or edge ID, query the node (edge) upstream path with the least cost, and return the edge, node and cost included in the path.

The search process of the upstream minimum cost path can be understood as: starting from a given node (or edge), find all the upstream paths of the node (or edge) according to the flow direction, and then return the least costly one among them. This method is used to find the minimum cost path upstream of a given node.

The figure below is a simple facility network. Arrows mark the flow direction of the network, and the weights are marked next to the edges. For the analysis node I, perform the upstream minimum cost path analysis. Starting from node I and tracing upward against the flow direction, there are 6 upstream paths of node I in total: E-F-I, A-F-I, B-G-J-I, D-G-J-I, C-G-J-I and H-J-I. Calculating the cost of these paths according to the network resistance (i.e. the weights), the cost of the E-F-I path is the smallest, 8.2. Therefore, the least-cost upstream path of node I is E-F-I.

[Figure: PathUp.png]
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult

find_sink(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Find the sink according to a given node ID or edge ID: starting from the given node (edge), follow the flow direction to find the downstream sink into which the node flows, and return the edges, nodes and cost included in the minimum cost path from the given node to that sink. If there are multiple sinks in the network, the farthest sink is found, i.e. the sink whose minimum cost path from the given node has the largest cost. To facilitate understanding, the process can be divided into three steps:

  1. Starting from a given node, according to the flow direction, find all the sink points downstream of the node;
  2. Analyze the minimum cost path from a given node to each sink and calculate the cost;
  3. Select the path with the largest cost calculated in the previous step as the result, and give its edge ID array, node ID array and cost.

Note: The node ID array in the analysis result does not include the analysis node itself.

The figure below is a simple facility network. Arrows mark the flow direction of the network, and the weights are marked next to the edges. Perform sink search analysis for the analysis node D. Starting from node D and searching downward along the flow direction, there are 4 sinks in total. The minimum cost paths from node D to each sink are D-E-H-L-G, D-E-H-L-K, D-E-H-M-S and D-E-H-M-Q-R. According to the network resistance, that is, the edge weights, the cost of the D-E-H-M-Q-R path is the largest, 16.6, so node R is the sink found.

[Figure: FindSink.png]
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult
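The three-step process above can be sketched in plain Python: compute the minimum cost to every reachable node (Dijkstra), identify the sinks (nodes with no outgoing edge), and pick the sink whose minimum cost path has the largest cost. This is toy data, not the library's implementation; the node names and weights are invented so that sink R is found at cost 16.6, as in the example.

```python
import heapq

def min_costs(edges, start):
    """Dijkstra: minimum cost from start to every reachable node."""
    graph, nodes = {}, set()
    for f, t, w in edges:
        graph.setdefault(f, []).append((t, w))
        nodes.update((f, t))
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist.get(n, float('inf')):
            continue
        for nxt, w in graph.get(n, []):
            if d + w < dist.get(nxt, float('inf')):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist, nodes, graph

def find_sink(edges, start):
    dist, nodes, graph = min_costs(edges, start)
    # Step 1: sinks are reachable nodes with no outgoing edge.
    sinks = [n for n in nodes if n in dist and not graph.get(n)]
    # Steps 2-3: among the min-cost paths to each sink, take the largest cost.
    return max(sinks, key=lambda n: dist[n])

# Invented weights so that R is the farthest sink, reached at cost 16.6.
edges = [('D', 'E', 2.0), ('E', 'H', 3.0), ('H', 'L', 2.0), ('L', 'G', 1.0),
         ('L', 'K', 4.0), ('H', 'M', 3.0), ('M', 'S', 2.0), ('M', 'Q', 4.0),
         ('Q', 'R', 4.6)]
sink = find_sink(edges, 'D')
```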

find_source(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Find the source according to a given node ID or edge ID: starting from the given node (edge), trace against the flow direction to find the network source (the source point) that flows into the node, and return the edges, nodes and cost included in the minimum cost path from that source to the given node. If there are multiple sources in the network, the farthest source is found, i.e. the source whose minimum cost path to the given node has the largest cost. To facilitate understanding, the process can be divided into three steps:

  1. Starting from a given node, according to the flow direction, find all the source points upstream of the node;
  2. Analyze the minimum cost path for each source to reach a given node and calculate the cost;
  3. Select the path with the largest cost calculated in the previous step as the result, and give its edge ID array, node ID array and cost.

Note: The node ID array in the analysis result does not include the analysis node itself.

The figure below is a simple facility network. Arrows mark the flow direction of the network, and the weights are marked next to the edges. Perform source search analysis for the analysis node M. Starting from node M and tracing upward against the flow direction, there are 7 sources in total. The minimum cost paths from each source to node M are C-H-M, A-E-H-M, B-D-E-H-M, F-D-E-H-M, J-N-M, I-N-M and P-N-M. According to the network resistance, that is, the edge weights, the cost of the B-D-E-H-M path is the largest, 18.4. Therefore, node B is the source found.

[Figure: FindSource.png]
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult

find_unconnected_edges(edge_or_node_ids, is_edge_ids=True)

According to the given node ID array or edge ID array, find the edges that are not connected to these nodes (edges), and return the edge ID array. After the unconnected edges are found, the corresponding unconnected nodes, i.e. the start node and end node of each such edge, can be queried from the network topology.

Parameters:
  • edge_or_node_ids (list[int] or tuple[int]) – node ID array or edge ID array
  • is_edge_ids (bool) – whether edge_or_node_ids represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

edge ID array

Return type:

list[int]
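A minimal sketch of the idea (toy data, not the library's implementation): treat the network as undirected, collect every node connected to the given nodes, and report the edges whose endpoints all fall outside that connected set.

```python
from collections import deque

def unconnected_edges(edges, node_ids):
    """edges: {edge_id: (from_node, to_node)}. Returns the sorted IDs of
    edges not connected to any of the given nodes."""
    adjacency = {}
    for f, t in edges.values():
        adjacency.setdefault(f, set()).add(t)
        adjacency.setdefault(t, set()).add(f)
    # Breadth-first search from the given nodes over the undirected network.
    connected, queue = set(node_ids), deque(node_ids)
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, ()):
            if nxt not in connected:
                connected.add(nxt)
                queue.append(nxt)
    return sorted(eid for eid, (f, t) in edges.items()
                  if f not in connected and t not in connected)

# Edges 1 and 2 share a component with node 1; edge 3 is isolated from it.
edges = {1: (1, 2), 2: (2, 3), 3: (10, 11)}
result = unconnected_edges(edges, [1])
```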

is_load()

Determine whether the network dataset model is loaded.

Returns:True if the network dataset model has been loaded, otherwise False
Return type:bool
load()

Load the facility network model according to the facility network analysis environment settings.

Note that in the following two situations, the load method must be called again to load the network model before analyzing:

-The parameters of the facility network analysis environment setting object have been modified. The method needs to be called again, otherwise the modification will not take effect and the analysis result will be wrong.
-The network dataset used has been modified in any way, including modifying the data in the network dataset, replacing the dataset, etc. The network model needs to be reloaded, otherwise the analysis may go wrong.
Returns:Indicates whether the facility network model was loaded successfully. Returns True if it succeeds, otherwise False.
Return type:bool
set_analyst_setting(value)

Set up the environment for facility network analysis.

The environment parameter settings of facility network analysis directly affect the analysis results. The parameters required for facility network analysis include: the dataset used for the analysis (the network dataset specified by FacilityAnalystSetting must have flow direction information, or both flow direction and level information), node ID field, edge ID field, edge start node ID field, edge end node ID field, weight information, distance tolerance from point to edge, barrier nodes, barrier edges, flow direction, etc.

Parameters:value (FacilityAnalystSetting) – facility network analysis environment parameter
Returns:self
Return type:FacilityAnalyst
trace_down(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Perform downstream tracking based on the given edge ID or node ID, that is, find the downstream of a given edge (node), and return the edges, nodes and total cost included in the downstream. Downstream tracking is the process of starting from a given node (or edge) and finding its downstream according to the flow direction. The analysis results are the edges, nodes and cost flowing through the entire downstream.

Downstream tracking is often used to analyze the scope of influence. For example:

-After a tap water supply pipeline bursts, downstream tracking finds all downstream pipelines from the accident location, and spatial query then determines the affected water supply area, so that notices can be issued promptly and emergency measures taken, such as arranging fire trucks or water company vehicles to deliver water to the water-stoppage area.
-When pollution is found at a location in a river, downstream tracking can be used to find all downstream river sections that may be affected, as shown in the figure below. Before the analysis, the pollutant types, discharge volumes, etc. can be combined with an appropriate water quality model to identify the downstream river sections or locations that will not be polluted before the pollutant is removed, and these can be set as barriers (set in FacilityAnalystSetting); downstream tracking then stops when a barrier is reached, which narrows the scope of the analysis. After the potentially affected reaches are identified, spatial query and analysis are used to find all water-using units and residential areas near these reaches, so that notices can be issued in time and emergency measures taken to prevent the pollution hazard from spreading.
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult

trace_up(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Perform upstream tracking based on a given node ID or edge ID, that is, find the upstream of a given node, and return the edge, node and total cost contained in the upstream.

  • Upstream and downstream

    For a node (or edge) of a facility network, the edges and nodes through which resources in the network flow before finally flowing into that node (or edge) are called its upstream; the edges and nodes through which resources flowing out of the node (or edge) pass before finally flowing into a sink are called its downstream.

    Take the upstream and downstream of a node as an example. The following figure is a schematic diagram of a simple facility network, with arrows marking the flow direction. According to the flow direction, resources flowing through nodes 2, 4, 3, 7 and 8 and edges 10, 9, 3, 4 and 8 eventually flow into node 10. Therefore, these nodes and edges are called the upstream of node 10, where the nodes are called its upstream nodes and the edges its upstream edges. Similarly, resources flowing out of node 10 pass through nodes 9, 11 and 12 and edges 5, 7 and 11 and finally flow out of the network. Therefore, these nodes and edges are called the downstream of node 10, where the nodes are called its downstream nodes and the edges its downstream edges.

    [Figure: UpAndDown.png]
  • Upstream tracking and downstream tracking

    Upstream tracking is the process of starting from a given node (or edge) and finding its upstream according to the flow direction. Similarly, downstream tracking starts from a given node (or edge) and finds its downstream according to the flow direction. The FacilityAnalyst class provides methods for upstream or downstream tracking starting from a node or an edge. The analysis result is the edge ID array and node ID array included in the upstream or downstream found, and the cost of flowing through the entire upstream or downstream.

  • Applications

    A common application of upstream tracking is to help locate the source of river water pollutants. Rivers are not only an important path in the earth's water cycle, but also the most important freshwater resource for mankind. If the source of pollution is not found and eliminated in time, it is likely to affect people's normal drinking water and health. Because a river flows from high to low under gravity, when the river is polluted, it should be considered that there may be pollution sources upstream, such as industrial wastewater discharge, domestic sewage discharge, pesticide and fertilizer pollution, etc. The general steps for tracing the source of river water pollutants are:

    -When the monitoring data of a water quality monitoring station shows that the water quality is abnormal, first determine the location of the abnormality, then find the edge or node at (or nearest to) that location on the river network (network dataset) as the starting point of the upstream tracking;
    -If the upstream locations closest to the starting point with normal water quality monitoring are known, these locations or the river sections they are on can be set as barriers, which helps further narrow the scope of the analysis, because it can be assumed that the pollution source cannot exist upstream of a location where the water quality is normal. After the barriers are set, the upstream tracking analysis does not continue past them;
    -Perform upstream tracking analysis to find all river sections that converge into the section where the water quality abnormality occurred;
    -Use spatial query and analysis to find all possible pollution sources near these river sections, such as chemical plants, garbage treatment plants, etc.;
    -Further screen the pollution sources based on the monitoring data of the abnormal water quality;
    -Analyze the pollutant discharge load of the selected pollution sources and rank them according to their likelihood of causing the pollution;
    -Conduct on-site investigation and research on the possible pollution sources in order to finally determine the source of the pollution.

Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether the uncertain flow direction is valid. If True, the analysis continues when an edge with an uncertain flow direction is encountered; if False, the search stops in that direction when an uncertain flow direction is encountered. A flow direction field value of 2 indicates that the flow direction of the edge is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult
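The upstream/downstream notion described above can be sketched as two reachability searches over the directed edges: reverse every edge and search from the analysis node to get the upstream set, and search along the original flow direction to get the downstream set. Toy data and names only, not the library's implementation.

```python
from collections import deque

def reachable(adjacency, start):
    """All nodes reachable from start over the given adjacency lists."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def upstream_downstream(edges, node):
    """edges: list of (from_node, to_node) in flow direction."""
    forward, backward = {}, {}
    for f, t in edges:
        forward.setdefault(f, []).append(t)
        backward.setdefault(t, []).append(f)
    # Upstream: follow edges backwards; downstream: follow the flow.
    return reachable(backward, node), reachable(forward, node)

# Toy network: 1 -> 2 -> 3 -> 4, and 5 -> 3.
edges = [(1, 2), (2, 3), (3, 4), (5, 3)]
up, down = upstream_downstream(edges, 3)
```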

class iobjectspy.analyst.TerminalPoint(point, load)

Bases: object

Terminal point, used for grouping analysis, terminal point contains coordinate information and load

Initialization object

Parameters:
  • point (Point2D) – coordinate point information
  • load (int) – load
load

int – load amount

point

Point2D – coordinate point

set_load(load)

Set load

Parameters:load (int) – load
Returns:self
Return type:TerminalPoint
set_point(point)

Set coordinate point

Parameters:point (Point2D) – coordinate point
Returns:self
Return type:TerminalPoint
class iobjectspy.analyst.FacilityAnalystResult(java_object)

Bases: object

Facility network analysis result class. This class is used to obtain the results of facility network analysis such as finding sources and sinks, upstream and downstream tracking, and finding routes, including result edge ID array, result node ID array, and cost.

cost

float – The cost of the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

-Find source: this value is the cost of the least costly path from the analysis edge or node to the source.
-Find sink: this value is the cost of the least costly path from the analysis edge or node to the sink.
-Upstream tracking: this value is the total cost of the edges included in the upstream of the analysis edge or node.
-Downstream tracking: this value is the total cost of the edges included in the downstream of the analysis edge or node.
-Path analysis: this value is the cost of the least costly path found.
-Upstream path analysis: this value is the cost of the least costly upstream path found.
-Downstream path analysis: this value is the cost of the least costly downstream path found.

edges

list[int] – The edge ID array in the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

-Find source: this value is the edge ID array of the edges included in the least costly path from the analysis edge or node to the source.
-Find sink: this value is the edge ID array of the edges included in the least costly path from the analysis edge or node to the sink.
-Upstream tracking: this value is the edge ID array of the edges included in the upstream of the analysis edge or node.
-Downstream tracking: this value is the edge ID array of the edges included in the downstream of the analysis edge or node.
-Path analysis: this value is the edge ID array of the edges that the least costly path found passes through.
-Upstream path analysis: this value is the edge ID array of the edges that the least costly upstream path found passes through.
-Downstream path analysis: this value is the edge ID array of the edges that the least costly downstream path found passes through.

nodes

list[int] – The node ID array in the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

-Find source: this value is the node ID array of the nodes included in the least costly path from the analysis edge or node to the source.
-Find sink: this value is the node ID array of the nodes included in the least costly path from the analysis edge or node to the sink.
-Upstream tracking: this value is the node ID array of the nodes included in the upstream of the analysis edge or node.
-Downstream tracking: this value is the node ID array of the nodes included in the downstream of the analysis edge or node.
-Path analysis: this value is the node ID array of the nodes that the least costly path found passes through.
-Upstream path analysis: this value is the node ID array of the nodes that the least costly upstream path found passes through.
-Downstream path analysis: this value is the node ID array of the nodes that the least costly downstream path found passes through.

class iobjectspy.analyst.FacilityAnalystSetting

Bases: object

Facility network analysis environment setting class. This class provides all the parameter information needed for facility network analysis. The setting of each parameter of this class directly affects the result of the analysis.

barrier_edge_ids

list[int] – ID list of barrier edge segments

barrier_node_ids

list[int] – ID list of barrier nodes

direction_field

str – flow direction field

edge_id_field

str – The field that marks the edge ID in the network dataset

f_node_id_field

str – The field that marks the starting node ID of the edge in the network dataset

network_dataset

DatasetVector – network dataset

node_id_field

str – The field that identifies the node ID in the network dataset

set_barrier_edge_ids(value)

Set the ID list of barrier edges

Parameters:value (str or list[int]) – ID list of barrier edge
Returns:self
Return type:FacilityAnalystSetting
set_barrier_node_ids(value)

Set the ID list of barrier nodes

Parameters:value (str or list[int]) – ID list of barrier node
Returns:self
Return type:FacilityAnalystSetting
set_direction_field(value)

Set flow direction field

Parameters:value (str) – flow direction field
Returns:self
Return type:FacilityAnalystSetting
set_edge_id_field(value)

Set the field that identifies the edge ID in the network dataset

Parameters:value (str) – The field that marks the edge segment ID in the network dataset
Returns:self
Return type:FacilityAnalystSetting
set_f_node_id_field(value)

Set the field to mark the starting node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the starting node ID of the edge in the network dataset
Returns:self
Return type:FacilityAnalystSetting
set_network_dataset(dt)

Set up network dataset

Parameters:dt (DatasetVector or str) – network dataset
Returns:self
Return type:FacilityAnalystSetting
set_node_id_field(value)

Set the field of the network dataset to identify the node ID

Parameters:value (str) – The field that identifies the node ID in the network dataset
Returns:self
Return type:FacilityAnalystSetting
set_t_node_id_field(value)

Set the field that marks the ending node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the ending node ID of the edge in the network dataset
Returns:self
Return type:FacilityAnalystSetting
set_tolerance(value)

Set node tolerance

Parameters:value (float) – node tolerance
Returns:self
Return type:FacilityAnalystSetting
set_weight_fields(value)

Set weight field

Parameters:value (list[WeightFieldInfo] or tuple[WeightFieldInfo]) – weight field
Returns:self
Return type:FacilityAnalystSetting
t_node_id_field

str – the field that marks the ending node ID of the edge in the network dataset

tolerance

float – node tolerance

weight_fields

list[WeightFieldInfo] – weight field
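The set_* methods of this class each return self, so an analysis environment can be configured as one fluent chain. A minimal stand-in class (hypothetical, not the real FacilityAnalystSetting) showing the same builder-style pattern:

```python
class SettingSketch:
    """Toy stand-in for a chained-setter settings object."""
    def __init__(self):
        self.node_id_field = None
        self.edge_id_field = None
        self.tolerance = 0.0

    def set_node_id_field(self, value):
        self.node_id_field = value
        return self  # returning self enables chaining

    def set_edge_id_field(self, value):
        self.edge_id_field = value
        return self

    def set_tolerance(self, value):
        self.tolerance = float(value)
        return self

# One fluent chain configures all parameters (field names are invented).
setting = (SettingSketch()
           .set_node_id_field('SmNodeID')
           .set_edge_id_field('SmEdgeID')
           .set_tolerance(1e-6))
```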

class iobjectspy.analyst.SideType

Bases: iobjectspy._jsuperpy.enums.JEnum

Indicates whether a position is on the left side, the right side, or on the road itself. Used for driving guidance.

Variables:
LEFT = 1
MIDDLE = 0
NONE = -1
RIGHT = 2
class iobjectspy.analyst.TurnType

Bases: iobjectspy._jsuperpy.enums.JEnum

Turning direction for driving guidance

Variables:
AHEAD = 3
BACK = 4
END = 0
LEFT = 1
NONE = 255
RIGHT = 2
class iobjectspy.analyst.ServiceAreaType

Bases: iobjectspy._jsuperpy.enums.JEnum

Service area type. Used for service area analysis.

Variables:
COMPLEXAREA = 1
SIMPLEAREA = 0
class iobjectspy.analyst.VehicleInfo

Bases: object

Vehicle information class. Stores information such as the vehicle's maximum cost and maximum load.

area_ratio

float – area coefficient of logistics analysis

cost

float – the maximum cost of the vehicle

end_time

datetime.datetime – the latest vehicle return time

load_weights

list[float] – the load of the vehicle

se_node

int – start and end node ID in one-way route of logistics analysis

se_point

Point2D – start and end point coordinates in one-way route of logistics analysis

set_area_ratio(value)

Set the regional coefficient of logistics analysis. It is only valid when VRPAnalystType is VRPAnalystType.AREAANALYST.

Parameters:value (float) – Area coefficient of logistics analysis
Returns:self
Return type:VehicleInfo
set_cost(value)

Set the maximum cost of the vehicle

Parameters:value (float) – The maximum cost value of the vehicle.
Returns:self
Return type:VehicleInfo
set_end_time(value)

Set the latest vehicle return time

Parameters:value (datetime.datetime or int or str) – the latest return time of the vehicle
Returns:self
Return type:VehicleInfo
set_load_weights(value)

Set the load capacity of the vehicle. The load capacity can be multi-dimensional; for example, the maximum carrying weight and the maximum carrying volume can be set at the same time. The total load carried by the vehicle on each route in the analysis must not exceed these values.

Parameters:value (list[float]) – the load of the vehicle
Returns:self
Return type:VehicleInfo
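A hedged sketch of the constraint that a multi-dimensional load capacity implies: the summed demand of a route must fit the vehicle capacity in every dimension. The capacity and demand numbers below are invented for illustration.

```python
def route_fits(vehicle_loads, stop_demands):
    """vehicle_loads: capacity per dimension (e.g. [max weight, max volume]).
    stop_demands: one demand vector per stop on the route.
    True if the route's total demand fits the capacity in every dimension."""
    totals = [sum(dim) for dim in zip(*stop_demands)]
    return all(total <= cap for total, cap in zip(totals, vehicle_loads))

capacity = [1000.0, 8.0]  # e.g. 1000 kg and 8 cubic metres
ok = route_fits(capacity, [[300, 2.5], [450, 3.0]])       # 750 kg, 5.5 m3
too_big = route_fits(capacity, [[600, 5.0], [500, 4.0]])  # 1100 kg exceeds
```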
set_se_node(value)

Set the start and end node ID in the one-way route of logistics analysis.

This setting takes effect only when the route type (VRPDirectionType) is VRPDirectionType.STARTBYCENTER or VRPDirectionType.ENDBYCENTER.

When the route type is: py:attr:VRPDirectionType.STARTBYCENTER, this parameter indicates the final stop position of the vehicle.

When the route type is: py:attr:VRPDirectionType.ENDBYCENTER, this parameter indicates the initial starting position of the vehicle.

Parameters:value (int) – start and end node ID in one-way route of logistics analysis
Returns:self
Return type:VehicleInfo
set_se_point(value)

Set the starting and ending point coordinates in the one-way route of logistics analysis.

This setting takes effect only when the route type (VRPDirectionType) is VRPDirectionType.STARTBYCENTER or VRPDirectionType.ENDBYCENTER.

When the route type is: py:attr:VRPDirectionType.STARTBYCENTER, this parameter indicates the final stop position of the vehicle.

When the route type is: py:attr:VRPDirectionType.ENDBYCENTER, this parameter indicates the initial starting position of the vehicle.

Parameters:value (Point2D) – starting and ending point coordinates in one-way route of logistics analysis
Returns:self
Return type:VehicleInfo
set_start_time(value)

Set the earliest departure time of the vehicle

Parameters:value (datetime.datetime or int or str) – The earliest departure time of the vehicle
Returns:self
Return type:VehicleInfo
start_time

datetime.datetime – earliest vehicle departure time
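
Since every setter above returns self, a VehicleInfo can be configured fluently. A minimal sketch, assuming VehicleInfo can be created with no arguments; all numeric values and times below are illustrative, not defaults:

```python
from iobjectspy.analyst import VehicleInfo

# Hypothetical vehicle for a logistics (VRP) analysis.
vehicle = (VehicleInfo()
           .set_cost(480.0)                       # maximum route cost, e.g. minutes
           .set_load_weights([1500.0, 12.0])      # max weight (kg) and max volume (m3)
           .set_start_time('2022-01-01 08:00:00') # earliest departure
           .set_end_time('2022-01-01 18:00:00'))  # latest return
```

The two-dimensional load_weights value mirrors the documented behavior that load capacity can be multi-dimensional (e.g. weight and volume at the same time).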

class iobjectspy.analyst.NetworkDatasetErrors(java_errors)

Bases: object

The check result of the topology relationship of the network dataset, including the error information of the edge segment of the network dataset and the node error information.

arc_errors

dict[int,int] – edge error information. The key is the SmID of the error edge in the network dataset, and the value is the error type.

node_errors

dict[int,int] – node error information. The key is the SmID of the error node in the network dataset, and the value is the error type.
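
Both mappings can be consumed like ordinary Python dicts. A minimal sketch of summarizing the check result, using a plain dict as a stand-in for arc_errors (the SmIDs and error codes are made up):

```python
# Stand-in for NetworkDatasetErrors.arc_errors: {SmID of bad edge: error type}.
arc_errors = {12: 1, 47: 2, 103: 1}

# Group the offending edge SmIDs by error type for a summary report.
by_type = {}
for smid, err in arc_errors.items():
    by_type.setdefault(err, []).append(smid)

for err, ids in sorted(by_type.items()):
    print('error type %d: edges %s' % (err, sorted(ids)))
```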

class iobjectspy.analyst.GroupAnalystResult(java_object)

Bases: object

Group analysis result class. This class is used to return the results of grouping analysis, including the unallocated distribution point set and the analysis result item set.

error_terminal_point_indexes

list[int] – a collection of unallocated distribution points

groups

list[GroupAnalystResultItem] – Analysis result item collection

class iobjectspy.analyst.GroupAnalystResultItem(java_object)

Bases: object

Group analysis result item class. Records the index of the center point of each group, the set of distribution point indexes contained in the group, the total cost within the group, the set of lines from each distribution point to the center point, and the total load of the group.

center

int – The index of the center point of the grouping result

cost

float – total cost of grouping results

lines

list[GeoLineM] – The collection of lines from each distribution point to the center point of the grouping result

load_sum

float – total load of grouping results

members

list[int] – The index collection of the distribution points of the grouping result

class iobjectspy.analyst.SupplyCenterType

Bases: iobjectspy._jsuperpy.enums.JEnum

Constant type of resource center point in network analysis, mainly used for resource allocation and location selection

Variables:
FIXEDCENTER = 2
NULL = 0
OPTIONALCENTER = 1
class iobjectspy.analyst.SupplyCenter(supply_center_type, center_node_id, max_weight, resource=0)

Bases: object

Resource supply center class. The resource supply center category stores the information of the resource supply center, including the ID, maximum cost and type of the resource supply center.

Initialization object

Parameters:
  • supply_center_type (SupplyCenterType or str) – The type of the resource supply center point: non-center, fixed center or optional center. Fixed centers are used for resource allocation analysis; fixed and optional centers are used for location analysis; non-center points are ignored in both types of network analysis.
  • center_node_id (int) – ID of the resource supply center point.
  • max_weight (float) – The maximum cost of the resource supply center (resistance value)
  • resource (float) – The amount of resources in the resource supply center
center_node_id

int – ID of the resource supply center

max_weight

float – the maximum cost of the resource supply center.

resource

float – the amount of resources in the resource supply center

set_center_node_id(value)

Set the ID of the resource supply center.

Parameters:value (int) – ID of the resource supply center
Returns:self
Return type:SupplyCenter
set_max_weight(value)

Set the maximum cost of the resource supply center. The larger the maximum resistance setting of the center point is, the larger the influence range of the resources provided by the center point is. The maximum resistance value is used to limit the cost from the demand point to the center point. If the cost of the demand point (edge or node) to this center is greater than the maximum resistance value, the demand point is filtered out. The maximum resistance value can be edited.

Parameters:value (float) – The maximum cost of the resource supply center (resistance value)
Returns:self
Return type:SupplyCenter
set_resource(value)

Set the resource amount of the resource supply center

Parameters:value (float) – The amount of resources in the resource supply center
Returns:self
Return type:SupplyCenter
set_supply_center_type(value)

Set the type of resource supply center point in network analysis

Parameters:value (SupplyCenterType or str) – The type of the resource supply center point: non-center, fixed center or optional center. Fixed centers are used for resource allocation analysis; fixed and optional centers are used for location analysis; non-center points are ignored in both types of network analysis.
Returns:self
Return type:SupplyCenter
supply_center_type

SupplyCenterType – The type of resource supply center point in network analysis
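
A minimal construction sketch for a location analysis using the documented constructor signature. The node IDs, costs and resource amounts are illustrative, and passing the type as a string is assumed to accept the enum member names listed under SupplyCenterType:

```python
from iobjectspy.analyst import SupplyCenter

# A fixed center that must appear in the siting result, and an optional one
# the analysis may choose. IDs, costs and resources are illustrative.
fixed = SupplyCenter('FIXEDCENTER', center_node_id=12, max_weight=500.0, resource=1000.0)
optional = SupplyCenter('OPTIONALCENTER', center_node_id=35, max_weight=500.0, resource=800.0)
centers = [fixed, optional]
```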

class iobjectspy.analyst.SupplyResult(java_object)

Bases: object

Resource supply center point result class.

This category provides the results of resource supply, including the type of resource supply center, ID, maximum resistance, number of demand points, average cost, and total cost.

average_weight

float – average cost, that is, total cost divided by the number of points required

center_node_id

int – ID of the resource supply center

demand_count

int – the number of demand nodes served by the resource supply center

max_weight

float – The maximum cost (resistance value) of the resource supply center. The maximum resistance value is used to limit the cost from the demand point to the center point. If the cost of the demand point (node) to this center is greater than the maximum resistance value, the demand point is filtered out. The maximum resistance value can be edited.

total_weight

float – total cost. When the location analysis allocates resources from the resource supply centers, the total cost is the sum of the costs from the resource supply center to all the demand nodes it serves; conversely, when resources are not allocated from the centers, the total cost is the sum of the costs from all demand nodes served by the center to the center.

type

SupplyCenterType – The type of the resource supply center
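
The max_weight rule above (a demand point whose cost to the center exceeds the maximum resistance is filtered out) and the average_weight definition (total cost divided by the number of demand points) can be sketched in plain Python, with made-up cost values:

```python
# Illustrative costs from each demand node to one supply center.
costs = {101: 120.0, 102: 340.0, 103: 560.0, 104: 90.0}
max_weight = 500.0  # the center's maximum resistance value

# Demand nodes whose cost exceeds max_weight are filtered out.
served = {nid: c for nid, c in costs.items() if c <= max_weight}

total_weight = sum(served.values())
demand_count = len(served)
average_weight = total_weight / demand_count  # total cost / number of demand points

print(sorted(served))  # node 103 (cost 560.0) is filtered out
```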

class iobjectspy.analyst.DemandResult(java_object)

Bases: object

Demand result class. This class is used to return relevant information about demand results, including demand node ID and resource supply center ID.

actual_resource

float – The actual amount of resources allocated, only valid for resource allocation.

demand_id

int – When is_edge is True, this is the ID of the demand edge; when it is False, it is the ID of the demand node.

is_edge

bool – Whether the demand result is an edge; if it is not an edge, the demand result is a node. Only meaningful for resource allocation; otherwise False.

supply_center_node_id

int – Resource Supply Center ID

class iobjectspy.analyst.LocationAnalystResult(java_object)

Bases: object

The location analysis result class.

demand_results

list[DemandResult] – Demand result object array

supply_results

list[SupplyResult] – resource supply result object array

class iobjectspy.analyst.RouteType

Bases: iobjectspy._jsuperpy.enums.JEnum

The analysis mode of the best path analysis is used for SSC-based best path analysis.

Variables:
MINLENGTH = 2
NOHIGHWAY = 3
RECOMMEND = 0
class iobjectspy.analyst.NetworkSplitMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Break mode used when constructing a network dataset. Controls how line-line and point-line breaks are processed when building a network dataset.

Variables:
LINE_SPLIT_BY_POINT = 1
LINE_SPLIT_BY_POINT_AND_LINE = 2
NO_SPLIT = 0
TOPOLOGY_PROCESSING = 3
class iobjectspy.analyst.NetworkSplitMode3D

Bases: iobjectspy._jsuperpy.enums.JEnum

Break mode used when constructing a 3D network dataset. Controls how line-line and point-line breaks are processed when building a network dataset.

Variables:
LINE_SPLIT_BY_POINT = 1
LINE_SPLIT_BY_POINT_AND_LINE = 2
NO_SPLIT = 0
iobjectspy.analyst.build_network_dataset_known_relation_3d(line, point, edge_id_field, from_node_id_field, to_node_id_field, node_id_field, out_data=None, out_dataset_name=None, progress=None)

Constructs a network dataset from point and line data using existing fields that express the arc-node topology. Use this method when the line and point objects of existing datasets correspond directly to the arcs and nodes of the network to be constructed and already carry the spatial topology between them, that is, the line dataset contains fields for the edge ID and for the edge's start node ID and end node ID, and the point dataset contains a node ID field.

After a network dataset is successfully constructed with this method, the number of result objects is consistent with the number of objects in the source data: each line object is written as an arc and each point object as a node, and all non-system fields of the point and line datasets are retained in the result dataset.

For example, consider pipeline and pipe point data collected to build a pipe network, where pipelines and pipe points are each identified by a unique fixed code. One characteristic of a pipe network is that pipe points are located only at the two ends of pipelines, so the pipe points correspond to all nodes of the network to be built and the pipelines correspond to all of its arcs, with no need to break lines where pipelines intersect. The pipeline data records the pipe points at both ends of each pipeline object, that is, the start pipe point code and the end pipe point code, which means the pipeline and pipe point data already contain the spatial topology information between the two, so this method is suitable for building the network dataset.

Note that the edge ID, edge start node ID, edge end node ID and node ID fields of a network dataset constructed in this way are the fields specified when calling this method, no longer the system fields SmEdgeID, SmFNode, SmTNode and SmNodeID. The corresponding field names can be obtained through the DatasetVector.get_field_name_by_sign() method.

Parameters:
  • line (str or DatasetVector) – 3D line dataset used to build network dataset
  • point (str or DatasetVector) – 3D point dataset used to construct network dataset
  • edge_id_field (str) – The field representing the edge ID in the specified line dataset. If it is null or an empty string, or the specified field does not exist, SmID is automatically used as the edge ID. Only 16-bit integer and 32-bit integer fields are supported.
  • from_node_id_field (str) – The field representing the starting node ID of the edge in the specified line dataset. Only 16-bit integer and 32-bit integer fields are supported.
  • to_node_id_field (str) – The field in the specified line dataset that represents the end node ID of the edge. Only 16-bit integer and 32-bit integer fields are supported.
  • node_id_field (str) – The field representing the node ID in the specified point dataset. Only 16-bit integer and 32-bit integer fields are supported.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource object that holds the result network dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result network dataset

Return type:

DatasetVector
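
A usage sketch in the style of the examples at the top of this module. The datasource paths, dataset names and field names ('PIPE_ID', 'F_CODE', 'T_CODE', 'POINT_ID') are illustrative assumptions, not values from a real dataset:

```python
from iobjectspy.analyst import build_network_dataset_known_relation_3d

# Build a pipe network from 3D line/point data whose fields already encode
# the arc-node topology. All names below are illustrative.
result = build_network_dataset_known_relation_3d(
    'E:/pipe_data.udb/pipeline3d',   # 3D line dataset
    'E:/pipe_data.udb/pipepoint3d',  # 3D point dataset
    edge_id_field='PIPE_ID',
    from_node_id_field='F_CODE',
    to_node_id_field='T_CODE',
    node_id_field='POINT_ID',
    out_data='E:/pipe_out.udb',
    out_dataset_name='pipe_network_3d')
```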

iobjectspy.analyst.build_facility_network_directions_3d(network_dataset, source_ids, sink_ids, direction_field='Direction', node_type_field='NodeType', progress=None)

Create flow directions for the network dataset based on the locations of the sources and sinks in the specified network dataset. A network dataset with flow directions can then be used for the various facility network analyses. A facility network is a directed network, so after creating a network dataset a flow direction must be created for it before it can be used for facility network path analysis, connectivity analysis, upstream and downstream tracing, and so on.

Flow direction refers to the direction of resource flow in the network. The flow direction in the network is determined by the source and sink: resources always flow from the source to the sink. This method creates a flow direction for the network dataset through the given source and sink,
as well as the facility network analysis parameter settings. After the flow direction is successfully created, two aspects of information will be written in the network dataset: flow direction and node type.
  • Flow direction

    The flow direction information will be written into the flow direction field of the subline dataset of the network dataset, and the field will be created if it does not exist.

    There are four values in the flow direction field: 0,1,2,3, the meaning of which is shown in the figure below. Take line AB as an example:

    0 means the flow direction is the same as the digitization direction. The digitization direction of the line segment AB is A–>B, and A is the source point, so the flow direction of AB is from A to B, which is the same as its digitization direction.

    1 means the flow direction is opposite to the digitization direction. The digitization direction of the line segment AB is A–>B, and A is the sink, so the flow direction of AB is from B to A, which is the opposite of its digitization direction.

    2 stands for invalid direction, also called uncertain flow direction. Both A and B are source points, so resources can flow from A to B, and from B to A, which constitutes an invalid flow.

    3 stands for disconnected edges, also called uninitialized direction. The line segment AB is not connected to the node where the source and sink are located, it is called a disconnected edge.

    (Figure: BuildFacilityNetworkDirections_1.png – flow direction field values)
  • Node type

    After establishing the flow direction, the system will also write the node type information into the node type field of the sub-point dataset of the specified network dataset. Node types are divided into source, sink, and ordinary nodes. The following table lists the value and meaning of the node type field:

    (Figure: BuildFacilityNetworkDirections_2.png – node type field values)
Parameters:
  • network_dataset (DatasetVector or str) – The 3D network dataset of the flow direction to be created. The 3D network dataset must be modifiable.
  • source_ids (list[int] or tuple[int]) – The network node ID array corresponding to the sources. Sources and sinks together establish the flow direction of the network dataset, which is determined by their locations.
  • sink_ids (list[int] or tuple[int]) – The network node ID array corresponding to the sinks. Sources and sinks together establish the flow direction of the network dataset, which is determined by their locations.
  • direction_field (str) – flow direction field, used to save the flow direction information of the network dataset
  • node_type_field (str) – The name of the node type field, used to save the node type information. Node types are divided into source nodes, sink nodes and ordinary nodes. This field belongs to the network node dataset; if it does not exist, it is created.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

True if created successfully, otherwise False

Return type:

bool
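
The four flow-direction codes written to the direction field can be summarized as a small lookup table (pure Python; the values and meanings are taken directly from the description above):

```python
# Meaning of the values written to the flow direction field.
FLOW_DIRECTION = {
    0: 'same as digitization direction',
    1: 'opposite to digitization direction',
    2: 'invalid (uncertain) direction',
    3: 'disconnected edge (uninitialized)',
}

def describe_edge(direction_value):
    """Return a human-readable description of an edge's flow direction code."""
    return FLOW_DIRECTION.get(direction_value, 'unknown code')

print(describe_edge(2))
```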

iobjectspy.analyst.build_network_dataset_3d(line, point=None, split_mode='NO_SPLIT', tolerance=0.0, line_saved_fields=None, point_saved_fields=None, out_data=None, out_dataset_name=None, progress=None)

The network dataset is the data basis for network analysis. The 3D network dataset consists of two sub-dataset (a 3D line dataset and a 3D point dataset), which store the arcs and nodes of the network model respectively. It also describes the spatial topological relationship between arcs and arcs, arcs and nodes, and nodes and nodes.

This method constructs a network dataset from a single line dataset, or from a single line dataset and a single point dataset. If the user's data already carries the correct network relationship, a network dataset can be built directly with build_network_dataset_known_relation_3d().

For the constructed network dataset, you can use validate_network_dataset_3d() to check whether the network topology is correct.

Parameters:
  • line (DatasetVector) – the line dataset used to construct the network dataset
  • point (DatasetVector) – The point dataset used to construct the network dataset.
  • split_mode (NetworkSplitMode3D) – break mode, default is no break
  • tolerance (float) – node tolerance
  • line_saved_fields (str or list[str]) – Fields that need to be reserved in the line dataset
  • point_saved_fields (str or list[str]) – The fields that need to be reserved in the point dataset.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource object that holds the result network dataset
  • out_dataset_name (str) – result dataset name
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

result 3D network dataset

Return type:

DatasetVector
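
A usage sketch with illustrative paths and names; the split mode string follows the NetworkSplitMode3D members listed above, and the tolerance value is an arbitrary example:

```python
from iobjectspy.analyst import build_network_dataset_3d

# Build a 3D network dataset, breaking lines at points and at line crossings.
network = build_network_dataset_3d(
    'E:/data3d.udb/lines3d',
    point='E:/data3d.udb/points3d',
    split_mode='LINE_SPLIT_BY_POINT_AND_LINE',
    tolerance=0.001,
    line_saved_fields=['NAME'],       # illustrative field to carry over
    out_data='E:/network_out.udb',
    out_dataset_name='network_3d')
```

Afterwards the topology can be checked with validate_network_dataset_3d(), as noted above.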

class iobjectspy.analyst.FacilityAnalystResult3D(java_object)

Bases: object

Facility network analysis result class. This class is used to obtain the results of facility network analysis such as finding sources and sinks, upstream and downstream tracking, and finding routes, including result edge ID array, result node ID array, and cost.

cost

float – the cost of the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

- Find source: the cost of the least-cost path from the analyzed arc or node to the source.
- Find sink: the cost of the least-cost path from the analyzed arc or node to the sink.
- Upstream tracing: the total cost of the arcs included upstream of the analyzed arc or node.
- Downstream tracing: the total cost of the arcs included downstream of the analyzed arc or node.

edges

list[int] – The edge ID array in the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

- Find source: the edge ID array of the arcs included in the least-cost path from the analyzed arc or node to the source.
- Find sink: the edge ID array of the arcs included in the least-cost path from the analyzed arc or node to the sink.
- Upstream tracing: the edge ID array of the arcs included upstream of the analyzed arc or node.
- Downstream tracing: the edge ID array of the arcs included downstream of the analyzed arc or node.

nodes

list[int] – Node ID array in the facility network analysis result. For different facility network analysis functions, the meaning of the return value of this method is different:

- Find source: the node ID array of the nodes included in the least-cost path from the analyzed arc or node to the source.
- Find sink: the node ID array of the nodes included in the least-cost path from the analyzed arc or node to the sink.
- Upstream tracing: the node ID array of the nodes included upstream of the analyzed arc or node.
- Downstream tracing: the node ID array of the nodes included downstream of the analyzed arc or node.

class iobjectspy.analyst.FacilityAnalystSetting3D

Bases: object

Facility network analysis environment setting class. This class provides all the parameter information needed for facility network analysis. The setting of each parameter directly affects the result of the analysis.

barrier_edge_ids

list[int] – Barrier edge ID list

barrier_node_ids

list[int] – ID list of barrier nodes

direction_field

str – flow direction field

edge_id_field

str – The field that marks the edge ID in the network dataset

f_node_id_field

str – The field that marks the starting node ID of the arc in the network dataset

network_dataset

DatasetVector – network dataset

node_id_field

str – The field that identifies the node ID in the network dataset

set_barrier_edge_ids(value)

Set the ID list of barrier arcs

Parameters:value (str or list[int]) – ID list of barrier arc
Returns:self
Return type:FacilityAnalystSetting3D
set_barrier_node_ids(value)

Set the ID list of barrier nodes

Parameters:value (str or list[int]) – ID list of barrier node
Returns:self
Return type:FacilityAnalystSetting3D
set_direction_field(value)

Set flow direction field

Parameters:value (str) – flow direction field
Returns:self
Return type:FacilityAnalystSetting3D
set_edge_id_field(value)

Set the field that identifies the edge ID in the network dataset

Parameters:value (str) – The field that marks the arc segment ID in the network dataset
Returns:self
Return type:FacilityAnalystSetting3D
set_f_node_id_field(value)

Set the field to mark the starting node ID of the arc in the network dataset

Parameters:value (str) – The field that marks the starting node ID of the arc in the network dataset
Returns:self
Return type:FacilityAnalystSetting3D
set_network_dataset(dt)

Set up network dataset

Parameters:dt (DatasetVector or str) – network dataset
Returns:self
Return type:FacilityAnalystSetting3D
set_node_id_field(value)

Set the field of the network dataset to identify the node ID

Parameters:value (str) – The field that identifies the node ID in the network dataset
Returns:self
Return type:FacilityAnalystSetting3D
set_t_node_id_field(value)

Set the field that marks the end node ID of the arc in the network dataset

Parameters:value (str) – The field that marks the end node ID of the arc in the network dataset
Returns:self
Return type:FacilityAnalystSetting3D
set_tolerance(value)

Set node tolerance

Parameters:value (float) – node tolerance
Returns:self
Return type:FacilityAnalystSetting3D
set_weight_fields(value)

Set weight field

Parameters:value (list[WeightFieldInfo] or tuple[WeightFieldInfo]) – weight field
Returns:self
Return type:FacilityAnalystSetting3D
t_node_id_field

str – The field that marks the end node ID of the arc in the network dataset

tolerance

float – node tolerance

weight_fields

list[WeightFieldInfo] – weight field

class iobjectspy.analyst.FacilityAnalyst3D(analyst_setting)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Facility network analysis class. One of the network analysis functions, mainly used for various connectivity and tracing analyses.

The facility network is a network with directions. That is, the medium (water flow, current, etc.) will flow in the network according to the rules of the network itself.

The premise of facility network analysis is that a network dataset suitable for facility network analysis has been established. The basis is an ordinary network dataset; on top of it, use build_facility_network_directions_3d() to write the data unique to facility network analysis, that is, to establish the flow direction for the network dataset. The network dataset then satisfies the most basic conditions for facility network analysis, and the various facility network analyses can be carried out.

Parameters:analyst_setting (FacilityAnalystSetting3D) – facility network analysis environment settings
analyst_setting

FacilityAnalystSetting3D – Facility Network Analysis Environment

burst_analyse(source_nodes, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Two-way burst pipe analysis. Given the edge or node where the pipe burst occurred, find the upstream nodes that directly affect the burst location and the downstream nodes directly affected by it.

Parameters:
  • source_nodes (list[int] or tuple[int]) – The specified facility node ID array. Can not be empty.
  • edge_or_node_id (int) – The specified edge ID or node ID, and the location of the tube burst.
  • is_uncertain_direction_valid (bool) – Specifies whether an uncertain flow direction is valid. True means uncertain flow directions are valid and the analysis continues through them; False means they are invalid and the search stops in that direction. A flow direction field value of 2 indicates that the arc's flow direction is uncertain.
  • is_edge_id (bool) – Whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID.
Returns:

Burst pipe analysis result

Return type:

BurstAnalystResult3D
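
Putting the pieces together, a hedged end-to-end sketch: configure the analysis environment, create the analyst, and run a burst analysis. It assumes FacilityAnalystSetting3D can be created with no arguments; the dataset path, field names and IDs are illustrative:

```python
from iobjectspy.analyst import FacilityAnalystSetting3D, FacilityAnalyst3D

# All paths, field names and IDs below are illustrative assumptions.
setting = (FacilityAnalystSetting3D()
           .set_network_dataset('E:/pipe_out.udb/pipe_network_3d')
           .set_direction_field('Direction')
           .set_node_id_field('POINT_ID')
           .set_edge_id_field('PIPE_ID')
           .set_f_node_id_field('F_CODE')
           .set_t_node_id_field('T_CODE'))

analyst = FacilityAnalyst3D(setting)

# Valve nodes act as facility nodes; edge 88 is where the burst occurred.
result = analyst.burst_analyse(source_nodes=[2, 7, 8, 9],
                               edge_or_node_id=88,
                               is_uncertain_direction_valid=False,
                               is_edge_id=True)
```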

find_critical_facilities_down(source_node_ids, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Downstream key facility search, that is, find the key downstream facility nodes of a given arc, and return the ID array of the key facility nodes and the array of downstream edge IDs affected by the given arc. In downstream key facility search, the nodes of the facility network are divided into two types: ordinary nodes and facility nodes. Facility nodes are considered to be nodes that can affect network connectivity, for example a valve in a water supply network; ordinary nodes are nodes that do not affect network connectivity, such as fire hydrants or tee junctions in a water supply network.

The downstream key facility search filters the key nodes out of the given facility nodes. These key nodes are the minimum set of nodes maintaining connectivity between the analyzed arc and its downstream, that is, after closing these key nodes, the analyzed arc can no longer communicate with its downstream. The result also includes the union of the downstream arcs affected by the given arc.

The search method of key facility nodes can be summarized as follows: starting from the analysis arc and searching its downstream, the first facility node encountered in each direction is the key facility node to be searched.

Parameters:
  • source_node_ids (list[int] or tuple[int]) – The specified facility node ID array. Can not be empty.
  • edge_or_node_id (int) – Specified analysis edge ID or node ID
  • is_uncertain_direction_valid (bool) – Specifies whether an uncertain flow direction is valid. True means uncertain flow directions are valid and the analysis continues through them; False means they are invalid and the search stops in that direction. A flow direction field value of 2 indicates that the arc's flow direction is uncertain.
  • is_edge_id (bool) – Whether edge_or_node_id represents the edge ID, True represents the edge ID, False represents the node ID.
Returns:

facility network analysis result

Return type:

FacilityAnalystResult3D

find_critical_facilities_up(source_node_ids, edge_or_node_id, is_uncertain_direction_valid=False, is_edge_id=True)

Upstream key facility search, that is, find the key facility nodes upstream of a given arc, and return the key node ID array and the array of their downstream edge IDs. In upstream key facility search, the nodes of the facility network are divided into two types: ordinary nodes and facility nodes. Facility nodes are considered to be nodes that can affect network connectivity, for example a valve in a water supply network; ordinary nodes are nodes that do not affect network connectivity, such as fire hydrants or tee junctions in a water supply network. Upstream key facility search requires specifying the facility nodes and the analysis node, where the analysis node can be either a facility node or an ordinary node.

The upstream key facility search filters the key nodes out of the given facility nodes. These key nodes are the minimum set of nodes maintaining connectivity between the analyzed arc and its upstream, that is, after closing these key nodes, the analyzed arc can no longer communicate with its upstream. The result also includes the union of the downstream arcs of the key nodes found.

The search for key facility nodes can be summarized as follows: starting from the analyzed arc and tracing back upstream, the first facility node encountered in each direction is a key facility node. As shown in the figure below, starting from the analyzed arc (red), the key facility nodes found are 2, 8, 9 and 7. Nodes 4 and 11 are not the first facility node encountered in their backtracking direction and are therefore not key facility nodes. For illustration only the upstream part of the analyzed arc is shown, but note that the analysis result also contains the downstream arcs of key facility nodes 2, 8, 9 and 7.

(Figure: findCriticalFacilitiesUp.png)
  • Applications

After a pipe burst occurs in a water supply network, all valves can be treated as facility nodes and the burst pipe section or pipe point as the analysis arc or node. Upstream key facility search then quickly finds the minimum set of upstream valves that need to be closed; after closing them, the burst section or point no longer communicates with its upstream, preventing further outflow of water, aggravation of the disaster and waste of resources. The analysis also returns the union of the downstream arcs of the valves to be closed, that is, the range affected once the valves are closed, which can be used to determine the water-outage area and to notify users and take emergency measures in time.

[Figure: FindClosestFacilityUp.png]
Parameters:
  • source_node_ids (list[int] or tuple[int]) – The specified facility node ID array. Cannot be empty.
  • edge_or_node_id (int) – analysis edge ID or node ID
  • is_uncertain_direction_valid (bool) – Specifies whether uncertain flow directions are valid. True means an uncertain flow direction is valid and the analysis continues when one is encountered; False means it is invalid and the search stops in that direction when an uncertain flow direction is encountered. A flow-direction field value of 2 indicates that the edge's flow direction is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents an edge ID. True means an edge ID, False means a node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult3D
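To make the workflow concrete, a minimal usage sketch follows. Everything here is an illustrative assumption rather than part of this reference: the dataset path and name, the node and edge IDs, the method name find_critical_facilities_up (inferred from the figure file name), and the FacilityAnalystSetting3D constructor accepting a network dataset (by analogy with TransportationAnalystSetting3D):

```python
>>> # hypothetical 3D pipe network; all names and IDs are illustrative
>>> setting = FacilityAnalystSetting3D('E:/data.udb/pipe_network3d')
>>> analyst = FacilityAnalyst3D(setting)
>>> analyst.load()
>>> # valves as facility nodes, edge 25 as the analysis edge
>>> result = analyst.find_critical_facilities_up([2, 4, 7, 8, 9, 11], 25, is_edge_id=True)
```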

find_sink(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Finds the sink for a given node ID or edge ID: starting from the given node (or edge), finds, according to the flow direction, the downstream sink into which the node drains, and returns the edges, nodes and cost included in the minimum-cost path from the given node to that sink. If there are multiple sinks in the network, the farthest sink is searched, i.e. the sink whose minimum-cost path from the given node has the largest cost. To facilitate understanding, the process can be divided into three steps:

  1. Starting from a given node, according to the flow direction, find all the sink points downstream of the node;
  2. Analyze the minimum cost path from a given node to each sink and calculate the cost;
  3. Select the path corresponding to the maximum cost calculated in the previous step as the result, and return its edge ID array, node ID array and cost.

Note: The node ID array in the analysis result does not include the analysis node itself.

The figure below shows a simple facility network; arrows mark the flow direction and the weights are labeled next to the edges. Performing sink search for analysis node D: searching downstream from node D according to the flow direction, there are 4 sinks in total, and the minimum-cost paths from node D to each sink are EHLG, EHLK, EHMS and EHMQR. From the network resistance, i.e. the edge weights, the cost of the EHMQR path is the largest, 16.6, so node R is the sink found.

[Figure: FindSink.png]
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether uncertain flow directions are valid. True means an uncertain flow direction is valid and the analysis continues when one is encountered; False means it is invalid and the search stops in that direction when an uncertain flow direction is encountered. A flow-direction field value of 2 indicates that the edge's flow direction is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents an edge ID. True means an edge ID, False means a node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult3D
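The three steps above might be exercised as follows; the dataset path, the node ID and the weight name 'length' are illustrative assumptions:

```python
>>> analyst = FacilityAnalyst3D(FacilityAnalystSetting3D('E:/data.udb/pipe_network3d'))
>>> analyst.load()
>>> # find the farthest sink downstream of node 4, using a hypothetical 'length' weight
>>> result = analyst.find_sink(4, weight_name='length', is_edge_id=False)
```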

find_source(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Finds the source for a given node ID or edge ID: starting from the given node (or edge), traces, according to the flow direction, the network source node (i.e. source point) that flows into the node, and returns the edges, nodes and cost included in the minimum-cost path from that source to the given node.

If there are multiple sources in the network, the farthest source is searched, i.e. the source whose minimum-cost path to the given node has the largest cost. To facilitate understanding, the process can be divided into three steps:

  1. Starting from a given node, according to the flow direction, find all the source points upstream of the node;
  2. Analyze the minimum cost path for each source to reach a given node and calculate the cost;
  3. Select the path corresponding to the maximum cost calculated in the previous step as the result, and return its edge ID array, node ID array and cost.

Note: The node ID array in the analysis result does not include the analysis node itself.

The figure below shows a simple facility network; arrows mark the flow direction and the weights are labeled next to the edges. Performing source search for analysis node M: tracing upstream from node M according to the flow direction, there are 7 sources in total, and the minimum-cost paths from each source to node M are CHM, AEHM, BDEHM, FDEHM, JNM, INM and PNM. From the network resistance, i.e. the edge weights, the cost of the BDEHM path is the largest, 18.4, so node B is the source found.

[Figure: FindSource.png]
Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether uncertain flow directions are valid. True means an uncertain flow direction is valid and the analysis continues when one is encountered; False means it is invalid and the search stops in that direction when an uncertain flow direction is encountered. A flow-direction field value of 2 indicates that the edge's flow direction is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents an edge ID. True means an edge ID, False means a node ID
Returns:

facility network analysis result

Return type:

FacilityAnalystResult3D
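A matching usage sketch for source search; the dataset path, the node ID and the weight name 'length' are illustrative assumptions:

```python
>>> analyst = FacilityAnalyst3D(FacilityAnalystSetting3D('E:/data.udb/pipe_network3d'))
>>> analyst.load()
>>> # find the farthest source upstream of node 12, using a hypothetical 'length' weight
>>> result = analyst.find_source(12, weight_name='length', is_edge_id=False)
```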

is_load()

Determine whether the network dataset model is loaded.

Returns:Returns True if the network dataset model has been loaded, otherwise False
Return type:bool
load()

Load the facility network model according to the facility network analysis environment settings.

Note that in the following two situations, the load method must be called again to reload the network model before analyzing:

-The parameters of the facility network analysis environment setting object have been modified. The method must be called again, otherwise the modifications will not take effect and the analysis results will be wrong.
-The network dataset used has been modified in any way, including modifying its data or replacing the dataset. The network model must be reloaded, otherwise the analysis may fail.

Returns:Indicates whether the facility network model was loaded successfully. Returns True on success, otherwise False.
Return type:bool
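The reload rule above can be sketched as follows; the analyst and setting objects and the edge ID are illustrative:

```python
>>> analyst.set_analyst_setting(setting)  # environment parameters were modified
>>> if analyst.load():                    # reload, or the changes do not take effect
...     result = analyst.trace_down(10)   # analyze only after a successful reload
```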
set_analyst_setting(value)

Set up the environment for facility network analysis.

The facility network analysis environment parameters directly affect the analysis results. The parameters required include: the dataset used for analysis (a network dataset with flow direction established, or with both flow direction and level established; that is, the network dataset specified by FacilityAnalystSetting3D must contain flow direction, or flow direction and level, information), the node ID field, the edge ID field, the edge start-node ID field, the edge end-node ID field, weight information, the distance tolerance from points to edges, barrier nodes, barrier edges, flow direction, and so on.

Parameters:value (FacilityAnalystSetting3D) – facility network analysis environment parameter
Returns:self
Return type:FacilityAnalyst3D
trace_down(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Performs downstream tracing based on the given edge ID or node ID, that is, finds the downstream of a given edge (or node) and returns the edges, nodes and total cost included in the downstream. Downstream tracing starts from a given node (or edge) and finds its downstream according to the flow direction. This method finds the downstream of a given edge; the analysis result is the edges, nodes and cost flowing through the entire downstream.

Downstream tracing is often used to analyze the scope of influence. For example:

-After a water supply pipeline bursts, downstream tracing finds all pipelines downstream of the accident location; a spatial query then determines the affected water supply area, so that notifications can be issued in time and emergency measures taken, such as dispatching fire trucks or water company vehicles to supply water to the affected area.

-When a location on a river is found to be polluted, downstream tracing can be used to find all downstream river sections that may be affected, as shown in the figure below. Before the analysis, based on the pollutant types and discharge amounts, combined with an appropriate water quality model, determine the downstream sections or locations that will not be polluted before the pollution is removed, and set them as barriers (set in FacilityAnalystSetting). Downstream tracing then stops when it reaches a barrier, which narrows the scope of the analysis. After the possibly affected sections are determined, spatial query and analysis can mark all water-using units and residential areas near the resulting sections, so that notifications are issued in time and emergency measures are taken to prevent the pollution hazard from spreading further.

Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether uncertain flow directions are valid. True means an uncertain flow direction is valid and the analysis continues when one is encountered; False means it is invalid and the search stops in that direction when an uncertain flow direction is encountered. A flow-direction field value of 2 indicates that the edge's flow direction is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents an edge ID. True means an edge ID, False means a node ID
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult3D
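The burst-pipe application above might look like the following sketch; the dataset path and the edge ID are illustrative assumptions:

```python
>>> analyst = FacilityAnalyst3D(FacilityAnalystSetting3D('E:/data.udb/water_network3d'))
>>> analyst.load()
>>> # edge 18 is the burst pipe section; trace everything downstream of it
>>> result = analyst.trace_down(18, is_edge_id=True)
>>> # the result's edges can then drive a spatial query for the affected supply area
```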

trace_up(edge_or_node_id, weight_name=None, is_uncertain_direction_valid=False, is_edge_id=True)

Perform upstream tracking based on a given node ID or edge ID, that is, find the upstream of a given node, and return the arc, node and total cost contained in the upstream.

  • Upstream and downstream

    For a node (or edge) of a facility network, the edges and nodes through which resources in the network flow before finally entering that node (or edge) are called its upstream; the edges and nodes through which resources flowing out of that node (or edge) pass before finally reaching a sink are called its downstream.

    Take the upstream and downstream of the node as an example. The figure below is a schematic diagram of a simple facility network, using arrows to mark the flow of the network.

    According to the flow direction, resources flow through nodes 2, 4, 3, 7 and 8 and edges 10, 9, 3, 4 and 8 before finally entering node 10. These nodes and edges are therefore called the upstream of node 10; the nodes among them are its upstream nodes and the edges its upstream edges. Similarly, resources flowing out of node 10 pass through nodes 9, 11 and 12 and edges 5, 7 and 11 before finally leaving the network. These nodes and edges are therefore called the downstream of node 10; the nodes among them are its downstream nodes and the edges its downstream edges.

    [Figure: UpAndDown.png]
  • Upstream tracking and downstream tracking

    Upstream tracing starts from a given node (or edge) and finds its upstream according to the flow direction; similarly, downstream tracing starts from a given node (or edge) and finds its downstream according to the flow direction. The FacilityAnalyst class provides methods for upstream and downstream tracing starting from a node or an edge. The analysis result is the upstream or downstream edge ID array, the node ID array, and the cost of flowing through the entire upstream or downstream. This method finds the upstream of a given edge.

  • Applications

    A common application of upstream tracing is helping to locate the source of river pollutants. Rivers are not only an important path in the earth's water cycle but also mankind's most important freshwater resource. Once a river is polluted, if the pollution source is not found and eliminated in time, people's drinking water and health may be affected. Because rivers flow from high to low under gravity, pollution at a location suggests possible pollution sources upstream, such as industrial wastewater discharge, domestic sewage discharge, or pesticide and fertilizer runoff. The general steps for tracing the source of river pollutants are:

    -When monitoring data from a water quality monitoring station shows an abnormality, first determine its location, then find the edge or node at (or nearest to) that location on the river network (network dataset) and use it as the starting point for upstream tracing.
    -If the nearest upstream locations with normal water quality monitoring results are known, set those locations (or the river sections they are on) as barriers; this further narrows the scope of the analysis, since the pollution source under investigation cannot lie upstream of a location with normal water quality. With the barriers set, upstream tracing stops once it reaches them and does not continue further upstream.
    -Perform upstream tracing to find all river sections that converge into the section where the abnormality occurred.
    -Use spatial query and analysis to find all possible pollution sources near these river sections, such as chemical plants and garbage treatment plants.
    -Further screen the pollution sources based on the abnormal monitoring data.
    -Analyze the pollutant discharge load of the screened sources and rank them by their likelihood of causing the pollution.
    -Conduct on-site investigation of the likely sources to finally determine the pollution source.

Parameters:
  • edge_or_node_id (int) – node ID or edge ID
  • weight_name (str) – The name of the weight field information object
  • is_uncertain_direction_valid (bool) – Specifies whether uncertain flow directions are valid. True means an uncertain flow direction is valid and the analysis continues when one is encountered; False means it is invalid and the search stops in that direction when an uncertain flow direction is encountered. A flow-direction field value of 2 indicates that the edge's flow direction is uncertain.
  • is_edge_id (bool) – whether edge_or_node_id represents an edge ID. True means an edge ID, False means a node ID
Returns:

Facility network analysis result.

Return type:

FacilityAnalystResult3D
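The pollutant-tracing steps above can be sketched as follows. The dataset path and the IDs are illustrative, and the set_barrier_nodes method on the analysis environment object is an assumption (the text above only says barriers are "set in FacilityAnalystSetting"):

```python
>>> setting = FacilityAnalystSetting3D('E:/data.udb/river_network3d')
>>> setting.set_barrier_nodes([102, 107])  # hypothetical: stations with normal water quality
>>> analyst = FacilityAnalyst3D(setting)
>>> analyst.load()
>>> result = analyst.trace_up(56, is_edge_id=True)  # edge 56: where the anomaly was found
```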

class iobjectspy.analyst.BurstAnalystResult3D(java_object)

Bases: object

Burst analysis result class. Burst analysis returns the critical facilities, ordinary facilities and edges.

critical_nodes

list[int] – Critical facilities that affect the upstream and downstream of the burst location in burst analysis. Critical facilities include two types:

  1. All upstream facilities that directly affect the burst location.
  2. Downstream facilities that are directly affected by the burst location and have outflow (that is, out-degree greater than 0).
edges

list[int] – The edges upstream and downstream that affect, or are affected by, the burst location. These are the edges traversed by a bidirectional search from the burst location to the critical and ordinary facilities.

normal_nodes

list[int] – Ordinary facilities affected by the burst location in burst analysis. Ordinary facilities include three types:

  1. Facilities that are directly affected by the burst location and have no outflow (out-degree 0).
  2. Facilities A (excluding all critical facilities) that are directly affected by the outflow edges of each upstream critical facility, where the affected edges from the upstream critical facility to facility A must share a common part with the affected edges of the upstream and downstream critical facilities.
  3. For each facility A downstream of the burst location that is directly affected by it (critical facilities of type 2 and ordinary facilities of type 1), the facilities B (excluding all critical facilities) upstream of A that directly affect A, where the affected edges from facility A to facility B must share a common part with the affected edges of the upstream and downstream critical facilities.

class iobjectspy.analyst.TransportationAnalyst3D(analyst_setting)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Three-dimensional traffic network analysis class. This class is used to provide traffic network analysis functions based on 3D network dataset. Currently only the best path analysis is provided.

Roads, railways, passages in buildings, mine tunnels, etc. can be simulated with transportation networks. Unlike facility networks, transportation networks are undirected: the circulating medium (pedestrians or transported resources) can decide its own direction, speed and destination. Of course, certain restrictions can still be imposed through traffic rules, such as one-way roads and prohibited roads.

Three-dimensional traffic network analysis is performed on three-dimensional network datasets and is an important part of three-dimensional network analysis; currently, best path analysis is provided. For transportation networks that cannot be clearly displayed on a two-dimensional plane, such as the internal passages of buildings and mine tunnels, a three-dimensional network reflects the spatial topology and the analysis results more faithfully.

The general steps of 3D traffic network analysis:

  1. Construct a three-dimensional network dataset. Choose a suitable network model construction method according to research needs and existing data. SuperMap provides two methods for constructing a three-dimensional network dataset; for details, see build_network_dataset_3d() or build_network_dataset_know_relation_3d().
  2. (Optional) It is recommended to perform a data check on the network dataset used for analysis (validate_network_dataset_3d() ).
  3. Set the 3D traffic network analysis environment (set_analyst_setting method);
  4. Load the network model (load method);
  5. Use the various transportation network analysis methods provided by the TransportationAnalyst3D class to perform corresponding analysis.
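The steps above can be sketched as follows, assuming the 3D network dataset has already been built and checked (steps 1-2); the dataset path and node IDs are illustrative:

```python
>>> setting = TransportationAnalystSetting3D('E:/data.udb/indoor_network3d')
>>> analyst = TransportationAnalyst3D(setting)          # step 3: analysis environment
>>> analyst.load()                                      # step 4: load the network model
>>> param = TransportationAnalystParameter3D().set_nodes([1, 5, 9])
>>> result = analyst.find_path(param)                   # step 5: best path analysis
```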

Initialization object

Parameters:analyst_setting (TransportationAnalystSetting3D) – Traffic network analysis environment setting object
analyst_setting

TransportationAnalystSetting3D – Traffic network analysis environment setting object

find_path(parameter)

Best path analysis. Best path analysis finds the best path through N given points (N greater than or equal to 2) in the network dataset. This best path has the following two characteristics:

-It passes through the N given points in order; that is, the pass points in best path analysis are ordered.
-It has the least cost. The cost is determined by the weight specified in the traffic network analysis parameters. The weight can be length, time, road condition, expense, etc., so the best path can be the shortest path, the fastest path, the path with the best road conditions, the cheapest path, and so on.

There are two ways to specify the passing points to be analyzed:

-Node method: use the TransportationAnalystParameter3D.set_nodes() method of the TransportationAnalystParameter3D class to specify the IDs of the nodes the best path passes through. The pass points in the analysis are then the corresponding network nodes, visited in the order of the node ID array.

-Arbitrary coordinate point method: use the TransportationAnalystParameter3D.set_points() method of the TransportationAnalystParameter3D class to specify the coordinates of the points the best path passes through. The pass points in the analysis are then the corresponding coordinate points, visited in the order of the point collection.

Note: Only one of the two methods can be used, not at the same time.

Parameters:parameter (TransportationAnalystParameter3D) – Transportation network analysis parameter object
Returns:Best path analysis result
Return type:TransportationAnalystResult3D
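The two pass-point modes, sketched below; a loaded TransportationAnalyst3D object named analyst is assumed, and the IDs, coordinates and Point3D constructor arguments are illustrative:

```python
>>> # node mode: pass points are network nodes, visited in the given ID order
>>> param = TransportationAnalystParameter3D().set_nodes([3, 12, 27])
>>> result = analyst.find_path(param)
>>> # arbitrary coordinate point mode: pass points are 3D coordinates
>>> param = TransportationAnalystParameter3D().set_points([Point3D(10, 20, 0), Point3D(35, 40, 3)])
>>> result = analyst.find_path(param)
```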
load()

Load the network model. This method loads the network model according to the environment parameters in the traffic network analysis environment setting (TransportationAnalystSetting3D) object. After any of these parameters are modified, this method must be called again for the settings to take effect in subsequent traffic network analysis.

Returns:Return True if loading is successful, otherwise False
Return type:bool
set_analyst_setting(analyst_setting)

Set the traffic network analysis environment setting object. Before performing any traffic network analysis, the analysis environment must be set first.

Parameters:analyst_setting (TransportationAnalystSetting3D) – Traffic network analysis environment setting object
Returns:self
Return type:TransportationAnalyst3D
class iobjectspy.analyst.TransportationAnalystResult3D(java_object, index)

Bases: object

3D traffic network analysis result class. This class returns the results of the various three-dimensional traffic network analyses, including the route collection, the collections of nodes and edges passed through, the stop collection, the weight collection, and the cost of each stop.

edges

list[int] – Returns the edge collection of the analysis result. Note that the TransportationAnalystParameter3D.set_edges_return() method of the TransportationAnalystParameter3D object must be set to True for the analysis result to include the passing edges; otherwise None is returned.

nodes

list[int] – Returns the node collection of the analysis result. Note that the TransportationAnalystParameter3D.set_nodes_return() method of the TransportationAnalystParameter3D object must be set to True for the analysis result to include the passing nodes; otherwise an empty array is returned.

route

GeoLineM – Returns the route object of the analysis result. Note that the TransportationAnalystParameter3D.set_routes_return() method of the TransportationAnalystParameter3D object must be set to True for the analysis result to include the route object; otherwise None is returned.

stop_indexes

list[int] – Returns the stop indexes. This array reflects the order of the stops after analysis. Note that the TransportationAnalystParameter3D.set_stop_indexes_return() method of the TransportationAnalystParameter3D object must be set to True for the analysis result to include the stop indexes; otherwise an empty array is returned.

-Best path analysis (TransportationAnalyst3D.find_path() method):

-Node mode: for example, if the specified analysis node IDs are 1, 3 and 5, the result path must visit them in the order 1, 3, 5, so the element values are 0, 1, 2, that is, the indexes of the result path's stops in the originally specified node array.

-Coordinate point mode: if the specified analysis coordinate points are Pnt1, Pnt2 and Pnt3, the result path must visit them in the order Pnt1, Pnt2, Pnt3, so the element values are 0, 1, 2, that is, the indexes of the result path's coordinate points in the originally specified point collection.

stop_weights

list[float] – Returns the costs (weights) between stops, with the stops sorted by stop index. A stop here is an analysis node or coordinate point, not every node or coordinate point the path passes through. The order of the stops associated with the returned weights is consistent with the order of the stop index values returned by stop_indexes.

-Best path analysis (TransportationAnalyst3D.find_path() method): assuming that pass points 1, 2 and 3 are specified, the elements are, in order, the cost from 1 to 2 and the cost from 2 to 3.
weight

float – the total cost (weight) spent.

class iobjectspy.analyst.TransportationAnalystParameter3D

Bases: object

Three-dimensional traffic network analysis parameter class.

This class is used to set various parameters required for 3D traffic network analysis, such as the collection of nodes (or arbitrary points) passing by during analysis, weight information, obstacle points and obstacle arcs, and whether the analysis result includes a collection of passing nodes, a collection of arcs, routing objects, etc.

barrier_edges

list[int] – Barrier edge ID list

barrier_nodes

list[int] – Barrier node ID list

barrier_points

list[Point3D] – List of obstacle points

is_edges_return

bool – Does the analysis result include passing arcs

is_nodes_return

bool – Whether the analysis result contains passing nodes

is_routes_return

bool – Whether the analysis result contains a three-dimensional line (GeoLine3D) object

is_stop_indexes_return

bool – Whether to include the site index in the analysis result

nodes

list[int] – Analysis pass nodes

points

list[Point3D] – Pass points during analysis

set_barrier_edges(edges)

Set the barrier edge ID list. Optional. The barrier edges specified here take effect together with the barrier edges specified in the traffic network analysis environment settings (TransportationAnalystSetting3D).

Parameters:edges (list[int] or tuple[int]) – list of obstacle edge IDs
Returns:self
Return type:TransportationAnalystParameter3D
set_barrier_nodes(nodes)

Set the barrier node ID list. Optional. The barrier nodes specified here take effect together with the barrier nodes specified in the traffic network analysis environment settings (TransportationAnalystSetting3D).

Parameters:nodes (list[int] or tuple[int]) – Barrier node ID list
Returns:self
Return type:TransportationAnalystParameter3D
set_barrier_points(points)

Set the coordinate list of barrier points. Optional. The specified barrier points need not be on the network (neither on an edge nor on a node); the analysis snaps each barrier point to the nearest network location within the distance tolerance (TransportationPathAnalystSetting.tolerance). Currently supported in best path analysis, closest facility search, traveling salesman analysis, and logistics distribution analysis.

Parameters:points (list[Point3D] or tuple[Point3D]) – list of coordinates of barrier points
Returns:self
Return type:TransportationAnalystParameter3D
set_edges_return(value=True)

Set whether the passing arc is included in the analysis result

Parameters:value (bool) – Specify whether the passing edges are included in the analysis result. If set to True, after a successful analysis the passing edges can be obtained from the TransportationAnalystResult3D.edges attribute of the TransportationAnalystResult3D object; if False, edges returns None
Returns:self
Return type:TransportationAnalystParameter3D
set_nodes(nodes)

Set the analysis pass nodes. Required, but mutually exclusive with the set_points() method. If both are set, only the one set last before analysis takes effect. For example, if the node collection is specified first and then the coordinate point collection, only the coordinate points are analyzed.

Parameters:nodes (list[int] or tuple[int]) – ID of passing node
Returns:self
Return type:TransportationAnalystParameter3D
set_nodes_return(value=True)

Set whether to include nodes in the analysis results

Parameters:value (bool) – Specify whether the passing nodes are included in the analysis result. If set to True, after a successful analysis the passing nodes can be obtained from the TransportationAnalystResult3D.nodes attribute of the TransportationAnalystResult3D object; if False, nodes returns None
Returns:self
Return type:TransportationAnalystParameter3D
set_points(points)

Set the collection of pass points used in the analysis. Required, but mutually exclusive with the set_nodes() method. If both are set, only the one set last before analysis takes effect. For example, if the node collection is specified first and then the coordinate point collection, only the coordinate points are analyzed.

If a point in the specified pass-point collection is not within the range of the network dataset, that point does not participate in the analysis

Parameters:points (list[Point3D] or tuple[Point3D]) – passing points
Returns:self
Return type:TransportationAnalystParameter3D
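The last-setting-wins behavior described above can be sketched as follows; a loaded TransportationAnalyst3D object named analyst is assumed, and the IDs and points are illustrative:

```python
>>> param = TransportationAnalystParameter3D()
>>> param.set_nodes([1, 4, 9])                              # node collection set first...
>>> param.set_points([Point3D(2, 3, 0), Point3D(8, 1, 0)])  # ...then coordinate points
>>> result = analyst.find_path(param)  # only the coordinate points take part in the analysis
```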
set_routes_return(value=True)

Set whether the analysis result contains a three-dimensional line (GeoLine3D) object

Parameters:value (bool) – Specify whether the route object is included in the analysis result. If set to True, after a successful analysis the route object can be obtained from the TransportationAnalystResult3D.route attribute of the TransportationAnalystResult3D object; if False, route returns None
Returns:self
Return type:TransportationAnalystParameter3D
set_stop_indexes_return(value=True)

Set whether the stop indexes are included in the analysis result

Parameters:value (bool) – Specify whether the stop indexes are included in the analysis result. If set to True, after a successful analysis the stop indexes can be obtained from the TransportationAnalystResult3D.stop_indexes attribute of the TransportationAnalystResult3D object; if False, None is returned
Returns:self
Return type:TransportationAnalystParameter3D
set_weight_name(name)

Set the name of the weight field information. If not set, the name of the first weight field information object in the weight field information set will be used by default

Parameters:name (str) – the name identifier of the weight field information
Returns:self
Return type:TransportationAnalystParameter3D
weight_name

str – the name of the weight field information

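Since every setter above returns the parameter object itself, a TransportationAnalystParameter3D can be configured as a chain. A minimal, untested sketch (the node IDs and the weight name 'length' are hypothetical):

>>> param = TransportationAnalystParameter3D()
>>> param.set_nodes([1, 5, 9]).set_weight_name('length')
>>> param.set_routes_return(True).set_nodes_return(True)
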
class iobjectspy.analyst.TransportationAnalystSetting3D(network_dataset=None)

Bases: object

Traffic network analysis analysis environment.

Initialization object

Parameters:network_dataset (DatasetVector or str) – network dataset
barrier_edge_ids

list[int] – ID list of barrier edge segments

barrier_node_ids

list[int] – ID list of barrier nodes

edge_filter

str – edge filtering expression in traffic network analysis

edge_id_field

str – The field that marks the edge ID in the network dataset

edge_name_field

str – Road name field

f_node_id_field

str – The field that marks the starting node ID of the edge in the network dataset

ft_single_way_rule_values

list[str] – an array of strings used to represent forward one-way lines

network_dataset

DatasetVector – network dataset

node_id_field

str – The field that identifies the node ID in the network dataset

prohibited_way_rule_values

list[str] – an array of strings representing prohibited lines

rule_field

str – A field in the network dataset representing the traffic rules of the network arc

set_barrier_edge_ids(value)

Set the ID list of barrier edges

Parameters:value (str or list[int]) – ID list of barrier edge
Returns:self
Return type:TransportationAnalystSetting3D
set_barrier_node_ids(value)

Set the ID list of barrier nodes

Parameters:value (str or list[int]) – ID list of barrier node
Returns:self
Return type:TransportationAnalystSetting3D
set_edge_filter(value)

Set the edge filter expression in traffic network analysis

Parameters:value – edge filtering expression in traffic network analysis
Returns:self
Return type:TransportationAnalystSetting3D
set_edge_id_field(value)

Set the field that identifies the edge ID in the network dataset

Parameters:value (str) – The field that marks the arc segment ID in the network dataset
Returns:self
Return type:TransportationAnalystSetting3D
set_edge_name_field(value)

Set road name field

Parameters:value (str) – Road name field
Returns:self
Return type:TransportationAnalystSetting3D
set_f_node_id_field(value)

Set the field to mark the starting node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the starting node ID of the edge in the network dataset
Returns:self
Return type:TransportationAnalystSetting3D
set_ft_single_way_rule_values(value)

Set the array of strings used to represent the forward one-way line

Parameters:value (str or list[str]) – An array of strings used to represent the forward one-way line
Returns:self
Return type:TransportationAnalystSetting3D
set_network_dataset(dataset)

Set up a network dataset for optimal path analysis

Parameters:dataset (DatasetVector or str) – network dataset
Returns:current object
Return type:TransportationAnalystSetting3D
set_node_id_field(value)

Set the field of the network dataset to identify the node ID

Parameters:value (str) – The field that identifies the node ID in the network dataset
Returns:self
Return type:TransportationAnalystSetting3D
set_prohibited_way_rule_values(value)

Set up an array of strings representing forbidden lines

Parameters:value (str or list[str]) – an array of strings representing the forbidden line
Returns:self
Return type:TransportationAnalystSetting3D
set_rule_field(value)

Set the fields in the network dataset that represent the traffic rules of the network edge

Parameters:value (str) – A field in the network dataset representing the traffic rules of the network edge
Returns:self
Return type:TransportationAnalystSetting3D
set_t_node_id_field(value)

Set the field that marks the end node ID of the edge in the network dataset

Parameters:value (str) – The field that marks the end node ID of the edge in the network dataset
Returns:self
Return type:TransportationAnalystSetting3D
set_tf_single_way_rule_values(value)

Set up an array of strings representing reverse one-way lines

Parameters:value (str or list[str]) – an array of strings representing the reverse one-way line
Returns:self
Return type:TransportationAnalystSetting3D
set_tolerance(value)

Set node tolerance

Parameters:value (float) – node tolerance
Returns:current object
Return type:TransportationAnalystSetting3D
set_two_way_rule_values(value)

Set an array of strings representing two-way traffic lines

Parameters:value (str or list[str]) – An array of strings representing two-way traffic lines
Returns:self
Return type:TransportationAnalystSetting3D
set_weight_fields(value)

Set weight field

Parameters:value (list[WeightFieldInfo] or tuple[WeightFieldInfo]) – weight field
Returns:self
Return type:TransportationAnalystSetting3D
t_node_id_field

str – The field that marks the end node ID of the edge in the network dataset

tf_single_way_rule_values

list[str] – an array of strings representing reverse one-way lines

tolerance

float – node tolerance

two_way_rule_values

list[str] – an array of strings representing two-way traffic lines

weight_fields

list[WeightFieldInfo] – weight field

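Because each setter returns the settings object, a complete analysis environment can be configured in one pass. A minimal, untested sketch (the datasource path and field names are hypothetical):

>>> ds = Datasource.open('E:/network3d.udb')
>>> setting = TransportationAnalystSetting3D(ds['network3d'])
>>> setting.set_node_id_field('SmNodeID').set_edge_id_field('SmEdgeID')
>>> setting.set_f_node_id_field('SmFNode').set_t_node_id_field('SmTNode')
>>> setting.set_tolerance(1e-6)
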
iobjectspy.analyst.build_address_indices(output_directory, datasets, index_fields, save_fields=None, top_group_field=None, secondary_group_field=None, lowest_group_field=None, is_build_reverse_matching_indices=False, bin_distance=0.0, dictionary_file=None, is_append=False, is_traditional=False)

Build an index file with matching addresses

Parameters:
  • output_directory (str) – index file result output directory
  • datasets (list[DatasetVector] or tuple[DatasetVector]) – Save the address information, the dataset used to create the index
  • index_fields (str or list[str] or tuple[str]) – The fields that need to be indexed, such as the detailed address field or address name, etc. The set of fields should exist in every dataset
  • save_fields (str or list[str] or tuple[str]) – Fields of additional information that needs to be stored. This information is not used for address matching, but will be returned in the address matching result.
  • top_group_field (str) – The field name of the first level grouping, such as the name of each province.
  • secondary_group_field (str) – The name of the secondary group, such as the name of each city.
  • lowest_group_field (str) – The name of the three-level group, such as the name of each county.
  • is_build_reverse_matching_indices (bool) – Whether to create an index for reverse address matching
  • bin_distance (float) – The interval distance created by the reverse address matching index. The distance unit is consistent with the coordinate system.
  • dictionary_file (str) – The address of the dictionary file. If it is empty, the default dictionary file will be used.
  • is_append (bool) – Whether to append to the existing index. If an index file already exists in the specified output directory and this is True, the new data is appended to the existing index; in that case the attribute table structure of the new data must be the same as that of the indexed data. If False, a new index file is created.
  • is_traditional (bool) – whether it is Traditional Chinese.
Returns:

Whether to create the index successfully

Return type:

bool

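A minimal, untested sketch of building an address index (the datasource path and the field names 'address', 'province' and 'city' are hypothetical):

>>> ds = Datasource.open('E:/address.udb')
>>> succeeded = build_address_indices('E:/address_indices', [ds['poi']],
...                                   index_fields='address',
...                                   top_group_field='province',
...                                   secondary_group_field='city')
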
class iobjectspy.analyst.AddressItem(java_object)

Bases: object

Chinese address fuzzy matching result class. The Chinese address fuzzy matching result class stores the detailed information of the query result that matches the input Chinese address, including the queried address, the dataset where the address is located, the SmID of the address in the source dataset, the score value of the query result, and The geographic location information of the address.

address

str – the matched address

address_as_tuple

tuple[str] – the array form of the matched address

dataset_index

int – The index of the dataset where the queried Chinese address is located

location

Point2D – the geographic location of the queried address

record_id

int – SMID corresponding to the queried address in the source dataset

score

float – matching score result

class iobjectspy.analyst.AddressSearch(search_directory)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Chinese address fuzzy matching class.

Implementation process and precautions for fuzzy matching of Chinese addresses:

  1. Specify the directory of the address index created by the Chinese address library data, and create a Chinese address matching object;
  2. Call the match object, specify the city of the address to be searched (it must be a value in the city-level field of the Chinese address database), and then specify the Chinese address to be searched and the number of results to return;
  3. The system performs word segmentation on the keywords to be matched, matches them against the content of the specified fields in the specified datasets, and returns the matched results after certain operations.

Initialization object

Parameters:search_directory (str) – The directory where the address index is located
is_valid_lowest_group_name(value)

Determine whether the specified name is legal as a three-level group name.

Parameters:value (str) – the name of the field to be judged
Returns:True if the name is legal, otherwise False
Return type:bool
is_valid_secondary_group_name(value)

Determine whether the specified name is legal as a secondary group name.

Parameters:value (str) – the name of the field to be judged
Returns:True if the name is legal, otherwise False
Return type:bool
is_valid_top_group_name(value)

Determine whether the specified name is legal as a first-level group name.

Parameters:value (str) – the name of the field to be judged
Returns:True if the name is legal, otherwise False
Return type:bool
match(address, is_address_segmented=False, is_location_return=True, max_result_count=10, top_group_name=None, secondary_group_name=None, lowest_group_name=None)

Performs address matching. This method supports multithreading.

Parameters:
  • address (str) – the address of the place name to be retrieved
  • is_address_segmented (bool) – Whether the incoming Chinese address has been segmented, that is, it is segmented with the “*” separator.
  • is_location_return (bool) – Whether the Chinese address fuzzy matching result object contains location information.
  • max_result_count (int) – The maximum number of matching results for fuzzy matching of Chinese addresses
  • top_group_name (str) – the first-level group name, provided that the first-level group field name is set when the data index is created
  • secondary_group_name (str) – secondary group name, provided that the secondary group field name is set when the data index is created
  • lowest_group_name (str) – three-level group name, provided that the three-level group field name is set when creating the data index
Returns:

Chinese address fuzzy matching result collection

Return type:

list[AddressItem]

reverse_match(geometry, distance, max_result_count=10)

Reverse address matching. This method supports multithreading.

Parameters:
  • geometry (GeoPoint or Point2D) – the specified point object
  • distance (float) – The specified search range.
  • max_result_count (int) – The maximum number of matching results searched
Returns:

Chinese address matching result collection

Return type:

list[AddressItem]

search_directory

str – The directory where the address index is located

set_search_directory(search_directory)

Set the directory where the address index is located. The address index is created using the build_address_indices() method.

Parameters:search_directory (str) – The directory where the address index is located
Returns:self
Return type:AddressSearch
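A minimal, untested sketch of forward and reverse matching against an existing index (the index directory, search address, and coordinates are hypothetical):

>>> search = AddressSearch('E:/address_indices')
>>> items = search.match('北京市海淀区中关村大街', max_result_count=5)
>>> for item in items:
...     print(item.address, item.score, item.location)
>>> near_items = search.reverse_match(Point2D(116.31, 39.98), distance=500.0)
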
class iobjectspy.analyst.TopologicalSchemaOrientation

Bases: iobjectspy._jsuperpy.enums.JEnum

Variables:
BOTTOMTOTOP = 3
LEFTTORIGHT = 0
RIGHTTOLEFT = 1
TOPTOBOTTOM = 2
class iobjectspy.analyst.NetworkEdge(edge_id, from_node_id, to_node_id, edge=None)

Bases: object

The network edge object represents the network relationship. It is composed of the line object, the unique identifier of the line object, and the unique identifiers of the two nodes at the beginning and the end of the line object.

Parameters:
  • edge_id (int) – network edge line object ID
  • from_node_id (int) – the starting point ID of the network edge segment line object
  • to_node_id (int) – The end point ID of the network edge segment line object
  • edge (GeoLine) – network edge line object
edge

GeoLine – In the network relationship, the edge geometry object

edge_id

int – The ID of the edge object in the network relationship

from_node_id

int – ID of the starting point of the arc in the network relationship

set_edge(value)

Set the geometric object of the edge

param GeoLine value:
 edge geometry object
return:self
rtype:NetworkEdge
set_edge_id(value)

Set the ID of the edge object

param int value:
 edge object ID
return:self
rtype:NetworkEdge
set_from_node_id(value)

Set the ID of the starting point of the edge

param int value:
 The starting point ID of the edge
return:self
rtype:NetworkEdge
set_to_node_id(value)

Set the ID of the end point of the edge

param int value:
 End point ID of edge
return:self
rtype:NetworkEdge
to_node_id

int – In the network relationship, the edge end point ID

class iobjectspy.analyst.NetworkNode(node_id, node)

Bases: object

A node object in the network relationship, represented by the node's unique identifier and its point geometry.

param int node_id:
 network node ID
param GeoPoint node:
 network node object

node

GeoPoint – Network node object

node_id

int – network node ID

set_node(value)

Set the network node object

param value:network node object
type value:GeoPoint or Point2D
return:self
rtype:NetworkNode
set_node_id(value)

Set the network node ID

param int value:
 network node ID
return:self
rtype:NetworkNode
class iobjectspy.analyst.TopologicalHierarchicalSchema(node_spacing=30.0, rank_node_spacing=50.0, smooth=1, orientation=TopologicalSchemaOrientation.LEFTTORIGHT)

Bases: object

Hierarchical topology logic diagram.

The hierarchical diagram is applicable to networks with a clear source in a similar directed facility network. As shown in the figure below, there is only one source in the network, that is, in a subnet, there is only one entry, and the others are all exits. This kind of network can generate a hierarchical graph:

Parameters:
  • node_spacing (float) – the distance between topological logic graph nodes
  • rank_node_spacing (float) – the distance between topological logic graph levels
  • smooth (int) – smoothness coefficient
  • orientation (TopologicalSchemaOrientation or str) – the layout direction of the topological logic diagram
node_spacing

float – the distance of the topological logic diagram node. The default is 30.

orientation

TopologicalSchemaOrientation – Topological logic diagram layout trend

rank_node_spacing

float – the distance between the topological logic diagram levels. The default is 50.

set_node_spacing(value)

Set the node distance of the topology logic diagram.

As shown in the figure below, the node distance from node ① to node ② is dy

param float value:
 the distance of the topological logic diagram node
return:self
rtype:TopologicalHierarchicalSchema
set_orientation(value)

Set the layout direction of the topology logic diagram. The default layout is from left to right.

param value:The layout trend of the topology logic diagram
type value:TopologicalSchemaOrientation or str
return:self
rtype:TopologicalHierarchicalSchema
set_rank_node_spacing(value)

Set the distance between the levels of the topology logic diagram.

As shown in the figure below, there are two levels from node ① to node ②, and dz is the distance between the levels

param float value:
 the distance between the topological logic diagram levels
return:self
rtype:TopologicalHierarchicalSchema
set_smooth(value)

Set the smoothness factor. If you need to smooth the result, you can set a smoothing coefficient greater than 1. The default is not smoothing, that is, the value is 1.

param int value:
 smoothness coefficient
return:self
rtype:TopologicalHierarchicalSchema
smooth

int – smoothness factor. The default is 1.

class iobjectspy.analyst.TopologicalTreeSchema(node_spacing=30.0, level_spacing=20.0, break_radio=0.5, orientation=TopologicalSchemaOrientation.LEFTTORIGHT)

Bases: object

Tree topology logic diagram.

The tree diagram is the same as the hierarchical diagram. Both are applicable to networks with a clear source in a similar directed facility network.

In a subnet, only a single source or a single sink is supported: there can be only one source (with no requirement on the number of sinks), or only one sink (with no requirement on the number of sources).

Parameters:
  • node_spacing (float) – the distance of the topological logic graph node
  • level_spacing (float) – the distance between tree layer levels
  • break_radio (float) – Get the break ratio of the broken line, the value range is [0,1]
  • orientation (TopologicalSchemaOrientation or str) – the layout direction of the topological logic diagram
break_radio

float – Broken line break ratio. The default is 0.5

level_spacing

float – distance between tree layer levels

node_spacing

float – the distance of the topological logic graph node, the default value is 30.

orientation

TopologicalSchemaOrientation – Topological logic diagram layout direction

set_break_radio(value)
Set the break ratio of the polyline.

As shown in the figure below, the connecting polyline from node ① to node ② is the polyline to be broken. Setting the break ratio to 0.7 produces the effect shown in the figure.

param float value:
 Broken line ratio
return:self
rtype:TopologicalTreeSchema
set_level_spacing(value)
Set the distance between tree layer levels.

As shown in the figure below, there are two levels from node ① to node ②, and dx is the distance between the levels

param float value:
 the distance between the tree layer levels
return:self
rtype:TopologicalTreeSchema
set_node_spacing(value)
Set the node distance of the topology logic diagram.

As shown in the figure below, the node distance from node ① to node ② is dy

param float value:
 the distance of the topological logic diagram node
return:self
rtype:TopologicalTreeSchema
set_orientation(value)

Set the layout direction of the topology logic diagram.

param value:The layout trend of the topology logic diagram
type value:TopologicalSchemaOrientation or str
return:self
rtype:TopologicalTreeSchema
class iobjectspy.analyst.TopologicalOrthogonalSchema(node_spacing=10.0)

Bases: object

Right-angle orthogonal topology logic diagram.

The right-angle orthogonal graph requires less data, but requires that the data cannot have an edge segment that is self-circulating (that is, in a network edge segment, the start point is equal to the end point).

Parameters:node_spacing (float) – Topological logic graph node distance
node_spacing

float – Topological logic diagram node distance

set_node_spacing(value)

Set the distance between the nodes of the topology logic diagram. The default value is 10 units.

param float value:
 the distance of the topological logic diagram node
return:self
rtype:TopologicalOrthogonalSchema
iobjectspy.analyst.build_topological_schema(schema_param, network_dt_or_edges, network_nodes=None, is_merge=False, tolerance=1e-10, out_data=None, out_dataset_name=None)

Build the topology logic diagram.

The topological logic diagram is a schematic diagram that reflects its own logical structure based on the network data set. It expresses the complex network in an intuitive way and simplifies the form of the network. It can be applied to resource management in telecommunications, transportation, pipeline power and other industries. Viewing the network through the topology logic diagram can effectively evaluate the distribution of existing network resources, predict and plan the configuration of subsequent resources, etc.

SuperMap supports the construction of a topological logic diagram based on the network relationship represented by the network edges and network nodes, which is convenient for checking network connectivity and obtaining a schematic diagram of network data.

>>> network_dt = open_datasource('/home/iobjectspy/data/example_data.udbx')['network_schema']

Build a hierarchy diagram:

>>> hierarchical_schema_result = build_topological_schema(TopologicalHierarchicalSchema(smooth=3), network_dt)

Build a tree diagram:

>>> tree_schema_result = build_topological_schema(TopologicalTreeSchema(), network_dt)

Construct a right-angled orthogonal graph:

>>> orthogonal_schema_result = build_topological_schema(TopologicalOrthogonalSchema(), network_dt)

Construct a topology diagram through network relationships:

>>> network_edges = list()
>>> network_edges.append(NetworkEdge(1, 1, 2))
>>> network_edges.append(NetworkEdge(2, 1, 3))
>>> network_edges.append(NetworkEdge(3, 2, 4))
>>> network_edges.append(NetworkEdge(4, 2, 5))
>>> network_edges.append(NetworkEdge(5, 3, 6))
>>> network_edges.append(NetworkEdge(6, 3, 7))
>>> out_ds = create_mem_datasource()
>>> tree_schema_result_2 = build_topological_schema(TopologicalTreeSchema(), network_edges, out_data=out_ds)

Construct a topology diagram through network edges and network nodes:

>>> edges = []
>>> nodes = []
>>> edge_rd = network_dt.get_recordset()
>>> while edge_rd.has_next():
...     edge_id = edge_rd.get_value('SmEdgeID')
...     f_node = edge_rd.get_value('SmFNode')
...     t_node = edge_rd.get_value('SmTNode')
...     edges.append(NetworkEdge(edge_id, f_node, t_node, edge_rd.get_geometry()))
...     edge_rd.move_next()
>>> edge_rd.close()
>>> node_rd = network_dt.child_dataset.get_recordset()
>>> while node_rd.has_next():
...     node_id = node_rd.get_value('SmNodeID')
...     nodes.append(NetworkNode(node_id, node_rd.get_geometry()))
...     node_rd.move_next()
>>> node_rd.close()
>>>
>>> tree_schema_result_2 = build_topological_schema(TopologicalTreeSchema(), edges, nodes, is_merge=True,
...                                                 tolerance=1.0e-6, out_data=out_ds, out_dataset_name='SchemaRes')
Parameters:
  • schema_param (TopologicalHierarchicalSchema or TopologicalTreeSchema or TopologicalOrthogonalSchema) – Topological logic diagram parameter class object
  • network_dt_or_edges (DatasetVector or list[NetworkEdge]) – two-dimensional network data set or virtual network edge
  • network_nodes (list[NetworkNode]) – Virtual network nodes. When network_dt_or_edges is list[NetworkEdge], virtual network node objects can be set to express the complete network relationship. When network_nodes is not set, a list[NetworkEdge] alone can also express the network relationship, and the NetworkEdge objects do not need edge geometry objects.
  • is_merge (bool) –

    Whether to merge network edges and network nodes that coincide spatially. If the network relationship contains edges and nodes that are duplicated in spatial position and this parameter is set to True, a common edge relationship is extracted to build the logic diagram, and the constructed topological logic diagram also contains the spatially duplicated edges and nodes.

    If this parameter is set to False, each correct network topology relationship is processed normally when computing the topology logic diagram.

    is_merge is valid only when a valid network_nodes is set.

  • tolerance (float) – node tolerance, used for node object comparison in space calculation. Only valid if is_merge is True
  • out_data (Datasource) – The data source to store the result data. When network_dt_or_edges is not a network data set, a valid result data source must be set.
  • out_dataset_name (str) – result data set name
Returns:

A two-dimensional network data set used to represent a topological logic diagram

Return type:

DatasetVector

iobjectspy.conversion module

The conversion module provides basic data import and export functions. With the conversion module you can quickly import third-party files into a SuperMap datasource, and export data from a SuperMap datasource to third-party files.

In all data-import interfaces of the conversion module, the output parameter takes the datasource information of the result dataset. It can be a Datasource object or a DatasourceConnectionInfo object; it also accepts the alias of a datasource in the current workspace, a UDB file path, a DCF file path, etc.

>>> ds = Datasource.create(':memory:')
>>> alias = ds.alias
>>> shape_file = 'E:/Point.shp'
>>> result1 = import_shape(shape_file, ds)
>>> result2 = import_shape(shape_file, alias)
>>> result3 = import_shape(shape_file, 'E:/Point_Out.udb')
>>> result4 = import_shape(shape_file, 'E:/Point_Out_conn.dcf')

The result of importing data is a list of Dataset or str objects. When the import produces only one dataset, the list has one element; when it produces multiple datasets, the list may have more than one. Whether the list contains Dataset objects or str is determined by the output parameter: if the datasource object can be obtained directly from the current workspace via the output parameter, a list of Dataset objects is returned. If it cannot, the program automatically tries to open or create the datasource (only creating UDB datasources is supported); in that case the result is a list of the names of the result datasets, and the result datasource is closed after the import completes. So if you need to continue working with the imported result datasets, you must reopen the datasource using the result dataset names and the datasource information to obtain the datasets.

In all dataset-export interfaces, the data parameter is the dataset to be exported. It accepts a dataset object (Dataset) or a combination of datasource alias and dataset name (for example, 'alias/dataset_name', 'alias\dataset_name'), and also supports a combination of datasource connection information and dataset name (for example, 'E:/data.udb/dataset_name'). Note that when datasource information is passed in, the program automatically opens the datasource but does not automatically close it; the opened datasource remains in the current workspace.

>>> export_to_shape('E:/data.udb/Point', 'E:/Point.shp', is_over_write=True)
>>> ds = Datasource.open('E:/data.udb')
>>> export_to_shape(ds['Point'], 'E:/Point.shp', is_over_write=True)
>>> export_to_shape(ds.alias + '|Point', 'E:/Point.shp', is_over_write=True)
>>> ds.close()
iobjectspy.conversion.import_shape(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=False, is_import_empty=False, source_file_charset=None, is_import_as_3d=False, progress=None)

Import the shape file into the datasource. Supports importing file directories.

Parameters:
  • source_file (str) – the imported shape file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – Import mode type, which can be ImportMode enumeration value or name
  • is_ignore_attrs (bool) – Whether to ignore attribute information, the default value is False
  • is_import_empty (bool) – Whether to import empty datasets. The default is False, that is, empty datasets are not imported
  • source_file_charset (Charset or str) – The original character set type of the shape file
  • is_import_as_3d (bool) – Whether to import as a 3D dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

>>> result = import_shape('E:/point.shp','E:/import_shp_out.udb')
>>> print(len(result) == 1)
>>> print(result[0])
iobjectspy.conversion.import_dbf(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, source_file_charset=None, progress=None)

Import the dbf file into the datasource. Supports importing file directories.

Parameters:
  • source_file (str) – the imported dbf file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – Import mode type, which can be ImportMode enumeration value or name
  • is_import_empty (bool) – Whether to import empty datasets. The default is False, that is, empty datasets are not imported
  • source_file_charset (Charset or str) – the original character set type of the dbf file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

>>> result = import_dbf('E:/point.dbf','E:/import_dbf_out.udb')
>>> print(len(result) == 1)
>>> print(result[0])
iobjectspy.conversion.import_csv(source_file, output, out_dataset_name=None, import_mode=None, separator=', ', head_is_field=True, fields_as_point=None, field_as_geometry=None, is_import_empty=False, source_file_charset=None, progress=None)

Import the CSV file. Supports importing file directories.

Parameters:
  • source_file (str) – The imported csv file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – Import mode type, which can be ImportMode enumeration value or name
  • separator (str) – The separator of the fields in the source CSV file. The default is ', '
  • head_is_field (bool) – Whether the first line of the CSV file is a field name
  • fields_as_point (list[str] or list[int]) – Specify the field as X, Y or X, Y, Z coordinates. If the conditions are met, a point or three-dimensional point dataset will be generated
  • field_as_geometry (int) – Specify the Geometry index position of the WKT string
  • is_import_empty (bool) – Whether to import empty dataset, the default is False, that is, do not import
  • source_file_charset (Charset or str) – The original character set type of the CSV file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

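A minimal, untested sketch of importing a CSV whose coordinate columns should become a point dataset (the file path and the column names 'X' and 'Y' are hypothetical):

>>> result = import_csv('E:/stations.csv', 'E:/import_csv_out.udb',
...                     head_is_field=True, fields_as_point=['X', 'Y'])
>>> print(result[0])
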
iobjectspy.conversion.import_mapgis(source_file, output, out_dataset_name=None, import_mode=None, is_import_as_cad=True, color_index_file_path=None, import_network_topology=False, source_file_charset=None, progress=None)

Import MapGIS files. Linux platform does not support importing MapGIS files.

Parameters:
  • source_file (str) – the imported MAPGIS file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • color_index_file_path (str) – MAPGIS color index table file path when importing data, the default file path is MapGISColor.wat under the system path
  • import_network_topology (bool) – Whether to import the network dataset when importing
  • source_file_charset (Charset or str) – the original character set of the MAPGIS file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_aibingrid(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import AIBinGrid files. Linux platform does not support importing AIBinGrid files.

Parameters:
  • source_file (str) – the imported AIBinGrid file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_bmp(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, world_file_path=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import the BMP file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported BMP file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – The mode of ignoring color values of BMP files
  • ignore_values (list[float]) – color values to ignore
  • world_file_path (str) – The coordinate reference file path of the imported source image file
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]
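
When the georeferencing of a BMP lives in a separate world file, pass its path through world_file_path. A minimal sketch, assuming import_bmp has been imported from iobjectspy.conversion; all file names here are hypothetical:

>>> result = import_bmp('E:/scan.bmp', 'E:/import_out.udb',
...                     out_dataset_name='scan',
...                     world_file_path='E:/scan.bpw')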

iobjectspy.conversion.import_dgn(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=True, style_map_file=None, is_import_by_layer=False, is_cell_as_point=False, progress=None)

Import the DGN file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported dgn file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – import mode
  • is_import_empty (bool) – Whether to import an empty dataset, the default is False
  • is_import_as_cad (bool) – Whether to import as a CAD dataset, the default is True
  • style_map_file (str) – Storage path of the style comparison table, a file that maps styles (symbols, line styles, fills, etc.) between SuperMap and other systems. The style comparison table only applies to CAD data such as DXF, DWG, and DGN; before setting it, make sure the data is imported in CAD mode and that styles are not ignored.
  • is_import_by_layer (bool) – Whether to import by CAD layer. The default is False, meaning the layer information in the source data is merged into a single CAD dataset; if True, a separate CAD dataset is generated for each layer in the source data.
  • is_cell_as_point (bool) – Whether to import cell objects as point objects (the cell header only). The default is False, meaning all feature objects except the cell header are imported.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_dwg(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=True, is_import_by_layer=False, ignore_block_attrs=True, block_as_point=False, import_external_data=False, import_xrecord=True, import_invisible_layer=False, keep_parametric_part=False, ignore_lwpline_width=False, shx_paths=None, curve_segment=73, style_map_file=None, progress=None)

Import DWG files. The Linux platform does not support importing DWG files. Supports importing file directories.

Parameters:
  • source_file (str) – the imported dwg file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – Whether to import empty dataset, the default is False, that is, do not import
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • is_import_by_layer (bool) – Whether to import by CAD layer. The default is False, meaning the layer information in the source data is merged into a single CAD dataset; if True, a separate CAD dataset is generated for each layer in the source data.
  • ignore_block_attrs (bool) – Whether to ignore block attributes when importing data. The default is True
  • block_as_point (bool) – Whether to import symbol blocks as point objects. The default is False, meaning each symbol block is imported as a composite object; if True, a point object is used in place of the symbol block.
  • import_external_data (bool) – Whether to import external data, which is similar to an attribute table in CAD and is imported as extra fields. The default is False; if True, the external data is appended to the default fields.
  • import_xrecord (bool) – Whether to import user-defined fields and attribute fields as extended records.
  • import_invisible_layer (bool) – whether to import invisible layers
  • keep_parametric_part (bool) – Whether to keep the parameterized part in the Acad data
  • ignore_lwpline_width (bool) – Whether to ignore the polyline width, the default is False.
  • shx_paths (list[str]) – path of shx font library
  • curve_segment (int) – curve fitting accuracy, the default is 73
  • style_map_file (str) – storage path of style comparison table
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]
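
A typical DWG import that keeps each CAD layer as its own dataset and replaces symbol blocks with points; a sketch with hypothetical paths, assuming import_dwg has been imported from iobjectspy.conversion:

>>> import_dwg('E:/plan.dwg', 'E:/import_out.udb',
...            is_import_by_layer=True,
...            block_as_point=True,
...            curve_segment=100)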

iobjectspy.conversion.import_dxf(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=True, is_import_by_layer=False, ignore_block_attrs=True, block_as_point=False, import_external_data=False, import_xrecord=True, import_invisible_layer=False, keep_parametric_part=False, ignore_lwpline_width=False, shx_paths=None, curve_segment=73, style_map_file=None, progress=None)

Import DXF files. The Linux platform does not support importing DXF files. Supports importing file directories.

Parameters:
  • source_file (str) – dxf file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – Whether to import empty dataset, the default is False, that is, do not import
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • is_import_by_layer (bool) – Whether to import by CAD layer. The default is False, meaning the layer information in the source data is merged into a single CAD dataset; if True, a separate CAD dataset is generated for each layer in the source data.
  • ignore_block_attrs (bool) – Whether to ignore block attributes when importing data. The default is True
  • block_as_point (bool) – Whether to import symbol blocks as point objects. The default is False, meaning each symbol block is imported as a composite object; if True, a point object is used in place of the symbol block.
  • import_external_data (bool) – Whether to import external data, which is similar to an attribute table in CAD and is imported as extra fields. The default is False; if True, the external data is appended to the default fields.
  • import_xrecord (bool) – Whether to import user-defined fields and attribute fields as extended records.
  • import_invisible_layer (bool) – whether to import invisible layers
  • keep_parametric_part (bool) – Whether to keep the parameterized part in the Acad data
  • ignore_lwpline_width (bool) – Whether to ignore the polyline width, the default is False.
  • shx_paths (list[str]) – path of shx font library
  • curve_segment (int) – curve fitting accuracy, the default is 73
  • style_map_file (str) – storage path of style comparison table
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_e00(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=True, source_file_charset=None, progress=None)

Import the E00 file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported E00 file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_ignore_attrs (bool) – Whether to ignore attribute information
  • source_file_charset (Charset or str) – The original character set of the E00 file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_ecw(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, progress=None)

Import ECW files. Supports importing file directories.

Parameters:
  • source_file (str) – ECW file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values of the ECW file
  • ignore_values (list[float]) – color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the bands can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_geojson(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=False, source_file_charset=None, progress=None)

Import GeoJson files

Parameters:
  • source_file (str) – GeoJson file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – import mode
  • is_import_empty (bool) – Whether to import an empty dataset, the default is False
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • source_file_charset (Charset or str) – Original character set type of GeoJson file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]
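
A minimal GeoJSON import sketch, assuming import_geojson has been imported from iobjectspy.conversion; the path and dataset name are hypothetical, and the charset value shown is illustrative:

>>> datasets = import_geojson('E:/roads.geojson', 'E:/import_out.udb',
...                           out_dataset_name='roads',
...                           source_file_charset='UTF8')
>>> print(datasets)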

iobjectspy.conversion.import_gif(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, world_file_path=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import GIF files. Supports importing file directories.

Parameters:
  • source_file (str) – GIF file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – The mode of ignoring color values of GIF files
  • ignore_values (list[float]) – color values to ignore
  • world_file_path (str) – The coordinate reference file path of the imported source image file
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_grd(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import GRD file

Parameters:
  • source_file (str) – the imported GRD file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_img(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import the Erdas Image file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported IMG file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – Erdas Image’s mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the bands can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_jp2(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_import_as_grid=False, progress=None)

Import the JP2 file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported JP2 file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_jpg(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, world_file_path=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import JPG files. Supports importing file directories.

Parameters:
  • source_file (str) – the imported JPG file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • world_file_path (str) – The coordinate reference file path of the imported source image file
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_kml(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=False, is_ignore_invisible_object=True, source_file_charset=None, progress=None)

Import KML files. Supports importing file directories.

Parameters:
  • source_file (str) – KML file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – whether to import an empty dataset
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • is_ignore_invisible_object (bool) – Whether to ignore invisible objects
  • source_file_charset (Charset or str) – The original character set of the KML file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_kmz(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_import_as_cad=False, is_ignore_invisible_object=True, source_file_charset=None, progress=None)

Import the KMZ file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported KMZ file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – whether to import an empty dataset
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • is_ignore_invisible_object (bool) – Whether to ignore invisible objects
  • source_file_charset (Charset or str) – the original character set of the KMZ file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_mif(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=True, is_import_as_cad=False, style_map_file=None, source_file_charset=None, progress=None)

Import the MIF file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported mif file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_ignore_attrs (bool) – Whether to ignore the attributes of the data when importing MIF format data, including the attribute information of vector data.
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • source_file_charset (Charset or str) – the original character set of the mif file
  • style_map_file (str) – storage path of style comparison table
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_mrsid(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, progress=None)

Import MrSID files. The Linux platform does not support importing MrSID files.

Parameters:
  • source_file (str) – the imported MrSID file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – The mode of ignoring the color value of the MrSID file
  • ignore_values (list[float]) – color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the bands can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_osm(source_file, output, out_dataset_name=None, import_mode=None, source_file_charset=None, progress=None)

Import OSM vector data. The Linux platform does not support importing OSM files. Supports importing file directories.

Parameters:
  • source_file (str) – OSM file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • source_file_charset (Charset or str) – the original character set of the OSM file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_png(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, world_file_path=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import Portable Network Graphics (PNG) files. Supports importing file directories.

Parameters:
  • source_file (str) – PNG file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values of the PNG file
  • ignore_values (list[float]) – color values to ignore
  • world_file_path (str) – The coordinate reference file path of the imported source image file
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_simplejson(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, source_file_charset=None, progress=None)

Import SimpleJson file

Parameters:
  • source_file (str) – SimpleJson file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – whether to import an empty dataset
  • source_file_charset (Charset or str) – The original character set of the SimpleJson file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_sit(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, password=None, progress=None)

Import the SIT file. Supports importing file directories.

Parameters:
  • source_file (str) – SIT file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values of the SIT file
  • ignore_values (list[float]) – color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the bands can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • password (str) – password
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]

iobjectspy.conversion.import_tab(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=True, is_import_empty=False, is_import_as_cad=False, style_map_file=None, source_file_charset=None, progress=None)

Import TAB files. Supports importing file directories.

Parameters:
  • source_file (str) – TAB file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_ignore_attrs (bool) – Whether to ignore the attributes of the data when importing TAB format data, including the attribute information of vector data.
  • is_import_empty (bool) – Whether to import empty dataset, the default is False, that is, do not import
  • is_import_as_cad (bool) – Whether to import as CAD dataset
  • source_file_charset (Charset or str) – the original character set of the TAB file
  • style_map_file (str) – storage path of style comparison table
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_tif(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, world_file_path=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import the TIF file. Supports importing file directories.

Parameters:
  • source_file (str) – the imported TIF file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values of the TIFF/BigTIFF/GeoTIFF file
  • ignore_values (list[float]) – color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the bands can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • world_file_path (str) – The coordinate reference file path of the imported source image file
  • is_import_as_grid (bool) – Whether to import as a Grid dataset
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[DatasetImage] or list[str]
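
A sketch of importing a multi-band TIFF as a single multi-band dataset, assuming import_tif has been imported from iobjectspy.conversion; the paths are hypothetical, and 'COMPOSITE' stands in for a MultiBandImportMode value that may need adjusting:

>>> import_tif('E:/landsat.tif', 'E:/import_out.udb',
...            out_dataset_name='landsat',
...            multi_band_mode='COMPOSITE',
...            is_build_pyramid=True)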

iobjectspy.conversion.import_usgsdem(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the USGSDEM file.

Parameters:
  • source_file (str) – the imported USGS DEM file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_vct(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, source_file_charset=None, layers=None, progress=None)

Import the VCT file. Supports importing file directories.

Parameters:
  • source_file (str) – The imported VCT file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – dataset import mode
  • is_import_empty (bool) – whether to import an empty dataset
  • source_file_charset (Charset or str) – the original character set of the VCT file
  • layers (str or list[str]) – The names of the layers to be imported. When set to None, all will be imported.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]
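
Only selected VCT layers can be imported via the layers parameter; a sketch assuming import_vct has been imported from iobjectspy.conversion, where the file path and layer names are hypothetical:

>>> import_vct('E:/landuse.vct', 'E:/import_out.udb',
...            layers=['LCA', 'LRRL'])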

iobjectspy.conversion.import_bil(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the BIL file.

Parameters:
  • source_file (str) – the imported BIL file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_bip(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the BIP file.

Parameters:
  • source_file (str) – the imported BIP file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_orange_tab(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=False, fields_as_point=None, progress=None)

Import the Orange tab file.

Parameters:
  • source_file (str) – Orange tab file to be imported.
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – import mode
  • is_ignore_attrs (bool) – Whether to ignore attribute information
  • fields_as_point (list[str] or str) – Fields to use as the X, Y (or X, Y, Z) coordinates. If the specified fields qualify, a two-dimensional or three-dimensional point dataset is generated.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_bsq(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the BSQ file.

Parameters:
  • source_file (str) – the imported BSQ file
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • ignore_mode (IgnoreMode or str) – mode to ignore color values
  • ignore_values (list[float]) – color values to ignore
  • is_build_pyramid (bool) – whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_gpkg(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=False, progress=None)

Import the OGC Geopackage vector file.

Parameters:
  • source_file (str) – The imported OGC Geopackage vector file.
  • output (Datasource or DatasourceConnectionInfo or str) – result datasource
  • out_dataset_name (str) – result dataset name
  • import_mode (ImportMode or str) – import mode
  • is_ignore_attrs (bool) – Whether to ignore attribute information
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result dataset or result dataset name

Return type:

list[DatasetVector] or list[str]
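
A sketch of importing every vector table from a GeoPackage and listing the results, assuming import_gpkg has been imported from iobjectspy.conversion; the path is hypothetical:

>>> datasets = import_gpkg('E:/city.gpkg', 'E:/import_out.udb')
>>> for item in datasets:
...     print(item)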

iobjectspy.conversion.import_gbdem(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the GBDEM file.

Parameters:
  • source_file (str) – GBDEM file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • ignore_mode (IgnoreMode or str) – the mode for ignoring color values
  • ignore_values (list[float]) – the color values to ignore
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_grib(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import the GRIB file.

Parameters:
  • source_file (str) – GRIB file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • ignore_mode (IgnoreMode or str) – the mode for ignoring color values
  • ignore_values (list[float]) – the color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the data can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as Grid dataset
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]
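As a sketch, a GRIB file could be imported directly as a Grid dataset; multi_band_mode is left at its default here because the MultiBandImportMode member names are not listed in this entry (paths are illustrative):

```python
>>> datasets = import_grib('E:/data/weather.grib', 'E:/out.udb',
...                        is_import_as_grid=True, is_build_pyramid=False)
```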

iobjectspy.conversion.import_egc(source_file, output, out_dataset_name=None, scale=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the EGC file.

Parameters:
  • source_file (str) – the imported EGC file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • scale (int) – scale
  • ignore_mode (IgnoreMode or str) – the mode for ignoring color values
  • ignore_values (list[float]) – the color values to ignore
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_raw(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, is_build_pyramid=True, progress=None)

Import the RAW file.

Parameters:
  • source_file (str) – the imported RAW file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • ignore_mode (IgnoreMode or str) – the mode for ignoring color values
  • ignore_values (list[float]) – the color values to ignore
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_vrt(source_file, output, out_dataset_name=None, ignore_mode='IGNORENONE', ignore_values=None, multi_band_mode=None, is_import_as_grid=False, is_build_pyramid=True, progress=None)

Import GDAL Virtual (VRT) files.

Parameters:
  • source_file (str) – the imported VRT file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • ignore_mode (IgnoreMode or str) – the mode for ignoring color values
  • ignore_values (list[float]) – the color values to ignore
  • multi_band_mode (MultiBandImportMode or str) – Multi-band import mode: the data can be imported as multiple single-band datasets, a single multi-band dataset, or a single single-band dataset.
  • is_import_as_grid (bool) – Whether to import as Grid dataset
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_lidar(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=False, is_import_as_3d=False, progress=None)

Import lidar files.

Parameters:
  • source_file (str) – Lidar file to be imported.
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_ignore_attrs (bool) – Whether to ignore attribute information
  • is_import_as_3d (bool) – Whether to import as a 3D dataset
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]
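The `progress` parameter accepted by these conversion functions can be sketched as a plain Python callback. The StepEvent attribute names used below (`percent`, `message`) are assumptions; check the StepEvent documentation for the actual interface.

```python
# Hypothetical progress handler; `event.percent` and `event.message`
# are assumed StepEvent attributes.
def format_progress(event):
    return '%3d%% - %s' % (event.percent, event.message)

def on_progress(event):
    # Print one line per progress step.
    print(format_progress(event))

# The handler would then be passed to any conversion function, for example:
# import_lidar('E:/data/cloud.las', 'E:/out.udb', progress=on_progress)
```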

iobjectspy.conversion.import_gjb(source_file, output, out_dataset_name=None, import_mode=None, layers=None, config_file=None, is_import_empty=False, progress=None)

Import the GJB file. Only supports Windows.

Parameters:
  • source_file (str) – GJB file to be imported
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • layers (str or list[str]) – The names of the layers to be imported. When set to None, all will be imported.
  • config_file (str) – The path of a format-specific configuration file that maps text font size, color and transparency
  • is_import_empty (bool) – Whether to import an empty data set, the default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_tems_clutter(source_file, output, out_dataset_name=None, is_build_pyramid=True, progress=None)

Import telecom industry image data (TEMSClutter) files.

Parameters:
  • source_file (str) – The imported telecom industry image data (TEMSClutter) file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • is_build_pyramid (bool) – Whether to automatically build an image pyramid
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetGrid] or list[str]

iobjectspy.conversion.import_tems_vector(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, progress=None)

Import telecom vector line data.

Parameters:
  • source_file (str) – The imported telecom vector line data file.
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_import_empty (bool) – Whether to import an empty data set.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_tems_building_vector(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, is_char_separator=False, progress=None)

Import telecom vector surface data.

Parameters:
  • source_file (str) – The telecom vector surface data file to be imported.
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_import_empty (bool) – Whether to import an empty data set.
  • is_char_separator (bool) – Whether to split fields by a character separator. The default is False, which splits fields by spaces.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_file_gdb_vector(source_file, output, out_dataset_name=None, import_mode=None, is_ignore_attrs=False, is_import_empty=False, progress=None)

Import the Esri Geodatabase exchange file. Only supports Windows.

Parameters:
  • source_file (str) – The imported Esri Geodatabase exchange file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_ignore_attrs (bool) – Whether to ignore attribute information.
  • is_import_empty (bool) – Whether to import an empty data set, the default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_personal_gdb_vector(source_file, output, out_dataset_name=None, import_mode=None, is_import_3d_as_2d=False, is_import_empty=False, progress=None)

Import Esri Personal Geodatabase vector files. Only supports Windows.

Parameters:
  • source_file (str) – The imported Esri Personal Geodatabase vector file
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_import_3d_as_2d (bool) – Whether to import 3D objects as 2D objects.
  • is_import_empty (bool) – Whether to import an empty data set, the default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.import_tems_text_labels(source_file, output, out_dataset_name=None, import_mode=None, is_import_empty=False, progress=None)

Import telecom vector feature labeling.

Parameters:
  • source_file (str) – The imported telecom vector feature annotation.
  • output (Datasource or DatasourceConnectionInfo or str) – result data source
  • out_dataset_name (str) – result data set name
  • import_mode (ImportMode or str) – import mode
  • is_import_empty (bool) – Whether to import an empty data set.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

the imported result data set or the name of the result data set

Return type:

list[DatasetVector] or list[str]

iobjectspy.conversion.export_to_bmp(data, output, is_over_write=False, world_file_path=None, progress=None)

Export dataset to BMP file

Parameters:
  • data (DatasetImage or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • world_file_path (str) – The coordinate file path of the exported image data
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_gif(data, output, is_over_write=False, world_file_path=None, progress=None)

Export dataset to GIF file

Parameters:
  • data (DatasetImage or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • world_file_path (str) – The coordinate file path of the exported image data
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_grd(data, output, is_over_write=False, progress=None)

Export dataset to GRD file

Parameters:
  • data (DatasetGrid or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_img(data, output, is_over_write=False, progress=None)

Export dataset to IMG file

Parameters:
  • data (DatasetImage or DatasetGrid or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_jpg(data, output, is_over_write=False, world_file_path=None, compression=None, progress=None)

Export dataset to JPG file

Parameters:
  • data (DatasetImage or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • world_file_path (str) – The coordinate file path of the exported image data
  • compression (int) – compression ratio of the image file, in percent
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_png(data, output, is_over_write=False, world_file_path=None, progress=None)

Export dataset to PNG file

Parameters:
  • data (DatasetImage or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • world_file_path (str) – The coordinate file path of the exported image data
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_sit(data, output, is_over_write=False, password=None, progress=None)

Export dataset to SIT file

Parameters:
  • data (DatasetImage or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • password (str) – password
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_tif(data, output, is_over_write=False, export_as_tile=False, export_transform_file=True, progress=None)

Export dataset to TIF file

Parameters:
  • data (DatasetImage or DatasetGrid or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • export_as_tile (bool) – Whether to export in blocks, the default is False
  • export_transform_file (bool) – Whether to export the affine transformation information to an external file. The default is True, which writes it to an external TFW file; otherwise the georeferencing information is embedded in the TIFF file
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool
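For instance, a DEM dataset might be exported with the georeferencing embedded in the TIFF rather than written to an external TFW file (paths are illustrative):

```python
>>> export_to_tif('E:/data.udb/dem', 'E:/out/dem.tif',
...               is_over_write=True, export_transform_file=False)
```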

iobjectspy.conversion.export_to_csv(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, is_export_field_names=True, is_export_point_as_wkt=False, progress=None)

Export dataset to csv file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • is_export_field_names (bool) – Whether to write out the field names.
  • is_export_point_as_wkt (bool) – Whether to write point geometries as WKT.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool
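A sketch of exporting a point dataset to CSV with WKT geometry; the dataset path and the ignored field name are illustrative:

```python
>>> export_to_csv('E:/data.udb/poi', 'E:/out/poi.csv',
...               is_over_write=True, ignore_fields=['remark'],
...               is_export_field_names=True, is_export_point_as_wkt=True)
```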

iobjectspy.conversion.export_to_dbf(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to dbf file

Parameters:
  • data (DatasetVector or str) – The exported dataset, only supports exporting attribute table dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_dwg(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, cad_version=CADVersion.CAD2007, is_export_border=False, is_export_xrecord=False, is_export_external_data=False, style_map_file=None, progress=None)

Export dataset to DWG file. Linux platform does not support export dataset as DWG file.

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • cad_version (CADVersion or str) – The version of the exported DWG file.
  • is_export_border (bool) – Whether to export borders when exporting CAD region (face-like) or rectangle objects.
  • is_export_xrecord (bool) – Whether to export user-defined fields and attribute fields as extended records
  • is_export_external_data (bool) – whether to export extended fields
  • style_map_file (str) – The path of the style comparison table
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_dxf(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, cad_version=CADVersion.CAD2007, is_export_border=False, is_export_xrecord=False, is_export_external_data=False, progress=None)

Export dataset to DXF file, Linux platform does not support exporting dataset as DXF file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • cad_version (CADVersion or str) – The version of the exported DXF file.
  • is_export_border (bool) – Whether to export borders when exporting CAD region (face-like) or rectangle objects.
  • is_export_xrecord (bool) – Whether to export user-defined fields and attribute fields as extended records
  • is_export_external_data (bool) – whether to export extended fields
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_e00(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, double_precision=False, progress=None)

Export dataset to E00 file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • double_precision (bool) – Whether to export E00 in double precision, the default is False.
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_kml(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to KML file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – The exported dataset collection
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool
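Since the data parameter also accepts a list, several datasets can be written into one KML file (paths are illustrative):

```python
>>> export_to_kml(['E:/data.udb/points', 'E:/data.udb/lines'],
...               'E:/out/data.kml', is_over_write=True)
```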

iobjectspy.conversion.export_to_kmz(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to KMZ file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – The exported dataset collection
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_geojson(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to GeoJson file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – The exported dataset collection
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_mif(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to MIF file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_shape(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to Shape file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_simplejson(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to SimpleJson file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_tab(data, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, style_map_file=None, progress=None)

Export dataset to TAB file

Parameters:
  • data (DatasetVector or str) – The exported dataset
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • style_map_file (str) – The path of the exported style map
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_vct(data, config_path, version, output, is_over_write=False, attr_filter=None, ignore_fields=None, target_file_charset=None, progress=None)

Export dataset to VCT file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – The exported dataset collection
  • config_path (str) – VCT configuration file path
  • version (VCTVersion or str) – VCT version
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • attr_filter (str) – The attribute filter condition for the exported data
  • ignore_fields (list[str]) – fields to ignore
  • target_file_charset (Charset or str) – The character set type of the file to be exported
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_egc(data, output, is_over_write=False, progress=None)

Export data set to EGC file

Parameters:
  • data (DatasetGrid or str) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_gjb(data, output, is_over_write=False, progress=None)

Export data set to GJB file

Parameters:
  • data (dict[GJBLayerType,DatasetVector] or dict[GJBLayerType, list[DatasetVector]]) –

    The exported data set and layer information. GJB files have fixed layers, and you need to specify the layer type and the exported data set when exporting.

    • GJBLayerType.GJB_S metadata layer: only one attribute table dataset can be set, to export bit metadata
    • GJBLayerType.GJB_R metadata layer: multiple text datasets can be set
    • Other layers: multiple point, line and surface datasets can be set

  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool
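The layer mapping described above can be sketched as a dict. The dataset variables below are hypothetical; GJB_S and GJB_R are the layer types named in this entry, and other layers would use the corresponding GJBLayerType members:

```python
>>> layers = {
...     GJBLayerType.GJB_S: meta_table_dataset,    # one attribute table dataset
...     GJBLayerType.GJB_R: [text_ds1, text_ds2],  # multiple text datasets
... }
>>> export_to_gjb(layers, 'E:/out/data.gjb', is_over_write=True)
```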

iobjectspy.conversion.export_to_tems_vector(data, output, is_over_write=False, progress=None)

Export data set to telecom vector line data file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_tems_clutter(data, output, is_over_write=False, progress=None)

Export data sets to telecom industry image files

Parameters:
  • data (DatasetGrid or str) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_tems_text_labels(data, output, label_field, is_over_write=False, progress=None)

Export data set to telecom vector text label data file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – the exported data set
  • output (str) – result file path
  • label_field (str) – The name of the text field to be exported
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_tems_building_vector(data, output, is_over_write=False, progress=None)

Export data set to telecom vector surface data file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_file_gdb_vector(data, output, is_over_write=False, progress=None)

Export a dataset to an ESRI GDB interchange format file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.conversion.export_to_personal_gdb_vector(data, output, is_over_write=False, progress=None)

Export a dataset to an ESRI Personal GDB file

Parameters:
  • data (DatasetVector or str or list[DatasetVector] or list[str]) – the exported data set
  • output (str) – result file path
  • is_over_write (bool) – Whether to force overwrite when there is a file with the same name in the export directory. The default is False
  • progress (function) – progress information processing function, please refer to StepEvent
Returns:

whether the export was successful

Return type:

bool

iobjectspy.data module

class iobjectspy.data.DatasourceConnectionInfo(server=None, engine_type=None, alias=None, is_readonly=None, database=None, driver=None, user=None, password=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

datasource connection information class. It includes all the information for datasource connection, such as the name of the server to be connected, database name, user name, password, etc. When a workspace is saved, the connection information for the datasources in the workspace is stored in the workspace file. For different types of datasources, the connection information is different. So when using the members contained in this class, please pay attention to the datasource type that the member applies to.

E.g.::
>>> conn_info = DatasourceConnectionInfo('E:/data.udb')
>>> print(conn_info.server)
'E:\data.udb'
>>> print(conn_info.type)
EngineType.UDB

Create OraclePlus database connection information:

>>> conn_info = (DatasourceConnectionInfo().
...              set_type(EngineType.ORACLEPLUS).
...              set_server('server').
...              set_database('database').
...              set_alias('alias').
...              set_user('user').
...              set_password('password'))
>>> print(conn_info.database)
'database'

Construct datasource connection object

Parameters:
  • server (str) –

    database server name, file name or service address:

    - For MEMORY, it is ':memory:'.
    - For UDB files, it is the absolute path of the file. Note: when the length of the absolute path exceeds 260 bytes in UTF-8 encoding, the datasource cannot be opened.
    - For an Oracle database, the server name is its TNS service name.
    - For a SQL Server database, the server name is the system DSN (Database Source Name).
    - For a PostgreSQL database, the server name is "IP:Port"; the default port number is 5432.
    - For a DB2 database, it has already been cataloged, so no server needs to be configured.
    - For a Kingbase database, the server name is its IP address.
    - For the GoogleMaps datasource, the service address defaults to "http://maps.google.com" and cannot be changed.
    - For the SuperMapCloud datasource, it is the service address.
    - For the MAPWORLD datasource, the service address defaults to "http://www.tianditu.cn" and cannot be changed.
    - For OGC and REST datasources, it is the service address.
    - If the engine type is IMAGEPLUGINS and this parameter is set to the name of a map cache configuration file (SCI), the map cache can be loaded.

  • engine_type (EngineType or str) – The engine type of the datasource connection, you can use the EngineType enumeration value and name
  • alias (str) – datasource alias. The alias is the unique identifier of the datasource and is not case-sensitive
  • is_readonly (bool) – Whether to open the datasource in read-only mode. If you open the datasource in read-only mode, the related information of the datasource and the data in it cannot be modified.
  • database (str) – The name of the database connected to the datasource
  • driver (str) –

    Driver name required for datasource connection:

    - For a SQL Server database, ODBC is used for the connection, and the driver name returned is "SQL Server" or "SQL Native Client".
    - For a WMTS service published by iServer, the driver name returned is "WMTS".

  • user (str) – The username for logging in to the database. Applicable to database type datasources.
  • password (str) – The password for logging in to the database or file connected to the datasource. For the GoogleMaps datasource, if you open a datasource based on an earlier version, the password returned is the key obtained by the user after registering on the Google official website
alias

str – datasource alias, which is the unique identifier of the datasource. The identifier is not case sensitive

database

str – The name of the database to which the datasource is connected

driver

str – the driver name required for datasource connection

from_dict(values)

Read the database datasource connection information from the dict object. The value in the current object will be overwritten after reading.

Parameters:values (dict) – A dict containing datasource connection information.
Returns:self
Return type:DatasourceConnectionInfo
static from_json(value)

Construct the datasource connection information object from the json string.

Parameters:value (str) – json string
Returns:datasource connection information object
Return type:DatasourceConnectionInfo
is_readonly

bool – Whether to open the datasource in read-only mode

is_same(other)

Determine whether the current object and the specified connection information object point to the same datasource. Two connection information objects point to the same datasource only if:

- the database engine type (type) is the same;
- the database server name, file name or service address (server) is the same;
- the name of the connected database (database) is the same, if set;
- the database user name (user) is the same, if set;
- the password (password) of the connected database or file is the same, if set;
- the read-only flag (is_readonly) is the same, if set.
Parameters:other (DatasourceConnectionInfo) – The database connection information object to be compared.
Returns:Return True to indicate that the connection information with the specified database points to the same datasource object. Otherwise return False
Return type:bool
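
The criteria above can be sketched as a plain-Python comparison. This is illustrative only: the dict layout and the helper are hypothetical, not part of iobjectspy.

```python
def points_to_same_datasource(a, b):
    """Illustrative sketch of the is_same() criteria listed above.

    a and b are plain dicts with keys 'type', 'server', 'database',
    'user', 'password', 'is_readonly'; keys that were never set are
    simply absent (hypothetical layout, not the library's internals).
    """
    # engine type and server must always match
    if a.get('type') != b.get('type') or a.get('server') != b.get('server'):
        return False
    # the remaining criteria apply only when both sides set a value
    for key in ('database', 'user', 'password', 'is_readonly'):
        if key in a and key in b and a[key] != b[key]:
            return False
    return True
```
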
static load_from_dcf(file_path)

Load the database connection information from the dcf file and return a new database connection information object.

Parameters:file_path (str) – dcf file path.
Returns:datasource connection information object
Return type:DatasourceConnectionInfo
static load_from_xml(xml)

Load the database connection information from the specified xml string and return a new database connection information object.

Parameters:xml (str) – the xml string of the connection information of the imported datasource
Returns:datasource connection information object
Return type:DatasourceConnectionInfo
static make(value)

Construct a database connection information object.

Parameters:value (str or DatasourceConnectionInfo or dict) –

An object containing datasource connection information:

- If it is a DatasourceConnectionInfo object, the object is returned directly.
- If it is a dict, refer to make_from_dict.
- If it is a str, it can be:

  - ':memory:', which returns connection information for the memory datasource engine;
  - a udb or udd file path, which returns connection information for the UDB datasource engine;
  - a dcf file path, refer to save_as_dcf;
  - an xml string, refer to to_xml.
Returns:datasource connection information object
Return type:DatasourceConnectionInfo
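
The string dispatch above can be sketched as follows. This is an illustration of the documented rules, not the library's actual parsing; the helper name is hypothetical.

```python
def classify_make_input(value):
    """Decide how a string passed to make() would be interpreted,
    following the rules documented above (illustrative sketch)."""
    if value == ':memory:':
        return 'memory engine connection info'
    lowered = value.lower()
    if lowered.endswith(('.udb', '.udd')):
        return 'UDB engine connection info'
    if lowered.endswith('.dcf'):
        return 'load_from_dcf'
    if value.lstrip().startswith('<'):   # looks like an xml string
        return 'load_from_xml'
    return 'unrecognized'
```
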
static make_from_dict(values)

Construct a datasource connection object from the dict object. Return a new database connection information object.

Parameters:values (dict) – A dict containing datasource connection information.
Returns:datasource connection information object
Return type:DatasourceConnectionInfo
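
For illustration, dicts passed to make_from_dict() might look like the following. The key names follow the to_dict() example on this page; the engine names and all values are hypothetical and should be checked against EngineType.

```python
# Hypothetical connection dicts (key layout follows the to_dict()
# example on this page; engine names and values are assumptions).
udb_conn = {'type': 'UDB', 'server': 'E:/data.udb'}

pg_conn = {
    'type': 'POSTGRESQL',            # engine name assumed
    'server': '192.168.1.10:5432',   # PostgreSQL uses "IP:Port", default port 5432
    'database': 'gisdb',
    'user': 'gis_user',
    'password': 'secret',
    'is_readonly': False,
}
```

A call like DatasourceConnectionInfo.make_from_dict(pg_conn) would then build the connection object.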
password

str – the password of the database or file to which the datasource is connected

save_as_dcf(file_path)

Save the current dataset connection information object to the DCF file.

Parameters:file_path (str) – dcf file path.
Returns:Return True if saved successfully, otherwise False
Return type:bool
server

str – database server name, file name or service address

set_alias(value)

Set datasource alias

Parameters:value (str) – The alias is the unique identifier of the datasource and is not case-sensitive
Returns:self
Return type:DatasourceConnectionInfo
set_database(value)

Set the name of the database to which the datasource is connected. Applicable to database type datasources

Parameters:value (str) – The name of the database connected to the datasource.
Returns:self
Return type:DatasourceConnectionInfo
set_driver(value)

Set the driver name required for datasource connection.

Parameters:value (str) –

Driver name required for datasource connection:

- For a SQL Server database, ODBC is used for the connection; set the driver name to "SQL Server" or "SQL Native Client".
- For a WMTS service published by iServer, set the driver name to "WMTS"; this method must be called to set the driver name.

Returns:self
Return type:DatasourceConnectionInfo
set_password(value)

Set the password of the database or file connected to the login datasource

Parameters:value (str) – The password of the database or file to which the login datasource is connected. For the GoogleMaps datasource, if you open a datasource based on an earlier version, you need to enter a password. The password is the key obtained by the user after registering on the Google official website.
Returns:self
Return type:DatasourceConnectionInfo
set_readonly(value)

Set whether to open the datasource in read-only mode.

Parameters:value (bool) – Specify whether to open the datasource in read-only mode. For UDB datasource, if its file attribute is read-only, it must be set to read-only before it can be opened.
Returns:self
Return type:DatasourceConnectionInfo
set_server(value)

Set the database server name, file name or service address

Parameters:value (str) –

database server name, file name or service address:

- For MEMORY, it is ':memory:'.
- For UDB files, it is the absolute path of the file. Note: when the length of the absolute path exceeds 260 bytes in UTF-8 encoding, the datasource cannot be opened.
- For an Oracle database, the server name is its TNS service name.
- For a SQL Server database, the server name is the system DSN (Database Source Name).
- For a PostgreSQL database, the server name is "IP:Port"; the default port number is 5432.
- For a DB2 database, it has already been cataloged, so no server needs to be configured.
- For a Kingbase database, the server name is its IP address.
- For the GoogleMaps datasource, the service address defaults to "http://maps.google.com" and cannot be changed.
- For the SuperMapCloud datasource, it is the service address.
- For the MAPWORLD datasource, the service address defaults to "http://www.tianditu.cn" and cannot be changed.
- For OGC and REST datasources, it is the service address.
- If the engine type is IMAGEPLUGINS and this parameter is set to the name of a map cache configuration file (SCI), the map cache can be loaded.

Returns:self
Return type:DatasourceConnectionInfo
set_type(value)

Set the type of engine connected to the datasource.

Parameters:value (EngineType or str) – Engine type of datasource connection
Returns:self
Return type:DatasourceConnectionInfo
set_user(value)

Set the user name for logging in to the database. Applicable to database type datasources

Parameters:value (str) – user name to log in to the database
Returns:self
Return type:DatasourceConnectionInfo
to_dict()

Output the current datasource connection information as a dict object.

Returns:a dict containing datasource connection information
Return type:dict
Example::
>>> conn_info = (DatasourceConnectionInfo().
...              set_type(EngineType.ORACLEPLUS).
...              set_server('oracle_server').
...              set_database('database_name').
...              set_alias('alias_name').
...              set_user('user_name').
...              set_password('password_123'))
>>>
>>> print(conn_info.to_dict())
{'type': 'ORACLEPLUS', 'alias': 'alias_name', 'server': 'oracle_server', 'user': 'user_name', 'is_readonly': False, 'password': 'password_123', 'database': 'database_name'}
to_json()

Output as json format string

Returns:json format string
Return type:str
to_xml()

Output the current dataset connection information as an xml string

Returns:The XML string converted from the current datasource connection information object.
Return type:str
type

EngineType – datasource type

user

str – the user name to log in to the database

class iobjectspy.data.Datasource

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The datasource defines a consistent data access interface and specification. The physical storage of a datasource can be either a file or a database; the main distinction between storage methods is the type of data engine used. With the UDB engine, the datasource is stored as files (*.udb, *.udd): file-type datasources use .udb files to store spatial data. With a spatial database engine, the datasource is stored in the specified DBMS. Each datasource exists in a workspace, and different datasources are distinguished by their aliases. Through the datasource object, you can create, delete, and copy datasets.

Use create_vector_dataset to quickly create vector dataset:

>>> ds = Datasource.create('E:/data.udb')
>>> location_dt = ds.create_vector_dataset('location','Point')
>>> print(location_dt.name)
location

Append data to point dataset:

>>> location_dt.append([Point2D(1,2), Point2D(2,3), Point2D(3,4)])
>>> print(location_dt.get_record_count())
3

The datasource can directly write geometric objects, feature objects, point data, etc.:

>>> rect = location_dt.bounds
>>> location_coverage = ds.write_spatial_data([rect],'location_coverage')
>>> print(location_coverage.get_record_count())
1
>>> ds.delete_all()
>>> ds.close()
alias

str – The alias of the datasource. The alias uniquely identifies the datasource in the workspace, and the datasource can be accessed through it. The alias is given when the datasource is created or opened. The same datasource can be opened with different aliases.

change_password(old_password, new_password)

Modify the password of the opened datasource

Parameters:
  • old_password (str) – old password
  • new_password (str) – new password
Returns:

Return True if successful, otherwise False

Return type:

bool

close()

Close the current datasource.

Returns:Return True if closed successfully, otherwise return False
Return type:bool
connection_info

DatasourceConnectionInfo – datasource connection information

contains(name)

Check whether there is a dataset with the specified name in the current datasource

Parameters:name (str) – dataset name
Returns:Return True if the current datasource contains a dataset with the specified name, otherwise return False
Return type:bool
copy_dataset(source, out_dataset_name=None, encode_type=None, progress=None)

Copy the dataset. Before copying the dataset, you must ensure that the current datasource is open and writable. When copying a dataset, the encoding method of the dataset can be modified through the EncodeType parameter. For the encoding method of dataset storage, please refer to EncodeType enumeration type. Since the CAD dataset does not support any encoding, the EncodeType set when copying the CAD dataset is invalid

Parameters:
  • source (Dataset or str) –

    The source dataset to be copied. It can be a dataset object, or a combination of a datasource alias and a dataset name. The datasource alias and the dataset name can be joined with any of "|", "\", "/". E.g.:

    >>> source ='ds_alias/point_dataset'
    

    or:

    >>> source ='ds_alias|point_dataset'
    
  • out_dataset_name (str) – The name of the target dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
  • encode_type (EncodeType or str) – The encoding method of the dataset. Can be an EncodeType enumeration value or name.
  • progress (function) – A function for processing progress information, please refer to StepEvent for details.
Returns:

Return the result dataset object if the copy is successful, otherwise return None

Return type:

Dataset
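
The alias/dataset-name strings accepted above can be split as sketched below. This is a hypothetical helper illustrating the three documented separators, not an iobjectspy function.

```python
def split_dataset_path(value):
    """Split a 'datasource_alias<sep>dataset_name' string on any of the
    three separators documented above: '|', '\\' or '/'."""
    for sep in ('|', '\\', '/'):
        if sep in value:
            alias, _, name = value.rpartition(sep)
            return alias, name
    return None, value  # no separator: treat the whole string as a name
```
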

static create(conn_info)

Create a new datasource based on the specified datasource connection information.

Parameters:conn_info (str or dict or DatasourceConnectionInfo) – datasource connection information, please refer to DatasourceConnectionInfo.make
Returns:datasource object
Return type:Datasource
create_dataset(dataset_info, adjust_name=False)

Create a dataset based on the specified dataset information. If the name of the dataset is invalid or already exists, creation will fail; set adjust_name to True to automatically obtain a valid dataset name instead.

Parameters:
  • dataset_info (DatasetVectorInfo or DatasetImageInfo or DatasetGridInfo) – dataset information
  • adjust_name (bool) – When the dataset name is invalid, whether to automatically adjust the dataset name and use a legal dataset name. The default is False.
Returns:

Return the result dataset object if the creation is successful, otherwise return None

Return type:

Dataset

create_dataset_from_template(template, name, adjust_name=False)

Create a new dataset object based on the specified template dataset.

Parameters:
  • template (Dataset) – template dataset
  • name (str) – dataset name
  • adjust_name (bool) – When the dataset name is invalid, whether to automatically adjust the dataset name and use a legal dataset name. The default is False.
Returns:

Return the result dataset object if the creation is successful, otherwise return None

Return type:

Dataset

create_mosaic_dataset(name, prj_coordsys=None, adjust_name=False)

Create a mosaic dataset object based on the dataset name and projection information.

Parameters:
  • name (str) – dataset name
  • prj_coordsys (int or str or PrjCoordSys or PrjCoordSysType) – Specify the projection information of the mosaic dataset. Supports an EPSG code, a PrjCoordSys object, a PrjCoordSysType value, an xml or wkt string, or a projection information file. Note that if an integer is passed in, it must be an EPSG code, not the integer value of a PrjCoordSysType.
  • adjust_name (bool) – When the dataset name is invalid, whether to automatically adjust the dataset name and use a legal dataset name. The default is False.
Returns:

Return the result dataset object if the creation is successful, otherwise return None

Return type:

DatasetMosaic

create_vector_dataset(name, dataset_type, adjust_name=False)

Create a vector dataset object based on the name and type of the dataset.

Parameters:
  • name (str) – dataset name
  • dataset_type (DatasetType or str) – The dataset type, which can be an enumeration value or name of the dataset type. Support TABULAR, POINT, LINE, REGION, TEXT, CAD, POINT3D, LINE3D, REGION3D
  • adjust_name (bool) – When the dataset name is invalid, whether to automatically adjust the dataset name and use a legal dataset name. The default is False.
Returns:

Return the result dataset object if the creation is successful, otherwise return None

Return type:

DatasetVector

datasets

list[Dataset] – All dataset objects in the current datasource

delete(item)

Delete the specified dataset, which can be the name or serial number of the dataset

Parameters:item (str or int) – The name or serial number of the dataset to be deleted
Returns:Return True if the dataset is deleted successfully, otherwise False
Return type:bool
delete_all()

Delete all datasets in the current datasource

description

str – Return the description information about the datasource added by the user

field_to_point_dataset(source_dataset, x_field, y_field, out_dataset_name=None)

Create a point dataset from the X and Y coordinate fields in the attribute table of a vector dataset. That is, use the X and Y coordinate fields in the attribute table of the vector dataset as the X and Y coordinates of the dataset to create a point dataset.

Parameters:
  • source_dataset (DatasetVector or str) – a vector dataset with coordinate fields in the associated attribute table
  • x_field (str) – A field representing the abscissa of a point.
  • y_field (str) – A field representing the vertical coordinate of a point.
  • out_dataset_name (str) – The name of the target dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the point dataset if successful, otherwise return None

Return type:

DatasetVector

flush(dataset_name=None)

Save the data in the memory that has not yet been written into the database to the database

Parameters:dataset_name (str) – The name of the dataset to be refreshed. When an empty string or None is passed in, all datasets are refreshed; otherwise, the dataset with the specified name is refreshed.
Returns:Return True if successful, otherwise False
Return type:bool
static from_json(value)

Open the datasource from a json format string. The json string format is that of DatasourceConnectionInfo; see DatasourceConnectionInfo.to_json() and to_json().

Parameters:value (str) – json string format
Returns:datasource object
Return type:Datasource
get_available_dataset_name(name)

Return a dataset name that is not yet used in the datasource. Dataset name restrictions:

- the length is limited to 30 characters (that is, 30 English letters or 15 Chinese characters);
- the name may contain letters, Chinese characters, digits and underscores;
- the name cannot start with a digit or an underscore;
- the name cannot conflict with reserved keywords of the database.

Parameters:name (str) – dataset name
Returns:valid dataset name
Return type:str
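
The naming rules above can be sketched with a regular expression. This is illustrative only: the reserved-word list is hypothetical, and the real engine's keyword set and length accounting may differ.

```python
import re

# Hypothetical reserved-word list for illustration; a real database
# engine has its own keyword set.
_RESERVED = {'select', 'table', 'index'}

def is_valid_dataset_name(name):
    """Sketch of the documented naming rules: letters, Chinese (CJK)
    characters, digits and underscores; must not start with a digit or
    underscore; at most 30 units long, where a CJK character counts as
    two units; not a reserved keyword."""
    if not name or name.lower() in _RESERVED:
        return False
    if not re.fullmatch(r'[A-Za-z\u4e00-\u9fff][A-Za-z0-9_\u4e00-\u9fff]*', name):
        return False
    units = sum(2 if '\u4e00' <= ch <= '\u9fff' else 1 for ch in name)
    return units <= 30
```
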
get_count()

Get the number of datasets

Return type:int
get_dataset(item)

Get the dataset object according to the dataset name or serial number

Parameters:item – the name or serial number of the dataset
Type:str or int
Returns:dataset object
Return type:Dataset
index_of(name)

Return the index value of the dataset corresponding to the given dataset name in the dataset collection

Parameters:name (str) – dataset name
Return type:int
inner_point_to_dataset(source_dataset, out_dataset_name=None)

Create the interior-point dataset of a vector dataset, and copy the attributes of the geometric objects in the vector dataset to the attribute table of the resulting point dataset

Parameters:
  • source_dataset (DatasetVector or str) – the vector dataset whose interior points are to be computed
  • out_dataset_name (str) – The name of the target dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the interior point dataset if creation succeeds, otherwise return None

Return type:

DatasetVector

is_available_dataset_name(name)

Determine whether the name of the dataset passed in by the user is legal. When creating a dataset, check the validity of its name.

Parameters:name (str) – the name of the dataset to be checked
Returns:If the dataset name is valid, return True, otherwise return False
Return type:bool
is_opened()

Return whether the datasource is open: True if the datasource is open, False if it is closed.

Returns:Whether the datasource is open
Return type:bool
is_readonly()

Return whether the datasource is opened in read-only mode. For file datasources, if they are opened in read-only mode, they are shared and can be opened multiple times; if they are opened in non-read-only mode, they can only be opened once. For the image datasource (IMAGEPLUGINS engine type), it can only be opened in read-only mode.

Returns:Whether the datasource is opened as read-only
Return type:bool
label_to_text_dataset(source_dataset, text_field, text_style=None, out_dataset_name=None)

Generate a text dataset from an attribute field of a dataset. Each text object in the generated text dataset uses the interior point of its corresponding spatial object as its anchor point, and the content of the text object comes from the attribute value of that spatial object.

Parameters:
  • source_dataset (DatasetVector or str) – the vector dataset whose attribute field is to be converted
  • text_field (str) – The name of the attribute field to be converted.
  • text_style (TextStyle) – the style of the result text object
  • out_dataset_name (str) – The name of the target dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the text dataset if successful, otherwise return None

Return type:

DatasetVector

last_date_updated()

Get the last update time of the datasource

Return type:datetime.datetime
static open(conn_info)

Open the datasource according to the datasource connection information. If the set connection information is of a UDB-type datasource, it is returned directly. Directly opening a memory datasource is not supported; to use a memory datasource, use create().

Parameters:conn_info (str or dict or DatasourceConnectionInfo) – datasource connection information, please refer to DatasourceConnectionInfo.make
Returns:datasource object
Return type:Datasource
prj_coordsys

PrjCoordSys – Get projection information of datasource

refresh()

Refresh the datasource of the database type

set_description(description)

Set the description information about the datasource added by the user. Users can add any information they want to the description, such as who created the datasource, the source of the data, its main content, accuracy, and quality, which is useful for maintaining the data

Parameters:description (str) – The description information about the datasource added by the user
set_prj_coordsys(prj)

Set the projection information of the datasource

Parameters:prj (PrjCoordSys) – projection information
to_json()

Return the datasource as a json format string. Specifically, return the json string of the datasource connection information, i.e. DatasourceConnectionInfo.to_json().

Return type:str
type

EngineType – datasource engine type

workspace

Workspace – Workspace object to which the current datasource belongs

write_attr_values(data, out_dataset_name=None)

Write the attribute data to the attribute dataset (DatasetType.TABULAR).

Parameters:
  • data (list or tuple) –

    The data to be written. data must be a list, tuple or set. Each element in data can itself be a list or tuple, in which case data is equivalent to a two-dimensional array, for example:

    >>> data = [[1,2.0,'a1'], [2,3.0,'a2'], [3,4.0,'a3']]
    

    or:

    >>> data = [(1,2.0,'a1'), (2,3.0,'a2'), (3,4.0,'a3')]
    

    If the element item in data is not a list or tuple, it will be treated as an element. E.g.:

    >>> data = [1,2,3]
    

    or:

    >>> data = ['test1','test2','test3']
    

    Then the final result dataset will contain 1 column and 3 rows. If an element in data is a dict, each dict object is written as a string:

    >>> data = [{1:'a'}, {2:'b'}, {3:'c'}]
    

    It is equivalent to writing:

    >>> data = ["{1:'a'}", "{2:'b'}", "{3:'c'}"]
    

    In addition, the user needs to ensure that every element in the list has the same structure. The program automatically samples up to 20 records and infers a reasonable field type from the sampled values. The specific mapping is:

    - int: FieldType.INT64
    - str: FieldType.WTEXT
    - float: FieldType.DOUBLE
    - bool: FieldType.BOOLEAN
    - datetime.datetime: FieldType.DATETIME
    - bytes: FieldType.LONGBINARY
    - bytearray: FieldType.LONGBINARY
    - other: FieldType.WTEXT
  • out_dataset_name (str) – The name of the result dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the DatasetVector when the data is written successfully, otherwise return None

Return type:

DatasetVector
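
The type-inference mapping above can be sketched in plain Python. This is an illustration of the documented mapping, not the library's sampling logic; the helper name and FieldType strings are stand-ins.

```python
import datetime

# Illustrative mapping of Python value types to the FieldType names
# documented above (sketch; the library's actual logic may differ).
_FIELD_TYPE_BY_PYTYPE = [
    (bool, 'BOOLEAN'),                 # bool first: bool is a subclass of int
    (int, 'INT64'),
    (float, 'DOUBLE'),
    (datetime.datetime, 'DATETIME'),
    ((bytes, bytearray), 'LONGBINARY'),
    (str, 'WTEXT'),
]

def infer_field_type(value):
    for pytype, field_type in _FIELD_TYPE_BY_PYTYPE:
        if isinstance(value, pytype):
            return field_type
    return 'WTEXT'  # everything else is written as wide text
```
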

write_features(data, out_dataset_name=None)

Write the feature object to the dataset.

Parameters:
  • data (list[Feature] or tuple[Feature]) – The set of feature objects to be written. The user needs to ensure that the structure of all elements in the collection must be the same, including the same geometric object type and field information.
  • out_dataset_name (str) – The name of the result dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the result dataset object if writing succeeds, otherwise return None

Return type:

DatasetVector

write_recordset(source, out_dataset_name=None)

Write a record set object or dataset object to the current datasource.

Parameters:
  • source – the record set or dataset object to be written
  • out_dataset_name (str) – The name of the result dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
Returns:

Return the DatasetVector when the data is written successfully, otherwise return None

Return type:

DatasetVector

write_spatial_data(data, out_dataset_name=None, values=None)

Write spatial data (Point2D, Point3D, Rectangle, Geometry) into the vector dataset.

Parameters:
  • data (list or tuple) – The data to be written. data must be a list, tuple or set. Each element can be a Point2D, Point3D, GeoPoint, GeoPoint3D, GeoLine, GeoRegion or Rectangle:

    - if all elements in data are Point2D or GeoPoint, a point dataset is created;
    - if all elements in data are Point3D or GeoPoint3D, a 3D point dataset is created;
    - if all elements in data are GeoLine, a line dataset is created;
    - if all elements in data are Rectangle or GeoRegion, a polygon dataset is created;
    - otherwise, a CAD dataset is created.
  • out_dataset_name (str) – The name of the result dataset. When the name is empty or illegal, a legal dataset name will be automatically obtained
  • values

    Spatial data to be written to the attribute field value of the dataset. If it is not None, it must be a list or tuple, and the length must be the same as the data length. Each element item in values can be a list or tuple. In this case, values is equivalent to a two-dimensional array, for example:

    >>> values = [[1,2.0,'a1'], [2,3.0,'a2'], [3,4.0,'a3']]
    

    or

    >>> values = [(1,2.0,'a1'), (2,3.0,'a2'), (3,4.0,'a3')]
    

    If the element item in values is not a list or tuple, it will be treated as an element. E.g.:

    >>> values = [1,2,3]
    

    or:

    >>> values = ['test1','test2','test3']
    

    Then the final result dataset will contain 1 column and 3 rows. If an element in values is a dict, each dict object is written as a string:

    >>> values = [{1:'a'}, {2:'b'}, {3:'c'}]
    

    It is equivalent to writing:

    >>> values = ["{1:'a'}", "{2:'b'}", "{3:'c'}"]
    

    In addition, the user needs to ensure that every element in the list has the same structure. The program automatically samples up to 20 records and infers a reasonable field type from the sampled values. The specific mapping is:

    - int: FieldType.INT64
    - str: FieldType.WTEXT
    - float: FieldType.DOUBLE
    - bool: FieldType.BOOLEAN
    - datetime.datetime: FieldType.DATETIME
    - bytes: FieldType.LONGBINARY
    - bytearray: FieldType.LONGBINARY
    - other: FieldType.WTEXT
Returns:

Return DatasetVector when writing data successfully, otherwise return None

Return type:

DatasetVector
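The sampling-based field type inference described above can be sketched in plain Python. This is a conceptual illustration only, not the library's actual implementation; the FieldType names are taken from the correspondence table above, and the fallback behavior for inconsistent samples is an assumption:

```python
import datetime

# Mapping from Python value types to FieldType names, following the
# correspondence listed above. bool must precede int, because in
# Python bool is a subclass of int.
_TYPE_MAP = [
    (bool, 'BOOLEAN'),
    (int, 'INT64'),
    (float, 'DOUBLE'),
    (str, 'WTEXT'),
    (datetime.datetime, 'DATETIME'),
    ((bytes, bytearray), 'LONGBINARY'),
]

def infer_field_type(values, sample_size=20):
    """Sample up to `sample_size` items and infer a single field type."""
    inferred = set()
    for v in values[:sample_size]:
        for py_type, field_type in _TYPE_MAP:
            if isinstance(v, py_type):
                inferred.add(field_type)
                break
        else:
            inferred.add('WTEXT')   # any other type falls back to wide text
    # If the sampled values disagree, fall back to wide text as well
    # (an assumption for this sketch).
    return inferred.pop() if len(inferred) == 1 else 'WTEXT'
```

For example, `infer_field_type([1, 2, 3])` yields `'INT64'`, while a list of dicts falls through to `'WTEXT'`, matching the "each dict object will be written as a string" behavior above.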

class iobjectspy.data.Dataset

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The base class of datasets (vector dataset, raster dataset, image dataset, etc.), providing the common attributes and methods of the various dataset types. A dataset is generally a collection of related data stored together. According to the type of data, datasets are divided into vector datasets, raster datasets, and datasets designed to handle specific problems, such as topology datasets and network datasets. The dataset is the smallest unit of GIS data organization. A vector dataset is a collection of spatial features of the same type, so it can also be called a feature set; according to the spatial characteristics of the features, vector datasets are subdivided into point datasets, line datasets, polygon datasets, etc. Each vector dataset is a collection of data with the same spatial characteristics and attributes, organized together. A raster dataset is composed of a pixel array; it is less expressive than a vector dataset, but it represents the positional relationships of spatial phenomena well.

bounds

Rectangle – Return the smallest bounding rectangle that contains all objects in the dataset. For a vector dataset, it is the smallest bounding rectangle of all objects in the dataset; for a raster dataset, represents the geographic range of the current grid or image.

close()

Used to close the current dataset

datasource

Datasource – Return the datasource object to which the current dataset belongs

description

str – Return the description information of the dataset added by the user.

encode_type

EncodeType – Return the encoding method used when this dataset is stored. Using compression encoding for a dataset reduces the space occupied by data storage as well as the network and server load during data transmission. The encoding methods supported by vector datasets are Byte, Int16, Int24, Int32, SGL, LZW and DCT, or no encoding. The encoding methods supported by raster data are DCT, SGL and LZW, or no encoding. For details, see the EncodeType type

static from_json(value)

Get the dataset from the json string of the dataset. If the datasource is not opened, the datasource will be opened automatically.

Parameters:value (str) – json string
Returns:dataset object
Return type:Dataset
is_open()

Judge whether the dataset has been opened, return True when the dataset is opened, otherwise return False

Return type:bool
is_readonly()

Determine whether the dataset is read-only. A read-only dataset cannot be modified in any way. Return True if the dataset is read-only, otherwise False.

Return type:bool
name

str – Return the name of the dataset

open()

Open the dataset, return True if the dataset is opened successfully, otherwise return False

Return type:bool
prj_coordsys

PrjCoordSys – Return the projection information of the dataset.

rename(new_name)

Modify dataset name

Parameters:new_name (str) – new dataset name
Returns:Return True if the modification is successful, otherwise False
Return type:bool
set_bounds(rc)

Set the minimum bounding rectangle that contains all objects in the dataset. For a vector dataset, it is the smallest bounding rectangle of all objects in the dataset; for a raster dataset, it represents the geographic range of the current grid or image.

Parameters:rc (Rectangle) – The smallest bounding rectangle that contains all objects in the dataset.
Returns:self
Return type:Dataset
set_description(value)

Set the description information of the dataset added by the user.

Parameters:value (str) – The description of the dataset added by the user.
set_prj_coordsys(value)

Set the projection information of the dataset

Parameters:value (PrjCoordSys) – projection information
table_name

str – Return the table name of the dataset. For database datasources, Return the name of the data table corresponding to this dataset in the database; for file datasource, Return the table name of the storage attribute of this dataset.

to_json()

Output the information of the dataset to the json string. The json string content of the dataset includes the connection information of the datasource and the name of the dataset.

Return type:str
Example::
>>> ds = Workspace().get_datasource('data')
>>> print(ds[0].to_json())
{"name": "location", "datasource": {"type": "UDB", "alias": "data", "server": "E:/data.udb", "is_readonly": false}}
type

DatasetType – Return the dataset type

class iobjectspy.data.DatasetVector

Bases: iobjectspy._jsuperpy.data.dt.Dataset

Vector dataset class. It is used to describe vector dataset and manage and operate them accordingly. Operations on vector dataset mainly include data query, modification, deletion, and indexing.

append(data, fields=None)

Add records to the current dataset. The written data can be:

-Recordset or a list of Recordset. The dataset type and attribute table structure of the Recordset being written must be the same as those of the current dataset, otherwise the attribute data may fail to be written.
-DatasetVector or a list of DatasetVector. The dataset type and attribute table structure of the DatasetVector being written must be the same as those of the current dataset, otherwise the attribute data may fail to be written.
-Point2D or a list of Point2D. Setting fields is not supported when writing Point2D data. The current dataset must be a point dataset or a CAD dataset.
-Rectangle or a list of Rectangle. Setting fields is not supported when writing Rectangle data. The current dataset must be a polygon dataset or a CAD dataset.
-Geometry or a list of Geometry. Setting fields is not supported when writing Geometry data. Depending on the Geometry type:

-Point, the current dataset must be a point dataset or a CAD dataset
-Line, the current dataset must be a line dataset or a CAD dataset
-Region, the current dataset must be a polygon dataset or a CAD dataset
-Text, the current dataset must be a text dataset or a CAD dataset
-Feature or a list of Feature. When fields is set, the field types of the Feature fields named in fields must match the fields of the dataset, otherwise attribute values may be lost. When fields is empty, the fields in each Feature must exactly match the attribute fields of the current dataset. When a Feature contains a spatial object, depending on the object type:

-Point, the current dataset must be a point dataset or a CAD dataset
-Line, the current dataset must be a line dataset or a CAD dataset
-Region, the current dataset must be a polygon dataset or a CAD dataset
-Text, the current dataset must be a text dataset or a CAD dataset
Parameters:
  • data – The data to be written; see the supported types above
  • fields (list[str] or str) – The attribute fields to be written
Returns:

Return True if data is written successfully, otherwise False

Return type:

bool
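The type-compatibility rules above can be condensed into a small lookup table. The sketch below is a plain-Python illustration only; the dataset type names are simplified labels for this sketch, not the library's actual DatasetType values:

```python
# Which dataset types can receive each geometry type, per the rules
# above. A CAD dataset accepts every geometry type.
_COMPATIBLE = {
    'Point': {'POINT', 'CAD'},
    'Line': {'LINE', 'CAD'},
    'Region': {'REGION', 'CAD'},
    'Text': {'TEXT', 'CAD'},
}

def can_append(geometry_type, dataset_type):
    """Return True if a geometry of `geometry_type` may be appended
    to a dataset of `dataset_type`."""
    return dataset_type in _COMPATIBLE.get(geometry_type, set())
```

For instance, `can_append('Line', 'POINT')` is False, while any geometry type may be appended to a CAD dataset.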

append_fields(source, source_link_field, target_link_field, source_fields, target_fields=None)

Add fields from the source dataset to the target dataset, and assign values to the fields based on the query results of the associated fields.

Note

  • If a field in the field name set appended to the target dataset does not exist in the source dataset, this field will be ignored and only the fields existing in the source dataset will be appended;
  • If the set of field names corresponding to the appended fields in the target dataset is specified, the appended fields will be created in the target dataset under those names; when a specified field name already exists in the target dataset, a suffix _x (x = 1, 2, 3, …) is appended automatically to create the field;
  • If a field fails to be created in the target dataset, it is ignored and the remaining fields are still appended;
  • The source field name set must be specified, otherwise the append will not succeed;
  • The target field name set is optional; once specified, its field names must correspond one-to-one with those in the source field name set.
Parameters:
  • source (DatasetVector or str) – source dataset
  • source_link_field (str) – The associated field in the source dataset and the target dataset.
  • target_link_field (str) – The associated field in the target dataset and the source dataset.
  • source_fields (list[str] or str) – The set of field names in the source dataset to be appended to the target dataset.
  • target_fields (list[str] or str) – The set of field names corresponding to the additional fields in the target dataset.
Returns:

A boolean value indicating whether appending the fields succeeded: True on success, otherwise False.

Return type:

bool
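The lookup that append_fields performs can be pictured as a key-based join: for each target record, the source record with a matching link value supplies the appended field values. The sketch below illustrates this with plain dictionaries (a conceptual illustration only; real datasets are not dicts, and this ignores field creation and name collisions):

```python
def append_fields_sketch(target, source, source_link_field,
                         target_link_field, source_fields):
    """For each target record, find the source record whose
    source_link_field equals the target's target_link_field, and
    copy the requested source fields into the target record."""
    index = {rec[source_link_field]: rec for rec in source}
    for rec in target:
        match = index.get(rec[target_link_field])
        if match is not None:
            for f in source_fields:
                if f in match:   # fields missing in the source are ignored
                    rec[f] = match[f]
    return target

# Hypothetical records: append 'pop' from src to tgt via the 'code' field.
src = [{'code': 1, 'pop': 100}, {'code': 2, 'pop': 250}]
tgt = [{'id': 'a', 'code': 2}, {'id': 'b', 'code': 1}]
append_fields_sketch(tgt, src, 'code', 'code', ['pop'])
```

After the call, each target record carries the 'pop' value of the source record sharing its 'code'.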

build_field_index(field_names, index_name)

Create an index for the non-spatial fields of the dataset

Parameters:
  • field_names (list[str] or str) – non-spatial field names
  • index_name (str) – index name
Returns:

return true if created successfully, otherwise return false

Return type:

bool

build_spatial_index(spatial_index_info)

Create a spatial index for the vector dataset according to the specified spatial index information or index type.

Note

  • The point dataset in a database datasource does not support the quad-tree (QTree) index or the R-tree (RTree) index;
  • The network dataset does not support any type of spatial index;
  • The attribute dataset does not support any type of spatial index;
  • The route dataset does not support the tile index (TILE);
  • The composite dataset does not support the multi-level grid index;
  • An index can be created only when the dataset contains more than 1000 records.
Parameters:spatial_index_info (SpatialIndexInfo or SpatialIndexType) – spatial index information, or spatial index type, when it is a spatial index type, it can be an enumeration value or a name
Returns:Return True if the index is created successfully, otherwise False.
Return type:bool
charset

Charset – the character set of the vector dataset

child_dataset

DatasetVector – Sub-dataset of vector dataset. Mainly used for network dataset

compute_bounds()

Recalculate the spatial extent of the dataset.

Return type:Rectangle
create_field(field_info)

Create field

Parameters:field_info (FieldInfo) – Field information. If the field type is a required field, the default value must be set. If the default value is not set, the addition fails.
Return type:bool
Returns:Return True if the field is created successfully, otherwise False
create_fields(field_infos)

Create multiple fields

Parameters:field_infos (list[FieldInfo]) – field information collection
Returns:Return True if the field is created successfully, otherwise False
Return type:bool
delete_records(ids)

Delete records in the dataset through the ID array.

Parameters:ids (list[int]) – ID array of records to be deleted
Returns:Return True if the deletion is successful, otherwise False
Return type:bool
drop_field_index(index_name)

Specify the field according to the index name, delete the index of the field

Parameters:index_name (str) – field index name
Returns:Return True if the deletion is successful, otherwise False
Return type:bool
drop_spatial_index()

Delete the spatial index, return True if the delete succeeds, otherwise return False

Return type:bool
field_infos

list[FieldInfo] – All field information of the dataset

get_available_field_name(name)

Generate a legal field name based on the incoming parameters.

Parameters:name (str) – field name
Return type:str
get_features(attr_filter=None, has_geometry=True, fields=None)

Obtain feature objects according to specified attribute filter conditions

Parameters:
  • attr_filter (str) – attribute filter condition, the default is None, that is, all feature objects in the current dataset are returned
  • has_geometry (bool) – Whether to get the geometry object, when it is False, only the field value will be returned
  • fields (list[str] or str) – result field name
Returns:

all feature objects that meet the specified conditions

Return type:

list[Feature]

get_field_count()

Return the number of all fields in the dataset

Return type:int
get_field_indexes()

Return a mapping between the indexes built on the attribute table of the current dataset and the indexed fields. The keys are the index names and the values are the fields the indexes cover.

Return type:dict
get_field_info(item)

Get the field of the specified name or serial number

Parameters:item (int or str) – field name or serial number
Returns:field information
Return type:FieldInfo
get_field_name_by_sign(field_sign)

Obtain the field name according to the field sign.

Parameters:field_sign (FieldSign or str) – field sign type
Returns:field name
Return type:str
get_field_values(fields)

Get the field value of the specified field

Parameters:fields (str or list[str]) – list of field names
Returns:Get the field value, each field name corresponds to a list
Return type:dict[str, list]
get_geometries(attr_filter=None)

Get geometric objects according to specified attribute filter conditions

Parameters:attr_filter (str) – attribute filter condition, the default is None, that is, all geometric objects in the current dataset are returned
Returns:all geometric objects that meet the specified conditions
Return type:list[Geometry]
get_record_count()

Return the number of all records in the vector dataset.

Return type:int
get_recordset(is_empty=False, cursor_type=CursorType.DYNAMIC, fields=None)

Return an empty record set or a record set object including all records according to the given parameters.

Parameters:
  • is_empty (bool) – Whether to return an empty recordset. When True, an empty record set is returned; when False, a record set object containing all records is returned.
  • cursor_type (CursorType or str) – The cursor type, so that the user can control the attributes of the query set. When the cursor type is dynamic, the record set can be modified. When the cursor type is static, the record set is read-only. Can be enumerated value or name
  • fields (list[str] or str) – The name of the result field that needs to be output, if it is None, all fields are reserved
Returns:

Record set object that meets the conditions

Return type:

Recordset

get_spatial_index_type()

Get spatial index type

Return type:SpatialIndexType
get_tolerance_dangle()

Get the short dangle tolerance

Return type:float
get_tolerance_extend()

Get the long dangle tolerance

Return type:float
get_tolerance_grain()

Get the grain tolerance

Return type:float
get_tolerance_node_snap()

Get the node snap tolerance

Return type:float
get_tolerance_small_polygon()

Get the minimum polygon tolerance

Return type:float
index_of_field(name)

Get the serial number of the specified field name

Parameters:name (str) – field name
Returns:If the field exists, return the serial number of the field, otherwise return -1
Return type:int
is_available_field_name(name)

Determine whether the specified field name is legal and not occupied

Parameters:name (str) – field name
Returns:Return True if the field name is legal and not occupied, otherwise False
Return type:bool
is_file_cache()

Return whether to use file cache. File caching can improve browsing speed. Note: The file cache is only valid for the vector dataset of the created map frame index under the Oracle datasource.

Return type:bool
is_spatial_index_dirty()

Determine whether the spatial index of the current dataset needs to be rebuilt; after the data has been modified, the spatial index may need rebuilding.

Note

  • When the vector dataset has no spatial index and the number of records has reached the requirement for building one, it returns True and the user is advised to create a spatial index; otherwise it returns False.
  • If the vector dataset has a spatial index (other than the library index) but the number of records has not reached the requirement for a spatial index, it returns True.
Return type:bool
is_spatial_index_type_supported(spatial_index_type)

Determine whether the current dataset supports the specified type of spatial index.

Parameters:spatial_index_type (SpatialIndexType or str) – spatial index type, which can be an enumerated value or a name
Returns:If the specified spatial index type is supported, the return value is true, otherwise it is false.
Return type:bool
parent_dataset

DatasetVector – The parent dataset of the vector dataset. Mainly used for network dataset

query(query_param=None)

The vector dataset is queried by setting query conditions. This method queries spatial information and attribute information by default.

Parameters:query_param (QueryParameter) – query conditions
Returns:The result record set that meets the query conditions
Return type:Recordset
query_with_bounds(bounds, attr_filter=None, cursor_type=CursorType.DYNAMIC)

Query record set based on geographic scope

Parameters:
  • bounds (Rectangle) – known spatial extent
  • attr_filter (str) – query filter conditions, equivalent to the Where clause in the SQL statement
  • cursor_type (CursorType or str) – cursor type, can be enumerated value or name
Returns:

The result record set that meets the query conditions

Return type:

Recordset

query_with_distance(geometry, distance, unit=None, attr_filter=None, cursor_type=CursorType.DYNAMIC)

Used to query records that are concentrated in the buffer of the specified space object and meet certain conditions.

Parameters:
  • geometry (Geometry or Point2D or Rectangle) – The spatial object used for query.
  • distance (float) – query radius
  • unit (Unit or str) – The unit of the query radius, if it is None, the unit of the query radius is the same as the unit of the dataset.
  • attr_filter (str) – query filter conditions, equivalent to the Where clause in the SQL statement
  • cursor_type (CursorType or str) – cursor type, can be enumerated value or name
Returns:

The result record set that meets the query conditions

Return type:

Recordset
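The buffer query above can be pictured as filtering records by planar distance to the query object. The sketch below illustrates the idea for a point query object in plain Python (conceptual only; it ignores units, attribute filters, cursor types, and spatial indexes, and the record layout is hypothetical):

```python
import math

def query_with_distance_sketch(records, center, distance):
    """Return the records whose (x, y) location lies within
    `distance` of the query point `center`."""
    cx, cy = center
    return [r for r in records
            if math.hypot(r['x'] - cx, r['y'] - cy) <= distance]

points = [{'id': 1, 'x': 0.0, 'y': 0.0},
          {'id': 2, 'x': 3.0, 'y': 4.0},
          {'id': 3, 'x': 10.0, 'y': 0.0}]
# Records within 5 units of the origin.
hits = query_with_distance_sketch(points, (0.0, 0.0), 5.0)
```

Only points 1 and 2 fall inside the 5-unit buffer around the origin (point 2 sits exactly on its boundary).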

query_with_filter(attr_filter=None, cursor_type=CursorType.DYNAMIC, result_fields=None, has_geometry=True)

Query the record set according to the specified attribute filter condition

Parameters:
  • attr_filter (str) – query filter conditions, equivalent to the Where clause in the SQL statement
  • cursor_type (CursorType or str) – cursor type, can be enumerated value or name
  • result_fields (list[str] or str) – result field name
  • has_geometry (bool) – Whether to include geometric objects
Returns:

The result record set that meets the query conditions

Return type:

Recordset

query_with_ids(ids, id_field_name='SmID', cursor_type=CursorType.DYNAMIC)

According to the specified ID array, query the record set matching those IDs

Parameters:
  • ids (list[int]) – ID array
  • id_field_name (str) – The field name used to represent the ID in the dataset. The default is “SmID”
  • cursor_type (CursorType or str) – cursor type
Returns:

The result record set that meets the query conditions

Return type:

Recordset

re_build_spatial_index()

Rebuild the spatial index on the basis of the original one. If the original spatial index is damaged, it can still be used after a successful rebuild.

Returns:Return True if the index is successfully rebuilt, otherwise False.
Return type:bool
remove_field(item)

Delete the specified field

Parameters:item (int or str) – field name or serial number
Returns:Return True if the deletion is successful, otherwise False
Return type:bool
reset_tolerance_as_default()

Set all tolerances to their default values; the unit is the same as that of the vector dataset's coordinate system:

-The default node snap tolerance is 1/1000000 of the width of the dataset;
-The default grain tolerance is 1/1000 of the width of the dataset;
-The default short dangle tolerance is 1/10000 of the width of the dataset;
-The default long dangle tolerance is 1/10000 of the width of the dataset;
-The default minimum polygon tolerance is 0.
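As a worked example of these defaults, the tolerances for a hypothetical dataset 1000 map units wide can be computed directly (a plain-Python illustration of the arithmetic, not library code):

```python
def default_tolerances(dataset_width):
    """Compute the default tolerance values as fractions of the
    dataset width, per the rules above."""
    return {
        'node_snap': dataset_width / 1_000_000,   # node snap tolerance
        'grain': dataset_width / 1_000,           # grain tolerance
        'dangle': dataset_width / 10_000,         # short dangle tolerance
        'extend': dataset_width / 10_000,         # long dangle tolerance
        'small_polygon': 0.0,                     # minimum polygon tolerance
    }

tol = default_tolerances(1000.0)   # e.g. a dataset 1000 map units wide
```

For a 1000-unit-wide dataset this gives a node snap tolerance of 0.001 and a grain tolerance of 1.0 in the dataset's coordinate units.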
set_charset(value)

Set the character set of the dataset

Parameters:value (Charset or str) – character set of the dataset
set_file_cache(value)

Set whether to use file cache.

Parameters:value (bool) – Whether to use file cache
set_tolerance_dangle(value)

Set the short dangle tolerance

Parameters:value (float) – short dangle tolerance
set_tolerance_extend(value)

Set the long dangle tolerance

Parameters:value (float) – long dangle tolerance
set_tolerance_grain(value)

Set the grain tolerance

Parameters:value (float) – grain tolerance
set_tolerance_node_snap(value)

Set the node snap tolerance

Parameters:value (float) – node snap tolerance
set_tolerance_small_polygon(value)

Set minimum polygon tolerance

Parameters:value (float) – minimum polygon tolerance
stat(item, stat_mode)

Perform statistics on the specified field in the given way. The current version provides six statistical methods: maximum, minimum, average, sum, standard deviation, and variance. The field types supported for statistics are Boolean, byte, double precision, single precision, 16-bit integer, and 32-bit integer.

Parameters:
  • item (str or int) – field name or serial number
  • stat_mode (StatisticMode or str) – field statistics mode
Returns:

statistical results

Return type:

float
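The six statistical modes can be reproduced with ordinary arithmetic over a list of field values. The sketch below is a conceptual illustration only (the mode names are placeholders, and the population-variance convention used here is an assumption; the library's convention may differ):

```python
import math

def stat_sketch(values, mode):
    """Compute one of the six statistics over a list of numbers."""
    n = len(values)
    mean = sum(values) / n
    if mode == 'MAX':
        return max(values)
    if mode == 'MIN':
        return min(values)
    if mode == 'AVERAGE':
        return mean
    if mode == 'SUM':
        return sum(values)
    if mode == 'VARIANCE':
        return sum((v - mean) ** 2 for v in values) / n
    if mode == 'STDDEVIATION':
        return math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    raise ValueError('unknown statistic mode: %s' % mode)

vals = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
```

For this sample the mean is 5.0, the (population) variance 4.0, and the standard deviation 2.0.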

truncate()

Clear all records in the vector dataset.

Returns:Whether clearing the records succeeded: True on success, False on failure.
Return type:bool
update_field(item, value, attr_filter=None)

Updates the specified field of all records that meet the attr_filter condition with the given value. The field to be updated cannot be a system field, that is, it cannot be a field whose name starts with Sm (except SmUserID).

Parameters:
  • item (str or int) – field name or serial number
  • value (int or float or str or datetime.datetime or bytes or bytearray) – Specify the field value for update.
  • attr_filter (str) – The query condition of the record to be updated, if the attributeFilter is an empty string, all records in the table are updated
Returns:

Return True if the field is updated successfully, otherwise Return False

Return type:

bool

update_field_express(item, express, attr_filter=None)

Updates the field values of all records that meet the query condition with the result of the specified expression. The field to be updated cannot be a system field, that is, it cannot be a field whose name starts with Sm (except SmUserID).

Parameters:
  • item (str or int) – field name or serial number
  • express (str) – The specified expression, the expression can be a field operation or a function operation. For example: “SMID”, “abs(SMID)”, “SMID+1”, “‘string’”.
  • attr_filter (str) – The query condition of the record to be updated, if the attributeFilter is an empty string, all records in the table are updated
Returns:

Return True if the field is updated successfully, otherwise Return False

Return type:

bool
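The expression update can be pictured as evaluating the expression against each record's field values. The sketch below uses Python's eval over a restricted namespace as a stand-in (an illustration only; the library parses SQL-style expressions, not Python, and the record layout is hypothetical):

```python
def update_field_express_sketch(records, field, express):
    """Set `field` on every record to the value of `express`,
    evaluated with the record's fields (plus abs) in scope."""
    for rec in records:
        namespace = {'abs': abs, **rec}
        # Empty __builtins__ keeps the evaluation limited to record
        # fields and the functions we expose explicitly.
        rec[field] = eval(express, {'__builtins__': {}}, namespace)
    return records

rows = [{'SMID': 1}, {'SMID': 2}, {'SMID': 3}]
update_field_express_sketch(rows, 'SMID', 'SMID + 1')   # like "SMID+1"
```

After the call every record's SMID has been incremented, mirroring how the expression "SMID+1" would update a field for all matching records.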

class iobjectspy.data.DatasetVectorInfo(name=None, dataset_type=None, encode_type=None, is_file_cache=True)

Bases: object

Vector dataset information class. Include the information of the vector dataset, such as the name of the vector dataset, the type of the dataset, the encoding method, whether to use file cache, etc. File caching is only for map frame index

Construct vector dataset information class

Parameters:
  • name (str) – dataset name
  • dataset_type (DatasetType or str) – dataset type
  • encode_type (EncodeType or str) – The compression encoding method of the dataset. Supports four compression encoding methods, namely single-byte, double-byte, three-byte and four-byte encoding methods
  • is_file_cache (bool) – Whether to use file cache. File cache is only useful for map frame index
encode_type

EncodeType – The compression encoding method of the dataset. Four compression encoding methods are supported, namely single-byte, double-byte, three-byte and four-byte encoding

from_dict(values)

Read DatasetVectorInfo object from dict

Parameters:values (dict) – dict object containing vector dataset information, see: py:meth:to_dict
Returns:self
Return type:DatasetVectorInfo
is_file_cache

bool – Whether to use file cache. File cache is only useful for map frame indexing.

static make_from_dict(values)

Construct DatasetVectorInfo object from dict

Parameters:values (dict) – dict object containing vector dataset information, see: py:meth:to_dict
Return type:DatasetVectorInfo
name

str – dataset name. The length of the dataset name is limited to 30 characters (that is, 30 English letters or 15 Chinese characters). The characters composing the dataset name can be letters, Chinese characters, digits and underscores, but the name cannot start with a digit or an underscore. If it starts with a letter, the dataset name also cannot conflict with the reserved keywords of the database.

set_encode_type(value)

Set the compression encoding method of the dataset

Parameters:value (EncodeType or str) – The compression encoding method of the dataset
Returns:self
Return type:DatasetVectorInfo
set_file_cache(value)

Set whether to use file cache. The file cache is only useful for map frame indexing.

Parameters:value (bool) – Whether to use file cache
Returns:self
Return type:DatasetVectorInfo
set_name(value)

Set the name of the dataset

Parameters:value (str) – dataset name
Returns:self
Return type:DatasetVectorInfo
set_type(value)

Set the type of dataset

Parameters:value (DatasetType or str) – dataset type
Returns:self
Return type:DatasetVectorInfo
to_dict()

Output the information of the current object as a dict object

Return type:dict
type

DatasetType – Dataset Type

class iobjectspy.data.DatasetImageInfo(name=None, width=None, height=None, pixel_format=None, encode_type=None, block_size_option=BlockSizeOption.BS_256, band_count=None)

Bases: object

Image dataset information class, which is used to set the creation information of the image dataset, including name, width, height, number of bands, and storage block size.

When setting the creation information of the image dataset through this class, you need to pay attention to:

-The number of image bands must be specified; the band count can be set to 0, and bands can be added to the image after creation;
-All bands are set to the same pixel format and encoding method. After the image is successfully created, you can set a different pixel format and encoding type for each band as needed.

Construct image dataset information object

Parameters:
  • name (str) – dataset name
  • width (int) – the width of the dataset, in pixels
  • height (int) – the height of the dataset, in pixels
  • pixel_format (PixelFormat or str) – the pixel format stored in the dataset
  • encode_type (EncodeType or str) – encoding method of dataset storage
  • block_size_option (BlockSizeOption) – the pixel block type of the dataset
  • band_count (int) – number of bands
band_count

int – number of bands

block_size_option

BlockSizeOption – The pixel block type of the dataset

bounds

Rectangle – The geographic extent of the image dataset.

encode_type

EncodeType – Return the encoding method used when the image dataset's data is stored. Using compression encoding for the dataset can reduce the space occupied by data storage and reduce the network load and server load during data transmission. The encoding methods supported by raster data are DCT, SGL and LZW, or no encoding

from_dict(values)

Read DatasetImageInfo information from dict

Parameters:values (dict) –
Returns:self
Return type:DatasetImageInfo
height

int – The height of the image data of the image dataset. The unit is pixel

static make_from_dict(values)

Read information from dict to build DatasetImageInfo object

Parameters:values (dict) –
Return type:DatasetImageInfo
name

str – dataset name

pixel_format

PixelFormat – The pixel format of image data storage. The number of bits used to represent each pixel depends on the format.

set_band_count(value)

Set the number of bands of the image dataset. When creating an image dataset, the band count can be set to 0; in that case the pixel format (pixel_format) and encoding type (encode_type) settings are invalid, because this information belongs to the bands and cannot be saved when there are none. The pixel format and encoding type of the image dataset will then be taken from the first band added to it.

Parameters:value (int) – The number of bands.
Returns:self
Return type:DatasetImageInfo
set_block_size_option(value)

Set the pixel block type of the dataset. Data is stored in square blocks. If the image data does not fill a block completely, the remainder is padded so the block can be stored whole. The default value is BlockSizeOption.BS_256.

Parameters:value (BlockSizeOption or str) – The pixel block of the image dataset
Returns:self
Return type:DatasetImageInfo
set_bounds(value)

Set the geographic extent of the image dataset.

Parameters:value (Rectangle) – The geographic extent of the image dataset.
Returns:self
Return type:DatasetImageInfo
set_encode_type(value)

Set the encoding method of image dataset data storage. Using compression encoding for dataset can reduce the space occupied by data storage and reduce the network load and server load during data transmission. The encoding methods supported by raster data are DCT, SGL, LZW or not using encoding methods.

Parameters:value (EncodeType or str) – the encoding method of dataset data storage
Returns:self
Return type:DatasetImageInfo
set_height(value)

Set the height of the image data of the image dataset. The unit is pixel.

Parameters:value (int) – The height of the image data of the image dataset. The unit is pixel.
Returns:self
Return type:DatasetImageInfo
set_name(value)

Set the name of the dataset

Parameters:value (str) – dataset name
Returns:self
Return type:DatasetImageInfo
set_pixel_format(value)

Set the storage pixel format of the image dataset. The image dataset does not support DOUBLE, SINGLE, BIT64 type pixel formats.

Parameters:value (PixelFormat or str) – the pixel format of the image dataset storage
Returns:self
Return type:DatasetImageInfo
set_width(value)

Set the width of the image data of the image dataset. The unit is pixel.

Parameters:value (int) – The width of the image data of the image dataset. The unit is pixel.
Returns:self
Return type:DatasetImageInfo
to_dict()

Output current object information as dict

Return type:dict
width

int – The width of the image data of the image dataset. The unit is pixel

class iobjectspy.data.DatasetGridInfo(name=None, width=None, height=None, pixel_format=None, encode_type=None, block_size_option=BlockSizeOption.BS_256)

Bases: object

Raster dataset information class. This category includes returning and setting the corresponding setting information of the raster dataset, such as the name, width, height, pixel format, encoding method, storage block size, and null value of the raster dataset.

Construct a raster dataset information object

Parameters:
  • name (str) – dataset name
  • width (int) – the width of the dataset, in pixels
  • height (int) – the height of the dataset, in pixels
  • pixel_format (PixelFormat or str) – the pixel format stored in the dataset
  • encode_type (EncodeType or str) – encoding method of dataset storage
  • block_size_option (BlockSizeOption) – the pixel block type of the dataset
block_size_option

BlockSizeOption – The pixel block type of the dataset

bounds

Rectangle – The geographic extent of the raster dataset.

encode_type

EncodeType – Return the encoding method of raster dataset data storage. Using a compression encoding for the dataset reduces the space occupied by data storage and reduces the network load and server load during data transmission. The encoding methods supported by raster data are DCT, SGL, LZW, or no encoding

from_dict(values)

Read DatasetGridInfo information from dict

Parameters:values (dict) –
Returns:self
Return type:DatasetGridInfo
height

int – The height of the raster data of the raster dataset. The unit is pixel

static make_from_dict(values)

Read information from dict to build DatasetGridInfo object

Parameters:values (dict) –
Return type:DatasetGridInfo
max_value

float – the maximum grid value of the raster dataset

min_value

float – the minimum grid value of the raster dataset

name

str – Dataset name

no_value

float – The no-data value of the raster dataset. Cells that hold no data are commonly represented by -9999

pixel_format

PixelFormat – The pixel format of raster data storage. Each pixel is stored with a different number of bits depending on the format.

set_block_size_option(value)

Set the pixel block type of the dataset. Data is stored in square blocks; if the raster does not divide evenly into whole blocks, the last blocks are padded so that storage is complete. The default value is BlockSizeOption.BS_256.

Parameters:value (BlockSizeOption or str) – pixel block of raster dataset
Returns:self
Return type:DatasetGridInfo
set_bounds(value)

Set the geographic extent of the raster dataset.

Parameters:value (Rectangle) – The geographic extent of the raster dataset.
Returns:self
Return type:DatasetGridInfo
set_encode_type(value)

Set the encoding method of raster dataset data storage. Using a compression encoding for the dataset reduces the space occupied by data storage and reduces the network load and server load during data transmission. The encoding methods supported by raster data are DCT, SGL, LZW, or no encoding.

Parameters:value (EncodeType or str) – The encoding method of raster dataset data storage
Returns:self
Return type:DatasetGridInfo
set_height(value)

Set the height of the raster data of the raster dataset. The unit is pixel.

Parameters:value (int) – the height of the raster data of the raster dataset. The unit is pixel.
Returns:self
Return type:DatasetGridInfo
set_max_value(value)

Set the maximum grid value of the raster dataset.

Parameters:value (float) – the maximum grid value of the raster dataset
Returns:self
Return type:DatasetGridInfo
set_min_value(value)

Set the minimum grid value of the raster dataset.

Parameters:value (float) – the minimum grid value of the raster dataset
Returns:self
Return type:DatasetGridInfo
set_name(value)

Set the name of the dataset

Parameters:value (str) – dataset name
Returns:self
Return type:DatasetGridInfo
set_no_value(value)

Set the no-data value of the raster dataset. Cells that hold no data are commonly represented by -9999.

Parameters:value (float) – the null value of the raster dataset
Returns:self
Return type:DatasetGridInfo
set_pixel_format(value)

Set the stored pixel format of the raster dataset

Parameters:value (PixelFormat or str) – the stored pixel format of the raster dataset
Returns:self
Return type:DatasetGridInfo
set_width(value)

Set the width of the raster data of the raster dataset. The unit is pixel.

Parameters:value (int) – The width of the raster data of the raster dataset. The unit is pixel.
Returns:self
Return type:DatasetGridInfo
to_dict()

Output current object information as dict

Return type:dict
width

int – the width of the raster data of the raster dataset. The unit is pixel

class iobjectspy.data.DatasetImage

Bases: iobjectspy._jsuperpy.data.dt.Dataset

Image dataset class, used to describe image data without attribute information, such as image maps, multi-band images, and physical maps. The raster data is organized in grid form and recorded using the pixel values of a two-dimensional raster; the grid value can describe various kinds of data information. Each cell in an image dataset stores a color value or a color index value (RGB value).

add_band(datasets, indexes=None)

Add multiple bands to the specified multi-band image dataset according to the specified index

Parameters:
  • datasets (list[DatasetImage] or DatasetImage) – image dataset
  • indexes (list[int]) – The band index to be appended. It is only valid when the input is a single DatasetImage data.
Returns:

the number of bands added

Return type:

int

band_count

int – Return the number of bands

block_size_option

BlockSizeOption – The pixel block type of the dataset

build_pyramid(progress=None)

Create a pyramid for the image dataset in order to improve its display speed. Pyramids can only be created for the original data, and for one dataset at a time. When the image dataset is displayed, all pyramids that have been created will be accessed.

Parameters:progress (function) – progress information processing function, refer to:py:class:.StepEvent
Returns:whether the creation succeeded; returns True on success and False on failure
Return type:bool
build_statistics()

Perform statistical operations on the image dataset and return the statistical results. The statistics include the maximum, minimum, mean, median, mode, minority, variance, standard deviation, etc. of the image dataset.

Returns:Returns a dict containing the statistical results of each band. Each band’s result is itself a dict containing the maximum, minimum, mean, median, mode, minority, variance, and standard deviation. The keys in each dict are:

  • average: mean value
  • majority: mode (most frequent value)
  • minority: least frequent value
  • max: maximum value
  • median: median value
  • min: minimum value
  • stdDev: standard deviation
  • var: variance
  • is_dirty: whether the statistics are “dirty” (out of date)

Return type:dict[dict]
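The statistics above can be reproduced for a plain 2-D array of pixel values with the standard library. The following standalone sketch (an illustration of the computation, not the library implementation) fills the same keys for one band, skipping an assumed -9999 no-value:

```python
import statistics
from collections import Counter

def band_statistics(cells, no_value=-9999):
    """Compute the documented statistics dict for one band, skipping no-value cells."""
    values = [v for row in cells for v in row if v != no_value]
    counts = Counter(values)
    return {
        'max': max(values),
        'min': min(values),
        'average': statistics.mean(values),
        'median': statistics.median(values),
        'majority': counts.most_common(1)[0][0],           # most frequent value
        'minority': min(counts, key=lambda v: counts[v]),  # least frequent value
        'var': statistics.pvariance(values),
        'stdDev': statistics.pstdev(values),
    }

stats = band_statistics([[1, 2, 2], [3, -9999, 2]])
```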
calculate_extremum(band=0)

Calculate the extreme values of the specified band of the image data, namely the maximum and minimum values.

Parameters:band (int) – The band number of the image data whose extreme value is to be calculated.
Returns:Returns True if the calculation succeeds, otherwise False.
Return type:bool
clip_region

GeoRegion – the display area of the image dataset

delete_band(start_index, count=1)

Delete a band according to the specified index number

Parameters:
  • start_index (int) – Specify the start index number of the deleted band.
  • count (int) – The number of bands to be deleted.
Returns:

Returns True if the deletion succeeds; otherwise returns False.

Return type:

bool

get_band_index(name)

Get the serial number of the specified band name

Parameters:name (str) – band name
Returns:the serial number of the band
Return type:int
get_band_name(band)

Return the name of the band with the specified sequence number.

Parameters:band (int) – band number
Returns:band name
Return type:str
get_max_value(band=0)

Get the maximum pixel value of the specified band of the image dataset

Parameters:band (int) – Specified band index number, starting from 0.
Returns:The maximum pixel value of the specified band of the image dataset
Return type:float
get_min_value(band=0)

Get the minimum pixel value of the specified band of the image dataset

Parameters:band (int) – Specified band index number, starting from 0.
Returns:The minimum pixel value of the specified band of the image dataset
Return type:float
get_no_value(band=0)

Return the no value of the specified band of the image dataset.

Parameters:band (int) – Specified band index number, starting from 0
Returns:No value for the specified band in the image dataset
Return type:float
get_palette(band=0)

Get the color palette of the specified band of the image dataset

Parameters:band (int) – Specified band index number, starting from 0.
Returns:The color palette of the specified band of the image dataset
Return type:Colors
get_pixel_format(band)

Return the pixel format of the specified band of the image dataset.

Parameters:band (int) – Specified band index number, starting from 0.
Returns:The pixel format of the specified band of the image dataset.
Return type:PixelFormat
get_value(col, row, band)

Return the pixel value corresponding to the grid of the image dataset according to the given number of rows and columns. Note: The number of rows and columns of parameter values of this method is counted from zero.

Parameters:
  • col (int) – The column of the specified image dataset.
  • row (int) – The row of the specified image dataset.
  • band (int) – Specified number of bands
Returns:

The corresponding pixel value in the image dataset.

Return type:

float or tuple

has_pyramid()

Whether the image dataset has created pyramids.

Return type:bool
height

int – The height of the image data of the image dataset. The unit is pixel

image_to_xy(col, row)

According to the specified number of rows and columns, the corresponding image points are converted into points in the geographic coordinate system, namely X, Y coordinates.

Parameters:
  • col (int) – the specified column
  • row (int) – the specified row
Returns:

the corresponding point coordinates in the geographic coordinate system.

Return type:

Point2D
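image_to_xy amounts to an affine mapping from pixel indices to the geo-referenced extent. A standalone sketch of the arithmetic, assuming a north-up image whose pixel (0, 0) is the top-left cell and whose bounds are given as (left, bottom, right, top) — both assumptions, not guarantees about the library's internal convention:

```python
def image_to_xy(col, row, bounds, width, height):
    """Map a (col, row) pixel index to geographic (x, y).

    bounds is (left, bottom, right, top); pixel (0, 0) is assumed to be the
    top-left cell, and the returned point is the centre of the cell.
    """
    left, bottom, right, top = bounds
    cell_w = (right - left) / width
    cell_h = (top - bottom) / height
    x = left + (col + 0.5) * cell_w   # move right from the left edge
    y = top - (row + 0.5) * cell_h    # move down from the top edge
    return x, y

# A 100x100 image covering x in [0, 1000], y in [0, 500]
x, y = image_to_xy(0, 0, (0, 0, 1000, 500), 100, 100)
```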

remove_pyramid()

Delete the pyramid created for the image dataset

Return type:bool
set_band_name(band, name)

Set the name of the band with the specified serial number.

Parameters:
  • band (int) – band number
  • name (str) – band name
set_clip_region(region)

Set the display area of the image dataset. When the user sets this method, the image dataset will be displayed according to the given area, and nothing outside the area will be displayed.

note:

  • When the geographic extent set for the image dataset (by calling the set_geo_reference() method) does not overlap the set clipping region, the image dataset is not displayed.
  • When the geographic extent of the image dataset is reset, the clipping region of the image dataset is not automatically modified.

Parameters:region (GeoRegion or Rectangle) – The display area of the image dataset.
set_geo_reference(rect)

Correspond to the image dataset to the specified geographic range in the geographic coordinate system.

Parameters:rect (Rectangle) – the specified geographic range
set_no_value(value, band)

Set the no value of the specified band of the image dataset.

Parameters:
  • value (float) – No value specified.
  • band (int) – Specified band index number, starting from 0.
set_palette(colors, band)

Set the color palette of the specified band of the image dataset

Parameters:
  • colors (Colors) – color palette.
  • band (int) – Specified band index number, starting from 0.
set_value(col, row, value, band)

Set the corresponding pixel value of the image dataset according to the given number of rows and columns. Note: The number of rows and columns of parameter values of this method is counted from zero.

Parameters:
  • col (int) – The column of the specified image dataset.
  • row (int) – The row of the specified image dataset.
  • value (tuple or float) – The corresponding pixel value of the specified image dataset.
  • band (int) – Specified band number
Returns:

The corresponding pixel value before modification in the image dataset.

Return type:

float

update(dataset)

Update according to the specified image dataset. Note: The encoding method (EncodeType) and pixel type (PixelFormat) of the specified image dataset and the updated image dataset must be consistent.

Parameters:dataset (DatasetImage or str) – The specified image dataset.
Returns:If the update is successful, return True, otherwise return False.
Return type:bool
update_pyramid(rect)

Update the image pyramid of the image dataset in the specified range.

Parameters:rect (Rectangle) – Update the specified image range of the pyramid
Returns:If the update is successful, return True, otherwise return False.
Return type:bool
width

int – The width of the image data of the image dataset. The unit is pixel

xy_to_image(point)

Convert a point (X, Y) in the geographic coordinate system to the corresponding pixel position (column and row) in the image dataset.

Parameters:point (Point2D) – point in geographic coordinate system
Returns:the corresponding image point of the image dataset
Return type:tuple[int]
class iobjectspy.data.DatasetGrid

Bases: iobjectspy._jsuperpy.data.dt.Dataset

The raster dataset class. Raster dataset class, which is used to describe raster data, such as elevation dataset and land use maps. Raster data is organized in grid form and uses the pixel value of a two-dimensional grid to record data. Each cell represents a pixel element, and the grid value can describe various data information. Each grid (cell) in the raster dataset stores the attribute value representing the feature. The attribute value can be soil type, density value, elevation, temperature, humidity, etc.

block_size_option

BlockSizeOption – The pixel block type of the dataset

build_pyramid(resample_method=None, progress=None)

Create a pyramid of the specified type for the raster data in order to improve its display speed. Pyramids can only be created for original data, and only once per dataset; to create a pyramid again, the previously created pyramid must be deleted first. When the raster dataset is displayed, all pyramids that have been created will be accessed.

Parameters:
  • resample_method (ResamplingMethod or str) – type of pyramid building method
  • progress (function) – progress information processing function, refer to:py:class:.StepEvent
Returns:

Whether the creation succeeded; returns True on success and False on failure

Return type:

bool

build_statistics()

Perform statistical operations on the raster dataset and return the statistical results. The statistics include the maximum, minimum, mean, median, mode, minority, variance, standard deviation, etc. of the raster dataset.

Returns:A dict containing the maximum, minimum, mean, median, mode, minority, variance, and standard deviation. The keys in the dict are:

  • average: mean value
  • majority: mode (most frequent value)
  • minority: least frequent value
  • max: maximum value
  • median: median value
  • min: minimum value
  • stdDev: standard deviation
  • var: variance
  • is_dirty: whether the statistics are “dirty” (out of date)

Return type:dict
build_value_table(out_data=None, out_dataset_name=None)

Create a raster value attribute table, whose type is the attribute table dataset type TABULAR. If the pixel format of the raster dataset is SINGLE or DOUBLE, the attribute table cannot be created and this method returns None. The returned attribute table dataset contains the system fields and two fields that record raster information: GRIDVALUE records the raster value, and GRIDCOUNT records the number of pixels with that raster value.

Parameters:
  • out_data (Datasource or DatasourceConnectionInfo or str) – the datasource used to store the result dataset
  • out_dataset_name (str) – the name of the result dataset
Returns:

result dataset or dataset name

Return type:

DatasetVector or str
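The GRIDVALUE/GRIDCOUNT pairs that build_value_table records are value frequencies over the grid. A standalone sketch of that counting step with collections.Counter (an illustration, not the library implementation):

```python
from collections import Counter

def value_table(cells, no_value=-9999):
    """Return (GRIDVALUE, GRIDCOUNT) rows for a 2-D grid, skipping no-value cells."""
    counts = Counter(v for row in cells for v in row if v != no_value)
    return sorted(counts.items())  # one row per distinct raster value

rows = value_table([[1, 1, 2], [2, 2, -9999]])
```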

calculate_extremum()

Calculate the extreme values of the raster dataset, namely the maximum and minimum values. It is recommended to call this interface to recalculate the maximum and minimum values after analysing or otherwise modifying a raster dataset.

Returns:Returns True if the calculation succeeds, otherwise False.
Return type:bool
clip_region

GeoRegion – the display area of the raster dataset

color_table

Colors – Color table of the dataset

column_block_count

int – The total number of columns obtained after the raster dataset is divided into blocks.

get_value(col, row)

Return the cell value corresponding to the grid of the raster dataset according to the given number of rows and columns. Note: The number of rows and columns of parameter values of this method is counted from zero.

Parameters:
  • col (int) – The column of the specified raster dataset.
  • row (int) – Specify the row of the raster dataset.
Returns:

the grid value corresponding to the grid of the raster dataset.

Return type:

float

grid_to_xy(col, row)

The grid points corresponding to the specified number of rows and columns are converted into points in the geographic coordinate system, namely X, Y coordinates.

Parameters:
  • col (int) – the specified column
  • row (int) – the specified row
Returns:

the corresponding point coordinates in the geographic coordinate system.

Return type:

Point2D

has_pyramid()

Whether the raster dataset has created pyramids.

Return type:bool
height

int – The height of the raster data of the raster dataset. The unit is pixel

max_value

float – The maximum value of the grid value in the raster dataset.

min_value

float – the minimum value of the grid value in the raster dataset

no_value

float – the no-data value of the raster dataset; cells that hold no data are commonly represented by -9999

pixel_format

PixelFormat – The pixel format of raster data storage. Each pixel is stored with a different number of bits depending on the format.

remove_pyramid()

Delete the created pyramid

Return type:bool
row_block_count

int – The total number of rows obtained after the raster data is divided into blocks.
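column_block_count and row_block_count follow from ceiling division of the raster size by the block size. A standalone sketch, assuming 256-pixel square blocks (BlockSizeOption.BS_256) with the final partial block padded, as described for set_block_size_option:

```python
def block_counts(width, height, block_size=256):
    """Number of (column, row) storage blocks; the last block in each
    direction is padded when the raster does not divide evenly."""
    cols = -(-width // block_size)   # ceiling division via negated floor
    rows = -(-height // block_size)
    return cols, rows

cols, rows = block_counts(1000, 513)
```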

set_clip_region(region)

Set the display area of the raster dataset. When the user sets this method, the raster dataset will be displayed according to the given area, and nothing outside the area will be displayed.

Note

  • When the geographic extent set for the raster dataset (by calling the set_geo_reference() method) does not overlap the set clipping region, the raster dataset is not displayed.
  • When the geographic extent of the raster dataset is reset, the clipping region of the raster dataset is not automatically modified.

Parameters:region (GeoRegion or Rectangle) – The display area of the raster dataset.
set_color_table(colors)

Set the color table of the dataset

Parameters:colors (Colors) – color collection
set_geo_reference(rect)

Map the raster dataset to the specified geographic range in the geographic coordinate system.

Parameters:rect (Rectangle) – the specified geographic range
set_no_value(value)

Set the no-data value of the raster dataset. Cells that hold no data are commonly represented by -9999.

Parameters:value (float) – null value
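Because no-value cells are placeholders rather than data, they must be skipped when computing statistics such as the extrema. A standalone sketch of that masking (an illustration of the idea, assuming the conventional -9999 no-value; not the calculate_extremum implementation):

```python
def extremum(cells, no_value=-9999):
    """Return (min, max) of a 2-D grid, ignoring no-value cells."""
    values = [v for row in cells for v in row if v != no_value]
    if not values:            # every cell is no-value: nothing to report
        return None
    return min(values), max(values)

lo, hi = extremum([[-9999, 4.5], [1.2, -9999]])
```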
set_value(col, row, value)

Set the grid value of the specified cell of the raster dataset according to the given column and row numbers. Note: The column and row numbers of this method are counted from zero.

Parameters:
  • col (int) – The column of the specified raster dataset.
  • row (int) – Specify the row of the raster dataset.
  • value (float) – The grid value to assign to the specified cell of the raster dataset.
Returns:

The raster value corresponding to the raster of the raster dataset before modification.

Return type:

float

update(dataset)

Update according to the specified raster dataset. Note: The encoding method (EncodeType) and pixel type (PixelFormat) of the specified raster dataset and the updated raster dataset must be consistent

Parameters:dataset (DatasetGrid or str) – The specified raster dataset.
Returns:If the update is successful, return True, otherwise return False.
Return type:bool
update_pyramid(rect)

Update the image pyramid of the raster dataset in the specified range.

Parameters:rect (Rectangle) – Update the specified image range of the pyramid
Returns:If the update is successful, return True, otherwise return False.
Return type:bool
width

int – The width of the raster data of the raster dataset. The unit is pixel

xy_to_grid(point)

Convert the point (XY) in the geographic coordinate system to the corresponding grid in the raster dataset.

Parameters:point (Point2D) – point in geographic coordinate system
Returns:the grid corresponding to the raster dataset, return columns and rows respectively
Return type:tuple[int]
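xy_to_grid is the inverse affine mapping: from a geographic point back to (column, row) indices. A standalone sketch of the arithmetic, under the same assumed north-up convention (row 0 at the top edge of the bounds); this illustrates the idea, not the library's internal rounding:

```python
def xy_to_grid(x, y, bounds, width, height):
    """Map geographic (x, y) to (col, row); rows count down from the top edge.
    Points exactly on the right/bottom edge would index one past the grid."""
    left, bottom, right, top = bounds
    col = int((x - left) / (right - left) * width)
    row = int((top - y) / (top - bottom) * height)
    return col, row

# The cell centre (5.0, 497.5) of a 100x100 grid over [0,1000]x[0,500]
col, row = xy_to_grid(5.0, 497.5, (0, 0, 1000, 500), 100, 100)
```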
class iobjectspy.data.DatasetTopology

Bases: iobjectspy._jsuperpy.data.dt.Dataset

class iobjectspy.data.DatasetMosaic

Bases: iobjectspy._jsuperpy.data.dt.Dataset

Mosaic dataset, used for efficient management and display of massive image data. Acquiring imagery has become increasingly convenient and efficient, and the need to manage and publish massive image collections is now common. To make this work easier and more efficient, SuperMap GIS provides a solution based on the mosaic dataset. A mosaic dataset manages data as metadata plus the original image files: when image data is added to the mosaic dataset, only meta-information such as the path, footprint, and resolution of each image file is recorded, and the required image files are loaded from that meta-information when used. Compared with the traditional approach of storing images in the database, this mode greatly improves speed and also reduces disk usage.

add_files(directory_paths, extension=None, clip_file_extension=None)

Adding images to the mosaic dataset essentially records the file names of all images with the specified extension under the given path. That is, the mosaic dataset does not copy the image files into the database, but records the full (absolute) path of each image.

Parameters:
  • directory_paths (str or list[str]) – Specify the path to add images, that is, the folder path (absolute path) where the image to be added is located or the full path (absolute path) list of multiple image files to be added.
  • extension (str) – The extension of the image file. When directory_paths is a folder path (that is, when the type of directory_paths is str), it is used to filter the image files in the folder. When the type of directory_paths is list, this parameter has no effect.
  • clip_file_extension (str) – The extension of the clip-shape files, such as .shp; the objects in such a file define the clipped display extent of the corresponding image. Clipped display is typically used when a rectified image contains no-value regions: the valid region of the image is drawn as a clip shape, and clipping the display removes the no-value region. Each image corresponds to exactly one clip-shape file, so the clip-shape files must be stored in the path specified by the directory_paths parameter, i.e. in the same directory as the image files.
Returns:

Whether the image is added successfully, True means success; False means failure.

Return type:

bool
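The extension filter described for add_files is essentially a directory scan. A standalone sketch of that filtering step (an illustration of the concept, not the mosaic-dataset implementation; the case-insensitive match is an assumption):

```python
import os
import tempfile

def list_images(directory_path, extension):
    """Return absolute paths of files under directory_path with the given
    extension, matched case-insensitively."""
    return sorted(
        os.path.join(directory_path, name)
        for name in os.listdir(directory_path)
        if name.lower().endswith(extension.lower())
    )

# Demonstrate with a throwaway directory containing mixed files
tmp = tempfile.mkdtemp()
for name in ('a.tif', 'b.TIF', 'c.shp'):
    open(os.path.join(tmp, name), 'w').close()
found = list_images(tmp, '.tif')
```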

bound_count

int – the number of bands of the mosaic dataset

boundary_dataset

DatasetVector – the boundary subdataset of the mosaic dataset

build_pyramid(resample_type=PyramidResampleType.NONE, is_skip_exists=True)

Create image pyramids for all images in the mosaic dataset.

Parameters:
  • resample_type (PyramidResampleType or str) – Pyramid resampling method
  • is_skip_exists (bool) – whether to skip images for which a pyramid has already been created. True means skip, i.e. do not recreate the pyramid; False means recreate the pyramid even if one already exists.
Returns:

Returns True if the creation succeeds; otherwise returns False

Return type:

bool

clip_dataset

DatasetVector – the cropped sub-dataset of the mosaic dataset

footprint_dataset

DatasetVector – the footprint (contour) sub-dataset of the mosaic dataset

list_files()

Get all the raster files of the mosaic dataset

Returns:all raster files of the mosaic dataset
Return type:list[str]
pixel_format

PixelFormat – the bit depth of the mosaic dataset

class iobjectspy.data.DatasetVolume

Bases: iobjectspy._jsuperpy.data.dt.Dataset

class iobjectspy.data.Colors(seq=None)

Bases: object

Color collection class. The main function of this class is to provide color sequences. It provides the generation of various gradient colors and random colors, as well as the generation of SuperMap predefined gradient colors.

append(value)

Add a color value to the color collection

Parameters:value (tuple[int] or int) – RGB color value or RGBA color value
clear()

Clear all color values

extend(iterable)

Add a collection of color values

Parameters:iterable (list[int] or list[tuple]) – collection of color values
index(value, start=None, end=None)

Return the sequence number of the color value

Parameters:
  • value (tuple[int] or int) – RGB color value or RGBA color value
  • start (int) – the position at which to start searching
  • end (int) – the position at which to stop searching
Returns:

The location of the color value that meets the condition

Return type:

int

insert(index, value)

Add color to the specified position

Parameters:
  • index (int) – the specified position
  • value (tuple[int] or int) – RGB color value or RGBA color value
static make_gradient(count, gradient_type, reverse=False, gradient_colors=None)

Given the number of colors and the control colors, generate a set of gradient colors, or generate one of the system’s preset gradients. gradient_colors and gradient_type should not both be set; if both are given, gradient_type takes precedence and a system-defined gradient is generated.

Parameters:
  • count (int) – The total number of gradient colors to be generated.
  • gradient_type (ColorGradientType or str) – The type of gradient color.
  • reverse (bool) – Whether to reversely generate gradient colors, that is, whether to generate gradient colors from the end color to the start color. It only works when gradient_type is valid.
  • gradient_colors (Colors) – gradient color set. That is, the control color of the gradient color is generated.
Returns:

the generated gradient color collection

Return type:

Colors
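Gradient generation between two control colors amounts to linear interpolation in RGB space. A standalone sketch of that idea (an illustration only, not SuperMap's gradient algorithm, which also supports predefined gradient types):

```python
def make_gradient(count, start, end):
    """Interpolate count RGB colors from start to end, inclusive of both."""
    colors = []
    for i in range(count):
        t = i / (count - 1) if count > 1 else 0.0  # interpolation parameter
        colors.append(tuple(
            round(s + (e - s) * t) for s, e in zip(start, end)
        ))
    return colors

# Three colors fading from black to magenta
gradient = make_gradient(3, (0, 0, 0), (255, 0, 255))
```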

static make_random(count, colors=None)

Used to generate a certain number of random colors.

Parameters:
  • count (int) – number of interval colors
  • colors (Colors) – Control color set.
Returns:

A random color table generated by the number of interval colors and the set of control colors.

Return type:

Colors

pop(index=None)

Delete the color value at the specified position and return it. The last color value is deleted when index is None.

Parameters:index (int) – the specified position
Returns:the color value to be deleted
Return type:tuple
remove(value)

Delete the specified color value

Parameters:value (tuple[int] or int) – the color value to be deleted
values()

Return all color values

Return type:list[tuple]
class iobjectspy.data.JoinItem(name=None, foreign_table=None, join_filter=None, join_type=None)

Bases: object

Connection information class, used to join a vector dataset with an external table. The external table can be the DBMS table corresponding to another vector dataset (a pure attribute dataset with no spatial geometry information), or a user-created business table. Note that the vector dataset and the external table must belong to the same datasource. Once a connection is established between the two tables, operating on the main table allows you to query the external table, make thematic maps, perform analyses, and so on. A join can be used when there is a one-to-one or many-to-one relationship between the two tables; for a many-to-one relationship, associations between multiple fields may be specified. Instances of this type can be created directly.

There are two ways to establish a connection between dataset tables: one is join and the other is link. Join settings are implemented through the JoinItem class, and link settings through the LinkItem class. In addition, the two dataset tables used to establish a join must be under the same datasource, while the two dataset tables used to establish a link need not be.

The following query example illustrates the difference between join and link. Suppose the dataset table used for the query is DatasetTableA, and the table to be joined or linked is DatasetTableB. The join or link relationship between DatasetTableA and DatasetTableB is used to query the records in DatasetTableA that meet the query conditions:

  • Join: Set the connection information that joins DatasetTableB to DatasetTableA, that is, create a JoinItem and set its properties. When a query on DatasetTableA is executed, the system combines, according to the join conditions and query conditions, the matching content of DatasetTableA and DatasetTableB into a single result table, which is stored in memory. When results need to be returned, the corresponding content is retrieved from memory.
  • Link: Set the association information that links DatasetTableB (secondary table) to DatasetTableA (main table), that is, create a LinkItem and set its properties. DatasetTableA and DatasetTableB are associated through the foreign keys of the main table (LinkItem.foreign_keys) and the primary keys of the secondary table (LinkItem.primary_keys). When a query on DatasetTableA is executed, the system queries, according to the filter conditions and query conditions in the association information, the matching content of DatasetTableA and of DatasetTableB separately; the two query results are stored in memory as two independent result tables. When results need to be returned, SuperMap splices the two results together and returns them. From the perspective of the application layer, therefore, join and link operations are very similar.

-LinkItem only supports left joins. UDB, PostgreSQL and DB2 datasources do not support LinkItem, that is, setting a LinkItem for UDB, PostgreSQL and DB2 data engines has no effect;

-JoinItem currently supports left joins and inner joins, but does not support full joins or right joins. The UDB engine does not support inner joins;

-Constraints for using LinkItem: the spatial data and attribute data must have an association condition, that is, there are associated fields between the main spatial dataset and the external attribute table. Main spatial dataset: the dataset to be associated with external tables. External attribute table: a data table created by the user through Oracle or SQL Server, or the DBMS table corresponding to another vector dataset.

Example:

>>> ds = Workspace().get_datasource('data')
>>> dataset_world = ds['World']
>>> dataset_capital = ds['Capital']
>>> foreign_table_name = dataset_capital.table_name
>>>
>>> join_item = JoinItem()
>>> join_item.set_foreign_table(foreign_table_name)
>>> join_item.set_join_filter('World.capital=%s.capital'% foreign_table_name)
>>> join_item.set_join_type(JoinType.LEFTJOIN)
>>> join_item.set_name('Connect')

>>> query_parameter = QueryParameter()
>>> query_parameter.set_join_items([join_item])
>>> recordset = dataset_world.query(query_parameter)
>>> print(recordset.get_record_count())
>>> recordset.close()
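The left-join vs inner-join behaviour described above can be illustrated in plain Python. This is a conceptual sketch of the join semantics only, not the SuperMap query engine; the table contents and the 'LEFTJOIN'/'INNERJOIN' strings are illustrative stand-ins for the JoinType values:

```python
def join(main_rows, foreign_rows, key, join_type='LEFTJOIN'):
    """Join two lists of dicts on a shared key field."""
    lookup = {row[key]: row for row in foreign_rows}
    result = []
    for row in main_rows:
        match = lookup.get(row[key])
        if match is not None:
            result.append({**row, **match})   # merge matched rows
        elif join_type == 'LEFTJOIN':
            result.append(dict(row))          # left join keeps unmatched main rows
    return result

world = [{'capital': 'Paris', 'country': 'France'},
         {'capital': 'Atlantis', 'country': 'Nowhere'}]
capitals = [{'capital': 'Paris', 'pop': 2_100_000}]

left = join(world, capitals, 'capital', 'LEFTJOIN')
inner = join(world, capitals, 'capital', 'INNERJOIN')
```

A left join returns every record of the main table (with foreign fields filled where matched), while an inner join returns only the matched records.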

Construct a JoinItem object

Parameters:
  • name (str) – the name of the connection information object
  • foreign_table (str) – the name of the foreign table
  • join_filter (str) – The join expression with the external table, that is, set the associated field between the two tables
  • join_type (str) – the type of connection between the two tables
foreign_table

str – the name of the external table

from_dict(values)

Read JoinItem information from dict

Parameters:values (dict) – dict containing JoinItem information, see to_dict for details
Returns:self
Return type:JoinItem
static from_json(value)

Parse the JoinItem information from the json string to construct a new JoinItem object

Parameters:value (str) – json string
Return type:JoinItem
join_filter

str – The join expression with the external table, that is, set the associated field between the two tables

join_type

JoinType – The type of connection between the two tables

static make_from_dict(values)

Read information from dict to construct JoinItem object.

Parameters:values (dict) – dict containing JoinItem information, see to_dict for details
Return type:JoinItem
name

str – The name of the connection information object

set_foreign_table(value)

Set the connection information external table name

Parameters:value (str) – external table name
Returns:self
Return type:JoinItem
set_join_filter(value)

Set the join expression with the external table, that is, the associated fields between the two tables. For example, to join the District field of a building region dataset (Building) to the Region field of a homeowner attribute-only dataset (Owner), where the tables corresponding to the two datasets are Table_Building and Table_Owner, the join expression is ‘Table_Building.district = Table_Owner.region’. When joining on multiple fields, connect the expressions with AND.

Parameters:value (str) – The connection expression with the external table, that is, set the associated field between the two tables
Returns:self
Return type:JoinItem
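The multi-field AND rule above can be captured in a small helper. This is a conceptual sketch in plain Python, not part of the iobjectspy API; `build_join_filter` and its parameters are hypothetical names:

```python
# Hypothetical helper (not part of iobjectspy): build a join expression
# suitable for set_join_filter() from pairs of associated fields.
def build_join_filter(main_table, foreign_table, field_pairs):
    """field_pairs: list of (main_field, foreign_field) tuples."""
    clauses = ['%s.%s = %s.%s' % (main_table, m, foreign_table, f)
               for m, f in field_pairs]
    # Multiple field conditions are connected with AND
    return ' AND '.join(clauses)

print(build_join_filter('Table_Building', 'Table_Owner',
                        [('district', 'region')]))
# Table_Building.district = Table_Owner.region
```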
set_join_type(value)

Set the type of connection between the two tables. The connection type is used to query the two connected tables and determines the condition of the records returned.

Parameters:value (JoinType or str) – the type of connection between the two tables
Returns:self
Return type:JoinItem
set_name(value)

Set the name of the connection information object

Parameters:value (str) – connection information name
Returns:self
Return type:JoinItem
to_dict()

Output current object information as dict

Return type:dict
to_json()

Output the current object as a json string

Return type:str
class iobjectspy.data.LinkItem(foreign_keys=None, name=None, foreign_table=None, primary_keys=None, link_fields=None, link_filter=None, connection_info=None)

Bases: object

The link information class is used to associate a vector dataset with another dataset. The linked dataset can be the DBMS table corresponding to another vector dataset (an attribute-only dataset with no spatial geometry information), or a business table created by the user and plugged into the SuperMap datasource. Note that the vector dataset and the linked dataset can belong to different datasources. There are two ways to relate dataset tables: join and link. Join settings are implemented through the JoinItem class, and link settings through the LinkItem class. The two tables of a join must be in the same datasource, while the two tables of a link need not be. The following query example illustrates the difference between join and link. Suppose the table being queried is DatasetTableA, and the table to be linked or joined is DatasetTableB; the join or link relationship is used to query the records in DatasetTableA that meet the query conditions:

-Join: set the join information that connects DatasetTableB to DatasetTableA, that is, create a JoinItem object and set its properties. When a query is executed on DatasetTableA, the system combines the qualifying content of DatasetTableA and DatasetTableB into a single query result table according to the join and query conditions, and this table is stored in memory. When results are returned, the corresponding content is retrieved from memory.
-Link: set the link information that links DatasetTableB (the secondary table) to DatasetTableA (the main table), that is, create a LinkItem object and set its properties. DatasetTableA and DatasetTableB are associated through the foreign keys of the main table DatasetTableA (LinkItem.foreign_keys) and the primary keys of the secondary table DatasetTableB (LinkItem.primary_keys). When a query is executed on DatasetTableA, the system queries DatasetTableA and DatasetTableB separately according to the filter and query conditions in the link information. The two query results are treated as independent and stored in memory as two result tables; when results are returned, SuperMap splices them together. Therefore, from the perspective of the application layer, join and link operations are very similar.

-LinkItem only supports left joins. UDB, PostgreSQL and DB2 datasources do not support LinkItem; setting a LinkItem for these data engines has no effect;

-JoinItem currently supports left joins and inner joins, but not full joins or right joins. The UDB engine does not support inner joins;

-Constraints for using LinkItem: the spatial data and the attribute data must share an association condition, that is, there must be associated fields between the main spatial dataset and the external attribute table. Main spatial dataset: the dataset used to associate with the external table. External attribute table: a data table created by the user in Oracle or SQL Server, or the DBMS table corresponding to another vector dataset.
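The join/link distinction above can be illustrated with plain Python (a conceptual sketch, not the iobjectspy implementation): a join builds one merged result table up front, while a link keeps two independent result sets and splices rows only when results are returned:

```python
# Conceptual illustration of join vs. link. Table and field names here
# are hypothetical example data.
table_a = [{'id': 1, 'capital': 'Paris'}, {'id': 2, 'capital': 'Lima'}]
table_b = [{'capital': 'Paris', 'pop': 2100000}]

# Join (left join): one merged result table is built in memory up front.
joined = []
for a in table_a:
    match = next((b for b in table_b if b['capital'] == a['capital']), None)
    row = dict(a)
    row.update(match or {'pop': None})
    joined.append(row)

# Link: the two tables are queried independently; rows are spliced
# together only when a result is fetched.
result_a = list(table_a)
result_b = {b['capital']: b for b in table_b}

def fetch(i):
    row = dict(result_a[i])
    row.update(result_b.get(row['capital'], {'pop': None}))
    return row

print(joined[0] == fetch(0))  # True: the application layer sees the same rows
```

Either way the application sees the same rows, which is why the two operations look so similar from the application layer.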

Example

# The 'source' dataset is the main dataset and uses the field 'LinkID' for the link; the 'link_dt' dataset is the external (linked) dataset and uses the field 'ID' for the link

>>> ds_db1 = Workspace().get_datasource('data_db_1')
>>> ds_db2 = Workspace().get_datasource('data_db_2')
>>> source_dataset = ds_db1['source']
>>> linked_dataset = ds_db2['link_dt']
>>> linked_dataset_name = linked_dataset.name
>>> linked_dataset_table_name = linked_dataset.table_name
>>>
>>> link_item = LinkItem()
>>>
>>> link_item.set_connection_info(ds_db2.connection_info)
>>> link_item.set_foreign_table(linked_dataset_name)
>>> link_item.set_foreign_keys(['LinkID'])
>>> link_item.set_primary_keys(['ID'])
>>> link_item.set_link_fields([linked_dataset_table_name + '.population'])
>>> link_item.set_link_filter('ID < 100')
>>> link_item.set_name('link_name')

Construct LinkItem object

Parameters:
  • foreign_keys (list[str]) – The main dataset is used to correlate the fields of the foreign table
  • name (str) – the name of the associated information object
  • foreign_table (str) – The name of the dataset of the foreign table, that is, the name of the associated dataset
  • primary_keys (list[str]) – the associated fields in the external table dataset
  • link_fields (list[str]) – the name of the field being queried in the external table dataset
  • link_filter (str) – query condition of external table dataset
  • connection_info (DatasourceConnectionInfo) – connection information of the datasource where the external table dataset is located
connection_info

DatasourceConnectionInfo – Connection information of the datasource where the external table dataset is located

foreign_keys

list[str] – The main dataset is used to associate external table fields

foreign_table

str – The name of the dataset of the external table, that is, the name of the associated dataset

from_dict(values)

Read information from dict to construct LinkItem object.

Parameters:values (dict) – dict containing LinkItem information, see to_dict for details
Returns:self
Return type:LinkItem
static from_json(value)

Construct LinkItem object from json string

Parameters:value (str) – json string information
Return type:LinkItem

link_fields

list[str] – the names of the fields to be queried in the external table dataset

link_filter

str – the query condition of the external table dataset

static make_from_dict(values)

Read information from dict to construct a new LinkItem object.

Parameters:values (dict) – dict containing LinkItem information, see to_dict for details
Return type:LinkItem
name

str – the name of the associated information object

primary_keys

list[str] – the associated fields in the external table dataset

set_connection_info(value)

Set the connection information of the datasource where the external dataset is located

Parameters:value (DatasourceConnectionInfo) – Connection information of the datasource where the external table dataset is located
Returns:self
Return type:LinkItem
set_foreign_keys(value)

Set the main dataset to be used to correlate the fields of the external table

Parameters:value (list[str]) – The main dataset is used to associate the fields of the external table
Returns:self
Return type:LinkItem
set_foreign_table(value)

Set the name of the dataset of the external table, that is, the name of the associated dataset

Parameters:value (str) – The name of the dataset of the external table, that is, the name of the associated dataset
Returns:self
Return type:LinkItem

set_link_fields(value)

Set the names of the fields to be queried in the external table dataset

Parameters:value (list[str]) – The name of the field being queried in the external table dataset
Returns:self
Return type:LinkItem

set_link_filter(value)

Set the query condition of the external table dataset

Parameters:value (str) – query condition of external table dataset
Returns:self
Return type:LinkItem
set_name(value)

Set the name of the associated information object

Parameters:value (str) – the name of the associated information object
Returns:self
Return type:LinkItem
set_primary_keys(value)

Set the associated fields in the external dataset

Parameters:value (list[str]) – The associated fields in the external table dataset
Returns:self
Return type:LinkItem
to_dict()

Output information of current object to dict

Return type:dict
to_json()

Output current object information to json string, see to_dict for details.

Return type:str
class iobjectspy.data.SpatialIndexInfo(index_type=None)

Bases: object

Spatial index information class. This class provides the information needed to create a spatial index, including the spatial index type, the number of leaf nodes, the tile width and height, and the multi-level grid sizes.

Construct the spatial index information class of the dataset.

Parameters:index_type (SpatialIndexType or str) – dataset spatial index type
from_dict(values)
Read SpatialIndexInfo information from dict
Parameters:values (dict) –
Returns:self
Return type:SpatialIndexInfo
grid_center

Point2D – The center point of the grid index. Generally the center point of the dataset.

grid_size0

float – the size of the first level grid of the multi-level grid index

grid_size1

float – the size of the second level grid of the multi-level grid index

grid_size2

float – the size of the third level grid of the multi-level grid index

leaf_object_count

int – the number of leaf nodes in the R-tree spatial index

static make_from_dict(values)

Read information from the dict to construct a SpatialIndexInfo object.

Parameters:values (dict) –
Return type:SpatialIndexInfo
static make_mgrid(center, grid_size0, grid_size1, grid_size2)

Build a multi-level grid index

The multi-level grid index, also called a dynamic index, combines the advantages of the R-tree and quad-tree indexes, provides very good support for concurrent editing, and has good universality. If you are not sure which kind of spatial index suits your data, create a multi-level grid index for it. The basic method of grid indexing is to divide the dataset into equal or unequal grid cells according to certain rules and record the cell position of each geographic object; regular grids are the common case in GIS. When a spatial query is performed, the system first computes the grid cells covered by the query object and then quickly retrieves the candidate geographic objects through those cells, which optimizes the query operation.
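The cell lookup described above can be sketched in a few lines of plain Python (a conceptual illustration, not the iobjectspy implementation; `grid_cell` is a hypothetical helper):

```python
# Conceptual sketch of how a multi-level grid index locates an object:
# each level divides space into cells of a fixed size, and the cell
# containing a point is found by floor division.
def grid_cell(x, y, cx, cy, size):
    """Return the (col, row) cell of a point for a grid of the given
    cell size anchored at (cx, cy)."""
    return (int((x - cx) // size), int((y - cy) // size))

center = (0.0, 0.0)
sizes = (1000.0, 100.0, 10.0)   # grid_size0/1/2, in dataset units
point = (2537.0, -481.0)
cells = [grid_cell(point[0], point[1], center[0], center[1], s) for s in sizes]
print(cells)  # [(2, -1), (25, -5), (253, -49)]
```

A query first tests the coarse level-0 cell and descends into the finer levels only where needed.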

Parameters:
  • center (Point2D) – the specified grid center point
  • grid_size0 (float) – The size of the first-level grid. The unit is the same as the dataset
  • grid_size1 (float) – The size of the second-level grid. The unit is the same as the dataset
  • grid_size2 (float) – The size of the third-level grid. The unit is the same as the dataset
Returns:

Multi-level grid index information

Return type:

SpatialIndexInfo

static make_qtree(level)

Build quadtree index information. The quadtree is an important hierarchical data structure, mainly used to express spatial hierarchy in two-dimensional coordinates; it is in fact the extension of the one-dimensional binary tree to two-dimensional space. A quadtree index divides the map into four equal parts, then divides each cell into four equal parts again, subdividing layer by layer until no further division is possible. In SuperMap the quadtree can currently have at most 13 levels. Based on Hilbert-code ordering rules, the quadtree can determine the minimum range that each indexed object instance belongs to, thereby improving retrieval efficiency.

Parameters:level (int) – The level of the quadtree, the maximum is 13 levels
Returns:Quadtree index information
Return type:SpatialIndexInfo
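The layer-by-layer subdivision can be sketched as follows (a conceptual illustration in plain Python, not the iobjectspy implementation; `quad_path` is a hypothetical helper):

```python
# Conceptual sketch: each quadtree level splits the current cell into
# four quadrants, so a point's cell at level n is described by a path
# of n quadrant choices (at most 13 levels in SuperMap).
def quad_path(x, y, xmin, ymin, xmax, ymax, level):
    path = []
    for _ in range(level):
        xmid, ymid = (xmin + xmax) / 2, (ymin + ymax) / 2
        q = (1 if x >= xmid else 0) + (2 if y >= ymid else 0)
        path.append(q)               # 0=SW, 1=SE, 2=NW, 3=NE
        xmin, xmax = (xmid, xmax) if x >= xmid else (xmin, xmid)
        ymin, ymax = (ymid, ymax) if y >= ymid else (ymin, ymid)
    return path

print(quad_path(0.7, 0.2, 0.0, 0.0, 1.0, 1.0, 3))  # [1, 0, 3]
```

Two nearby points share a long path prefix, which is what makes prefix-ordered (e.g. Hilbert-coded) storage efficient to search.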
static make_rtree(leaf_object_count)

Build R-tree index information. The R-tree is a disk-based index structure, a natural extension of the (one-dimensional) B-tree to high-dimensional space. It is easy to integrate with existing database systems, supports various types of spatial query processing, and is currently one of the most popular spatial indexing methods. An R-tree spatial index constructs rectangles that each enclose a group of target objects that are spatially close to each other; these rectangles serve as the spatial index and contain pointers to the spatial objects they enclose.

When performing a spatial search, first determine which rectangles fall in the search window, and then further determine which objects are the content to be searched. This can increase the retrieval speed.

Parameters:leaf_object_count (int) – the number of leaf nodes in the R-tree spatial index
Returns:R-tree index information
Return type:SpatialIndexInfo
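The two-step search described above can be sketched in plain Python (a conceptual illustration, not the iobjectspy implementation; `mbr` and `intersects` are hypothetical helpers):

```python
# Conceptual sketch: an R-tree groups nearby objects under a minimum
# bounding rectangle (MBR). A search window is first tested against the
# MBRs; only objects under intersecting MBRs are examined further.
def mbr(rects):
    """Minimum bounding rectangle of (xmin, ymin, xmax, ymax) tuples."""
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

def intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

leaf = [(0, 0, 2, 2), (1, 3, 4, 5)]      # objects in one leaf node
node_mbr = mbr(leaf)                      # (0, 0, 4, 5)
window = (10, 10, 12, 12)
# The whole leaf is skipped without testing its individual objects:
print(intersects(node_mbr, window))       # False
```

The `leaf_object_count` parameter controls how many objects share one such leaf rectangle.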
static make_tile(tile_width, tile_height)

Construct tile (map frame) index information. In SuperMap, spatial objects are classified according to a certain attribute field of the dataset or according to a given range, and the classified spatial objects are managed through the index to improve query and retrieval speed.

Parameters:
  • tile_width (float) – tile width
  • tile_height (float) – tile height
Returns:

Tile index information

Return type:

SpatialIndexInfo

quad_level

int – the level of the quadtree index

set_grid_center(value)

Set the center point of the grid index. It is generally the center point of the dataset.

Parameters:value (Point2D) – the center point of the grid index
Returns:self
Return type:SpatialIndexInfo
set_grid_size0(value)

Set the size of the first-level grid in the multi-level grid index.

Parameters:value (float) – The size of the first level grid in the multi-level grid index.
Returns:self
Return type:SpatialIndexInfo
set_grid_size1(value)

Set the size of the second-level index grid of the multi-level grid index. The unit is consistent with the unit of the dataset

Parameters:value (float) – The size of the second level index grid of the multilevel grid index
Returns:self
Return type:SpatialIndexInfo
set_grid_size2(value)

Set the size of the third-level grid in the multi-level grid index.

Parameters:value (float) – The size of the third-level grid in the multi-level grid index. The unit is the same as the dataset
Returns:self
Return type:SpatialIndexInfo
set_leaf_object_count(value)

Set the number of leaf nodes in the R-tree spatial index.

Parameters:value (int) – the number of leaf nodes in the R-tree spatial index
Returns:self
Return type:SpatialIndexInfo
set_quad_level(value)

Set the level of the quadtree index, the maximum value is 13

Parameters:value (int) – the level of the quadtree index
Returns:self
Return type:SpatialIndexInfo
set_tile_height(value)

Set the tile height of the spatial index. The unit is consistent with the unit of the dataset bounds

Parameters:value (float) – the tile height of the spatial index
Returns:self
Return type:SpatialIndexInfo
set_tile_width(value)

Set the tile width of the spatial index. The unit is consistent with the unit of the dataset bounds.

Parameters:value (float) – the tile width of the spatial index
Returns:self
Return type:SpatialIndexInfo
set_type(value)

Set the spatial index type

Parameters:value (SpatialIndexType or str) – spatial index type
Returns:self
Return type:SpatialIndexInfo
tile_height

float – the tile height of the spatial index

tile_width

float – the tile width of the spatial index

to_dict()

Output current object to dict

Return type:dict
type

SpatialIndexType – the type of spatial index

class iobjectspy.data.QueryParameter(attr_filter=None, cursor_type=CursorType.DYNAMIC, has_geometry=True, result_fields=None, order_by=None, group_by=None)

Bases: object

Query parameter class. Used to describe the restrictive conditions of a conditional query, such as the SQL statement, cursor type, and spatial relationship settings. A conditional query retrieves all records that meet a given condition; the result is a record set, and this class sets the conditions used to obtain it. Conditional queries come in two main forms: SQL query (also called attribute query), which selects records by constructing SQL condition statements containing attribute fields, operators and values; and spatial query, which selects records based on geographic or spatial features.

QueryParameter contains the following parameters:

-attribute_filter: str

The SQL condition statement constructed for the query, i.e. the SQL WHERE clause. A SQL query, also called an attribute query, uses one or more SQL condition statements to select records; these statements contain attribute fields, operators and values. For example, to query clothing stores in a business district whose sales exceeded 300,000 last year, the SQL query statement could be:

>>> attribute_filter = "Sales > 300000 AND SellingType = 'Garment'"

For datasources of different engines, the applicable conditions and usage of different functions are different. For database-type datasources (Oracle Plus, SQL Server Plus, PostgreSQL and DB2 datasources), please refer to the database related documents for the usage of the functions.

-cursor_type: CursorType
The type of cursor used by the query. SuperMap supports two types of cursors, dynamic cursors and static cursors. When using dynamic cursor query, the record set will be dynamically refreshed, consuming a lot of resources. When using a static cursor, the query is a static copy of the record set, which is more efficient. It is recommended to use a static cursor when querying. The record set obtained by using a static cursor is not editable. For details, see CursorType type. The DYNAMIC type is used by default.
-has_geometry: bool
Whether the query result contains geometry object fields. If spatial data is not fetched during the query, that is, only attribute information is queried, then in the returned Recordset all methods that operate on the spatial objects of the record set will be invalid; for example, calling Recordset.get_geometry() will return None.
-result_fields: list of str
Set the query result field collection. For the record set of the query result, you can set the fields contained in it. If it is empty, all fields will be queried.
-order_by: list of str

SQL query sort field. The records in the record set obtained by SQL query can be sorted according to the specified field, and can be specified as ascending or descending order, where asc means ascending order and desc means descending order. The field used for sorting must be numeric. For example, to sort by SmID in descending order, you can set it as:

>>> query_paramater.set_order_by(['SmID desc'])
-group_by: list of str

SQL query grouping field. The records in the record set obtained by a SQL query can be grouped by the specified field: records with the same value of that field are placed together. Note:

-Spatial query does not support group_by, otherwise the results of the spatial query may be incorrect
-group_by is valid only when cursor_type is STATIC
-spatial_query_mode: SpatialQueryMode
Spatial query mode
-spatial_query_object: DatasetVector or Recordset or Geometry or Rectangle or Point2D
The search object of the spatial query. If the search object is a dataset or record set type, it must be consistent with the geographic coordinate system of the dataset corresponding to the layer being searched. When there are overlapping objects in the search dataset/record set, the results of the spatial query may be incorrect. It is recommended to traverse the search dataset/record set, and use a single-object query for spatial query one by one.
-time_conditions: list of TimeCondition
Temporal model query conditions. See: py:class:TimeCondition description for details.
-link_items: list of LinkItem
Link query information. When the vector dataset being queried has linked external tables, the query result will contain the records that meet the conditions in the linked external tables. For details, see the LinkItem description.
-join_items: list of JoinItem
Join query information. When the vector dataset being queried has joined external tables, the query result will contain the records that meet the conditions in the joined external tables. For details, see the JoinItem description.
Example::

# Perform SQL query

>>> parameter = QueryParameter('SmID < 100', 'STATIC', False)
>>> ds = Datasource.open('E:/data.udb')
>>> dt = ds['point']
>>> rd = dt.query(parameter)
>>> print(rd.get_record_count())
99
>>> rd.close()

# Perform spatial query

>>> geo = dt.get_geometries('SmID = 1')[0]
>>> query_geo = geo.create_buffer(10, dt.prj_coordsys, 'meter')
>>> parameter.set_spatial_query_mode('contain').set_spatial_query_object(query_geo)
>>> rd2 = dt.query(parameter)
>>> print(rd2.get_record_count())
10
>>> rd2.close()
attribute_filter

str – SQL conditional statement constructed by the query, that is, SQL WHERE clause statement.

cursor_type

CursorType – The cursor type used by the query

from_dict(values)

Read query parameters from dict

Parameters:values (dict) – A dict object containing query parameters. For details, see to_dict
Returns:self
Return type:QueryParameter
static from_json(value)

Construct dataset query parameter object from json string

Parameters:value (str) – json string
Return type:QueryParameter
group_by

list[str] – SQL query group condition field

has_geometry

bool – Whether the query result contains geometric object fields

join_items

list[JoinItem] – Join query information. When the vector dataset being queried has joined external tables, the query results will contain the records that meet the conditions in the joined external tables. For details, see the JoinItem description.

link_items

list[LinkItem] – Link query information. When the vector dataset being queried has linked external tables, the query results will contain the records that meet the conditions in the linked external tables. For details, see the LinkItem description.

static make_from_dict(values)

Construct dataset query parameter object from dict

Parameters:values (dict) – A dict object containing query parameters. For details, see to_dict
Return type:QueryParameter
order_by

list[str] – SQL query sort field.

result_fields

list[str] – query result field collection. For the query result record set, you can set the fields contained in it, if it is empty, then all fields will be queried .

set_attribute_filter(value)

Set attribute query conditions

Parameters:value (str) – Attribute query conditions
Returns:self
Return type:QueryParameter
set_cursor_type(value)

Set the cursor type used by the query. The default is DYNAMIC

Parameters:value (CursorType or str) – cursor type
Returns:self
Return type:QueryParameter
set_group_by(value)

Set the field of the SQL query grouping condition.

Parameters:value (list[str]) – SQL query field for grouping conditions
Returns:self
Return type:QueryParameter
set_has_geometry(value)

Set whether to query geometric objects. If set to False, geometric objects will not be returned, the default is True

Parameters:value (bool) – Query whether to include geometric objects.
Returns:self
Return type:QueryParameter
set_join_items(value)

Set query conditions for connection query

Parameters:value (list[JoinItem]) – the query conditions of the join query
Returns:self
Return type:QueryParameter

set_link_items(value)

Set the query conditions of the link query

Parameters:value (list[LinkItem]) – the query conditions of the link query
Returns:self
Return type:QueryParameter
set_order_by(value)

Set the SQL query sort field

Parameters:value (list[str]) – SQL query sort field
Returns:self
Return type:QueryParameter
set_result_fields(value)

Set query result fields

Parameters:value (list[str]) – query result field
Returns:self
Return type:QueryParameter
set_spatial_query_mode(value)

Set the spatial query mode, see: py:class:.SpatialQueryMode description

Parameters:value (SpatialQueryMode or str) – The query mode of the spatial query.
Returns:self
Return type:QueryParameter
set_spatial_query_object(value)

Set the search object of spatial query

Parameters:value (DatasetVector or Recordset or Geometry or Rectangle or Point2D) – Search object for spatial query
Returns:self
Return type:QueryParameter
set_time_conditions(value)

Set the query conditions of the time field for spatiotemporal query

Parameters:value (list[TimeCondition]) – query conditions for time-space query
Returns:self
Return type:QueryParameter
spatial_query_mode

SpatialQueryMode – Spatial query mode.

spatial_query_object

DatasetVector or Recordset or Geometry or Rectangle or Point2D – The search object for spatial query.

time_conditions

list[TimeCondition] – spatiotemporal model query conditions. For details, see the TimeCondition description.

to_dict()

Output dataset query parameter information to dict

Return type:dict
to_json()

Output query parameters as json string

Return type:str
class iobjectspy.data.TimeCondition(field_name=None, time=None, condition=None, back_condition=None)

Bases: object

Defines a single time field spatiotemporal model management query function interface

Construct the query condition object of the spatiotemporal model

Parameters:
  • field_name (str) – field name
  • time (datetime.datetime) – query time
  • condition (str) – Conditional operator for query time. For example: >, <, >=, <=, =
  • back_condition (str) – the specified latter condition operator, for example: and, or
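How these parameters combine can be sketched in plain Python (a conceptual illustration, not the iobjectspy API; `render` is a hypothetical helper that shows how a condition list forms a SQL-like time clause):

```python
import datetime

# Conceptual sketch: field_name, condition and time form one clause,
# and back_condition joins it to the next condition in the list.
def render(field_name, time, condition, back_condition=None):
    clause = "%s %s '%s'" % (field_name, condition, time.isoformat(sep=' '))
    return clause + (' %s' % back_condition if back_condition else '')

t1 = render('obs_time', datetime.datetime(2020, 1, 1), '>=', 'and')
t2 = render('obs_time', datetime.datetime(2020, 2, 1), '<')
print(t1, t2)
# obs_time >= '2020-01-01 00:00:00' and obs_time < '2020-02-01 00:00:00'
```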
back_condition

str – the latter condition operator, for example: and, or

condition

str – Conditional operator for querying time. For example: >, <, >=, <=, =

field_name

str – field name

from_dict(values)

Read TimeCondition information from the dict object.

Parameters:values (dict) – dict containing TimeCondition information, refer to to_dict for details
Returns:self
Return type:TimeCondition
static from_json(value)

Construct TimeCondition from json string.

Parameters:value (str) – json string
Return type:TimeCondition
static make_from_dict(values)

Construct TimeCondition object from dict object

Parameters:values (dict) – dict containing TimeCondition information, refer to to_dict for details
Return type:TimeCondition
set_back_condition(value)

Set the latter condition operator of the query condition. For example: and, or

Parameters:value (str) – the latter condition operator of the query condition
Returns:self
Return type:TimeCondition
set_condition(value)

Set the condition operator of the query condition.

Parameters:value (str) – Condition operator of query condition, >, <, >=, <=, =
Returns:self
Return type:TimeCondition
set_field_name(value)

Set the field name of the query condition

Parameters:value (str) – field name of query condition
Returns:self
Return type:TimeCondition
set_time(value)

Set the time value of the query condition

Parameters:value (datetime.datetime) – time value
Returns:self
Return type:TimeCondition
time

datetime.datetime – the time as the query condition

to_dict()

Output the current object as a dict object

Return type:dict
to_json()

Output the current object as a json string

Return type:str
iobjectspy.data.combine_band(red_dataset, green_dataset, blue_dataset, out_data=None, out_dataset_name=None)

Combine three single-band datasets into an RGB dataset

Parameters:
  • red_dataset (Dataset or str) – Single band dataset R.
  • green_dataset (Dataset or str) – Single band dataset G
  • blue_dataset (Dataset or str) – single band dataset B
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located. If it is empty, use the datasource where the red_dataset dataset is located
  • out_dataset_name (str) – The name of the synthetic RGB dataset.
Returns:

Returns the result dataset object or dataset name if the combination is successful, or None if it fails

Return type:

Dataset
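Conceptually, combining three single-band rasters into one RGB raster amounts to stacking the per-pixel values of the R, G and B bands. A plain-Python sketch (illustrative data, not the iobjectspy implementation):

```python
# Hypothetical 2x2 single-band rasters; each cell is one pixel value.
red   = [[255, 0], [0, 10]]
green = [[0, 255], [0, 20]]
blue  = [[0, 0], [255, 30]]

# Stack the three bands pixel by pixel into (r, g, b) triples.
rgb = [[(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
       for row_r, row_g, row_b in zip(red, green, blue)]

print(rgb[1][1])  # (10, 20, 30)
```

combine_band performs this stacking on raster datasets, writing the result into the datasource given by out_data (or the datasource of red_dataset when out_data is empty).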

class iobjectspy.data.Recordset

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Record set class. Through this class the data in a vector dataset can be manipulated. Datasources are either file type or database type. In database-type data, spatial geometry and attribute information are stored in an integrated manner: each vector dataset corresponds to a DBMS table in which geometry fields store the spatial geometry of the features; an attribute-only vector dataset has no geometry fields, and its record set is a subset of the DBMS table. In file-type data, spatial geometry and attribute information are stored separately. During operation the distinction between file-type and database-type data is hidden: the data is treated as a single table integrating spatial and attribute information, and a record set is a subset of that table taken out for operation. One record (row) in a record set corresponds to one feature and contains its spatial geometry and attribute information; one column corresponds to the information of one field.

A record set can be obtained from a vector dataset in two ways: directly, through the DatasetVector.get_recordset() method, or through a query statement. The difference is that the former contains all the spatial geometry and attribute information of the dataset, while the latter is filtered by the query statement.

The following code demonstrates reading data from a record set and batch writing data to a new record set:

>>> dt = Datasource.open('E:/data.udb')['point']
>>> rd = dt.query_with_filter('SmID < 100 or SmID > 1000', 'STATIC')
>>> all_points = []
>>> while rd.has_next():
...     geo = rd.get_geometry()
...     all_points.append(geo.point)
...     rd.move_next()
>>> rd.close()
>>>
>>> new_dt = dt.datasource.create_vector_dataset('new_point', 'Point', adjust_name=True)
>>> new_dt.create_field(FieldInfo('object_time', FieldType.DATETIME))
>>> new_rd = new_dt.get_recordset(True)
>>> new_rd.batch_edit()
>>> for point in all_points:
...     new_rd.add(point, {'object_time': datetime.datetime.now()})
>>> new_rd.batch_update()
>>> print(new_rd.get_record_count())
>>> new_rd.close()
add(data, values=None)

Add a record to the record set. The record set must be in edit mode; see edit() and batch_edit()

Parameters:
  • data (Point2D or Rectangle or Geometry or Feature) –

    The spatial object to be written. If the dataset of the record set is an attribute table, pass in None. If data is not empty, the type of the geometric object must match the type of the dataset to write successfully. E.g:

    - Point2D and GeoPoint support writing to point datasets and CAD datasets
    - GeoLine supports writing to line datasets and CAD datasets
    - Rectangle and GeoRegion support writing to region datasets and CAD datasets
    - GeoText supports writing to text datasets and CAD datasets

  • values (dict) – The attribute field values to be written. It must be a dict. The key value of dict is the field name, and the value of dict is the field value. If data is Feature, this parameter is invalid because Feature already contains attribute field values.
Returns:

Return True if writing is successful, otherwise False

Return type:

bool

batch_edit()

Begin a batch update operation. After the batch update operation is completed, you need to call batch_update() to submit the modified records. The maximum number of records submitted in one batch can be modified with set_batch_record_max(); see set_batch_record_max()

batch_update()

Commit the batch update operations. After calling this method, the preceding batch update operations take effect and the update state returns to single-record update. To continue operating in batches, call the batch_edit() method again.

bounds

Rectangle – Return the bounding rectangle of the geometric object corresponding to all records in the attribute data table of the record set.

close()

Release the record set. The record set must be released after the record set is no longer used.

dataset

DatasetVector – The dataset where the record set is located

datasource

Datasource – The datasource where the record set is located

delete()

Delete the current record in the dataset; return True if successful.

Return type:bool
delete_all()

Physically delete all the records in the specified record set, that is, delete the records from the physical storage medium of the computer and cannot be restored.

Return type:bool
dispose()

Release the record set. The record set must be released after it is no longer in use. Same function as close()

edit()

Lock and edit the current record of the record set; return True if successful. After editing with this method, you must call the update() method to commit the modification, and before update() the current record position cannot be moved; otherwise the edit fails and the record set may be damaged.

Return type:bool
field_infos

list[FieldInfo] – All field information of the dataset

static from_json(value)

Parse from the json string to obtain the record set

Parameters:value (str) – json string
Return type:Recordset
get_batch_record_max()

int: Return the maximum number of records automatically submitted as a result of the batch update operation

get_feature()

Get the feature object of the current record, if the retrieval fails, return None

Return type:Feature
get_features()

Get all the feature objects of the record set. After calling this method, the position of the record set will move to the very beginning position.

Return type:list[Feature]
get_field_count()

Get the number of fields

Return type:int
get_field_info(value)

Get field information based on field name or serial number

Parameters:value (str or int) – field name or serial number
Returns:field information
Return type:FieldInfo
get_geometries()

Get all geometric objects in the record set. After calling this method, the position of the record set will move to the very beginning position.

Return type:list[Geometry]
get_geometry()

Get the geometric object of the current record, if there is no geometric object in the record set or get failed, return None

Return type:Geometry
get_id()

Return the ID number of the geometric object corresponding to the current record in the attribute table of the dataset (i.e., the value of the SmID field).

Return type:int
get_query_parameter()

Get the query parameters corresponding to the current record set

Return type:QueryParameter
get_record_count()

Return the number of records in the record set

Return type:int
get_value(item)

Get the field value of the specified attribute field in the current record

Parameters:item (str or int) – field name or serial number
Return type:int or float or str or datetime.datetime or bytes or bytearray
get_values(exclude_system=True, is_dict=False)

Get the attribute field value of the current record.

Parameters:
  • exclude_system (bool) – Whether to exclude system fields. All fields beginning with “Sm” are system fields. The default is True
  • is_dict (bool) – Whether to return in the form of a dict. If a dict is returned, the key of the dict is the field name and value is the attribute field value. Otherwise, the field value is returned as a list. The default is False
Returns:

attribute field value

Return type:

dict or list

has_next()

Whether there is another record in the record set that can be read, if yes, return True, otherwise return False

Return type:bool
index_of_field(name)

Get the serial number of the specified field name

Parameters:name (str) – field name
Returns:If the field exists, return the serial number of the field, otherwise return -1
Return type:int
is_bof()

Determine whether the current record position is before the first record in the record set (of course, there is no data before the first record), if it is, return True; otherwise, return False.

Return type:bool
is_close()

Determine whether the record set has been closed. Return True if it is closed, otherwise return False.

Return type:bool
is_empty()

Determine whether the record set contains records. Return True if the record set has no data

Return type:bool
is_eof()

Whether the record set reaches the end, if it reaches the end, return True, otherwise return False

Return type:bool
is_readonly()

Determine whether the record set is read-only. Return True if read-only, otherwise return False

Return type:bool
move(count)

Move the current record position by count records and make the record at that position the current record; return True if successful. If count is less than 0, the position moves forward; if greater than 0, backward; if equal to 0, it does not move. If the move goes beyond the range of the record set, False is returned and the current record position does not change.

Parameters:count (int) – the number of records moved
Return type:bool
move_first()

Used to move the current record position to the first record so that the first record becomes the current record. Return True if successful.

Return type:bool
move_last()

Used to move the current record position to the last record, making the last record the current record. Return True if successful

Return type:bool
move_next()

Move the current record position to the next record to make this record the current record. Return True if successful, otherwise False

Return type:bool
move_prev()

Move the current record position to the previous record to make this record the current record. Return True if successful.

Return type:bool
move_to(position)

It is used to move the current record position to the specified position, and the record at the specified position is regarded as the current record. Return True if successful.

Parameters:position (int) – the position to move to, counted from the first record
Return type:bool
refresh()

Refresh the current record set to reflect the changes in the dataset. Return True if successful, otherwise return False. The difference between this method and update() is that update commits modified results, while refresh dynamically refreshes the record set. To dynamically display changes in the dataset during multi-user concurrent operations, refresh is often used.

Return type:bool
seek_id(value)

Search for the record with the specified ID number in the record set and make it the current record. Return True if successful, otherwise False

Parameters:value (int) – ID number to search
Return type:bool
set(data, values=None)

Modify the current record. The record set must be in edit mode; see edit() and batch_edit()

Parameters:
  • data (Point2D or Rectangle or Geometry or Feature) –

    The space object to be written. If the dataset of the record set is an attribute table, pass in None. If data is not empty, the type of the geometric object must match the type of the dataset to write successfully. E.g:

    - Point2D and GeoPoint support writing to point datasets and CAD datasets
    - GeoLine supports writing to line datasets and CAD datasets
    - Rectangle and GeoRegion support writing to region datasets and CAD datasets
    - GeoText supports writing to text datasets and CAD datasets

  • values (dict) – The attribute field values to be written. It must be a dict. The key of the dict is the field name, and the value of the dict is the field value. If data is a Feature, this parameter is ignored because the Feature already contains attribute field values. If data is empty, only the attribute field values will be written.
Returns:

Return True if writing is successful, otherwise False

Return type:

bool

set_batch_record_max(count)

Set the maximum number of records submitted in one batch during a batch update operation. When the update results are committed and the number of updated records exceeds this maximum, the system submits the results in batches of at most this many records, until all updated records are submitted. For example, if the maximum is set to 1000 and 3800 records are updated, the system submits the results in four batches: 1,000 records the first time, 1,000 the second time, 1,000 the third time, and 800 the fourth time.

Parameters:count (int) – The maximum number of records submitted as a result of the batch update operation.
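The batching arithmetic described above can be sketched in plain Python. This helper is illustrative only; the library performs the splitting internally when batch_update() commits:

```python
def batch_sizes(total_records, batch_max):
    """Split total_records into submissions of at most batch_max records,
    mirroring how batch update results are committed in batches."""
    full, remainder = divmod(total_records, batch_max)
    sizes = [batch_max] * full
    if remainder:
        sizes.append(remainder)
    return sizes

# The documented example: 3800 updated records with a maximum of 1000
# per submission are committed in four batches.
print(batch_sizes(3800, 1000))  # [1000, 1000, 1000, 800]
```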
set_value(item, value)

Write a field value to the specified field. The record set must be in edit mode; see edit() and batch_edit()

Parameters:
  • item (str or int) – field name or serial number, cannot be a system field.
  • value (bool or int or float or datetime.datetime or bytes or bytearray or str) –

    The field value to be written. The corresponding relationship between field type and value type is:

    - BOOLEAN: bool
    - BYTE: int
    - INT16: int
    - INT32: int
    - INT64: int
    - SINGLE: float
    - DOUBLE: float
    - DATETIME: datetime.datetime, or int (time stamp in seconds), or a string in the format “%Y-%m-%d %H:%M:%S”
    - LONGBINARY: bytearray or bytes
    - TEXT: str
    - CHAR: str
    - WTEXT: str
    - JSONB: str

Returns:

Return True if successful, otherwise False

Return type:

bool
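For DATETIME fields, the three accepted value forms map naturally onto standard-library conversions. The sketch below shows the equivalent stdlib conversions, not the library's internal code; in particular, whether the library interprets an integer time stamp as UTC or local time is not stated in this documentation:

```python
import datetime

dt = datetime.datetime(2020, 9, 13, 12, 26, 40)

# Form 1: a datetime.datetime object is used as-is.

# Form 2: int time stamp in seconds -> datetime (UTC interpretation shown).
from_ts = datetime.datetime.utcfromtimestamp(1_600_000_000)

# Form 3: a string in the documented "%Y-%m-%d %H:%M:%S" format.
from_str = datetime.datetime.strptime("2020-09-13 12:26:40",
                                      "%Y-%m-%d %H:%M:%S")

print(from_str == dt)  # True
```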

set_values(values)

Set field values. The record set must be in edit mode; see edit() and batch_edit()

Parameters:values (dict) – The attribute field values to be written. Must be a dict, the key value of dict is the field name, and the value of dict is the field value
Returns:Return the number of successfully written fields
Return type:int
statistic(item, stat_mode)

Compute statistics on the specified field, identified by field name or serial number, such as maximum, minimum, average, sum, standard deviation and variance.

Parameters:
  • item (str or int) – field name or serial number
  • stat_mode (StatisticMode or str) – statistical mode
Returns:

Statistics results.

Return type:

float

to_json()

Output the dataset and query parameters of the current record set as a json string. Note that Recordset.to_json only saves the dataset information and query parameters, and only applies to record sets obtained through the DatasetVector query entry points, including DatasetVector.get_recordset(), DatasetVector.query(), DatasetVector.query_with_bounds(), DatasetVector.query_with_distance(), DatasetVector.query_with_filter() and DatasetVector.query_with_ids(). For a record set obtained by an internal query in another function, there is no guarantee that the saved query parameters are consistent with those used in the original query.

Return type:str
update()

Commit changes to the record set, including operations such as adding records, editing records, and modifying field values. After modifying the record set with edit(), you need to call update to commit the modification; call update once for each modified record.

Return type:bool
exception iobjectspy.data.DatasourceReadOnlyError(message)

Bases: Exception

Exception when the datasource is read-only. Some functions need to write data to the datasource or modify the data in the datasource, and the exception information will be returned when the datasource is read-only.

exception iobjectspy.data.DatasourceOpenedFailedError(message)

Bases: Exception

datasource open failed exception

exception iobjectspy.data.ObjectDisposedError(message)

Bases: RuntimeError

The abnormal object after the object is released. This exception will be thrown after checking that the java object bound in the Python instance is released.

exception iobjectspy.data.DatasourceCreatedFailedError(message)

Bases: Exception

datasource creation failed exception

class iobjectspy.data.FieldInfo(name=None, field_type=None, max_length=None, default_value=None, caption=None, is_required=False, is_zero_length_allowed=True)

Bases: object

Field information class. The field information class stores information such as the name, type, default value, and length of a field. Each field corresponds to a FieldInfo. For a field of a vector dataset, only the alias (caption) can always be modified; whether other attributes can be modified depends on the specific engine.

Construct field information object

Parameters:
  • name (str) – field name. A field name can only consist of digits, letters and underscores, and cannot start with a digit or an underscore. When creating a new field, the field name cannot start with SM, because all SuperMap system fields use the SM prefix. In addition, a field name cannot exceed 30 characters and is not case sensitive. The name uniquely identifies the field, so fields cannot share the same name.
  • field_type (FieldType or str) – field type
  • max_length (int) – The maximum length of the field value, only valid for text fields
  • default_value (int or float or datetime.datetime or str or bytes or bytearray) – the default value of the field
  • caption (str) – field alias
  • is_required (bool) – Is it a required field
  • is_zero_length_allowed (bool) – Whether to allow zero length. Only valid for text type (TEXT, WTEXT, CHAR) fields
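The naming rules above (digits, letters and underscores only; no leading digit or underscore; no SM prefix, case-insensitive; at most 30 characters) can be expressed as a small validator. This helper is illustrative only and is not part of the iobjectspy API:

```python
import re

# Must start with a letter; digits and underscores allowed afterwards.
_NAME_PATTERN = re.compile(r'^[A-Za-z][A-Za-z0-9_]*$')

def is_valid_field_name(name):
    """Check a user field name against the documented rules."""
    if not name or len(name) > 30:
        return False
    if name.upper().startswith('SM'):  # SM prefix is reserved for system fields
        return False
    return bool(_NAME_PATTERN.match(name))

print(is_valid_field_name('object_time'))  # True
print(is_valid_field_name('1name'))        # False: starts with a digit
print(is_valid_field_name('SmUserID'))     # False: reserved SM prefix
```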
caption

str – field alias

clone()

Copy a new FieldInfo object

Return type:FieldInfo
default_value

int or float or datetime.datetime or str or bytes or bytearray – the default value of the field

from_dict(values)

Read field information from the dict object

Parameters:values (dict) – dict object containing FieldInfo field information
Returns:self
Return type:FieldInfo
static from_json(value)

Construct FieldInfo object from json string

Parameters:value (str) – json string
Return type:FieldInfo
is_required

bool – Whether the field is required

is_system_field()

Determine whether the current object is a system field. All fields beginning with SM (not case sensitive) are system fields.

Return type:bool
is_zero_length_allowed

bool – Whether to allow zero length. Only valid for text type (TEXT, WTEXT, CHAR) fields

static make_from_dict(values)

Construct a new FieldInfo object from the dict object

Parameters:values (dict) – dict object containing FieldInfo field information
Return type:FieldInfo
max_length

int – the maximum length of the field value, only valid for text fields

name

str – field name. A field name can only consist of digits, letters and underscores, and cannot start with a digit or an underscore; when creating a new field, the field name cannot use the SM prefix, because all SuperMap system fields are prefixed with SM. In addition, a field name cannot exceed 30 characters and is not case sensitive. The name uniquely identifies the field, so fields cannot share the same name.

set_caption(value)

Set the alias of this field. The alias can be non-unique, that is, different fields can have the same alias, and the name is used to uniquely identify a field, so the name cannot be duplicated

Parameters:value (str) – Field alias.
Returns:self
Return type:FieldInfo
set_default_value(value)

Set the default value of the field. When adding a record, if the field is not assigned a value, the default value is used as the value of the field.

Parameters:value (bool or int or float or datetime.datetime or str or bytes or bytearray) – the default value of the field
Returns:self
Return type:FieldInfo
set_max_length(value)

Set the maximum length of the field value; only valid for text fields. Unit: byte

Parameters:value (int) – the maximum length of the field value
Returns:self
Return type:FieldInfo
set_name(value)

Set the field name. A field name can only consist of digits, letters and underscores, and cannot start with a digit or an underscore; when creating a new field, the field name cannot be prefixed with SM, because all SuperMap system fields are prefixed with SM. In addition, a field name cannot exceed 30 characters and is not case sensitive. The name uniquely identifies the field, so fields cannot share the same name.

Parameters:value (str) – field name
Returns:self
Return type:FieldInfo
set_required(value)

Set whether the field is required

Parameters:value (bool) – whether the field is required
Returns:self
Return type:FieldInfo
set_type(value)

Set field type

Parameters:value (FieldType or str) – field type
Returns:self
Return type:FieldInfo
set_zero_length_allowed(value)

Set whether the field allows zero length. Only valid for text fields.

Parameters:value (bool) – Whether the field allows zero length. Allow field zero length to be set to True, otherwise it is False. The default value is True.
Returns:self
Return type:FieldInfo
to_dict()

Output the current object to the dict object

Return type:dict
to_json()

Output the current object as a json string

Return type:str
type

FieldType – Field Type

class iobjectspy.data.Point2D(x=None, y=None)

Bases: object

A two-dimensional point object, using two floating-point numbers to represent the positions of the x and y axes respectively.

Use the x and y values to construct a two-dimensional point object.

Parameters:
  • x (float) – x coordinate value
  • y (float) – y coordinate value
clone()

Copy the current object and return a new object

Return type:Point2D
distance_to(other)

Calculate the distance between the current point and the specified point

Parameters:other (Point2D) – target point
Returns:Return the geometric distance between two points
Return type:float
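Assuming planar coordinates, the distance returned is the ordinary Euclidean distance; a hand computation of the same quantity (illustrative, not the library's internal code):

```python
import math

def planar_distance(p, q):
    """Euclidean distance between two (x, y) pairs, matching what
    distance_to returns under the planar-coordinates assumption."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

print(planar_distance((0.0, 0.0), (3.0, 4.0)))  # 5.0
```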
equal(other, tolerance=0.0)

Determine whether the current point and the specified point are equal within the tolerance range

Parameters:
  • other (Point2D) – point to be judged
  • tolerance (float) – tolerance
Return type:

bool

from_dict(value)

Read point information from dict

Parameters:value (dict) – a dict containing x and y values
Returns:self
Return type:Point2D
static from_json(value)

Construct two-dimensional point coordinates from json string

Parameters:value (str) – json string
Return type:Point2D
static make(p)

Construct a two-dimensional point object

Parameters:p (tuple[float,float] or list[float,float] or GeoPoint or Point2D or dict) – x and y values
Return type:Point2D
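make accepts several input forms (tuple, list, dict, or an existing point object). A sketch of the normalization this implies, using plain tuples in place of the real classes; this hypothetical helper is not part of the iobjectspy API:

```python
def normalize_point(p):
    """Reduce the accepted input forms to an (x, y) tuple.
    Illustrative only; Point2D.make performs the real conversion."""
    if isinstance(p, dict):
        return (float(p['x']), float(p['y']))
    if isinstance(p, (tuple, list)) and len(p) == 2:
        return (float(p[0]), float(p[1]))
    raise TypeError('unsupported point form: %r' % (p,))

print(normalize_point({'x': 1, 'y': 2}))  # (1.0, 2.0)
print(normalize_point([3, 4]))            # (3.0, 4.0)
```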
static make_from_dict(value)

Read point information from dict to construct two-dimensional point object

Parameters:value (dict) – a dict containing x and y values
Return type:Point2D
to_dict()

output as a dict object

Return type:dict
to_json()

Output the current two-dimensional point object as a json string

Return type:str
class iobjectspy.data.Point3D(x=None, y=None, z=None)

Bases: object

clone()

Copy the current object

Return type:Point3D
from_dict(value)

Read point information from dict

Parameters:value (dict) – a dict containing x, y and z values
Returns:self
Return type:Point3D
static from_json(value)

Construct 3D point coordinates from json string

Parameters:value (str) – json string
Return type:Point3D
static make(p)

Construct a 3D point object

Parameters:p (tuple[float,float,float] or list[float,float,float] or GeoPoint3D or Point3D or dict) – x, y and z values
Return type:Point3D
static make_from_dict(value)

Read point information from dict to construct 3D point coordinates

Parameters:value (dict) – a dict containing x, y and z values
Returns:self
Return type:Point3D
to_dict()

output as a dict object

Return type:dict
to_json()

Output current 3D point object as json string

Return type:str
class iobjectspy.data.PointM(x=None, y=None, m=None)

Bases: object

clone()

Copy the current object

Return type:PointM
from_dict(value)

Read point information from dict

Parameters:value (dict) – a dict containing x, y and m values
Returns:self
Return type:PointM
static from_json(value)

Construct routing point coordinates from json string

Parameters:value (str) – json string
Return type:PointM
static make(p)

Construct a routing point object

Parameters:p (tuple[float,float,float] or list[float,float,float] or PointM or dict) – x, y and m values
Return type:PointM
static make_from_dict(value)

Read point information from dict to construct routing point coordinates

Parameters:value (dict) – a dict containing x, y and m values
Returns:self
Return type:PointM
to_dict()

output as a dict object

Return type:dict
to_json()

Output the current route point object as a json string

Return type:str
class iobjectspy.data.Rectangle(left=None, bottom=None, right=None, top=None)

Bases: object

The rectangle object uses four floating point numbers to represent the extent of a rectangle: left is the minimum value in the x direction, right the maximum value in the x direction, bottom the minimum value in the y direction, and top the maximum value in the y direction. When a rectangle represents a geographic range, left is usually the minimum longitude, right the maximum longitude, bottom the minimum latitude, and top the maximum latitude. This type of object is usually used to describe a range, such as the minimum bounding rectangle of a geometric object, the visible extent of a map window, or the bounds of a dataset. It is also used in rectangle selection, rectangle query, and so on.

__eq__(other)

Determine whether the current rectangular object is the same as the specified rectangular object. Only when the upper, lower, left, and right boundaries are exactly the same can it be judged as the same.

Parameters:other (Rectangle) – The rectangle object to be judged.
Returns:If the current object is the same as the rectangle object, return True, otherwise return False
Return type:bool
__getitem__(item)

Treating the rectangular object as four two-dimensional coordinate points, return the vertex at the given index.

Parameters:item (int) – index value 0, 1, 2 or 3
Returns:According to the value of item, return the upper left, upper right, lower right or lower left point respectively
Return type:Point2D
__len__()
Returns:When the rectangular object uses four two-dimensional coordinate points to describe the specific coordinate position, the number of points is returned. Fixed at 4.
Return type:int
bottom

float – Return the coordinate value of the lower boundary of the current rectangular object

center

Point2D – Return the center point of the current rectangle object

clone()

Copy the current object

Return type:Rectangle
contains(item)

Determine whether a point object or rectangle object is inside the current rectangle object

Parameters:item (Point2D or Rectangle) – Two-dimensional point object (with x and y attributes) or rectangle object. The rectangle object must be non-empty (see Rectangle.is_empty())
Returns:Return True if the object to be judged is within the current rectangle, otherwise False
Return type:bool
>>> rect = Rectangle(1.0, 20, 2.0, 3)
>>> rect.contains(Point2D(1.1,10))
True
>>> rect.contains(Point2D(0,0))
False
>>> rect.contains(Rectangle(1.0,10,1.5,5))
True
from_dict(value)

Read the boundary value of the rectangular object from a dictionary object. After reading successfully, the existing value of the rectangular object will be overwritten.

Parameters:value (dict) – dictionary object; the keys of the dictionary must include 'left', 'top', 'right', 'bottom'
Returns:return the current object, self
Return type:Rectangle
static from_json(value)

Construct a rectangle object from the json string.

Parameters:value (str) – json string
Returns:rectangle object
Return type:Rectangle
>>> s ='{"rectangle": [1.0, 1.0, 2.0, 2.0]}'
>>> Rectangle.from_json(s)
(1.0, 1.0, 2.0, 2.0)
has_intersection(item)

Determine whether a two-dimensional point, rectangle object or spatial geometric object intersects the current rectangle object. As long as the object to be judged has an overlapping area with, or touches, the current rectangle object, they are considered to intersect.

Parameters:item (Point2D or Rectangle or Geometry) – the object to be judged: a two-dimensional point, a rectangle object, or a spatial geometric object; point, line, region and text geometric objects are supported.
Returns:Return True if they intersect, otherwise return False
Return type:bool

>>> rc = Rectangle(1,2,2,1)
>>> rc.has_intersection(Rectangle(0,1.5,1.5,0))
True
>>> rc.has_intersection(GeoLine([Point2D(0,0),Point2D(3,3)]))
True
height

float – Return the height value of the current rectangle object

inflate(dx, dy)

Scale the current rectangular object vertically (y direction) and horizontally (x direction). After scaling, the current object will change the top and bottom or left and right values, but the center point remains unchanged.

Parameters:
  • dx (float) – zoom amount in horizontal direction
  • dy (float) – zoom amount in vertical direction
Returns:

self

Return type:

Rectangle

>>> rc = Rectangle(1,2,2,1)
>>> rc.inflate(3,None)
(-2.0, 1.0, 5.0, 2.0)
>>> rc.left == -2
True
>>> rc.right == 5
True
>>> rc.top == 2
True
>>> rc.inflate(0, 2)
(-2.0, -1.0, 5.0, 4.0)
>>> rc.left == -2
True
>>> rc.top == 4
True
intersect(rc)

Specify the intersection of the rectangular object and the current object, and change the current rectangular object.

Parameters:rc (Rectangle) – the rectangle used for intersection operation
Returns:current object, self
Return type:Rectangle
>>> rc = Rectangle(1,1,2,2)
>>> rc.intersect(Rectangle(0,0,1.5,1.5))
(1.0, 1.0, 1.5, 1.5)
is_empty()

Determine whether the rectangle object is empty. The rectangle is empty when any of its left, bottom, right or top boundary values is None, or equals -1.7976931348623157e+308.

Returns:If the rectangle is empty, return True, otherwise return False
Return type:bool
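The two emptiness conditions above can be mirrored in a few lines of plain Python. This sketch is illustrative only (the sentinel equals -sys.float_info.max, the most negative finite double) and is not the library's internal code:

```python
import sys

EMPTY_SENTINEL = -sys.float_info.max  # -1.7976931348623157e+308

def rect_is_empty(left, bottom, right, top):
    """Mirror the documented test: any boundary that is missing (None)
    or equal to the sentinel value makes the rectangle empty."""
    bounds = (left, bottom, right, top)
    return any(b is None or b == EMPTY_SENTINEL for b in bounds)

print(rect_is_empty(1.0, 1.0, 2.0, 2.0))   # False
print(rect_is_empty(None, 1.0, 2.0, 2.0))  # True
```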
left

float – Return the coordinate value of the left boundary of the current rectangular object

static make(value)

Construct a two-dimensional rectangular object

Parameters:value (Rectangle or list or str or dict) – contains the left, bottom, right and top information of a two-dimensional rectangular object
Returns:rectangle object
Return type:Rectangle
static make_from_dict(value)

Construct a rectangular object from a dictionary object.

Parameters:value (dict) – dictionary object; the keys of the dictionary must include 'left', 'top', 'right', 'bottom'
Return type:Rectangle
offset(dx, dy)

Translate this rectangle by dx in the x direction and dy in the y direction. This method will change the current object.

Parameters:
  • dx (float) – The amount to offset the position horizontally.
  • dy (float) – The amount to offset the position vertically.
Returns:

self

Return type:

Rectangle

>>> rc = Rectangle(1,2,2,1)
>>> rc.offset(2,3)
(3.0, 4.0, 4.0, 5.0)
points

tuple[Point2D] – Get the coordinates of the four vertices of the rectangle and return a tuple of 4 two-dimensional points (Point2D). The first point represents the upper left point, the second point represents the upper right point, the third point represents the lower right point, and the fourth point represents the lower left point.

>>> rect = Rectangle(1.0, 3, 2.0, 20)
>>> points = rect.points
>>> len(points)
4
>>> points[0] == Point2D(1.0,20)
True
>>> points[2] == Point2D(2.0,3)
True
right

float – Return the coordinate value of the right boundary of the current rectangular object

set_bottom(value)

Set the lower boundary value of the current rectangular object. If the upper and lower boundary values are both valid, and the lower boundary value is greater than the upper boundary value, the upper and lower boundary values will be swapped

Parameters:value (float) – lower boundary value
Returns:self
Return type:Rectangle
set_left(value)

Set the left boundary value of the current rectangular object. If the left and right boundary values are both valid, and the left boundary value is greater than the right boundary value, the left and right boundary values will be swapped

Parameters:value (float) – left boundary value
Returns:self
Return type:Rectangle
set_right(value)

Set the right boundary value of the current rectangular object. If the left and right boundary values are both valid, and the left boundary value is greater than the right boundary value, the left and right boundary values will be swapped

Parameters:value (float) – right boundary value
Returns:self
Return type:Rectangle
>>> rc = Rectangle(left=10).set_right(5.0)
>>> rc.right, rc.left
(10.0, 5.0)
set_top(value)

Set the upper boundary value of the current rectangular object. If the upper and lower boundary values are both valid, and the lower boundary value is greater than the upper boundary value, the upper and lower boundary values will be swapped

Parameters:value (float) – upper boundary value
Returns:self
Return type:Rectangle
>>> rc = Rectangle(bottom=10).set_top(5.0)
>>> rc.top, rc.bottom
(10.0, 5.0)
to_dict()

Return the rectangle object as a dictionary object

Return type:dict
to_json()

Get the json string form of the rectangle object

Return type:str
>>> Rectangle(1,1,2,2).to_json()
'{"rectangle": [1.0, 1.0, 2.0, 2.0]}'
to_region()

Use a geometric area object to represent the range of a rectangular object. The point coordinate order of the returned area object is: the first point represents the upper left point, the second point represents the upper right point, and the third point represents the lower right point. The fourth point represents the lower left point, and the fifth point has the same coordinates as the first point.

Returns:Return the geometric region object represented by the rectangle range
Return type:GeoRegion
>>> rc = Rectangle(2.0, 20, 3.0, 10)
>>> geoRegion = rc.to_region()
>>> print(geoRegion.area)
10.0
to_tuple()

Get a tuple object, the elements of the tuple object are the left, bottom, right, and top of the rectangle

Return type:tuple
top

float – Return the coordinate value of the upper boundary of the current rectangular object

union(rc)

Merge the specified rectangle into the current rectangle object. After the merge, the range of the current rectangle is the union of its original range and the specified rectangle.

Parameters:rc (Rectangle) – the specified rectangle object for merging
Returns:self
Return type:Rectangle
>>> rc = Rectangle(1,1,2,2)
>>> rc.union(Rectangle(0,-2,1,0))
(0.0, -2.0, 2.0, 2.0)
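The union semantics can be sketched in plain Python using the (left, bottom, right, top) tuple order of to_tuple(); this is an illustration of the behavior, not the iobjectspy implementation:

```python
def rect_union(a, b):
    """Union of two rectangles given as (left, bottom, right, top) tuples."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

# Mirrors the example above: Rectangle(1,1,2,2) merged with Rectangle(0,-2,1,0)
print(rect_union((1.0, 1.0, 2.0, 2.0), (0.0, -2.0, 1.0, 0.0)))
# → (0.0, -2.0, 2.0, 2.0)
```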
width

float – Return the width value of the current rectangle object

class iobjectspy.data.Geometry

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The base class of geometric objects, used to represent the spatial characteristics of geographic entities, and provide related processing methods. According to the different spatial characteristics of geographic entities, they are described by points (GeoPoint), lines (GeoLine), regions (GeoRegion), etc.

bounds

Rectangle – Return the minimum bounding rectangle of a geometric object. The minimum bounding rectangle of a point object degenerates to a point: the left boundary coordinate equals the right boundary coordinate, the upper boundary coordinate equals the lower boundary coordinate, and they are the x coordinate and y coordinate of the point.

clone()

Copy object

Return type:Geometry
static from_geojson(geojson)

Read information from geojson to construct a geometric object

Parameters:geojson (str) – geojson string
Returns:geometric object
Return type:Geometry
static from_json(value)

Construct a geometric object from json. Reference: py:meth:to_json

Parameters:value (str) – json string
Return type:Geometry
from_xml(xml)

Reconstruct the geometric object from the given XML string, which must conform to the GML 3.0 specification. Calling this method first clears the original data of the geometric object, and then rebuilds the object from the XML string. GML (Geography Markup Language) is an XML-based spatial information encoding standard proposed by the OpenGIS Consortium (OGC); it can represent both the spatial data and the non-spatial attribute data of geographic objects, and has been strongly supported by many companies, such as Oracle, Galdos, MapInfo, and CubeWerx. As a spatial data encoding specification, GML provides a set of basic tags, a common data model, and a mechanism for users to construct GML Application Schemas.

Parameters:xml (str) – string in XML format
Returns:Return True if the construction is successful, otherwise False.
Return type:bool
get_inner_point()

Get the inner point of a geometric object

Return type:Point2D
get_style()

Get the object style of the geometric object

Return type:GeoStyle
hit_test(point, tolerance)

Test whether the specified point is within the range of the geometric object under the given tolerance. That is, judge whether the circle centered at the test point with the tolerance as its radius intersects the geometric object. Return True if it does; otherwise return False.

Parameters:
  • point (Point2D) – test point
  • tolerance (float) – tolerance value, the unit is the same as the unit of the dataset
Return type:

bool
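For a point geometry, the test reduces to a simple distance comparison; a minimal plain-Python sketch of the check (illustrative only, not the iobjectspy code):

```python
import math

def hit_test_point(geom_xy, test_xy, tolerance):
    """True if the circle of radius `tolerance` around `test_xy`
    touches the point geometry located at `geom_xy`."""
    return math.hypot(geom_xy[0] - test_xy[0],
                      geom_xy[1] - test_xy[1]) <= tolerance

print(hit_test_point((10.0, 20.0), (10.5, 20.0), 1.0))   # True
print(hit_test_point((10.0, 20.0), (12.0, 20.0), 1.0))   # False
```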

id

int – Return the ID of the geometric object

is_empty()

Determine whether the geometric object is empty. Different geometric object types have different conditions for being empty.

Return type:bool
linear_extrude(height=0.0, twist=0.0, scaleX=1.0, scaleY=1.0, bLonLat=False)

Linear extrusion. Supports two-dimensional and three-dimensional region objects, two-dimensional and three-dimensional circles, and GeoRectangle.

Parameters:
  • height (float) – extrusion height
  • twist (float) – rotation angle
  • scaleX (float) – scale factor along the X axis
  • scaleY (float) – scale factor along the Y axis
  • bLonLat (bool) – whether the coordinates are longitude/latitude
Returns:GeoModel3D object

offset(dx, dy)

Offset this geometric object by the specified amount.

Parameters:
  • dx (float) – the amount to offset the X coordinate
  • dy (float) – the amount to offset the Y coordinate
Returns:

self

Return type:

Geometry

resize(rc)

Scale this geometric object to make its minimum enclosing rectangle equal to the specified rectangular object. For geometric points, this method only changes its position and moves it to the center point of the specified rectangle; for text objects, this method will scale the text size.

Parameters:rc (Rectangle) – The range of the geometric object after resizing.
rotate(base_point, angle)

Use the specified point as the base point to rotate this geometric object by the specified angle, the counterclockwise direction is the positive direction, and the angle is in degrees.

Parameters:
  • base_point (Point2D) – The base point of the rotation.
  • angle (float) – the angle of rotation, in degrees
rotate_extrude(angle, slices)

Rotational extrusion. Supports two-dimensional and three-dimensional region objects; the object must be constructed in a planar coordinate system and must not cross the Y axis.

Parameters:
  • angle (float) – rotation angle
  • slices (int) – number of slices used for the rotation
Returns:GeoModel3D object

set_empty()

Empties the spatial data in the geometric object, but the identifier and geometric style of the geometric object remain unchanged.

set_id(value)

Set the ID value of the geometric object

Parameters:value (int) – ID value.
Returns:self
Return type:Geometry
set_style(style)

Set the style of geometric objects

Parameters:style (GeoStyle) – geometric object style
Returns:return the object itself
Return type:Geometry
to_geojson()

Return the current object information in geojson format. Only point, line and area objects are supported.

Return type:str
to_json()

Output the current object as a json string

Return type:str
to_xml()

According to the GML 3.0 specification, the spatial data of the geometric object is output as an XML string. Note: The XML string output by the geometric object only contains the geographic coordinate value of the geometric object, and does not contain the style and ID of the geometric object.

Return type:str
type

GeometryType – Return the type of geometry object

class iobjectspy.data.GeoPoint(point=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

Point geometry object class, generally used to describe point geographic entities. Both Point2D and GeoPoint can represent a two-dimensional point; the difference is that GeoPoint describes a feature while Point2D describes a position. A GeoPoint given different geometric styles can represent different features, whereas Point2D is simply a coordinate point widely used for positioning.

Construct a point geometry object.

Parameters:point (Point2D or GeoPoint or tuple[float] or list[float]) – point object
bounds

Rectangle – Get the geographic range of the point geometry object

create_buffer(distance, prj=None, unit=None)

Construct a buffer object at the current location

Parameters:
  • distance (float) – The radius of the buffer. If prj and unit are set, the unit of unit will be used as the unit of buffer radius.
  • prj (PrjCoordSys) – Describe the projection information of the point geometry object
  • unit (Unit or str) – buffer radius unit
Returns:

buffer region object

Return type:

GeoRegion
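Conceptually, a point buffer is a circular region of the given radius around the point. A rough plain-Python sketch that approximates such a circle with a polygon (illustrative only; not how iobjectspy builds the GeoRegion):

```python
import math

def point_buffer(x, y, distance, segments=72):
    """Approximate a circular buffer around (x, y) as a closed ring."""
    ring = [(x + distance * math.cos(2 * math.pi * i / segments),
             y + distance * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]
    ring.append(ring[0])          # close the ring
    return ring

def shoelace_area(ring):
    """Signed area of a closed ring (positive if counterclockwise)."""
    return 0.5 * sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(ring, ring[1:]))

ring = point_buffer(0.0, 0.0, 10.0)
print(round(shoelace_area(ring), 1))   # ≈ 313.8, approaching pi * 10**2 ≈ 314.16
```

Increasing `segments` makes the polygon area converge to the true circle area.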

get_x()

Get the X coordinate value of the point geometry object

Return type:float
get_y()

Get the Y coordinate value of the point geometry object

Return type:float
point

Return the geographic location of the point geometry object

Return type:Point2D
set_point(point)

Set the geographic location of the point geometry object

Parameters:point (Point2D) – the geographic location of the point geometric object
Returns:self
Return type:GeoPoint
set_x(x)

Set the X coordinate of the point geometry object

Parameters:x (float) – X coordinate value
Returns:self
Return type:GeoPoint
set_y(y)

Set the Y coordinate of the point geometry object

Parameters:y (float) – Y coordinate value
Returns:self
Return type:GeoPoint
to_json()

Return the current point object in simple json format.

E.g.:

>>> geo = GeoPoint((10,20))
>>> print(geo.to_json())
{"Point": [10.0, 20.0], "id": 0}
Returns:simple json format string
Return type:str
class iobjectspy.data.GeoPoint3D(point=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

Point geometry object class, generally used to describe point geographic entities. Both Point3D and GeoPoint3D can represent a three-dimensional point; the difference is that GeoPoint3D describes a feature while Point3D describes a position. A GeoPoint3D given different geometric styles can represent different features, whereas Point3D is simply a coordinate point widely used for positioning.

Construct a point geometry object.

Parameters:point (Point3D or GeoPoint3D or tuple[float] or list[float]) – point object
bounds

Rectangle – Get the geographic range of the point geometry object

get_x()

Get the X coordinate value of the point geometry object

Return type:float
get_y()

Get the Y coordinate value of the point geometry object

Return type:float
get_z()

Get the Z coordinate value of the point geometry object

Return type:float
point

Return the geographic location of the point geometry object

Return type:Point3D
set_point(point)

Set the geographic location of the point geometry object

Parameters:point (Point3D) – the geographic location of the point geometric object
Returns:self
Return type:GeoPoint3D
set_x(x)

Set the X coordinate of the point geometry object

Parameters:x (float) – X coordinate value
Returns:self
Return type:GeoPoint3D
set_y(y)

Set the Y coordinate of the point geometry object

Parameters:y (float) – Y coordinate value
Returns:self
Return type:GeoPoint3D
set_z(z)

Set the z coordinate of the point geometry object

Parameters:z (float) – Z coordinate value
Returns:self
Return type:GeoPoint3D
to_json()

Return the current point object in simple json format.

E.g.:

>>> geo = GeoPoint3D((10,20,15))
>>> print(geo.to_json())
{"Point3D": [10.0, 20.0, 15.0], "id": 0}
Returns:simple json format string
Return type:str
class iobjectspy.data.GeoLine(points=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

The line geometry object class. This class is used to describe linear geographic entities, such as rivers, roads, and contours, and is generally represented by one or more ordered coordinate point sets. The direction of the line is determined by the order of the coordinate points; you can also call the ‘reverse’ method to change the direction of the line. A line object is composed of one or more parts; each part is called a sub-object of the line object, and each sub-object is represented by an ordered set of coordinate points. Sub-objects can be added, deleted, and modified.

Construct a line geometry object

Parameters:points (list[Point2D] or tuple[Point2D] or GeoLine or GeoRegion or Rectangle) – objects containing point string information, can be list[Point2D], tuple[Point2D], GeoLine, GeoRegion and Rectangle
add_part(points)

Add a sub-object to this line geometry object. Return the serial number of the added sub-object on success.

Parameters:points (list[Point2D]) – an ordered set of points
Return type:int
clone()

Copy object

Return type:GeoLine
convert_to_region()

Convert the current line object to a region geometry object.
-For unclosed line objects, the start and end points are automatically connected.
-The conversion fails if any sub-object of the GeoLine instance has fewer than 3 points.

Return type:GeoRegion
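The automatic closing rule described above can be sketched in plain Python (an illustration of the rule, not the iobjectspy implementation):

```python
def close_ring(points):
    """Close an open point list into a ring, as line-to-region conversion
    does; returns None for parts with fewer than 3 points."""
    if len(points) < 3:
        return None                     # too few points to form a region
    if points[0] != points[-1]:
        points = points + [points[0]]   # connect the end back to the start
    return points

print(close_ring([(1.0, 2.0), (2.0, 3.0), (1.0, 5.0)]))
# → [(1.0, 2.0), (2.0, 3.0), (1.0, 5.0), (1.0, 2.0)]
```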
create_buffer(distance, prj=None, unit=None)

Construct the buffer object of the current line object. A full round-head buffer of the line object will be constructed.

Parameters:
  • distance (float) – The radius of the buffer. If prj and unit are set, the unit of unit will be used as the unit of buffer radius.
  • prj (PrjCoordSys) – Describe the projection information of the point geometry object
  • unit (Unit or str) – buffer radius unit
Returns:

buffer region object

Return type:

GeoRegion

find_point_on_line_by_distance(distance)

Find the point on the line at the specified distance, measuring from the starting point of the line.
-When ‘distance’ is greater than the line length, the end point of the last sub-object is returned.
-When ‘distance’ is 0, the starting point of the line geometry object is returned.
-When the line geometry object has multiple sub-objects, the search proceeds in sub-object order.

Parameters:distance (float) – the distance to find the point
Returns:If the search succeeds, return the point found; otherwise return None
Return type:Point2D
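The lookup amounts to walking the line's segments and interpolating within the segment where the distance falls. A simplified single-part sketch in plain Python (not the iobjectspy implementation):

```python
import math

def point_at_distance(points, distance):
    """Return the (x, y) on the polyline `points` at `distance` from its
    start; returns the last point if distance exceeds the line length."""
    if distance <= 0:
        return points[0]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if distance <= seg:
            t = distance / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        distance -= seg
    return points[-1]                 # distance greater than the line length

print(point_at_distance([(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)], 12.0))
# → (10.0, 2.0)
```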
get_part(item)

Return the sub-object with the specified serial number in this line geometry object, as an ordered point collection. When the two-dimensional line object is a simple line object, passing in 0 Return the set of nodes of the line object.

Parameters:item (int) – The serial number of the sub-object.
Returns:the node of the sub-object
Return type:list[Point2D]
get_part_count()

Get the number of sub-objects

Return type:int
get_parts()

Get all point coordinates of the current geometric object. Each sub-object uses a list storage

>>> points = [Point2D(1,2),Point2D(2,3)]
>>> geo = GeoLine(points)
>>> geo.add_part([Point2D(3,4),Point2D(4,5)])
>>> print(geo.get_parts())
[[(1.0, 2.0), (2.0, 3.0)], [(3.0, 4.0), (4.0, 5.0)]]
Returns:contains a list of all point coordinates
Return type:list[list[Point2D]]
insert_part(item, points)

Insert a sub-object at the specified position in the geometric object of this line. Return True if successful, otherwise False

Parameters:
  • item (int) – insert position
  • points (list[Point2D]) – inserted ordered set of points
Return type:

bool

length

float – return the length of the line object

remove_part(item)

Delete the sub-object of the specified number in the geometric object of this line.

Parameters:item (int) – the serial number of the specified sub-object
Returns:Return true if successful, otherwise false
Return type:bool
to_json()

Output the current object as a Simple Json string

>>> points = [Point2D(1,2), Point2D(2,3), Point2D(1,5), Point2D(1,2)]
>>> geo = GeoLine(points)
>>> print(geo.to_json())
{"Line": [[[1.0, 2.0], [2.0, 3.0], [1.0, 5.0], [1.0, 2.0]]], "id": 0}
Return type:str
class iobjectspy.data.GeoLineM(points=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

Route object: a linear feature composed of points that have X and Y coordinates plus a linear measure value. The M value (Measure) is often used in traffic network analysis to mark the distance from points along a route to some reference point, for example milestones on highways; traffic control departments use milestones to mark and manage road conditions, vehicle speed limits, accident locations, and so on.

Construct a route object

Parameters:points (list[PointM], tuple[PointM], GeoLineM) – objects containing point string information, can be list[PointM], tuple[PointM], GeoLineM
add_part(points)

Append a sub-object to the current object. Return the serial number of the added sub-object on success.

Parameters:points (list[PointM]) – an ordered set of points
Return type:int
clone()

Copy object

Return type:GeoLineM
convert_to_line()

Convert the route object into a two-dimensional line geometry object and return it.

Return type:GeoLine
convert_to_region()

Convert the current object to a region geometry object.
-For objects that are not closed, the start and end points are automatically connected during conversion.
-The conversion fails if any sub-object of the GeoLineM instance has fewer than 3 points.

Return type:GeoRegion
find_point_on_line_by_distance(distance)

Find the point on the line at the specified distance, measuring from the starting point of the line.
-When ‘distance’ is greater than the line length, the end point of the last sub-object is returned.
-When ‘distance’ is 0, the starting point of the line geometry object is returned.
-When the line geometry object has multiple sub-objects, the search proceeds in sub-object order.

Parameters:distance (float) – the distance to find the point
Returns:If the search succeeds, return the point found; otherwise return None
Return type:Point2D
get_distance_at_measure(measure, is_ignore_gap=True, sub_index=-1)

Return the distance from the point object corresponding to the specified M value to the starting point of the specified route sub-object.

Parameters:
  • measure (float) – The value of M specified.
  • is_ignore_gap (bool) – Specify whether to ignore the distance between sub-objects.
  • sub_index (int) – The index value of the specified route sub-object. If it is -1, start counting from the first sub-object, otherwise start counting from the specified sub-object
Returns:

Specify the distance from the point object corresponding to the M value to the starting point of the specified route sub-object. The unit is the same as the unit of the dataset to which the route object belongs.

Return type:

float

get_max_measure()

Return the maximum linear metric value

Return type:float
get_measure_at_distance(distance, is_ignore_gap=True, sub_index=-1)

Return the M value of the point object at the specified distance.

Parameters:
  • distance (float) – The specified distance. The distance refers to the distance to the starting point of the route. The unit is the same as the unit of the dataset to which the route object belongs.
  • is_ignore_gap (bool) – Whether to ignore the distance between sub-objects.
  • sub_index (int) – The sequence number of the route sub-object to be returned. If it is -1, the calculation starts from the first object, otherwise the calculation starts from the specified sub-object.
Returns:

The M value of the point object at the specified distance.

Return type:

float
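Linear referencing of this kind boils down to interpolating the measure along the route. A minimal single-part sketch using plain (x, y, m) tuples (a hypothetical helper, not the iobjectspy API):

```python
import math

def measure_at_distance(points, distance):
    """Interpolate the M value at `distance` along a route given as
    (x, y, m) tuples; clamps to the end measures outside the route."""
    if distance <= 0:
        return points[0][2]
    for (x1, y1, m1), (x2, y2, m2) in zip(points, points[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if distance <= seg:
            return m1 + (m2 - m1) * distance / seg
        distance -= seg
    return points[-1][2]

route = [(0.0, 0.0, 0.0), (10.0, 0.0, 100.0)]
print(measure_at_distance(route, 5.0))   # → 50.0
```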

get_measure_at_point(point, tolerance, is_ignore_gap)

Return the M value at the specified point of the route object.

Parameters:
  • point (Point2D) – The specified point object.
  • tolerance (float) – tolerance value, used to judge whether the specified point is on the route object; if the distance from the point to the route object is greater than this value, the point is considered invalid and no value is returned. The unit is the same as that of the dataset to which the route object belongs.
  • is_ignore_gap (bool) – Whether to ignore the gap between sub-objects.
Returns:

The M value at the specified point of the route object.

Return type:

float

get_min_measure()

Return the smallest linear metric value.

Return type:float
get_part(item)

Return the sub-object with the specified serial number in the current object, as an ordered point collection. When the current object is a simple route object and 0 is passed in, the result is the set of nodes of the object.

Parameters:item (int) – The serial number of the subobject.
Returns:the node of the sub-object
Return type:list[PointM]
get_part_count()

Get the number of sub-objects

Return type:int
get_parts()

Get all point coordinates of the current object. Each sub-object uses a list storage

Returns:contains a list of all point coordinates
Return type:list[list[PointM]]
insert_part(item, points)

Insert a sub-object at the specified position in the current object. Return True if successful, otherwise False

Parameters:
  • item (int) – insert position
  • points (list[PointM]) – inserted ordered set of points
Return type:

bool

length

float – Return the length of the current object

remove_part(item)

Delete the sub-object of the specified serial number in this object.

Parameters:item (int) – the serial number of the specified sub-object
Returns:Return true if successful, otherwise false
Return type:bool
to_json()

Output the current object as a Simple Json string

Return type:str
class iobjectspy.data.GeoRegion(points=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

The surface geometry object class, derived from the Geometry class.

This class is used to describe planar geographic entities, such as administrative regions, lakes, and residential areas, and is generally represented by one or more ordered coordinate point sets. A region geometry object is composed of one or more parts; each part is called a sub-object of the region, and each sub-object is represented by an ordered set of coordinate points whose start point and end point coincide. Sub-objects can be added, deleted, and modified.

Construct a surface geometry object

Parameters:points (list[Point2D] or tuple[Point2D] or GeoLine or GeoRegion or Rectangle) – objects containing point string information, can be list[Point2D], tuple[Point2D], GeoLine, GeoRegion and Rectangle
add_part(points)

Add a sub-object to this geometry object. Return the serial number of the added sub-object on success.

Parameters:points (list[Point2D]) – an ordered set of points
Return type:int
area

float – Return the area of the region

contains(point)

Determine whether the point is inside the region

Parameters:point (Point2D or GeoPoint) – the two-dimensional point object to be judged
Returns:Return True if the point is inside the region, otherwise return False
Return type:bool
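Point-in-region testing is classically implemented with a ray-casting parity check. A simplified single-ring sketch in plain Python (illustrative only; the actual implementation may differ):

```python
def point_in_ring(point, ring):
    """Ray-casting test: True if `point` lies inside the closed ring."""
    x, y = point
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        # Does the horizontal ray from (x, y) to the right cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0), (0.0, 0.0)]
print(point_in_ring((1.0, 1.0), square))   # True
print(point_in_ring((5.0, 1.0), square))   # False
```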
convert_to_line()

Convert current area object to line object

Return type:GeoLine
create_buffer(distance, prj=None, unit=None)

Build the buffer object of the current face object

Parameters:
  • distance (float) – The radius of the buffer. If prj and unit are set, the unit of unit will be used as the unit of buffer radius.
  • prj (PrjCoordSys) – Describe the projection information of the point geometry object
  • unit (Unit or str) – buffer radius unit
Returns:

buffer region object

Return type:

GeoRegion

get_part(item)

Return the sub-object of the specified sequence number in this geometry object, and Return the sub-object in the form of an ordered point collection.

Parameters:item (int) – The serial number of the sub-object.
Returns:the node of the sub-object
Return type:list[Point2D]
get_part_count()

Get the number of sub-objects

Return type:int
get_parts()

Get all point coordinates of the current geometric object. Each sub-object uses a list storage

>>> points = [Point2D(1,2), Point2D(2,3), Point2D(1,5), Point2D(1,2)]
>>> geo = GeoRegion(points)
>>> geo.add_part([Point2D(2,3), Point2D(4,3), Point2D(4,2), Point2D(2,3)])
>>> geo.get_parts()
[[(1.0, 2.0), (2.0, 3.0), (1.0, 5.0), (1.0, 2.0)],
[(2.0, 3.0), (4.0, 3.0), (4.0, 2.0), (2.0, 3.0)]]
Returns:contains a list of all point coordinates
Return type:list[list[Point2D]]
get_parts_topology()

Determine the island-hole relationship between the sub-objects of the region object. The returned array consists of the values 1 and -1 and has the same size as the number of sub-objects of the region object, where 1 means the sub-object is an island and -1 means the sub-object is a hole.

Return type:list[int]
get_precise_area(prj)

Accurately calculate the area of the polygon under the projection reference system

Parameters:prj (PrjCoordSys) – Specified projected coordinate system
Returns:the area of the 2D surface geometry object
Return type:float
insert_part(item, points)

Insert a sub-object at the specified position in this geometric object. Return True if successful, otherwise False

Parameters:
  • item (int) – insert position
  • points (list[Point2D]) – inserted ordered set of points
Return type:

bool

is_counter_clockwise(sub_index)

Determine the direction of the specified sub-object of the region object. True means the direction is counterclockwise, and False means the direction is clockwise.

Parameters:sub_index (int) – the serial number of the sub-object

Return type:bool
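Ring orientation can be determined from the sign of the shoelace (signed) area. A plain-Python sketch of such a test (illustrative only, not the iobjectspy code):

```python
def is_counter_clockwise(ring):
    """True if the closed ring's signed (shoelace) area is positive,
    i.e. the vertices run counterclockwise."""
    signed_area = 0.5 * sum(x1 * y2 - x2 * y1
                            for (x1, y1), (x2, y2) in zip(ring, ring[1:]))
    return signed_area > 0

ccw = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 0.0)]
print(is_counter_clockwise(ccw))           # True
print(is_counter_clockwise(ccw[::-1]))     # False
```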
perimeter

float – Return the perimeter of the region

protected_decompose()

Protective decomposition of a region object. Unlike the simple decomposition of a combined object into its sub-objects, protective decomposition decomposes a complex region object with multiple nested islands and holes into region objects with only one level of nesting. The rationality of the decomposition cannot be guaranteed if some sub-objects of the region object partially overlap.

Returns:The object obtained after protective decomposition.
Return type:list[GeoRegion]
remove_part(item)

Delete the sub-objects of the specified sequence number in this geometry object.

Parameters:item (int) – the serial number of the specified sub-object
Returns:Return true if successful, otherwise false
Return type:bool
to_json()

Output the current object as a Simple Json string

>>> points = [Point2D(1,2), Point2D(2,3), Point2D(1,5), Point2D(1,2)]
>>> geo = GeoRegion(points)
>>> print(geo.to_json())
{"Region": [[[1.0, 2.0], [2.0, 3.0], [1.0, 5.0], [1.0, 2.0]]], "id": 0}
Return type:str
class iobjectspy.data.TextPart(text=None, anchor_point=None, rotation=None)

Bases: object

Text sub-object class, used to represent a sub-object of a text object (GeoText). It stores the text content, anchor point, and rotation angle of the sub-object, and provides related processing methods.

Construct a text sub-object.

Parameters:
  • text (str) – The text content of the text sub-object instance.
  • anchor_point (Point2D) – The anchor point of the text sub-object instance.
  • rotation (float) – The rotation angle of the text sub-object, in degrees, counterclockwise is the positive direction.
anchor_point

The anchor point of the text sub-object instance. The anchor point and the text alignment together determine the display position of the text sub-object; for how they do so, see the TextAlignment class.

clone()

Copy object

Return type:TextPart
from_dict(values)

Read information about text sub-objects from dict

Parameters:values (dict) – text sub-object
Returns:self
Return type:TextPart
static from_json(value)

Read information from json string to construct text sub-object

Parameters:value (str) – json string
Return type:TextPart
static make_from_dict(values)

Read information from dict to construct text sub-object

Parameters:values (dict) – text sub-object
Return type:TextPart
rotation

The rotation angle of the text sub-object, in degrees, counterclockwise is the positive direction.

set_anchor_point(value)

Set the anchor point of this text sub-object. The alignment of the anchor point and the text together determines the display position of the text sub-object. For how the alignment of the anchor point and the text determines the display position of the text sub-object, please refer to the TextAlignment class.

Parameters:value (Point2D) – the anchor point of the text subobject
Returns:self
Return type:TextPart
set_rotation(value)

Set the rotation angle of this text sub-object. Counterclockwise is the positive direction and the unit is degree. The rotation angle returned by the text sub-object after being stored by the data engine has an accuracy of 0.1 degree; for the text sub-object directly constructed by the constructor, the accuracy of the returned rotation angle remains unchanged.

Parameters:value (float) – the rotation angle of the text sub-object
Returns:self
Return type:TextPart
set_text(value)

Set the text sub-content of the text sub-object

Parameters:value (str) – text sub-content of text sub-object
Returns:self
Return type:TextPart
set_x(value)

Set the abscissa of the anchor point of this text sub-object

Parameters:value (float) – the abscissa of the anchor point of this text subobject
Returns:self
Return type:TextPart
set_y(value)

Set the ordinate of the anchor point of the text sub-object

Parameters:value (float) – the ordinate of the anchor point of the text object
Returns:self
Return type:TextPart
text

str – the text content of this text sub-object

to_dict()

Output current sub-object as dict

Return type:dict
to_json()

Output the current sub-object as a json string

Return type:str
x

float – the abscissa of the anchor point of the text sub-object, the default value is 0

y

float – the vertical coordinate of the anchor point of the text sub-object, the default value is 0

class iobjectspy.data.GeoText(text_part=None, text_style=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry

Text class, derived from the Geometry class. This class is mainly used to label features and provide necessary annotations. A text object is composed of one or more parts, and each part is called a sub-object of the text object. Each sub-object is an instance of TextPart. All sub-objects of the same text object are displayed with the same style, namely the text style of the text object.

Construct a text object

Parameters:
  • text_part (TextPart) – Text sub-object.
  • text_style (TextStyle) – the style of the text object
add_part(text_part)

Add a text subobject

Parameters:text_part (TextPart) – text sub-object
Returns:Return the serial number of the sub-object if the addition is successful, or -1 if it fails.
Return type:int
get_part(index)

Get the specified text subobject

Parameters:index (int) – the serial number of the text subobject
Returns:the text sub-object with the specified serial number
Return type:TextPart
get_part_count()

Get the number of text sub-objects

Return type:int
get_parts()

Get all text sub-objects of the current text object

Return type:list[TextPart]
get_text()

The content of the text object. If the object has multiple sub-objects, its value is the sum of the sub-object strings.

Return type:str
remove_part(index)

Delete the text sub-object of the specified serial number of this text object.

Parameters:index (int) –
Returns:Return True if the deletion is successful, otherwise Return False.
Return type:bool
set_part(index, text_part)

Modify the sub-object of the specified number of this text object, that is, replace the original text sub-object with the new text sub-object.

Parameters:
  • index (int) – text sub-object number
  • text_part (TextPart) – text sub-object
Returns:

Return True if the setting is successful, otherwise return False.

Return type:

bool

set_text_style(text_style)

Set the text style of the text object. The text style is used to specify the font, width, height and color of the text object when it is displayed.

Parameters:text_style (TextStyle) – The text style of the text object.
Returns:self
Return type:GeoText
text_style

TextStyle – The text style of the text object. The text style is used to specify the font, width, height and color of the text object when it is displayed.

to_json()

Output the current object as a json string

Return type:str
class iobjectspy.data.TextStyle

Bases: object

The text style class, used to set the style of GeoText objects.

alignment

TextAlignment – The alignment of the text.

back_color

tuple – the background color of the text, the default color is black

border_spacing_width

int – Return the distance between the edge of the text background rectangle and the edge of the text, in pixels

clone()

Copy object

Return type:TextStyle
font_height

float – The height of the text font. When the text size is fixed, the unit is millimeters; otherwise, geographic coordinate units are used. The default value is 6.

font_name

str – Return the name of the text font. If a certain font is specified for a text layer in a map under Windows and the map data needs to be used under Linux, make sure the same font also exists under Linux; otherwise, the text layer will not display correctly. The default font name is “Times New Roman”.

font_scale

float – the scale of the annotation font

font_width

float – The width of the text. The width of the font is measured in English characters: one Chinese character is equivalent to two English characters. When the size is fixed, the unit is 1 mm, otherwise the geographic coordinate unit is used.
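The width convention above (one Chinese character counts as two English characters) can be sketched as a rough cell-count heuristic (illustrative only, not the engine's actual text metrics; the code-point threshold is an assumption):

```python
def char_cells(text):
    # Rough heuristic: count CJK/full-width characters as two English-character
    # cells and everything else as one. The 0x2E7F threshold is a simplification
    # for illustration, not the library's actual measurement.
    return sum(2 if ord(c) > 0x2E7F else 1 for c in text)


print(char_cells('map'))   # 3 cells
print(char_cells('地图'))  # 4 cells (two CJK characters)
```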

fore_color

tuple – The foreground color of the text, the default color is black.

from_dict(values)

Read text style information from dict

Parameters:values (dict) – a dict containing text style information
Returns:self
Return type:TextStyle
static from_json(value)

Construct TextStyle object from json string

Parameters:value (str) –
Return type:TextStyle
is_back_opaque

bool – Whether the text background is opaque, True means the text background is opaque. The default is opaque.

is_bold

bool – Return whether the text is bold, True means bold

is_italic

bool – Whether the text is italicized, True means italicized

is_outline

bool – Return whether to display the background of the text in an outline way

is_shadow

bool – Whether the text has a shadow. True means to add a shadow to the text

is_size_fixed

bool – Whether the text size is fixed. True means the text size is fixed; False means it is not fixed.

is_strikeout

bool – Whether the text font should be strikethrough. True means strikethrough.

is_underline

bool – Whether the text font should be underlined. True means underline.

italic_angle

float – Return the font tilt angle, in degrees, accurate to 0.1 degrees. A tilt angle of 0 degrees gives the system's default font slant style. The vertical axis is the starting zero-degree line: the left side of the vertical axis is positive, and the right side is negative. The maximum allowed angle is 60 and the minimum is -60; values greater than 60 are treated as 60, and values less than -60 are treated as -60.
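The clamping rule described above can be sketched in pure Python (an illustration of the documented behavior, not the library's implementation; clamp_italic_angle is a hypothetical helper):

```python
def clamp_italic_angle(angle):
    # Angles beyond +/-60 degrees are clamped, per the documented rule;
    # results are kept to a precision of 0.1 degrees.
    return round(max(-60.0, min(60.0, angle)), 1)


print(clamp_italic_angle(75))     # clamped to 60.0
print(clamp_italic_angle(-80.25)) # clamped to -60.0
print(clamp_italic_angle(12.34))  # rounded to 12.3
```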

static make_from_dict(values)

Read text style information from dict to construct TextStyle

Parameters:values (dict) – a dict containing text style information
Return type:TextStyle
opaque_rate

int – The opacity of the annotation text. The range of opacity is 0-100.

outline_width

float – The width of the text outline, in pixels. The value range is any integer from 0 to 5.

rotation

float – The angle of text rotation. The counterclockwise direction is the positive direction, and the unit is degree.

set_alignment(value)

Set the alignment of the text

Parameters:value (TextAlignment or str) – the alignment of the text
Returns:self
Return type:TextStyle
set_back_color(value)

Set the background color of the text.

Parameters:value (int or tuple) – background color of the text
Returns:self
Return type:TextStyle
set_back_opaque(value)

Set whether the text background is opaque, True means the text background is opaque

Parameters:value (bool) – Whether the text background is opaque
Returns:self
Return type:TextStyle
set_bold(value)

Set whether the text is bold, True means bold

Parameters:value (bool) – whether the text is bold
Returns:self
Return type:TextStyle
set_border_spacing_width(value)

Set the distance between the edge of the text background rectangle and the edge of the text, in pixels.

Parameters:value (int) –
Returns:self
Return type:TextStyle
set_font_height(value)

Set the height of the text font. When the size is fixed, the unit is 1 mm, otherwise the geographic coordinate unit is used.

Parameters:value (float) – the height of the text font
Returns:self
Return type:TextStyle
set_font_name(value)

Set the name of the text font. If a certain font is specified for the text layer in the map under the Windows platform, and the map data needs to be applied under the Linux platform, please make sure the same font exists on your Linux platform, otherwise, the font display effect of the text layer will be problematic.

Parameters:value (str) – The name of the text font. The default value of the name of the text font is “Times New Roman”.
Returns:self
Return type:TextStyle
set_font_scale(value)

Set the zoom ratio of the annotation font

Parameters:value (float) – The zoom ratio of the annotation font
Returns:self
Return type:TextStyle
set_font_width(value)

Set the width of the text. The width of the font is based on English characters, because one Chinese character is equivalent to two English characters. When the size is fixed, the unit is 1 mm, otherwise the geographic coordinate unit is used.

Parameters:value (float) – the width of the text
Returns:self
Return type:TextStyle
set_fore_color(value)

Set the foreground color of the text

Parameters:value (int or tuple) – the foreground color of the text
Returns:self
Return type:TextStyle
set_italic(value)

Set whether the text is in italics, true means italics.

Parameters:value (bool) – whether the text is in italics
Returns:self
Return type:TextStyle
set_italic_angle(value)

Set the font tilt angle, in degrees, accurate to 0.1 degrees. A tilt angle of 0 degrees gives the system's default font slant style. The vertical axis is the starting zero-degree line: the left side of the vertical axis is positive, and the right side is negative. The maximum allowed angle is 60 and the minimum is -60; values greater than 60 are treated as 60, and values less than -60 are treated as -60.

Parameters:value (float) – The tilt angle of the font, between positive and negative degrees, in degrees, accurate to 0.1 degrees
Returns:self
Return type:TextStyle
set_opaque_rate(value)

Set the opacity of the annotation text. The range of opacity is 0-100.

Parameters:value (int) – The opacity of the annotation text
Returns:self
Return type:TextStyle
set_outline(value)

Set whether to display the background of the text in outline. False means the background of the text is not displayed as an outline.

Parameters:value (bool) – Whether to display the background of the text in an outline way
Returns:self
Return type:TextStyle
set_outline_width(value)

Set the width of the text outline, in pixels. The value range is any integer from 0 to 5, where a value of 0 means no outline. The outline width setting takes effect only when is_outline is True.

Parameters:value (int) – The width of the text outline, in pixels; any integer from 0 to 5, where a value of 0 means no outline.
Returns:self
Return type:TextStyle
set_rotation(value)

Set the angle of text rotation. The counterclockwise direction is the positive direction and the unit is degree.

Parameters:value (float) – the angle of text rotation
Returns:self
Return type:TextStyle
set_shadow(value)

Set whether the text has a shadow. True means to add shadow to the text

Parameters:value (bool) – whether the text has a shadow
Returns:self
Return type:TextStyle
set_size_fixed(value)

Set whether the text size is fixed. True means the text size is fixed; False means it is not fixed.

Parameters:value (bool) – Whether the text size is fixed
Returns:self
Return type:TextStyle
set_strikeout(value)

Set whether to add strikethrough in text font.

Parameters:value (bool) – Whether to add strikethrough in text font.
Returns:self
Return type:TextStyle
set_string_alignment(value)

Set the typesetting of the text, you can set left, right, center, and both ends of multi-line text

Parameters:value (StringAlignment) – How to format the text
Returns:self
Return type:TextStyle
set_underline(value)

Set whether the text font is underlined. True means underline

Parameters:value (bool) – Whether the text font is underlined. True means underline
Returns:self
Return type:TextStyle
set_weight(value)

Set the weight of the text font, indicating the degree of boldness. The value range is whole hundreds from 0 to 900; for example, 400 means normal display and 700 means bold. For details, refer to the introduction to the LOGFONT structure in the Microsoft MSDN help.

Parameters:value (int) – The weight of the text font.
Returns:self
Return type:TextStyle
string_alignment

StringAlignment – How the text is formatted

to_dict()

Output current object as dict

Return type:dict
to_json()

Output as json string

Return type:str
weight

float – The weight of the text font, indicating the degree of boldness. The value range is whole hundreds from 0 to 900; for example, 400 means normal display and 700 means bold. For details, refer to the introduction to the LOGFONT structure in the Microsoft MSDN help. The default value is 400.
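The constraint that weight is a whole hundred in the range 0-900 can be sketched as (illustrative only; normalize_weight is a hypothetical helper, not part of the API):

```python
def normalize_weight(value):
    # Weight is a multiple of 100 between 0 and 900; 400 is normal, 700 is bold.
    value = max(0, min(900, int(value)))
    return (value // 100) * 100


print(normalize_weight(750))   # snapped down to 700
print(normalize_weight(1000))  # clamped to 900
print(normalize_weight(400))   # already valid: 400
```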

class iobjectspy.data.GeoRegion3D(points=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

The 3D surface geometry object class, derived from the Geometry3D class.

This class is used to describe planar geographic entities, such as administrative regions, lakes, residential areas, etc., and is generally represented by one or more ordered coordinate point sets. A surface geometry object is composed of one or more parts, and each part is called a sub-object of the surface geometry object, each sub-object is represented by an ordered set of coordinate points, and its start point and end point coincide. You can add, delete, and modify sub-objects.

Construct a surface geometry object

Parameters:points (list[Point3D] or tuple[Point3D] or GeoLine3D or GeoRegion3D) – objects containing point string information; can be list[Point3D], tuple[Point3D], GeoLine3D or GeoRegion3D
add_part(points)

Add a sub-object to this geometry object. Returns the serial number of the added sub-object if successful.

Parameters:points (list[Point2D]) – an ordered set of points
Return type:int
area

float – Return the area of the region object

contains(point)

Determine whether the point is inside the surface.

Parameters:point (Point3D or GeoPoint3D) – point object to be judged
Returns:Return True if the point is inside the surface, otherwise False
Return type:bool

get_part(item)

Return the sub-object of the specified sequence number in this geometry object, and Return the sub-object in the form of an ordered point collection.

Parameters:item (int) – The serial number of the sub-object.
Returns:the node of the sub-object
Return type:list[Point3D]
get_part_count()

Get the number of sub-objects

Return type:int

get_parts()

Get all point coordinates of the current geometric object. Each sub-object uses a list storage

>>> points = [Point3D(1,2,0), Point3D(2,3,0), Point3D(1,5,0), Point3D(1,2,0)]
>>> geo = GeoRegion3D(points)
>>> geo.add_part([Point3D(2,3,0), Point3D(4,3,0), Point3D(4,2,0), Point3D(2,3,0)])
>>> geo.get_parts()
Returns:contains a list of all point coordinates
Return type:list[list[Point3D]]
insert_part(item, points)

Insert a sub-object at the specified position in this geometric object. Return True if successful, otherwise False.

Parameters:
  • item (int) – insert position
  • points (list[Point3D]) – inserted ordered set of points
Return type:bool

perimeter

float – Return the perimeter of the region

remove_part(item)

Delete the sub-object of the specified serial number in this geometry object.

Parameters:item (int) – the serial number of the specified sub-object
Returns:Return True if successful, otherwise False
Return type:bool

to_json()

Output the current object as a Simple Json string

>>> points = [Point3D(1,2,0), Point3D(2,3,0), Point3D(1,5,0), Point3D(1,2,0)]
>>> geo = GeoRegion3D(points)
>>> print(geo.to_json())
{"Region3D": [[[1.0, 2.0, 0.0], [2.0, 3.0, 0.0], [1.0, 5.0, 0.0], [1.0, 2.0, 0.0]]], "id": 0}
Return type:str
translate(dx=0.0, dy=0.0, dz=0.0)

Offset the area object by adding the offset to each point in the area.

Parameters:
  • dx (float) – offset in X direction
  • dy (float) – offset in Y direction
  • dz (float) – offset in Z direction

class iobjectspy.data.GeoModel3D(poin3dValues=None, faceIndices=None, bLonLat=False)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

3D model object class

Construct a 3D model object from vertices and face indices.

Parameters:
  • poin3dValues (list[Point3D]) – collection of vertices
  • faceIndices (list[list[int]]) – collection of face indices. Each element is a list of at least 4 vertex indices (the first and last are the same), giving the connection order of the vertices of one face. Note that each face should be planar.
Returns:3D model object

For example, build a 3D model of a box centered at the origin, with 8 vertices and 6 faces:

>>> point3ds = [Point3D(-1,-1,-1), Point3D(1,-1,-1), Point3D(1,-1,1), Point3D(-1,-1,1),
...             Point3D(-1,1,-1), Point3D(1,1,-1), Point3D(1,1,1), Point3D(-1,1,1)]
>>> faceIndices = [[3,2,1,0,3],  # front
...                [0,1,5,4,0],  # bottom
...                [0,4,7,3,0],  # right
...                [1,5,6,2,1],  # left
...                [2,3,7,6,2],  # top
...                [5,4,7,6,5]]  # back
>>> geo = GeoModel3D(point3ds, faceIndices)

IsLonLat
Mirror(plane)

Mirror the model about the specified plane.

Parameters:plane (Plane) – mirror plane
Returns:the model object mirrored about the plane

SetMatrix(basePoint, matrix)

Model transformation.

Parameters:
  • basePoint (Point3D) – reference point
  • matrix (Matrix) – transformation matrix
Returns:GeoModel3D after matrix transformation

max_z

Get the maximum value in the z direction of the model

mergeSkeleton()
min_z

Get the minimum value in the z direction of the model

set_IsLonLat(value)
set_color(value)

Set the model material color; all skeletons use this color.

Parameters:value – material color

translate(x=0, y=0, z=0)

Model translation.

Parameters:
  • x – translation in X direction
  • y – translation in Y direction
  • z – translation in Z direction
Returns:the translated model

class iobjectspy.data.GeoBox(len=0.0, width=None, height=None, position=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

Cuboid geometric object class

height
length
set_height(value)
set_length(value)
set_width(value)
width
class iobjectspy.data.GeoLine3D(points=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

The line geometry object class. This class is used to describe linear geographic entities, such as rivers, roads, contours, etc., and is generally represented by one or more ordered coordinate point sets. The direction of the line is determined by the order of the ordered coordinate points. You can also call reverse method to change the direction of the line. A line object is composed of one or more parts, each part is called a sub-object of the line object, and each sub-object is represented by an ordered set of coordinate points. You can add, delete, modify and other operations to the sub-objects.

Construct a line geometry object

Parameters:points (list[Point3D] or tuple[Point3D] or GeoLine3D or GeoRegion3D) – objects containing point string information, can be list[Point3D], tuple[Point3D], GeoLine3D, GeoRegion3D
add_part(points)

Add a sub-object to this line geometry object. Returns the serial number of the added sub-object if successful.

Parameters:points (list[Point3D]) – an ordered set of points
Return type:int
clone()

Copy object

Return type:GeoLine3D
convert_to_region()

Convert the current line object to an area geometry object.

  • For unclosed line objects, the starting and ending points will be automatically connected
  • If the number of points of a sub-object of the GeoLine3D object instance is less than 3, the conversion will fail

Return type:GeoRegion3D
get_part(item)

Return the sub-object of the specified sequence number in this line geometry object, and return the sub-object in the form of an ordered point collection. When the line object is a simple line object, passing in 0 obtains the set of nodes of this line object.

Parameters:item (int) – The serial number of the sub-object.
Returns:the node of the sub-object
Return type:list[Point3D]
get_part_count()

Get the number of sub-objects

Return type:int
get_parts()

Get all point coordinates of the current geometric object. Each sub-object uses a list storage

>>> points = [Point3D(1,2,0),Point3D(2,3,0)]
>>> geo = GeoLine3D(points)
>>> geo.add_part([Point3D(3,4,0),Point3D(4,5,0)])
>>> print(geo.get_parts())
[[(1.0, 2.0, 0.0), (2.0, 3.0, 0.0)], [(3.0, 4.0, 0.0), (4.0, 5.0, 0.0)]]
Returns:contains a list of all point coordinates
Return type:list[list[Point3D]]
insert_part(item, points)

Insert a sub-object at the specified position in the geometric object of this line. Return True if successful, otherwise False

Parameters:
  • item (int) – insert position
  • points (list[Point3D]) – inserted ordered set of points
Return type:

bool

length

float – Return the length of the line object

remove_part(item)

Delete the sub-object of the specified number in the geometric object of this line.

Parameters:item (int) – the serial number of the specified sub-object
Returns:Return true if successful, otherwise false
Return type:bool
to_json()

Output the current object as a Simple Json string

>>> points = [Point3D(1,2,0), Point3D(2,3,0), Point3D(1,5,0), Point3D(1,2,0)]
>>> geo = GeoLine3D(points)
>>> print(geo.to_json())
{"Line3D": [[[1.0, 2.0, 0.0], [2.0, 3.0, 0.0], [1.0, 5.0, 0.0], [1.0, 2.0, 0.0]]], "id": 0}
Return type:str
class iobjectspy.data.GeoCylinder(topRadius=0.0, bottomRadius=None, height=1.0, position=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

The geometric object class of the truncated cone, which is inherited from the Geometry3D class. If the radius of the bottom circle is equal to the radius of the top circle, it is a cylindrical geometry object.

bottomRadius
height
set_bottomRadius(value)
set_height(value)
set_topRadius(value)
topRadius
class iobjectspy.data.GeoCircle3D(r=0.0, position=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

3D circular geometry object class

radius
set_radius(value)
class iobjectspy.data.GeoStyle3D

Bases: object

The style class of geometric objects in the 3D scene. This class is mainly used to set the display style of geometric objects in the 3D scene

clone()

Copy object

Return type:GeoStyle3D
fillForeColor
from_dict(values)

Read style information from dict

Parameters:values (dict) – dict containing style information
Returns:self
Return type:GeoStyle3D

static from_json(value)

Construct GeoStyle3D object from json string

Parameters:value (str) –
Return type:GeoStyle3D

lineColor
lineWidth
static make_from_dict(values)

Read style information from dict to construct GeoStyle3D

Parameters:values (dict) – a dict containing style information
Return type:GeoStyle3D

markerColor
markerSize
set_fillForeColor(value)
set_lineColor(value)
set_lineWidth(value)
set_markerColor(value)
set_markerSize(value)
to_dict()

Output current object as dict

Return type:dict

to_json()

Output as json string

Return type:str

class iobjectspy.data.Plane(points=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Plane object class. This plane is an infinitely extending plane in the mathematical sense, mainly used for cross-sectional projection and plane projection of 3D models.

usage:

>>> p1 = Point3D(1, 1, 1)
>>> p2 = Point3D(0, 3, 4)
>>> p3 = Point3D(7, 4, 3)
>>> plane = Plane([p1, p2, p3])

or:

>>> plane = Plane(PlaneType.PLANEXY)
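The normal of a plane defined by three points can be computed as the cross product of two edge vectors. A pure-Python sketch of that computation (plane_normal is a hypothetical helper; points are plain (x, y, z) tuples rather than Point3D):

```python
def plane_normal(p1, p2, p3):
    # Normal = (p2 - p1) x (p3 - p1); each point is an (x, y, z) tuple.
    ux, uy, uz = p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]
    vx, vy, vz = p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)


# Three points in the XY plane give a normal along the Z axis:
print(plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```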

get_normal()

Get the normal vector of the plane

Returns:the normal vector, Point3D
set_normal(p)

Set the normal vector of the plane

Parameters:p (Point3D or tuple[float,float,float] or list[float,float,float]) – normal vector

set_type(planeType)

Set the plane type

Parameters:planeType (PlaneType) – plane type

class iobjectspy.data.Matrix(arrayValue=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

4X4 matrix class, mainly used for matrix transformation of 3D models. If you need continuous transformation, use the static method multiply() to compose matrices.

static identity()
static invert(matrix)
static multiply(value, matrix)

Matrix multiplication. The first parameter can be Point3D or Matrix.

Parameters:
  • value (Point3D or Matrix) – can be Point3D or a 4X4 matrix
  • matrix (Matrix) – matrix
Returns:When the first parameter is Point3D, the return value is Point3D; when it is Matrix, the return value is Matrix

static rotate(rotationX, rotationY, rotationZ)

Rotation, in degrees.

Parameters:
  • rotationX – the angle of rotation around the X axis
  • rotationY – the angle of rotation around the Y axis
  • rotationZ – the angle of rotation around the Z axis
Returns:a new matrix rotated by rotationX, rotationY, rotationZ

static scale(scaleX, scaleY, scaleZ)

Scaling.

Parameters:
  • scaleX – scale in X direction
  • scaleY – scale in Y direction
  • scaleZ – scale in Z direction
Returns:a new matrix scaled by scaleX, scaleY, scaleZ

set_ArrayValue(value)

Set the matrix.

Parameters:value – an array of length 16

static translate(translateX, translateY, translateZ)

Translation.

Parameters:
  • translateX – translation in X direction
  • translateY – translation in Y direction
  • translateZ – translation in Z direction
Returns:a new matrix translated by translateX, translateY, translateZ
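The idea of composing continuous transformations by matrix multiplication can be sketched with plain 4x4 matrices (pure-Python illustration assuming a row-vector convention; the library's storage order and conventions may differ):

```python
def mat_identity():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]


def mat_translate(tx, ty, tz):
    m = mat_identity()
    m[3][0], m[3][1], m[3][2] = tx, ty, tz  # translation in the last row (row-vector convention)
    return m


def mat_scale(sx, sy, sz):
    m = mat_identity()
    m[0][0], m[1][1], m[2][2] = sx, sy, sz
    return m


def mat_multiply(a, b):
    # Compose two 4x4 transforms: the result applies a first, then b.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


def apply_point(p, m):
    # Transform an (x, y, z) point as a row vector with homogeneous w = 1.
    x, y, z, w = p[0], p[1], p[2], 1.0
    return tuple(x * m[0][i] + y * m[1][i] + z * m[2][i] + w * m[3][i]
                 for i in range(3))


# Scale by 2, then translate by (10, 0, 0), composed into one matrix:
combined = mat_multiply(mat_scale(2, 2, 2), mat_translate(10, 0, 0))
print(apply_point((1, 1, 1), combined))  # (12.0, 2.0, 2.0)
```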

class iobjectspy.data.Feature(geometry=None, values=None, id_value='0', field_infos=None)

Bases: object

Feature element object. Feature element object can be used to describe spatial information and attribute information, or it can only be used to describe attribute information.

Parameters:
  • geometry (Geometry) – geometric object information
  • values (list or tuple or dict) – The attribute field value of the feature object.
  • id_value (str) – feature object ID
  • field_infos (list[FieldInfo]) – attribute field information of the feature object
add_field_info(field_info)

Add an attribute field. After adding the field, if the field has no default value, its attribute value will be set to None

Parameters:field_info (FieldInfo) – attribute field information
Returns:return True if added successfully, otherwise return False
Return type:bool
bounds

Rectangle – Get the geographic extent of the geometric object. If the geometric object is empty, return empty

clone()

Copy the current object

Return type:Feature
feature_id

str – Return Feature ID

field_infos

list[FieldInfo] – Return all field information of feature objects

static from_json(value)

Read information from json string to construct a feature object

Parameters:value (str) – json string containing feature object information
Return type:Feature
geometry

Geometry – return geometric object

get_field_info(item)

Get the field information of the specified name and serial number

Parameters:item (str or int) – field name or serial number
Return type:FieldInfo
get_value(item)

Get the field value of the specified attribute field in the current object

Parameters:item (str or int) – field name or serial number
Return type:int or float or str or datetime.datetime or bytes or bytearray
get_values(exclude_system=True, is_dict=False)

Get the property field value of the current object.

Parameters:
  • exclude_system (bool) – Whether to include system fields. All fields beginning with “Sm” are system fields. The default is True
  • is_dict (bool) – Whether to return in the form of a dict. If a dict is returned, the key of the dict is the field name and value is the attribute field value. Otherwise, the field value is returned as a list. The default is False
Returns:

attribute field value

Return type:

dict or list

remove_field_info(name)

Delete the field with the specified field name or serial number. After deleting the field, the field value will also be deleted

Parameters:name (int or str) – field name or serial number
Returns:Return True if the deletion is successful, otherwise False
Return type:bool
set_feature_id(fid)

Set Feature ID

Parameters:fid (str) – feature ID value, generally used to represent the unique ID value of a feature object
Returns:
Return type:
set_field_infos(field_infos)

Set attribute field information

Parameters:field_infos (list[FieldInfo]) – attribute field information
Returns:self
Return type:Feature
set_geometry(geo)

Set geometric objects

Parameters:geo (Geometry) – geometry object
Returns:self
Return type:Feature
set_value(item, value)

Set the field value of the specified attribute field in the current object

Parameters:
  • item (str or int) – field name or serial number
  • value (int or float or str or datetime.datetime or bytes or bytearray) – field value
Return type:

bool

set_values(values)

Set the field value.

Parameters:values (dict) – The attribute field values to be written. Must be a dict, the key value of dict is the field name, and the value of dict is the field value
Returns:Return the number of successfully written fields
Return type:int
to_json()

Output the current object as a json string

Return type:str
class iobjectspy.data.GeometriesRelation(tolerance=1e-10, gridding_level='NONE')

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Geometric object relationship judgment class. The difference from spatial query is that this class performs judgment on geometric objects rather than datasets; the implementation principle is the same as that of spatial query.

The following sample code shows the function of querying points by region. By inserting multiple region objects into GeometriesRelation, you can determine which region object contains each point object, and then obtain all the point objects contained in each region object. When a large number of point objects need to be processed, this method has better performance:

>>> geos_relation = GeometriesRelation()
>>> for index in range(len(regions)):
...     geos_relation.insert(regions[index], index)
>>> results = dict()
>>> for point in points:
...     region_values = geos_relation.matches(point, 'Contain')
...     for region_value in region_values:
...         region = regions[region_value]
...         if region in results:
...             results[region].append(point)
...         else:
...             results[region] = [point]
>>> del geos_relation
Parameters:
  • tolerance (float) – node tolerance
  • gridding_level (GriddingLevel or str) – The gridding level of the area object.
get_bounds()

Get the geographic range of all inserted geometric objects in GeometriesRelation

Return type:Rectangle
get_gridding()

Get the gridding level of the area object. By default, gridding is not performed.

Return type:GriddingLevel
get_sources_count()

Get the number of geometric objects inserted in GeometriesRelation

Return type:int
get_tolerance()

Get node tolerance

Return type:float
insert(data, value)

Insert a geometric object to be matched. The matched object corresponds to the query object in spatial query mode. For example, to query the polygons containing a point object, you need to insert the polygon objects into GeometriesRelation, and then match in turn to obtain the surface objects that satisfy the containment relationship with the point objects.

Parameters:
  • data (Geometry or Point2D or Rectangle or Recordset or DatasetVector) – The geometric object to be matched; must be a point, line or surface object, or a point, line or surface recordset or dataset
  • value (int or str) – The matched value, a unique value that must be greater than or equal to 0, such as the ID of a geometric object. If a Recordset or DatasetVector is passed in, value is the name of a field whose value is a unique integer greater than or equal to 0 that identifies each object; if it is None, the SmID value of the object is used.
Returns:

Return True if the insert is successful, otherwise False.

Return type:

bool

intersect_extents(rc)

Return all objects whose bounding rectangle intersects the specified rectangular range.

Parameters:rc (Rectangle) – the specified rectangle range
Returns:the value of the object that intersects the specified rectangle
Return type:list[int]
is_match(data, src_value, mode)

Determine whether the object satisfies the spatial relationship with the specified inserted object

Parameters:
  • data (Geometry or Point2D or Rectangle) – the spatial object to be judged
  • src_value (int) – the value of the inserted object to be matched against
  • mode (SpatialQueryMode or str) – spatial query mode
Returns:

Return True if the object satisfies the spatial relationship with the specified inserted object, otherwise False.

Return type:

bool

matches(data, mode, excludes=None)

Find out the values of all matched objects that satisfy the spatial relationship with the matched object.

Parameters:
  • data (Geometry or Point2D or Rectangle) – match space object
  • mode (SpatialQueryMode or str) – matching spatial query mode
  • excludes (list[int]) – The excluded value, that is, it does not participate in the matching operation
Returns:

the value of the matched object

Return type:

list[int]

set_gridding(gridding_level)

Set the gridding level of the area object. By default, the grid of area objects is not done.

Parameters:gridding_level (GriddingLevel or str) – gridding level
Returns:self
Return type:GeometriesRelation
set_tolerance(tolerance)

Set node tolerance

Parameters:tolerance (float) – node tolerance
Returns:self
Return type:GeometriesRelation
iobjectspy.data.aggregate_points_geo(points, min_pile_point_count, distance, unit='Meter', prj=None, as_region=False)

Perform density clustering on the point set. For an introduction to the density clustering algorithm, refer to iobjectspy.aggregate_points

Parameters:
  • points (list[Point2D] or tuple[Point2D]) – set of input points
  • min_pile_point_count (int) – The threshold of the number of density cluster points, which must be greater than or equal to 2. The larger the threshold value, the harsher the conditions for clustering into a cluster. The recommended value is 4.
  • distance (float) – The radius of density clustering.
  • unit (Unit or str) – The unit of the density cluster radius. If the spatial reference coordinate system prj is invalid, this parameter is also invalid
  • prj (PrjCoordSys) – The spatial reference coordinate system of the point collection
  • as_region (bool) – Whether to return the clustered region object
Returns:

When as_region is False, return a list. Each value in the list represents the cluster category of the corresponding point object; cluster categories start from 1, and 0 means invalid cluster. When as_region is True, the region objects aggregated from the clustered points are returned

Return type:

list[int] or list[GeoRegion]
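The density-clustering idea behind this function can be sketched with a naive single-pass, DBSCAN-style labeling in pure Python (O(n²), planar distance only, no cluster expansion; an illustration of the concept, not the aggregate_points_geo implementation):

```python
from math import hypot


def naive_density_cluster(points, radius, min_count):
    """Label each (x, y) point with a cluster id starting from 1; 0 means invalid."""
    labels = [0] * len(points)
    cluster = 0
    for i, (xi, yi) in enumerate(points):
        if labels[i]:
            continue  # already assigned to a cluster
        # Collect all points within `radius` of point i (including i itself)
        group = [j for j, (xj, yj) in enumerate(points)
                 if hypot(xi - xj, yi - yj) <= radius]
        if len(group) >= min_count:  # dense enough to form a cluster
            cluster += 1
            for j in group:
                if labels[j] == 0:
                    labels[j] = cluster
    return labels


# Three nearby points form cluster 1; the far point stays invalid (0):
print(naive_density_cluster([(0, 0), (0, 1), (1, 0), (10, 10)], 1.5, 3))
```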

iobjectspy.data.can_contain(geo_search, geo_target)

Determine whether the search geometric object contains the searched geometric object. Return True if it does. Note that if there is a containment relationship, then:

  • The intersection of the exterior of the search geometric object and the interior of the searched geometric object is empty;

  • The interiors of the two geometric objects intersect, or the boundary of the search geometric object intersects the interior of the searched geometric object;

  • A point searching a line, a point searching a surface, or a line searching a surface has no containment relationship;

  • This is the inverse operation of is_within();

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: point, line, surface;
    • Searched geometric objects: point, line, surface.
../_images/Geometrist_CanContain.png
Parameters:
  • geo_search (Geometry) – The search geometric object; supports point, line and area types.
  • geo_target (Geometry) – The geometric object to be searched. It supports point, line and area types.
Returns:

Return True if the search geometric object contains the searched geometric object; otherwise, return False.

Return type:

bool
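For axis-aligned rectangles, the containment relationship (and the fact that can_contain and is_within are inverse operations) can be sketched as (pure-Python illustration, not the library's topology engine):

```python
def rect_contains(outer, inner):
    # Rectangles as (left, bottom, right, top); containment in the simple sense
    # that the inner rectangle lies entirely inside the outer one.
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])


def rect_within(inner, outer):
    # "within" is the inverse operation of "contains": swap the roles.
    return rect_contains(outer, inner)


a = (0, 0, 10, 10)
b = (2, 2, 5, 5)
print(rect_contains(a, b), rect_within(b, a))  # True True
```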

iobjectspy.data.has_intersection(geo_search, geo_target, tolerance=None)

Determine whether the search geometric object and the searched geometric object intersect in area. Return True if they intersect. Note:

  • Both the search geometric object and the searched geometric object must be surface objects;

  • Types of geometric objects suitable for this relationship:

    • Search for geometric objects: point, line, surface;
    • The searched geometric objects: point, line, surface.
../_images/Geometrist_HasIntersection.png
Parameters:
  • geo_search (Geometry) – query object
  • geo_target (Geometry) – target object
  • tolerance (float) – Node tolerance
Returns:

Return True if the areas of the two objects intersect, otherwise False

Return type:

bool

iobjectspy.data.has_area_intersection(geo_search, geo_target, tolerance=None)

Determine whether the areas of two objects intersect. At least one of the query object and the target object must be a surface object, and mere contact does not count as an intersection. Point, line, area, and text objects are supported.

Parameters:
  • geo_search (Geometry) – query object
  • geo_target (Geometry) – target object
  • tolerance (float) – node tolerance
Returns:

Returns True if the areas of the two objects intersect; otherwise False

Return type:

bool

iobjectspy.data.has_cross(geo_search, geo_target)

Determine whether the search geometric object crosses the searched geometric object. Returns True if it does. Note that if two geometric objects have a crossing relationship:

  • The intersection of the interior of the search geometric object with the interior of the searched geometric object is not empty, and the intersection of the interior of the search geometric object with the exterior of the searched geometric object is not empty;

  • When the searched geometric object is a line, the interior intersection of the two geometric objects is not empty but their boundary intersection is empty;

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: line;
    • Searched geometric objects: line, surface.
../_images/Geometrist_HasCross.png
Parameters:
  • geo_search (GeoLine) – The search geometric object; only the line type is supported.
  • geo_target (GeoLine or GeoRegion or Rectangle) – The searched geometric object; line and area types are supported.
Returns:

Returns True if the search geometric object crosses the searched object; otherwise False.

Return type:

bool

iobjectspy.data.has_overlap(geo_search, geo_target, tolerance=None)

Determine whether the search geometric object partially overlaps the searched geometric object. Returns True if there is a partial overlap. Note:

  • A point never partially overlaps any geometric object;

  • The search geometric object and the searched geometric object must have the same dimension, that is, only line-query-line or surface-query-surface is possible;

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: line, surface;
    • Searched geometric objects: line, surface.
../_images/Geometrist_HasOverlap.png
Parameters:
  • geo_search (GeoLine or GeoRegion or Rectangle) – The search geometric object; only line and area types are supported.
  • geo_target (GeoLine or GeoRegion or Rectangle) – The searched geometric object; only line and area types are supported
  • tolerance (float) – Node tolerance
Returns:

Returns True if the search geometric object and the searched geometric object partially overlap; otherwise False

Return type:

bool

iobjectspy.data.has_touch(geo_search, geo_target, tolerance=None)

Determine whether the boundary of the search geometric object touches the boundary of the searched geometric object. When they touch, the intersection of the interiors of the search geometric object and the searched geometric object is empty. Note:

  • There is no boundary contact between point and point;

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: point, line, surface;
    • Searched geometric objects: point, line, surface.
../_images/Geometrist_HasTouch.png
Parameters:
  • geo_search (Geometry) – The search geometric object.
  • geo_target (Geometry) – The searched geometric object.
Returns:

Returns True if the boundary of the search geometric object touches the boundary of the searched geometric object; otherwise False.

Return type:

bool

iobjectspy.data.has_common_point(geo_search, geo_target)

Determine whether the search geometric object and the searched geometric object have a common node. Returns True if they do.

../_images/Geometrist_HasCommonPoint.png
Parameters:
  • geo_search (Geometry) – The search geometric object; point, line, and area types are supported.
  • geo_target (Geometry) – The searched geometric object; point, line, and area types are supported.
Returns:

Returns True if the search geometric object and the searched geometric object share a node; otherwise False.

Return type:

bool

iobjectspy.data.has_common_line(geo_search, geo_target)

Determine whether the search geometric object and the searched geometric object have a common line segment. Returns True if they do.

../_images/Geometrist_HasCommonLine.png
Parameters:
  • geo_search (GeoLine or GeoRegion) – The search geometric object; only line and area types are supported.
  • geo_target (GeoLine or GeoRegion) – The searched geometric object; only line and area types are supported.
Returns:

Returns True if the search geometric object and the searched geometric object have a common line segment; otherwise False.

Return type:

bool

iobjectspy.data.has_hollow(geometry)

Determine whether the specified surface object contains hole-type sub-objects

Parameters:geometry (GeoRegion) – The area object to be judged; currently only 2D area objects are supported
Returns:Whether the surface object contains hole-type sub-objects: True if it does, otherwise False
Return type:bool
iobjectspy.data.is_disjointed(geo_search, geo_target)

Determine whether the search geometric object is disjoint from the searched geometric object. Returns True if they are disjoint. Note:

  • Disjoint means the search geometric object and the searched geometric object have no intersection at all;

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: point, line, surface;
    • Searched geometric objects: point, line, surface.
../_images/Geometrist_IsDisjointed.png
Parameters:
  • geo_search (Geometry) – The search geometric object; point, line, and area types are supported.
  • geo_target (Geometry) – The searched geometric object; point, line, and area types are supported.
Returns:

Returns True if the two geometric objects are disjoint; otherwise False

Return type:

bool

iobjectspy.data.is_identical(geo_search, geo_target, tolerance=None)

Determine whether the search geometric object is exactly equal to the searched geometric object. That is, the two geometric objects completely coincide: they have the same number of nodes, and the corresponding coordinate values are equal in forward or reverse order. Note:

  • The search geometric object and the searched geometric object must be of the same type;

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: point, line, surface;
    • Searched geometric objects: point, line, surface.
../_images/Geometrist_IsIdentical.png
Parameters:
  • geo_search (Geometry) – The search geometric object; point, line, and area types are supported
  • geo_target (Geometry) – The searched geometric object; point, line, and area types are supported.
  • tolerance (float) – node tolerance
Returns:

Returns True if the two objects are exactly equal; otherwise False

Return type:

bool

iobjectspy.data.is_within(geo_search, geo_target, tolerance=None)

Determine whether the search geometric object lies within the searched geometric object. Returns True if it does. Note:

  • There is no within relationship for line-query-point, surface-query-line, or surface-query-point;

  • This is the inverse operation of can_contain();

  • Types of geometric objects suitable for this relationship:

    • Search geometric objects: point, line, surface;
    • Searched geometric objects: point, line, surface.
../_images/Geometrist_IsWithin.png
Parameters:
  • geo_search (Geometry) – The search geometric object; point, line, and area types are supported.
  • geo_target (Geometry) – The searched geometric object; point, line, and area types are supported
  • tolerance (float) – node tolerance
Returns:

Returns True if the search geometric object lies within the searched geometric object; otherwise False

Return type:

bool

iobjectspy.data.is_left(point, start_point, end_point)

Determine whether the point is on the left side of the line.

Parameters:
  • point (Point2D) – the specified point to be judged
  • start_point (Point2D) – a point on the specified line
  • end_point (Point2D) – Another point on the specified line.
Returns:

Returns True if the point is on the left side of the line; otherwise False

Return type:

bool

iobjectspy.data.is_right(point, start_point, end_point)

Determine whether the point is on the right side of the line.

Parameters:
  • point (Point2D) – the specified point to be judged
  • start_point (Point2D) – a point on the specified line
  • end_point (Point2D) – Another point on the specified line.
Returns:

Returns True if the point is on the right side of the line; otherwise False

Return type:

bool

iobjectspy.data.is_on_same_side(point1, point2, start_point, end_point)

Determine whether two points are on the same side of the line.

Parameters:
  • point1 (Point2D) – the specified point to be judged
  • point2 (Point2D) – another point specified to be judged
  • start_point (Point2D) – A point on the specified line.
  • end_point (Point2D) – Another point on the specified line.
Returns:

Returns True if the two points are on the same side of the line; otherwise False

Return type:

bool
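
The three side-of-line tests above all reduce to the sign of a 2-D cross product of the line's direction vector with the vector from the line's start to the point. A minimal pure-Python sketch of that computation (illustrative names, not the iobjectspy implementation):

```python
def side(point, start_point, end_point):
    """Cross product sign: > 0 left of the directed line, < 0 right, 0 collinear."""
    (px, py), (sx, sy), (ex, ey) = point, start_point, end_point
    return (ex - sx) * (py - sy) - (ey - sy) * (px - sx)

def is_left(point, start_point, end_point):
    return side(point, start_point, end_point) > 0

def is_right(point, start_point, end_point):
    return side(point, start_point, end_point) < 0

def is_on_same_side(point1, point2, start_point, end_point):
    # Both cross products must share the same non-zero sign.
    return side(point1, start_point, end_point) * side(point2, start_point, end_point) > 0
```

Note that "left" and "right" are relative to the direction from start_point to end_point.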

iobjectspy.data.is_parallel(start_point1, end_point1, start_point2, end_point2)

Determine whether the two lines are parallel.

Parameters:
  • start_point1 (Point2D) – The starting point of the first line.
  • end_point1 (Point2D) – The end point of the first line.
  • start_point2 (Point2D) – The start point of the second line.
  • end_point2 (Point2D) – The end point of the second line.
Returns:

Returns True if the two lines are parallel; otherwise False

Return type:

bool

iobjectspy.data.is_point_on_line(point, start_point, end_point, is_extended=True)

Determine whether a known point is on a known line segment (or its extended straight line). Returns True if the point is on the line; otherwise False.

Parameters:
  • point (Point2D) – known point
  • start_point (Point2D) – the starting point of the known line segment
  • end_point (Point2D) – the end point of the known line segment
  • is_extended (bool) – Whether to extend the line segment: if True, the calculation treats it as an infinite straight line; otherwise as the segment itself
Returns:

Returns True if the point is on the line; otherwise False

Return type:

bool
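
The underlying test is a collinearity check (zero cross product), plus a bounding-box check when the segment is not extended. A hedged pure-Python sketch of this logic (not the iobjectspy implementation; the eps parameter is an illustrative tolerance):

```python
def is_point_on_line(point, start_point, end_point, is_extended=True, eps=1e-10):
    (px, py), (sx, sy), (ex, ey) = point, start_point, end_point
    # Collinearity: cross product of (end - start) and (point - start) is (near) zero.
    cross = (ex - sx) * (py - sy) - (ey - sy) * (px - sx)
    if abs(cross) > eps:
        return False
    if is_extended:          # treat the segment as an infinite straight line
        return True
    # Otherwise the point must also lie within the segment's bounding box.
    return (min(sx, ex) - eps <= px <= max(sx, ex) + eps
            and min(sy, ey) - eps <= py <= max(sy, ey) + eps)
```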

iobjectspy.data.is_project_on_line_segment(point, start_point, end_point)

Determine whether the foot of the perpendicular from a known point to a known line segment falls on the segment. Returns True if it does; otherwise False.

Parameters:
  • point (Point2D) – known point
  • start_point (Point2D) – the starting point of the known line segment
  • end_point (Point2D) – the end point of the known line segment
Returns:

Returns True if the foot of the perpendicular from the point to the line falls on the segment; otherwise False.

Return type:

bool
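
This predicate amounts to computing the parameter t of the orthogonal projection of the point onto the segment's supporting line: the foot falls on the segment exactly when 0 ≤ t ≤ 1. A minimal sketch of that computation (illustrative, not the iobjectspy implementation):

```python
def is_project_on_line_segment(point, start_point, end_point):
    (px, py), (sx, sy), (ex, ey) = point, start_point, end_point
    dx, dy = ex - sx, ey - sy
    # Projection parameter along the segment: t = 0 at start_point, t = 1 at end_point.
    t = ((px - sx) * dx + (py - sy) * dy) / (dx * dx + dy * dy)
    return 0.0 <= t <= 1.0
```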

iobjectspy.data.is_perpendicular(start_point1, end_point1, start_point2, end_point2)

Determine whether the two straight lines are perpendicular.

Parameters:
  • start_point1 (Point2D) – The starting point of the first line.
  • end_point1 (Point2D) – The end point of the first line.
  • start_point2 (Point2D) – The start point of the second line.
  • end_point2 (Point2D) – The end point of the second line.
Returns:

Returns True if the two lines are perpendicular; otherwise False.

Return type:

bool
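
Both is_parallel and is_perpendicular reduce to products of the two direction vectors: parallel lines have a zero cross product, perpendicular lines a zero dot product. A hedged pure-Python sketch (illustrative only; the eps tolerance is an assumption, not part of the documented API):

```python
def direction(start, end):
    return (end[0] - start[0], end[1] - start[1])

def is_parallel(start_point1, end_point1, start_point2, end_point2, eps=1e-10):
    ax, ay = direction(start_point1, end_point1)
    bx, by = direction(start_point2, end_point2)
    return abs(ax * by - ay * bx) < eps   # zero cross product

def is_perpendicular(start_point1, end_point1, start_point2, end_point2, eps=1e-10):
    ax, ay = direction(start_point1, end_point1)
    bx, by = direction(start_point2, end_point2)
    return abs(ax * bx + ay * by) < eps   # zero dot product
```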

iobjectspy.data.nearest_point_to_vertex(vertex, geometry)

Find the closest point from the geometric object to the given point.

Parameters:
  • vertex (Point2D) – the specified point
  • geometry (Geometry) – the geometric object to search
Returns:

The point closest to the specified point on the geometric object.

Return type:

Point2D

iobjectspy.data.clip(geometry, clip_geometry, tolerance=None)

Generate the geometric object that results from clipping the operated object with the operating object. Note:

  • Only the part of the operated geometric object that falls within the operating geometric object is output as the result geometric object;

  • Clip and intersect are the same spatial operation; they differ in how the attributes of the result geometric object are handled. Clip is used only for clipping: the result retains only the non-system fields of the operated geometric object, whereas intersect can retain fields from both geometric objects according to the field settings.

  • Types of geometric objects suitable for this operation:

    • Operating geometric objects: surface;
    • Operated geometric objects: line, surface.
../_images/Geometrist_Clip.png
Parameters:
  • geometry (GeoLine or GeoRegion) – The operated geometric object; line and surface types are supported.
  • clip_geometry (GeoRegion or Rectangle) – The operating geometric object; must be a surface object.
  • tolerance (float) – Node tolerance
Returns:

The clipped result object.

Return type:

Geometry

iobjectspy.data.erase(geometry, erase_geometry, tolerance=None)

Erase the part of the operated object that overlaps the operating object. Note:

  • If the object is erased entirely, None is returned;

  • The operating geometric object defines the erase region: every part of the operated geometric object falling inside the operating geometric object is removed, and the parts falling outside are output as the result geometric object; this is the opposite of the clip operation;

  • Types of geometric objects suitable for this operation:

    • Operating geometric objects: surface;
    • Operated geometric objects: point, line, surface.
../_images/Geometrist_Erase.png
Parameters:
  • geometry (GeoPoint or GeoLine or GeoRegion) – The operated geometric object; point, line, and area types are supported
  • erase_geometry (GeoRegion or Rectangle) – The operating geometric object; must be an area object.
  • tolerance (float) – Node tolerance
Returns:

The geometric object after the erase operation.

Return type:

Geometry

iobjectspy.data.identity(geometry, identity_geometry, tolerance=None)

Perform the identity operation on the operated object. After the operation is executed, the operated geometric object contains the geometry from the operating geometric object. Note:

  • The identity operation first intersects the operated geometric object with the operating geometric object, then merges the intersection result with the operated geometric object.

    • If the operated geometric object is of point type, the result geometric object is the operated geometric object;
    • If the operated geometric object is of line type, the result geometric object is the operated geometric object, but the part that intersects the operating geometric object is broken at the intersections;
    • If the operated geometric object is of surface type, the result geometric object retains all polygons of the operated geometric object within the common boundary, and the intersection with the operating geometric object is split into multiple objects.
  • Types of geometric objects suitable for this operation:

    • Operating geometric objects: surface;
    • Operated geometric objects: point, line, surface.

../_images/Geometrist_Identity.png
Parameters:
  • geometry (GeoPoint or GeoLine or GeoRegion) – The operated geometric object; point, line, and area objects are supported.
  • identity_geometry (GeoRegion or Rectangle) – The operating geometric object; must be a surface object.
  • tolerance (float) – Node tolerance
Returns:

The geometric object after the identity operation

Return type:

Geometry

iobjectspy.data.intersect(geometry1, geometry2, tolerance=None)

Intersect two geometric objects and return their intersection. Currently only line-line intersection and surface-surface intersection are supported, as shown in the following figure:

../_images/Geometrist_Intersect.png

Note that if two objects have multiple separate common parts, the result of the intersection will be a complex object.

Parameters:
  • geometry1 (GeoLine or GeoRegion) – The first geometric object of the intersection operation; line and area types are supported.
  • geometry2 (GeoLine or GeoRegion) – The second geometric object of the intersection operation; line and area types are supported.
  • tolerance (float) – node tolerance; currently only effective for line-line intersection.
Returns:

The geometric object after the intersection operation.

Return type:

Geometry

iobjectspy.data.intersect_line(start_point1, end_point1, start_point2, end_point2, is_extended)

Return the intersection of two line segments (straight lines).

Parameters:
  • start_point1 (Point2D) – The starting point of the first line.
  • end_point1 (Point2D) – The end point of the first line.
  • start_point2 (Point2D) – The start point of the second line.
  • end_point2 (Point2D) – The end point of the second line.
  • is_extended (bool) – Whether to extend the line segments: if True, the calculation treats them as infinite straight lines; otherwise as the segments themselves.
Returns:

The intersection of two line segments (straight lines).

Return type:

Point2D
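
The underlying computation is a standard parametric line-line intersection: solve for the parameters t and u along each segment and, when is_extended is False, require both to lie in [0, 1]. A hedged pure-Python sketch (illustrative only; it returns a plain (x, y) tuple, and None where the documented API would signal no intersection):

```python
def intersect_line(start_point1, end_point1, start_point2, end_point2, is_extended):
    """Intersection of two segments (or of their supporting lines if is_extended).
    Returns an (x, y) tuple, or None if the lines are parallel or the segments miss."""
    (x1, y1), (x2, y2) = start_point1, end_point1
    (x3, y3), (x4, y4) = start_point2, end_point2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:                       # parallel or collinear: no unique intersection
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if not is_extended and not (0 <= t <= 1 and 0 <= u <= 1):
        return None                  # intersection falls outside a segment
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```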

iobjectspy.data.intersect_polyline(points1, points2)

Return the intersection of two polylines.

Parameters:
  • points1 (list[Point2D] or tuple[Point2D]) – The point string forming the first polyline.
  • points2 (list[Point2D] or tuple[Point2D]) – The point string forming the second polyline.
Returns:

The intersection point of the polyline formed by the point string.

Return type:

list[Point2D]

iobjectspy.data.union(geometry1, geometry2, tolerance=None)

Combine two objects. After the union, two area objects are split into separate polygons at their intersection. Note:

  • The two geometric objects to be combined must be of the same type; the current version only supports surface and line types.

  • Types of geometric objects suitable for this operation:

    • Operating geometric objects: surface, line;
    • Operated geometric objects: surface, line.
../_images/Geometrist_Union.png
Parameters:
  • geometry1 (GeoLine or GeoRegion) – The operated geometric object.
  • geometry2 (GeoLine or GeoRegion) – The operating geometric object.
  • tolerance (float) – Node tolerance
Returns:

The geometric object after the union operation. Only simple line objects can be generated.

Return type:

Geometry

iobjectspy.data.update(geometry, update_geometry, tolerance=None)

Update the operated object: the operating geometric object replaces the part of the operated geometric object that it overlaps, which amounts to erasing and then pasting. Both the operating object and the operated object must be surface objects.

../_images/Geometrist_Update.png
Parameters:
  • geometry (GeoRegion or Rectangle) – The operated geometric object, i.e. the geometric object to be updated; must be a region object.
  • update_geometry (GeoRegion or Rectangle) – The operating geometric object used for the update; must be a surface object.
  • tolerance (float) – Node tolerance
Returns:

The geometric object after the update operation.

Return type:

GeoRegion

iobjectspy.data.xor(geometry1, geometry2, tolerance=None)

XOR two objects. That is, for each of the two geometric objects, remove the part that intersects the other geometric object and keep the rest. The two geometric objects of the XOR operation must be of the same type, and only surfaces are supported.

../_images/Geometrist_XOR.png
Parameters:
  • geometry1 (GeoRegion or Rectangle) – The operated geometric object; only the surface type is supported.
  • geometry2 (GeoRegion or Rectangle) – The operating geometric object; only the surface type is supported.
  • tolerance (float) – Node tolerance
Returns:

The result geometric object of the XOR operation.

Return type:

GeoRegion

iobjectspy.data.compute_concave_hull(points, angle=45.0)

Calculate the concave hull of a point set.

Parameters:
  • points (list[Point2D] or tuple[Point2D]) – The specified point set.
  • angle (float) – The minimum angle in the concave hull; a value of 45 to 75 degrees is recommended. The larger the angle, the closer the concave hull approaches the shape of the convex hull; the smaller the angle, the sharper the angles between adjacent vertices of the resulting concave polygon.
Returns:

Returns the concave polygon that contains all points of the specified point set.

Return type:

GeoRegion

iobjectspy.data.compute_convex_hull(points)

Calculate the convex hull of the geometric object, that is, the smallest enclosing convex polygon. Returns a simple convex polygon.

Parameters:points (list[Point2D] or tuple[Point2D] or Geometry) – point set
Returns:The smallest enclosing convex polygon.
Return type:GeoRegion
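
A common way to compute a convex hull of a point set is Andrew's monotone-chain algorithm; the sketch below is a self-contained pure-Python illustration of that technique working on (x, y) tuples (not the iobjectspy implementation, which returns a GeoRegion):

```python
def compute_convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of OA and OB; > 0 means a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point is the other chain's first point; drop the duplicates.
    return lower[:-1] + upper[:-1]
```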
iobjectspy.data.compute_geodesic_area(geometry, prj)

Calculate the latitude and longitude area.

note:

  • When using this method to calculate latitude-longitude area, the projection coordinate system object (PrjCoordSys) specified by the prj parameter must have its type set, via the object's set_type method, to the geographic longitude-latitude coordinate system (PrjCoordSysType.PCS_EARTH_LONGITUDE_LATITUDE); otherwise the result is wrong.
Parameters:
  • geometry (GeoRegion) – The specified area object whose latitude and longitude area needs to be calculated.
  • prj (PrjCoordSys) – Specified projection coordinate system type
Returns:

latitude and longitude area

Return type:

float

iobjectspy.data.compute_geodesic_distance(points, major_axis, flatten)

Calculate the length of a geodesic. The shortest line between two points on a surface is called a geodesic; on a sphere, geodesics are great circles. A geodesic, also known as a "geodetic line", is the shortest curve between two points on the earth ellipsoid. At every point on a geodesic, the principal curvature direction coincides with the surface normal at that point. On the surface of a sphere it is a great-circle arc, and on a plane it is a straight line. In geodetic surveying, the geodesic usually replaces the normal section line when studying and computing various problems on the ellipsoid.

A geodesic is a curve whose geodesic curvature is zero at every point on the surface.

Parameters:
  • points (list[Point2D] or tuple[Point2D]) – The latitude and longitude coordinate point string that constitutes the geodesic line.
  • major_axis (float) – The major axis of the ellipsoid where the geodesic line is located.
  • flatten (float) – The flattening of the ellipsoid where the geodesic line is located.
Returns:

The length of the geodesic.

Return type:

float
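
The exact ellipsoidal computation is involved; as a rough illustration of the idea only, the sketch below sums great-circle (spherical) distances along a lat/lon point string with the haversine formula. It ignores the flattening parameter entirely, so it is an approximation, not the iobjectspy algorithm (the default radius is an assumed semi-major axis):

```python
import math

def great_circle_length(points, radius=6378137.0):
    """Sum of great-circle distances along a (lon, lat) point string in degrees.
    Spherical approximation: true geodesic lengths on an ellipsoid differ slightly."""
    total = 0.0
    for (lon1, lat1), (lon2, lat2) in zip(points, points[1:]):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = p2 - p1
        dl = math.radians(lon2 - lon1)
        # Haversine formula for the central angle between the two points.
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        total += 2 * radius * math.asin(math.sqrt(a))
    return total
```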

iobjectspy.data.compute_geodesic_line(start_point, end_point, prj, segment=18000)

Calculate the geodesic line according to the specified start and end points, and return the result line object.

Parameters:
  • start_point (Point2D) – The start point of the input geodesic.
  • end_point (Point2D) – The end point of the input geodesic.
  • prj (PrjCoordSys) – Spatial reference coordinate system.
  • segment (int) – the number of arc segments used to fit the semicircle
Returns:

If the geodesic is constructed successfully, the geodesic line object is returned; otherwise None

Return type:

GeoLine

iobjectspy.data.compute_geodesic_line2(start_point, angle, distance, prj, segment=18000)

Calculate the geodesic line according to the specified starting point, azimuth angle and distance, and return the result line object.

Parameters:
  • start_point (Point2D) – The start point of the input geodesic.
  • angle (float) – The azimuth angle of the input geodesic. It can be positive or negative.
  • distance (float) – The input geodesic length. The unit is meters.
  • prj (PrjCoordSys) – Spatial reference coordinate system.
  • segment (int) – the number of arc segments used to fit the semicircle
Returns:

If the geodesic is constructed successfully, the geodesic line object is returned; otherwise None

Return type:

GeoLine

iobjectspy.data.compute_parallel(geo_line, distance)

Compute a parallel line of a known polyline at the given distance, and return the parallel line.

Parameters:
  • geo_line (GeoLine) – known polyline object.
  • distance (float) – The distance between the parallel lines to be sought.
Returns:

Parallel lines.

Return type:

GeoLine

iobjectspy.data.compute_parallel2(point, start_point, end_point)

Find a line that passes through a specified point and is parallel to a known line.

Parameters:
  • point (Point2D) – Any point outside the straight line.
  • start_point (Point2D) – A point on the line.
  • end_point (Point2D) – Another point on the line.
Returns:

parallel lines

Return type:

GeoLine
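
Geometrically, the parallel line through a point is obtained by translating the known line by the vector from a point on the line to the given point. A minimal sketch of that construction on plain coordinate tuples (illustrative names; the documented API works with Point2D and returns a GeoLine):

```python
def compute_parallel_through_point(point, start_point, end_point):
    """Line through `point` parallel to the line (start_point, end_point):
    translate both defining points by the vector from start_point to point."""
    dx = point[0] - start_point[0]
    dy = point[1] - start_point[1]
    return ((start_point[0] + dx, start_point[1] + dy),
            (end_point[0] + dx, end_point[1] + dy))
```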

iobjectspy.data.compute_perpendicular(point, start_point, end_point)

Calculate the perpendicular from the known point to the known line.

Parameters:
  • point (Point2D) – a known point.
  • start_point (Point2D) – A point on the line.
  • end_point (Point2D) – Another point on the line.
Returns:

point to line perpendicular

Return type:

GeoLine

iobjectspy.data.compute_perpendicular_position(point, start_point, end_point)

Calculate the foot of the perpendicular from a known point to a known line.

Parameters:
  • point (Point2D) – a known point.
  • start_point (Point2D) – A point on the line.
  • end_point (Point2D) – Another point on the line.
Returns:

The foot of the perpendicular, a point on the line

Return type:

GeoLine
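
The foot of the perpendicular is the orthogonal projection of the point onto the line, computed from the projection parameter t along the line's direction vector. A hedged pure-Python sketch of that computation (illustrative, not the iobjectspy implementation):

```python
def perpendicular_foot(point, start_point, end_point):
    """Orthogonal projection of `point` onto the infinite line through the two points."""
    (px, py), (sx, sy), (ex, ey) = point, start_point, end_point
    dx, dy = ex - sx, ey - sy
    t = ((px - sx) * dx + (py - sy) * dy) / (dx * dx + dy * dy)
    return (sx + t * dx, sy + t * dy)
```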

iobjectspy.data.compute_distance(geometry1, geometry2)

Find the distance between two geometric objects. Note: the geometric objects can only be points, lines, and areas. The distance here is the shortest distance between the edges of the two geometric objects; for example, the shortest distance from a point to a line is the perpendicular distance from the point to the line.

Parameters:
  • geometry1 (Geometry) – the first geometric object
  • geometry2 (Geometry) – the second geometric object
Returns:

the distance between two geometric objects

Return type:

float

iobjectspy.data.point_to_segment_distance(point, start_point, end_point)

Calculate the distance from a known point to a known line segment.

Parameters:
  • point (Point2D) – The known point.
  • start_point (Point2D) – The starting point of the known line segment.
  • end_point (Point2D) – The end point of the known line segment.
Returns:

The distance from the point to the line segment. If the foot of the perpendicular from the point to the segment is not on the segment, the distance from the point to the nearer endpoint of the segment is returned.

Return type:

float
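
The endpoint behaviour described above corresponds to clamping the projection parameter to [0, 1] before measuring. A minimal pure-Python sketch of that computation (illustrative, not the iobjectspy implementation):

```python
import math

def point_to_segment_distance(point, start_point, end_point):
    """Distance from a point to a segment: clamp the projection parameter to [0, 1]
    so points projecting past an endpoint measure to that endpoint."""
    (px, py), (sx, sy), (ex, ey) = point, start_point, end_point
    dx, dy = ex - sx, ey - sy
    t = ((px - sx) * dx + (py - sy) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    fx, fy = sx + t * dx, sy + t * dy     # nearest point on the segment
    return math.hypot(px - fx, py - fy)
```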

iobjectspy.data.resample(geometry, distance, resample_type='RTBEND')

Resample geometric objects. Resampling a geometric object removes some of its nodes according to certain rules in order to simplify the data (as shown in the figure below); the result may differ depending on the resampling method used. SuperMap provides two methods for resampling geometric objects: the light-barrier method and the Douglas-Peucker method. For a detailed introduction to these two methods, please refer to the VectorResampleType class.

../_images/VectorResample.png
Parameters:
  • geometry (GeoLine or GeoRegion) – The specified geometric object to be resampled. Support line objects and area objects.
  • distance (float) – The specified resampling tolerance.
  • resample_type (VectorResampleType or str) – The specified resampling method.
Returns:

The geometric object after resampling.

Return type:

GeoLine or GeoRegion
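
Of the two methods mentioned above, Douglas-Peucker is the classic one: recursively keep the vertex farthest from the chord between the endpoints whenever it exceeds the tolerance. A self-contained sketch on plain (x, y) tuples, for open polylines whose endpoints differ (illustrative, not the iobjectspy implementation):

```python
import math

def douglas_peucker(points, tolerance):
    """Recursive Douglas-Peucker simplification of an open polyline."""
    if len(points) < 3:
        return list(points)
    (sx, sy), (ex, ey) = points[0], points[-1]
    # Find the vertex farthest from the chord joining the endpoints.
    max_dist, index = 0.0, 0
    chord = math.hypot(ex - sx, ey - sy)
    for i in range(1, len(points) - 1):
        px, py = points[i]
        dist = abs((ex - sx) * (py - sy) - (ey - sy) * (px - sx)) / chord
        if dist > max_dist:
            max_dist, index = dist, i
    if max_dist <= tolerance:            # everything within tolerance: keep endpoints
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right             # drop the duplicated split vertex
```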

iobjectspy.data.smooth(points, smoothness)

Smooth the specified point string object

For more information about smoothing, please refer to the introduction of the iobjectspy.analyst.smooth() method.

Parameters:
  • points (list[Point2D] or tuple[Point2D] or GeoLine or GeoRegion) – The point string that needs to be smoothed.
  • smoothness (int) – smoothness coefficient. The valid range is greater than or equal to 2; setting a value less than 2 throws an exception. The larger the smoothness coefficient, the more nodes on the boundary of the resulting line or area object and the smoother it is. The recommended value range is [2, 10].
Returns:

Smooth processing result point string.

Return type:

list[Point2D] or GeoLine or GeoRegion
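
To illustrate the general idea of smoothing a point string — inserting nodes so corners become rounder — here is a sketch of Chaikin's corner-cutting scheme on (x, y) tuples. This is only one well-known smoothing technique, not necessarily the algorithm iobjectspy uses:

```python
def chaikin_smooth(points, iterations=2):
    """Chaikin's corner cutting: each pass replaces every segment by its 1/4 and 3/4
    points, rounding corners; more passes give a smoother, denser point string."""
    pts = list(points)
    for _ in range(iterations):
        out = [pts[0]]                                   # keep the first point
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            out.append((0.75 * x1 + 0.25 * x2, 0.75 * y1 + 0.25 * y2))
            out.append((0.25 * x1 + 0.75 * x2, 0.25 * y1 + 0.75 * y2))
        out.append(pts[-1])                              # keep the last point
        pts = out
    return pts
```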

iobjectspy.data.compute_default_tolerance(prj)

Calculate the default tolerance of the coordinate system.

Parameters:prj (PrjCoordSys) – Specified projected coordinate system type object.
Returns:Return the default tolerance value of the specified projected coordinate system type object.
Return type:float
iobjectspy.data.split_line(source_line, split_geometry, tolerance=1e-10)

Use point, line, or area objects to split (break) line objects. Below, a simple line object illustrates the three situations:

  • Use a point object to break a line object. The original line object is broken into two line objects at the position of the point. As shown in the figure below, a point (black) breaks the line (blue), and the result is two line objects (the red line and the green line).
../_images/PointSplitLine.png
  • Use a line object to split (break) a line object. Two situations arise:

    • When the splitting line is a single line segment, the operated line is broken into two line objects at the intersection point with the splitting line. As shown in the figure below, the black line is the splitting line; after splitting, the original line object becomes two line objects (the red line and the green line).
    ../_images/LineSplitLine_1.png
    • When the splitting line is a polyline, it may intersect the operated line at multiple points. The operated line is then broken at all intersections, and the resulting segments in odd and even positions are merged in order to produce two line objects. That is, splitting a line with a polyline may produce complex line objects. The following figure shows this situation: after splitting, the red line and the green line are each a complex line object.
    ../_images/LineSplitLine_2.png
  • Use an area object to split a line object. This is similar to splitting with a polyline: the operated line is broken at all intersections with the splitting surface, and the segments in odd and even positions are then merged to produce two line objects. At least one complex line object is produced in this situation. In the figure below, the area object (light orange) splits the line object into two complex line objects, red and green.

../_images/RegionSplitLine.png

note:

  1. If the line object to be split is a complex object, then whenever the splitting object passes through a sub-object, that sub-object is split into two line objects; splitting a complex line object may therefore produce multiple line objects.
  2. If the line or area object used for splitting is self-intersecting, the split does not fail, but the result may be incorrect. Therefore, try to use a line or area object that does not intersect itself.
Parameters:
  • source_line (GeoLine) – the line object to be divided (interrupted)
  • split_geometry (GeoPoint or GeoRegion or GeoLine or Rectangle or Point2D) – The object used to split (break) line objects. It supports point, line and area objects.
  • tolerance (float) – The tolerance used to determine whether a point object lies on the line. If the perpendicular distance from the point to the line is greater than the tolerance, the point object used for breaking is considered invalid and the break is not performed.
Returns:

The list of divided line objects.

Return type:

list[GeoLine]
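The odd/even merge rule described above can be sketched in plain Python. This is an illustration of the rule only, not iobjectspy code; `merge_odd_even` and the piece labels are hypothetical names:

```python
# Pure-Python sketch of the odd/even merge rule: when a polyline dividing
# line crosses the operation line several times, the pieces between
# consecutive intersections are assigned alternately to two result lines.

def merge_odd_even(segments):
    """Split a list of consecutive line pieces into two groups: pieces at
    even 0-based positions form one result line, pieces at odd positions
    form the other."""
    first = segments[0::2]   # pieces 0, 2, 4, ... -> one (possibly complex) line
    second = segments[1::2]  # pieces 1, 3, 5, ... -> the other line
    return first, second

# Four pieces produced by three intersection points:
pieces = ['p0', 'p1', 'p2', 'p3']
a, b = merge_odd_even(pieces)
print(a)  # ['p0', 'p2']
print(b)  # ['p1', 'p3']
```

With an odd number of intersections the two result lines simply hold unequal numbers of pieces; the alternation rule is the same.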

iobjectspy.data.split_region(source_region, split_geometry)

Use a line or area geometry object to divide an area geometry object. Note: the dividing object must have at least two intersection points with the object being divided, otherwise the division fails.

Parameters:
  • source_region (GeoRegion) – the area object to be divided
  • split_geometry (GeoLine or GeoRegion) – the line or area object used to divide the area object
Returns:

The divided area objects. A successful division produces two area objects.

Return type:

tuple[GeoRegion]

iobjectspy.data.georegion_to_center_line(source_region, pnt_from=None, pnt_to=None)

Extract the centerline of an area object, generally used to extract the centerline of a river. If the area contains island holes, they are bypassed during extraction along the shortest path. As shown below.

../_images/RegionToCenterLine_1.png

If the area object is not a simple long strip but has a branching structure, the extracted centerline is the longest branch. As shown below.

../_images/RegionToCenterLine_2.png

If the extracted centerline is not the desired one, you can specify a start point and an end point, which is generally useful when extracting the centerline of a river, especially of its main stream. If the area contains island holes, they are bypassed during extraction along the shortest path. As shown below.

../_images/RegionToCenterLine_3.png

The start and end points specified by pnt_from and pnt_to are only reference points for the extraction: the extracted centerline may not start exactly at the specified start point or end exactly at the specified end point. The system generally finds a nearby point to use as the actual start or end of the extraction. Also note:

  • If the start point and end point are the same point, it is equivalent to not specifying them, and the longest centerline of the area object is extracted.
  • If the specified start or end point is outside the area object, the extraction fails.
Parameters:
  • source_region (GeoRegion) – Specifies the region object whose centerline is to be extracted.
  • pnt_from (Point2D) – Specifies the starting point for extracting the center line.
  • pnt_to (Point2D) – Specifies the end point of the extracted center line.
Returns:

The extracted centerline is a two-dimensional line object

Return type:

GeoLine

iobjectspy.data.orthogonal_polygon_fitting(geometry, width_threshold, height_threshold)

Performs right-angle (orthogonal) polygon fitting on an area object. If the distance from a series of consecutive nodes to the lower boundary of the minimum-area bounding rectangle is greater than height_threshold, and the total width of those nodes is greater than width_threshold, the consecutive nodes are fitted.

Parameters:
  • geometry (GeoRegion or Rectangle) – The polygon object to fit; only simple area objects are supported
  • width_threshold (float) – The threshold value from the point to the left and right boundary of the minimum area bounding rectangle
  • height_threshold (float) – The threshold value from the point to the upper and lower boundary of the minimum area bounding rectangle
Returns:

The fitted right-angle polygon object; returns None on failure.

Return type:

GeoRegion

class iobjectspy.data.PrjCoordSys(prj_type=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The projected coordinate system class. A projected coordinate system is composed of a map projection method, projection parameters, a coordinate unit and a geographic coordinate system. SuperMap Objects Java provides many predefined projected coordinate systems that can be used directly; users can also define their own. A projected coordinate system is defined on a two-dimensional plane: unlike a geographic coordinate system, which locates ground points by longitude and latitude, a projected coordinate system locates them by X and Y coordinates. Every projected coordinate system is based on a geographic coordinate system.

Construct projected coordinate system objects

Parameters:prj_type (PrjCoordSysType or str) – Projected coordinate system type
clone()

Copy an object

Return type:PrjCoordSys
coord_unit

Unit – Return the coordinate unit of the projected coordinate system. The coordinate unit can differ from the distance unit (distance_unit): for example, under longitude-latitude coordinates the coordinate unit is degrees while the distance unit can be meters, kilometers, etc.; for ordinary planar or projected coordinates the two units can also differ.

distance_unit

Unit – Distance (length) unit

static from_epsg_code(code)

Construct a projected coordinate system object from an EPSG code

Parameters:code (int) – EPSG code
Return type:PrjCoordSys
from_file(file_path)

Read projection coordinate information from xml file or prj file

Parameters:file_path (str) – file path
Returns:Return True if the build is successful, otherwise False
Return type:bool
static from_wkt(wkt)

Construct projected coordinate system objects from WKT strings

Parameters:wkt (str) – WKT string
Return type:PrjCoordSys
from_xml(xml)

Read projection information from xml string

Parameters:xml (str) – xml string
Returns:Return True if the construction is successful, otherwise Return False.
Return type:bool
geo_coordsys

GeoCoordSys – the geographic coordinate system object of the projected coordinate system

static make(prj)

Construct a PrjCoordSys object. Supports construction from an EPSG code, a PrjCoordSysType value, an XML or WKT string, or a projection information file. Note that an integer argument is always interpreted as an EPSG code, never as an integer value of PrjCoordSysType.

Parameters:prj (int or str or PrjCoordSysType) – projection information
Returns:Projection object
Return type:PrjCoordSys
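The dispatch rule that make() describes can be illustrated with a small pure-Python sketch. The `classify_prj_argument` function and its branch labels are hypothetical and only mirror the documented rule (an int is always an EPSG code); the real method returns a PrjCoordSys object rather than a label:

```python
# Illustrative sketch of the make() dispatch rule: an int is always
# treated as an EPSG code; a string is inspected to decide between XML,
# WKT, a projection file path, or a PrjCoordSysType name.

def classify_prj_argument(prj):
    if isinstance(prj, int):
        return 'epsg'                       # never a PrjCoordSysType value
    if isinstance(prj, str):
        s = prj.strip()
        if s.startswith('<'):
            return 'xml'
        if s.upper().startswith(('PROJCS', 'GEOGCS')):
            return 'wkt'
        if s.lower().endswith(('.xml', '.prj')):
            return 'file'
        return 'type-name'
    raise TypeError('unsupported projection argument')

print(classify_prj_argument(4326))             # epsg
print(classify_prj_argument('GEOGCS["WGS 84"]'))  # wkt
```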
name

str – the name of the projected coordinate system object

prj_parameter

PrjParameter – the projection parameter of the projected coordinate system object

projection

Projection – The projection method of the projected coordinate system, such as conformal conic projection or equidistant azimuthal projection.

set_geo_coordsys(geo_coordsys)

Set the geographic coordinate system object of the projected coordinate system. Each projection system depends on a geographic coordinate system. This method is only valid when the coordinate system type is a custom projection coordinate system and a custom geographic coordinate system.

Parameters:geo_coordsys (GeoCoordSys) –
Returns:self
Return type:PrjCoordSys
set_name(name)

Set the name of the projected coordinate system object

Parameters:name (str) – the name of the projected coordinate system object
Returns:self
Return type:PrjCoordSys
set_prj_parameter(parameter)

Set the projection parameters of the projection coordinate system object.

Parameters:parameter (PrjParameter) – the projection parameter of the projection coordinate system object
Returns:self
Return type:PrjCoordSys
set_projection(projection)

Set the projection method of the projected coordinate system, such as conformal conic projection or equidistant azimuthal projection.

Parameters:projection (Projection) –
Returns:self
Return type:PrjCoordSys
set_type(prj_type)

Set the type of projected coordinate system

Parameters:prj_type (PrjCoordSysType or str) – Projected coordinate system type
Returns:self
Return type:PrjCoordSys
to_epsg_code()

Return the EPSG code of the current object

Return type:int
to_file(file_path)

Output projected coordinate system information to a file. Only XML output is supported.

Parameters:file_path (str) – The full path of the XML file.
Returns:Return True if the export is successful, otherwise False.
Return type:bool
to_wkt()

Output current projection information as WKT string

Return type:str
to_xml()

Convert the object of the projected coordinate system class to a string in XML format.

Returns:XML string representing the object of the projected coordinate system class
Return type:str
type

PrjCoordSysType – Projected coordinate system type

class iobjectspy.data.GeoCoordSys

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The geographic coordinate system class.

A geographic coordinate system is composed of a geodetic reference system, a central meridian and a coordinate unit. The unit is generally degrees, though degrees-minutes-seconds can also be used. The east-west (horizontal) range is -180 degrees to 180 degrees; the north-south (vertical) range is -90 degrees to 90 degrees.

Geographic coordinates are spherical coordinates that use longitude and latitude to indicate the location of ground points. In a spherical system, a circle formed by the intersection of a plane parallel to the equatorial plane with the earth ellipsoid is called a circle of latitude, or parallel, and indicates the east-west direction. A circle formed by the intersection of a plane through the earth's rotation axis with the ellipsoid surface is a meridian circle, or line of longitude, and indicates the north-south direction. These lines surrounding the earth are called the latitude-longitude graticule.

Longitude and latitude are generally expressed in degrees (and in degrees, minutes and seconds when necessary). The longitude of a point is the dihedral angle between the meridian plane of that point and the prime meridian plane; the prime meridian is defined as 0 degrees. From the prime meridian, 0 to 180 degrees eastward is east longitude, denoted "E"; 0 to -180 degrees westward is west longitude, denoted "W". The latitude of a point is the angle between the line from that point to the earth's center and the equatorial plane; the equator is defined as 0 degrees. From the equator, 0 to 90 degrees northward is north latitude, denoted "N"; 0 to -90 degrees southward is south latitude, denoted "S".
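As a minimal sketch of these sign conventions (pure Python, independent of iobjectspy; the helper names are hypothetical):

```python
# Signed decimal degrees -> hemisphere-labelled strings, following the
# convention above: east/north positive, west/south negative.

def format_longitude(lon):
    assert -180.0 <= lon <= 180.0
    return f"{abs(lon):g}\u00b0{'E' if lon >= 0 else 'W'}"

def format_latitude(lat):
    assert -90.0 <= lat <= 90.0
    return f"{abs(lat):g}\u00b0{'N' if lat >= 0 else 'S'}"

print(format_longitude(116.4))  # 116.4°E
print(format_latitude(-33.9))   # 33.9°S
```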

clone()

Copy object

Return type:GeoCoordSys
coord_unit

Unit – Return the unit of the geographic coordinate system. The default value is DEGREE

from_xml(xml)

Construct a geographic coordinate system object from the specified XML string; returns True on success

Parameters:xml (str) – XML string
Return type:bool
geo_datum

GeoDatum – Return the object of the geodetic reference system

geo_prime_meridian

GeoPrimeMeridian – Return the central meridian object

geo_spatial_ref_type

GeoSpatialRefType – Return the type of spatial coordinate system.

name

str – Return the name of the geographic coordinate system object

set_coord_unit(unit)

Set the unit of the geographic coordinate system.

Parameters:unit (Unit or str) – the unit of the geographic coordinate system
Returns:self
Return type:GeoCoordSys
set_geo_datum(datum)

Set the geodetic reference system object

Parameters:datum (GeoDatum) –
Returns:self
Return type:GeoCoordSys
set_geo_prime_meridian(prime_meridian)

Set the central meridian object

Parameters:prime_meridian (GeoPrimeMeridian) –
Returns:self
Return type:GeoCoordSys
set_geo_spatial_ref_type(spatial_ref_type)

Set the type of spatial coordinate system.

Parameters:spatial_ref_type (GeoSpatialRefType or str) – spatial coordinate system type
Returns:self
Return type:GeoCoordSys
set_name(name)

Set the name of the geographic coordinate system object

Parameters:name (str) – the name of the geographic coordinate system object
Returns:self
Return type:GeoCoordSys
set_type(coord_type)

Set geographic coordinate system type

Parameters:coord_type (GeoCoordSysType or str) – geographic coordinate system type
Returns:self
Return type:GeoCoordSys
to_xml()

Convert the object of the geographic coordinate system into a string in XML format.

Return type:str
type

GeoCoordSysType – Return the type of geographic coordinate system

class iobjectspy.data.GeoDatum(datum_type=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Geodetic reference system class. This class contains the parameters of the earth ellipsoid. The earth ellipsoid only describes the size and shape of the earth; to describe the position of features on the earth more accurately, a geodetic reference system must be introduced. The geodetic reference system determines the position of the earth ellipsoid relative to the center of the earth, provides a frame of reference for measuring surface features, and determines the origin and orientation of the latitude and longitude lines on the surface. It takes the center of the earth ellipsoid as its origin. The ellipsoid of a regional geodetic reference system is more or less offset from the true center of the earth, and the coordinates of surface features are relative to the center of that ellipsoid. At present WGS84 is widely used as the basic frame for geodetic surveying. Different geodetic reference systems are suitable for different countries and regions; no single geodetic reference system suits all regions.

Construct a geodetic reference system object

Parameters:datum_type (GeoDatumType or str) – type of geodetic reference system
clone()

Copy object

Return type:GeoDatum
from_xml(xml)

Construct a GeoDatum object based on the XML string, and return True if it succeeds.

Parameters:xml (str) –
Return type:bool
geo_spheroid

GeoSpheroid – Earth ellipsoid object

name

str – the name of the geodetic reference system object

set_geo_spheroid(geo_spheroid)

Set the earth ellipsoid object. It can only be set when the geodetic reference system type is a custom type. A sphere or an ellipsoid is usually used to describe the shape and size of the earth: sometimes, for convenience of calculation, the earth is treated as a sphere, but more often as an ellipsoid. In general, at map scales smaller than 1:1,000,000 the earth can be assumed to be a sphere, because the difference between a sphere and an ellipsoid is almost indistinguishable at that scale; at scales of 1:1,000,000 or larger, where higher accuracy is required, an ellipsoid is needed to approximate the earth. An ellipsoid is based on an ellipse, so two axes express the size of the earth: the major axis (equatorial radius) and the minor axis (polar radius).

Parameters:geo_spheroid (GeoSpheroid) – Earth ellipsoid object
Returns:self
Return type:GeoDatum
set_name(name)

Set the name of the geodetic reference system object

Parameters:name (str) – the name of the geodetic reference system object
Returns:self
Return type:GeoDatum
set_type(datum_type)

Set the type of geodetic reference system. When the geodetic reference system is customized, the user needs to specify the ellipsoid parameters separately; other values are predefined by the system, and the user does not need to specify the ellipsoid parameters. See: py:class:GeoDatumType.

Parameters:datum_type (GeoDatumType or str) – type of geodetic reference system
Returns:self
Return type:GeoDatum
to_xml()

Convert the object of the geodetic reference system into a string in XML format

Return type:str
type

GeoDatumType – Type of geodetic reference system

class iobjectspy.data.GeoSpheroid(spheroid_type=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The earth ellipsoid parameter class, mainly used to describe the major radius and flattening (oblateness) of the earth ellipsoid.

A sphere or an ellipsoid is usually used to describe the shape and size of the earth. Sometimes, for convenience of calculation, the earth is treated as a sphere, but more often as an ellipsoid. In general, when the map scale is smaller than 1:1,000,000, the earth can be assumed to be a sphere, since at that scale the difference between a sphere and an ellipsoid is almost indistinguishable; at scales of 1:1,000,000 or larger, where higher accuracy is required, an ellipsoid is needed to approximate the earth. An ellipsoid is based on an ellipse, so two axes express the size of the earth: the major axis (equatorial radius) and the minor axis (polar radius).

Because the same projection method applied to the same data with different ellipsoid parameters may give very different results, appropriate ellipsoid parameters must be selected. The ellipsoid parameters used in different eras, countries and regions may differ: at present China mainly uses the Krasovsky ellipsoid parameters, while North America, Britain and France mainly use Clarke ellipsoid parameters.

Constructs the parameter class object of the earth ellipsoid

Parameters:spheroid_type (GeoSpheroidType or str) – the type of the earth spheroid parameter object
axis

float – Return the long radius of the earth ellipsoid

clone()

Copy object

Return type:GeoSpheroid
flatten

float – Return the flatness of the earth ellipsoid

from_xml(xml)

Construct an object of the earth ellipsoid parameter class from the specified XML string.

Parameters:xml (str) – XML string
Returns:return True if the build is successful, otherwise return False
Return type:bool
name

str – the name of the earth ellipsoid object

set_axis(value)

Set the major radius of the earth ellipsoid. The major radius is also called the earth's equatorial radius; from it and the flattening, the polar radius, first eccentricity, second eccentricity, etc. of the ellipsoid can be derived. The major radius can only be set when the ellipsoid type is a custom type.

Parameters:value (float) – the long radius of the earth ellipsoid
Returns:self
Return type:GeoSpheroid
set_flatten(value)

Set the flattening (oblateness) of the earth ellipsoid. It can only be set when the ellipsoid type is a custom type. The flattening reflects how round the earth ellipsoid is; it is the ratio of the difference between the semi-major and semi-minor axes to the semi-major axis.

Parameters:value (float) –
Returns:self
Return type:GeoSpheroid
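The relationships mentioned under set_axis() and set_flatten() can be checked with a few lines of plain Python. This is a standalone numerical sketch using the well-known WGS84 values as an example, not iobjectspy code:

```python
import math

# From the major radius (set_axis) and the flattening (set_flatten) one
# can derive the polar radius and the first and second eccentricities.
a = 6378137.0              # semi-major axis in metres (WGS84)
f = 1.0 / 298.257223563    # flattening, f = (a - b) / a (WGS84)

b = a * (1.0 - f)                          # semi-minor (polar) radius
e1 = math.sqrt(2.0 * f - f * f)            # first eccentricity
e2 = math.sqrt((a * a - b * b) / (b * b))  # second eccentricity

print(round(b, 3))  # 6356752.314
```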
set_name(name)

Set the name of the earth ellipsoid object

Parameters:name (str) – the name of the earth ellipsoid object
Returns:self
Return type:GeoSpheroid
set_type(spheroid_type)

Set the type of the earth ellipsoid. When the ellipsoid type is a custom type, the user must additionally specify the major radius and flattening of the ellipsoid; the remaining values are predefined by the system and need not be specified. See also the GeoSpheroidType enumeration class.

Parameters:spheroid_type (GeoSpheroidType or str) –
Returns:self
Return type:GeoSpheroid
to_xml()

Convert the object of the earth ellipsoid parameter class to a string in XML format.

Return type:str
type

GeoSpheroidType – Return the type of the earth ellipsoid

class iobjectspy.data.GeoPrimeMeridian(meridian_type=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The central meridian class. This object is mainly used in the geographic coordinate system. The geographic coordinate system consists of three parts: the central meridian, the reference system or Datum and the angle unit.

Construct a central meridian object

Parameters:meridian_type (GeoPrimeMeridianType or str) – central meridian type
clone()

Copy object

Return type:GeoPrimeMeridian
from_xml(xml)

Construct a GeoPrimeMeridian object from the specified XML string

Parameters:xml (str) – XML string
Returns:Return True if the construction is successful, otherwise False
Return type:bool
longitude_value

float – The central meridian value, in degrees

name

str – The name of the central meridian object

set_longitude_value(value)

Set the central meridian value in degrees

Parameters:value (float) – Central meridian value in degrees
Returns:self
Return type:GeoPrimeMeridian
set_name(name)

Set the name of the central meridian object

Parameters:name (str) – the name of the central meridian object
Returns:self
Return type:GeoPrimeMeridian
set_type(meridian_type)

Set the central meridian type

Parameters:meridian_type (GeoPrimeMeridianType or str) – central meridian type
Returns:self
Return type:GeoPrimeMeridian
to_xml()

Return an XML string representing a GeoPrimeMeridian object

Return type:str
type

GeoPrimeMeridianType – Central meridian type

class iobjectspy.data.Projection(projection_type=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The projection coordinate system map projection class. Map projection is the process of converting spherical coordinates into plane coordinates.

Generally speaking, map projections can be divided by deformation properties into conformal (equal-angle), equidistant and equal-area projections, which suit different purposes: for nautical charts, conformal projection is very common, while projections with intermediate deformation properties are generally used for reference and instructional maps. Map projections can also be divided into two categories by construction method: geometric projections and non-geometric projections. A geometric projection projects the graticule on the ellipsoid onto a geometric surface, which is then unrolled into a plane; it includes azimuthal, cylindrical and conic projections. A non-geometric projection does not rely on a geometric surface: the point-to-point functional relationship between the sphere and the plane is determined by mathematical analysis according to certain conditions; it includes pseudo-azimuthal, pseudo-cylindrical, pseudo-conic and polyconic projections. For more information about projection method types, please refer to: py:class:ProjectionType

Parameters:projection_type (ProjectionType or str) –
clone()

Copy object

Return type:Projection
from_xml(xml)

Construct a projection coordinate method object based on the XML string, and return True if it succeeds.

Parameters:xml (str) – the specified XML string
Return type:bool
name

str – the name of the projection method object

set_name(name)

Set the name of the custom projection

Parameters:name (str) – the name of the custom projection
Returns:self
Return type:Projection
set_type(projection_type)

Set the type of projection method of the projection coordinate system.

Parameters:projection_type (ProjectionType or str) – the type of projection method of the projection coordinate system
Returns:self
Return type:Projection
to_xml()

Return the XML string representation of the projection method object.

Return type:str
type

ProjectionType – the type of projection method of the projection coordinate system

class iobjectspy.data.PrjParameter

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The map projection parameter class. Map projection parameters, such as the central longitude, the origin latitude, the first and second latitudes of the double standard latitude, etc.

azimuth

float – azimuth angle

central_meridian

float – The central meridian angle value. Unit: degree. The value range is -180 degrees to 180 degrees

central_parallel

float – Return the latitude value corresponding to the coordinate origin. Unit: degree. The value range is -90 degrees to 90 degrees. In conic projection, it is usually the latitude value of the southernmost point of the projection area.

clone()

Copy object

Return type:PrjParameter
false_easting

float – Coordinate horizontal offset. Unit: meter

false_northing

float – coordinate vertical offset

first_point_longitude

float – Return the longitude of the first point. Used for azimuthal or oblique projection. Unit: degree

from_xml(xml)

Construct a PrjParameter object based on the incoming XML string

Parameters:xml (str) –
Returns:return True if the build is successful, otherwise return False
Return type:bool
rectified_angle

float – Return the corrected angle in the parameter of the ProjectionType.RectifiedSkewedOrthomorphic, in radians

scale_factor

float – Return the scale factor of the projection conversion, used to reduce the error of the projection transformation. For Mercator, Gauss-Krüger and UTM projections the value is generally 0.9996

second_point_longitude

float – Return the longitude of the second point. Used for azimuthal or oblique projection. Unit: degree

set_azimuth(value)

Set the azimuth angle. Mainly used for oblique axis projection. Unit: Degree

Parameters:value (float) – azimuth
Returns:self
Return type:PrjParameter
set_central_meridian(value)

Set the central meridian angle value. Unit: degree. The value range is -180 degrees to 180 degrees.

Parameters:value (float) – The angle value of the central meridian. Unit: Degree
Returns:self
Return type:PrjParameter
set_central_parallel(value)

Set the latitude value corresponding to the origin of the coordinate. Unit: degree. The value range is -90 degrees to 90 degrees. In conic projection, it is usually the latitude value of the southernmost point of the projection area.

Parameters:value (float) – The latitude value corresponding to the origin of the coordinate
Returns:self
Return type:PrjParameter
set_false_easting(value)

Set the horizontal offset of the coordinate. Unit: meter. The value is an offset added to avoid negative coordinate values. Usually used in Gauss-Krüger, UTM and Mercator projections; the typical value is 500,000 meters.

Parameters:value (float) – The horizontal offset of the coordinate. Unit: m.
Returns:self
Return type:PrjParameter
set_false_northing(value)

Set the vertical offset of the coordinate. Unit: m. The parameter value of this method is an offset added to avoid negative values of the system coordinates. Usually used in Gauss-Krüger, UTM and Mercator projections. The general value is 1,000,000 meters.

Parameters:value (float) – The vertical offset of the coordinate. Unit: m
Returns:self
Return type:PrjParameter
set_first_point_longitude(value)

Set the longitude of the first point. Used for azimuth projection or oblique projection. Unit: Degree

Parameters:value (float) – The longitude of the first point. Unit: Degree
Returns:self
Return type:PrjParameter
set_rectified_angle(value)

Set the corrected angle in the parameter of ProjectionType.RectifiedSkewedOrthomorphic, in radians.

Parameters:value (float) – The corrected angle in the parameter of the ProjectionType.RectifiedSkewedOrthomorphic, the unit is radians
Returns:self
Return type:PrjParameter
set_scale_factor(value)

Set the scale factor for projection conversion. Used to reduce the error of projection transformation. The values of Mercator, Gauss-Krüger, and UTM projections are generally 0.9996.

Parameters:value (float) – the scale factor of the projection conversion
Returns:self
Return type:PrjParameter
set_second_point_longitude(value)

Set the longitude of the second point. Used for azimuth projection or oblique projection. Unit: degree.

Parameters:value (float) – The longitude of the second point. Unit: Degree
Returns:self
Return type:PrjParameter
set_standard_parallel1(value)

Set the latitude value of the first standard parallel. Unit: degree. Mainly used in conic projection. If it is a single standard latitude, the latitude values of the first standard latitude and the second standard latitude are the same.

Parameters:value (float) – The latitude value of the first standard parallel
Returns:self
Return type:PrjParameter
set_standard_parallel2(value)

Set the latitude value of the second standard parallel. Unit: degree. Mainly used in conic projection. If it is a single standard latitude, the latitude value of the first standard latitude is the same as that of the second standard latitude; if it is a double standard latitude, then its value cannot be the same as the value of the first standard parallel.

Parameters:value (float) – The latitude value of the second standard parallel. Unit: degree.
Returns:self
Return type:PrjParameter
standard_parallel1

float – Return the latitude value of the first standard parallel. Unit: degree. Mainly used in conic projection. For a single standard parallel, the first and second standard parallels have the same latitude value.

standard_parallel2

float – Return the latitude value of the second standard parallel. Unit: degree. Mainly used in conic projection. For a single standard parallel, the first and second standard parallels have the same latitude value; for double standard parallels, its value cannot equal that of the first standard parallel.

to_xml()

Return the XML string representation of the PrjParameter object

Return type:str
class iobjectspy.data.CoordSysTransParameter

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Projection conversion reference system conversion parameter class, usually including translation, rotation and scale factor.

When performing projection conversion, if the geographic coordinate systems of the source and target projections are different, a reference system conversion is required. SuperMap provides six commonly used reference system conversion methods, see CoordSysTransMethod for more details. Different reference system conversion methods need to specify different conversion parameters:

  • The three-parameter method (GeocentricTranslation), the Molodensky method (Molodensky) and the abridged Molodensky method (MolodenskyAbridged) are lower-accuracy conversion methods, generally usable when data accuracy requirements are not very high. These three methods require three translation parameters: the X-axis coordinate offset (set_translate_x), the Y-axis coordinate offset (set_translate_y) and the Z-axis coordinate offset (set_translate_z).
  • The position vector method (PositionVector), the geocentric seven-parameter method (CoordinateFrame) and the Bursa-Wolf method (BursaWolf) are higher-precision conversion methods. They require seven parameters: the three translation parameters above plus three rotation parameters (the X-axis rotation angle (set_rotate_x), the Y-axis rotation angle (set_rotate_y) and the Z-axis rotation angle (set_rotate_z)) and the projection scale difference parameter (set_scale_difference).
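A minimal pure-Python sketch of the seven-parameter conversion (small-angle form, Coordinate Frame rotation convention; the Position Vector method uses the opposite rotation signs). The function name and parameter values are illustrative, not part of iobjectspy:

```python
# Seven-parameter (Helmert-style) transform on geocentric X, Y, Z:
# three translations (metres), three small rotations (radians) and a
# scale difference (parts per million).

def seven_parameter_transform(x, y, z, tx, ty, tz, rx, ry, rz, scale_ppm):
    s = 1.0 + scale_ppm * 1e-6
    # Coordinate Frame rotation convention (small-angle approximation).
    xt = tx + s * (x + rz * y - ry * z)
    yt = ty + s * (-rz * x + y + rx * z)
    zt = tz + s * (ry * x - rx * y + z)
    return xt, yt, zt

# With zero rotations and zero scale difference it reduces to the
# three-parameter (geocentric translation) method:
print(seven_parameter_transform(100.0, 200.0, 300.0,
                                10.0, 20.0, 30.0,
                                0.0, 0.0, 0.0, 0.0))
# (110.0, 220.0, 330.0)
```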
clone()

Copy object

Return type:CoordSysTransParameter
from_xml(xml)

Construct a CoordSysTransParameter object based on the XML string, and return True successfully

Parameters:xml (str) –
Return type:bool
rotate_x

float – X axis rotation angle

rotate_y

float – Y-axis rotation angle

rotate_z

float – Z-axis rotation angle

rotation_origin_x

float – X coordinate of the origin of rotation

rotation_origin_y

float – Y coordinate of the origin of rotation

rotation_origin_z

float – Z coordinate of the origin of rotation

scale_difference

float – Projection scale difference. The unit is one part per million. It is used to convert between different geodetic reference systems

set_rotate_x(value)

Set the rotation angle of the X axis. Used for conversion between different geodetic reference systems. The unit is radians.

Parameters:value (float) – Rotation angle of X axis
Returns:self
Return type:CoordSysTransParameter
set_rotate_y(value)

Set the rotation angle of the Y axis. Used for conversion between different geodetic reference systems. The unit is radians.

Parameters:value (float) – Y-axis rotation angle
Returns:self
Return type:CoordSysTransParameter
set_rotate_z(value)

Set the rotation angle of the Z axis. Used for conversion between different geodetic reference systems. The unit is radians.

Parameters:value (float) – Rotation angle of Z axis
Returns:self
Return type:CoordSysTransParameter
set_rotation_origin_x(value)

Set the X coordinate of the origin of rotation

Parameters:value (float) – X coordinate of the origin of rotation
Returns:self
Return type:CoordSysTransParameter
set_rotation_origin_y(value)

Set the Y coordinate of the origin of rotation

Parameters:value (float) – Y coordinate of the origin of rotation
Returns:self
Return type:CoordSysTransParameter
set_rotation_origin_z(value)

Set the Z coordinate of the origin of rotation

Parameters:value (float) – Z coordinate of the origin of rotation
Returns:self
Return type:CoordSysTransParameter
set_scale_difference(value)

Set the projection scale difference. The unit is one part per million. Used for conversion between different geodetic reference systems

Parameters:value (float) – projection scale difference
Returns:self
Return type:CoordSysTransParameter
set_translate_x(value)

Set the coordinate offset of the X axis. The unit is meters

Parameters:value (float) – X axis coordinate offset
Returns:self
Return type:CoordSysTransParameter
set_translate_y(value)

Set the coordinate offset of the Y axis. The unit is meters

Parameters:value (float) – Y-axis coordinate offset
Returns:self
Return type:CoordSysTransParameter
set_translate_z(value)

Set the coordinate offset of the Z axis. The unit is meters

Parameters:value (float) – coordinate offset of Z axis
Returns:self
Return type:CoordSysTransParameter
to_xml()

Output the CoordSysTransParameter object as an XML string.

Return type:str
translate_x

float – Return the coordinate offset of the X axis. The unit is meters

translate_y

float – Return the coordinate offset of the Y axis. The unit is meters

translate_z

float – Return the coordinate offset of the Z axis. The unit is meters

class iobjectspy.data.CoordSysTranslator

Bases: object

Projection conversion class. Mainly used for conversion between projection coordinates and projection coordinate systems.

Projection transformation generally has three working methods: the transformation between geographic (latitude and longitude) coordinates and projection coordinates uses the forward() method, the transformation between projection coordinates and geographic (latitude and longitude) coordinates uses the inverse() method, the conversion between the two projected coordinate systems uses the convert() method.

Note: The current version does not support projection conversion of raster data. That is, within the same datasource, projection conversion only transforms the vector data. A geographic coordinate system uses longitude and latitude as the map storage unit; it is therefore a spherical coordinate system. To store digital information about the earth in a spherical coordinate system, an ellipsoid with the following features is needed: it can be quantified and calculated, with a semimajor axis, semiminor axis, flattening, prime meridian and datum.

The projected coordinate system is essentially a plane coordinate system, and the map unit is usually meters. The process of converting spherical coordinates into plane coordinates is called projection, so every projected coordinate system must carry geographic coordinate system parameters. Hence there are both conversions between geographic and projected coordinates and conversions between different projected coordinate systems.

During projection conversion, text objects (GeoText) are also converted: the character height and angle of a text object are transformed accordingly. If the user does not want such changes, the character height and angle of the converted text objects need to be corrected afterwards.

static convert(source_data, target_prj_coordsys, coordsys_trans_parameter, coord_sys_trans_method, source_prj_coordsys=None, out_data=None, out_dataset_name=None)

The input data is projected and transformed from the source projected coordinate system to the target projected coordinate system. Depending on whether valid result datasource information has been set, the source data is either modified directly or the converted result is stored in the result datasource.

Parameters:
  • source_data (DatasetVector or Geometry or list[Point2D] or list[Geometry]) – The data to be converted. Directly convert dataset, geometric objects, two-dimensional point sequences and geometric object sequences.
  • target_prj_coordsys (PrjCoordSys) – target projected coordinate system object.
  • coordsys_trans_parameter (CoordSysTransParameter) – Projected coordinate system transformation parameters, including the coordinate translations, rotation angles, and projection scale difference. For details, please refer to the CoordSysTransParameter class.
  • coord_sys_trans_method (CoordSysTransMethod) – Method of projection transformation. For details, see: py:class:CoordSysTransMethod. When performing projection conversion, if the geographic coordinate system of the source projection and the target projection are the same, the setting of this parameter has no effect.
  • source_prj_coordsys (PrjCoordSys) – Source projected coordinate system object. When the converted data is a dataset object, this parameter is invalid and the projection coordinate system information of the dataset will be used.
  • out_data (Datasource or DatasourceConnectionInfo or str) – result datasource. When the result datasource is valid, the converted result will be stored in the new result dataset, otherwise the point coordinates of the original data will be directly modified.
  • out_dataset_name (str) – The name of the result dataset. Only works when out_data is valid
Returns:

According to whether the result datasource object is set:

-If the result datasource object is set and the conversion is successful, the converted result is written to the result dataset and the result dataset name or result dataset object is returned. If the conversion fails, None is returned.
-If the result datasource object is not set and the conversion is successful, the point coordinates of the input source data are modified directly and True is returned; otherwise False is returned.

Return type:

DatasetVector or str or bool
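For example, a dataset could be converted with the position vector method as follows (the file paths, dataset names and parameter values are illustrative assumptions; the target projection is assumed here to be taken from another dataset's prj_coordsys property):

>>> ds = Workspace().open_datasource('E:/data.udb')
>>> target_prj = ds['base_map'].prj_coordsys
>>> param = CoordSysTransParameter()
>>> param.set_translate_x(-73.0).set_translate_y(-45.0).set_translate_z(-48.0)
>>> result = CoordSysTranslator.convert(ds['data'], target_prj, param,
...                                     CoordSysTransMethod.PositionVector,
...                                     out_data='E:/convert_out.udb',
...                                     out_dataset_name='data_converted')
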

static forward(data, prj_coordsys, out_data=None, out_dataset_name=None)

In the same geographic coordinate system, this method is used to convert the two-dimensional point object in the specified Point2D list from geographic coordinates to projection coordinates

Parameters:
  • data (list[Point2D] or tuple[Point2D]) – a list of 2D points to be converted
  • prj_coordsys (PrjCoordSys) – the projected coordinate system where the two-dimensional point object is located
  • out_data (Datasource or DatasourceConnectionInfo or str) – The result datasource object; optionally save the converted points to this datasource. If it is None, the list of converted points is returned
  • out_dataset_name (str) – the name of the result dataset; only works when out_data is valid
Returns:

Return None if the conversion fails. If the conversion is successful, if a valid out_datasource is set, the result dataset or the name of the dataset will be returned; otherwise, the list of points obtained after the conversion will be returned.

Return type:

DatasetVector or str or list[Point2D]

static inverse(data, prj_coordsys, out_data=None, out_dataset_name=None)

In the same projected coordinate system, this method is used to convert the two-dimensional point objects in the specified Point2D list from projected coordinates to geographic coordinates.

Parameters:
  • data (list[Point2D] or tuple[Point2D]) – a list of 2D points to be converted
  • prj_coordsys (PrjCoordSys) – the projected coordinate system where the two-dimensional point object is located
  • out_data (Datasource or DatasourceConnectionInfo or str) – The result datasource object; optionally save the converted points to this datasource. If it is None, the list of converted points is returned
  • out_dataset_name (str) – the name of the result dataset; only works when out_data is valid
Returns:

Return None if the conversion fails. If the conversion is successful, if a valid out_datasource is set, the result dataset or the name of the dataset will be returned; otherwise, the list of points after the conversion will be returned.

Return type:

DatasetVector or str or list[Point2D]
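For example, a list of geographic points can be projected with forward() and converted back with inverse() (the coordinate system object and point coordinates are illustrative; Point2D is assumed to be constructed from x and y values, and the projection is assumed to come from a dataset's prj_coordsys property):

>>> prj = ds['base_map'].prj_coordsys
>>> points = [Point2D(116.38, 39.90), Point2D(121.47, 31.23)]
>>> projected = CoordSysTranslator.forward(points, prj)
>>> restored = CoordSysTranslator.inverse(projected, prj)
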

class iobjectspy.data.StepEvent(title=None, message=None, percent=None, remain_time=None, cancel=None)

Bases: object

An event that carries progress information. This event is triggered when the progress of the listened task changes. Some functions can report the progress of the current task execution; the progress information is delivered through StepEvent, and the user can get the status of the current task from the StepEvent.

For example, the user can define a function to display the progress information of the buffer analysis.

>>> def progress_function(step_event):
...     print('%s - %s' % (step_event.title, step_event.message))
>>>
>>> ds = Workspace().open_datasource('E:/data.udb')
>>> dt = ds['point'].query('SmID <1000')
>>> buffer_dt = create_buffer(dt, 10, 10, progress=progress_function)
is_cancel

bool – the cancellation status of the event

message

str – information about the operation in progress

percent

int – the percentage of the current operation completed

remain_time

int – The estimated remaining time to complete the current operation, in seconds

set_cancel(value)

Set the cancellation status of the event. If the operation is set to cancel, the task will be interrupted.

Parameters:value (bool) – The state of event cancellation, if true, the execution will be interrupted
title

str – title of the progress information
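A progress function can also interrupt a long-running task through set_cancel(). For example, to cancel once the task passes 50% (create_buffer and the dataset variable dt are illustrative):

>>> def cancel_when_half_done(step_event):
...     print('%d%% done, about %d s remaining' % (step_event.percent, step_event.remain_time))
...     if step_event.percent > 50:
...         step_event.set_cancel(True)
>>>
>>> buffer_dt = create_buffer(dt, 10, 10, progress=cancel_when_half_done)
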

class iobjectspy.data.WorkspaceConnectionInfo(server=None, workspace_type=None, version=None, driver=None, database=None, name=None, user=None, password=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Workspace connection information class. It includes all the information needed to connect to a workspace, such as the name of the server to be connected, the name of the database, the username, and the password. Different members apply to different types of workspaces, so when using the members of this class, pay attention to the workspace type each member applies to.

Initialize the workspace link information object.

Parameters:
  • server (str) – database server name or file name
  • workspace_type (WorkspaceType or str) – the type of workspace
  • version (WorkspaceVersion or str) – version of the workspace

  • driver (str) – The driver name for a database connected using ODBC. Among the currently supported database workspaces, the SQL Server database uses an ODBC connection, and its driver name is SQL Server or SQL Native Client
  • database (str) – the name of the database connected to the workspace
  • name (str) – the name of the workspace in the database
  • user (str) – username to log in to the database
  • password (str) – the password of the database or file connected to the workspace

database

str – The name of the database to which the workspace is connected. For database type workspaces

driver

str – The driver name of the database connected using ODBC

name

str – The name of the workspace in the database. For file-type workspaces, this name is empty

password

str – the password of the database or file connected to the login workspace

server

str – database server name or file name

set_database(value)

Set the name of the database connected to the workspace. Applicable to database type workspace

Parameters:value (str) – The name of the database connected to the workspace
Returns:self
Return type:WorkspaceConnectionInfo
set_driver(value)

Set the driver name of a database connected using ODBC. Among the currently supported database workspaces, the SQL Server database uses an ODBC connection, and its driver name is SQL Server or SQL Native Client.

Parameters:value (str) – The driver name of the database connected using ODBC
Returns:self
Return type:WorkspaceConnectionInfo
set_name(value)

Set the name of the workspace in the database.

Parameters:value (str) – The name of the workspace in the database. For file-type workspaces, set this name to empty
Returns:self
Return type:WorkspaceConnectionInfo
set_password(value)

Set the password of the database or file connected to the login workspace. This password setting is only valid for Oracle and SQL datasources, and invalid for local (UDB) datasources.

Parameters:value (str) – the password of the database or file connected to the login workspace
Returns:self
Return type:WorkspaceConnectionInfo
set_server(value)

Set the database server name or file name.

Parameters:value (str) – For an Oracle database, the server name is its TNS service name; for a SQL Server database, the server name is its system DSN (Data Source Name); for SXWU and SMWU files, the server name is the file name, including the absolute path and the file extension.
Returns:self
Return type:WorkspaceConnectionInfo
set_type(value)

Set the type of workspace.

Parameters:value (WorkspaceType or str) – the type of workspace
Returns:self
Return type:WorkspaceConnectionInfo
set_user(value)

Set the user name for logging in to the database. Applicable to database type workspace.

Parameters:value (str) – user name to log in to the database
Returns:self
Return type:WorkspaceConnectionInfo
set_version(value)

Set the workspace version.

Parameters:value (WorkspaceVersion or str) – version of the workspace
Returns:self
Return type:WorkspaceConnectionInfo

For example, set the workspace version to UGC60:

>>> conn_info = WorkspaceConnectionInfo()
>>> conn_info.set_version('UGC60')
>>> print(conn_info.version)
WorkspaceVersion.UGC60
type

WorkspaceType – The type of workspace. A workspace can be stored in a file or in a database. The currently supported file-based workspace types are the SXWU and SMWU formats; the database-based workspace types are the ORACLE and SQL formats. The default workspace type is an unsaved workspace.

user

str – the user name to log in to the database

version

WorkspaceVersion – The version of the workspace. The default is UGC70.
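For example, connection information for an Oracle workspace can be assembled through the chained setters and passed to Workspace.open() (the server, workspace and account names below are made up):

>>> conn_info = WorkspaceConnectionInfo()
>>> conn_info.set_type('ORACLE').set_server('orcl_tns').set_name('my_workspace')
>>> conn_info.set_user('sde').set_password('secret')
>>> ws = Workspace.open(conn_info)
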

class iobjectspy.data.Workspace

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The workspace is the user's working environment. It mainly handles the organization and management of data, including opening, closing, creating, and saving workspace files. The workspace is an important concept in SuperMap: it stores all the datasources and the map organization of a project (the same transaction process), and datasources and maps can be managed through the workspace object. Only the connection information and location of each datasource are stored in the workspace; the actual data is stored in a database or UDB file. The workspace stores only the configuration information of each map, such as the number of layers, the datasets referenced by the layers, the map range, and the background style. In the current version, only one workspace object can exist in a program. If the user does not open a specific workspace, the program creates a workspace object by default. To open a new workspace object, save and close the current workspace first; otherwise, some information stored in the workspace may be lost.

For example, create a datasource object:

>>> ws = Workspace()
>>> ws.create_datasource(':memory:')
>>> print(len(ws.datasources))
1
>>> ws_a = Workspace()
>>> ws_a.create_datasource(':memory:')
>>> ws == ws_a
True
>>> print(len(ws_a.datasources))
2
>>> ws.close()
add_map(map_name, map_or_xml)

Add Map to the current workspace

Parameters:
  • map_name (str) – map name
  • map_or_xml (Map or str) – XML description of the map object or map
Returns:

The serial number of the newly added map in this map collection object.

Return type:

int

caption

str – Workspace display name, which is convenient for users to make some identification.

clear_maps()

Delete all maps in this map collection object, that is, all maps saved in the workspace.

Returns:Workspace object itself
Return type:Workspace
classmethod close()

Close the workspace. Closing the workspace destroys the instance objects in the current workspace. Before closing the workspace, make sure that the maps and other contents of the workspace in use are closed or disconnected. If the workspace is registered on the Java side, the workspace object is not actually closed; only the binding to the Java workspace object is released, and you will not be able to continue working with the Java workspace object unless you construct a new instance using Workspace().

close_all_datasources()

Close all datasources

close_datasource(item)

Close the specified datasource.

Parameters:item (str or int) – alias or serial number of the datasource
Returns:Return True if closed successfully, otherwise return False
Return type:bool
connection_info

WorkspaceConnectionInfo – Workspace connection information

classmethod create(conn_info, save_existed=True, saved_connection_info=None)

Create a new workspace object. Before creating a new workspace, the user can save the current workspace object by setting save_existed to True, or save the current workspace to a specified location by setting saved_connection_info.

Parameters:
  • conn_info (WorkspaceConnectionInfo) – connection information of the workspace
  • save_existed (bool) – Whether to save the current workspace. If set to True, the current workspace is saved and then closed; otherwise the current workspace is closed directly, and then the new workspace object is opened. save_existed only applies when the current workspace is not an in-memory workspace. The default is True.
  • saved_connection_info (WorkspaceConnectionInfo) – Optionally save the current workspace to the location specified by saved_connection_info. The default is None.
Returns:

new workspace object

Return type:

Workspace

create_datasource(conn_info)

Create a new datasource based on the specified datasource connection information.

Parameters:conn_info (str or dict or DatasourceConnectionInfo) – udb file path or datasource connection information. For details, please refer to DatasourceConnectionInfo.make. If conn_info is a str, it can be ':memory:', a udb file path, a udd file path, a dcf file path, or an XML string of datasource connection information. If conn_info is a dict, it is the return result of DatasourceConnectionInfo.to_dict().
Returns:datasource object
Return type:Datasource
datasources

list[Datasource] – All datasource objects in the current workspace.

description

str – the description or descriptive information of the current workspace added by the user

get_datasource(item)

Get the specified datasource object.

Parameters:item (str or int) – alias or serial number of the datasource
Returns:datasource object
Return type:Datasource
get_map(index_or_name)

Get the map object with the specified name or serial number

Parameters:index_or_name (int or str) – the specified map name or serial number
Returns:map object
Return type:Map
get_map_xml(index_or_name)

Return the XML description of the map with the specified name or sequence number

Parameters:index_or_name (int or str) – the specified map name or serial number
Returns:XML description of the map
Return type:str
get_maps()

Return all maps

Returns:All Maps in the current workspace
Return type:list[Map]
index_of_datasource(alias)

Find the serial number of the specified datasource alias. An exception will be thrown if it does not exist.

Parameters:alias (str) – datasource alias
Returns:the serial number of the datasource
Return type:int
Raises:ValueError – An exception is thrown when the specified datasource alias does not exist.
insert_map(index, map_name, map_or_xml)

Add a map at the position of the specified serial number, and the content of the map is determined by the XML string.

Parameters:
  • index (int) – The specified serial number.
  • map_name (str) – The specified map name. The name is not case sensitive.
  • map_or_xml (Map or str) – The XML string used to represent the map or map to be inserted.
Returns:

If the map is inserted successfully, return true; otherwise, return false.

Return type:

bool

is_contains_datasource(item)

Whether there is a datasource with a specified serial number or datasource alias

Parameters:item (str or int) – alias or serial number of the datasource
Returns:return True if it exists, otherwise return False
Return type:bool
is_modified()

Return whether the content of the workspace has been changed. If any changes have been made to the content of the workspace, it returns True; otherwise it returns False. The workspace manages datasources and maps; if any of them changes, this method returns True. When closing the application, first use this method to determine whether the workspace has been changed, which can be used to prompt the user whether to save.

Returns:If any changes have been made to the content of the workspace, return True; otherwise return False
Return type:bool
modify_datasource_alias(old_alias, new_alias)

Modify the alias of the datasource. datasource aliases are not case sensitive

Parameters:
  • old_alias (str) – the alias of the datasource to be modified
  • new_alias (str) – the new alias of the datasource
Returns:

If the datasource is modified successfully, it will return True, otherwise it will return False

Return type:

bool

classmethod open(conn_info, save_existed=True, saved_connection_info=None)

Open a new workspace object. Before opening a new workspace, the user can save the current workspace object by setting save_existed to True, or save the current workspace to a specified location by setting saved_connection_info.

Parameters:
  • conn_info (WorkspaceConnectionInfo) – connection information of the workspace
  • save_existed (bool) – Whether to save the current workspace. If set to True, the current workspace is saved and then closed; otherwise the current workspace is closed directly, and then the new workspace object is opened. save_existed only applies when the current workspace is not an in-memory workspace. The default is True.
  • saved_connection_info (WorkspaceConnectionInfo) – Optionally save the current workspace to the location specified by saved_connection_info. The default is None.
Returns:

new workspace object

Return type:

Workspace

open_datasource(conn_info, is_get_existed=True)

Open a datasource according to the datasource connection information. If the connection information refers to a UDB datasource, or is_get_existed is True, and the corresponding datasource already exists in the workspace, that datasource is returned directly. Opening a memory datasource directly is not supported; to use a memory datasource, create it with create_datasource().

Parameters:
  • conn_info (str or dict or DatasourceConnectionInfo) – udb file path or datasource connection information. For details, please refer to DatasourceConnectionInfo.make. If conn_info is a str, it can be ':memory:', a udb file path, a udd file path, a dcf file path, or an XML string of datasource connection information. If conn_info is a dict, it is the return result of DatasourceConnectionInfo.to_dict().
  • is_get_existed (bool) – If is_get_existed is True and the corresponding datasource already exists in the workspace, it is returned directly. If False, a new datasource is opened. For UDB datasources, the datasource already in the workspace is returned first regardless of whether is_get_existed is True or False. To determine whether a DatasourceConnectionInfo refers to the same datasource as one in the workspace, use DatasourceConnectionInfo.is_same.
Returns:

datasource object

Return type:

Datasource

>>> ws = Workspace()
>>> ds = ws.open_datasource('E:/data.udb')
>>> print(ds.type)
EngineType.UDB
remove_map(index_or_name)

Delete the map with the specified serial number or name in this map collection object

Parameters:index_or_name (str or int) – the serial number or name of the map to be deleted
Returns:If the deletion is successful, return true; otherwise, return false.
Return type:bool
rename_map(old_name, new_name)

Modify the name of the map object

Parameters:
  • old_name (str) – the current name of the map object
  • new_name (str) – the specified new map name
Returns:

Return True if the modification is successful, otherwise return False

Return type:

bool

classmethod save()

Used to save the existing workspace without changing the original name

Returns:Return True if saved successfully, otherwise return False
Return type:bool
classmethod save_as(conn_info)

Use the specified workspace to connect the information object to save the workspace file.

Parameters:conn_info (WorkspaceConnectionInfo) – Workspace connection information object
Returns:Return True if Save As is successful, otherwise False
Return type:bool
set_caption(caption)

Set the workspace display name.

Parameters:caption (str) – workspace display name
set_description(description)

Set the description or descriptive information of the current workspace added by the user

Parameters:description (str) – The description or descriptive information of the current workspace added by the user
set_map(index_or_name, map_or_xml)

Replace the map with the specified serial number or name in the map collection object with the specified map or the map described by the XML string.

Parameters:
  • index_or_name (int or str) – the specified serial number or map name
  • map_or_xml (Map or str) – The XML string representation of the new map used to replace the specified map.
Returns:

If the operation is successful, return true; otherwise, return false.

Return type:

bool

iobjectspy.data.open_datasource(conn_info, is_get_existed=True)

Open a datasource according to the datasource connection information. If the connection information refers to a UDB datasource, or is_get_existed is True, and the corresponding datasource already exists in the workspace, that datasource is returned directly. Opening a memory datasource directly is not supported; to use a memory datasource, create it with create_datasource().

Specific reference:py:meth:Workspace.open_datasource

Parameters:
  • conn_info (str or dict or DatasourceConnectionInfo) – udb file path or datasource connection information. For details, please refer to DatasourceConnectionInfo.make. If conn_info is a str, it can be ':memory:', a udb file path, a udd file path, a dcf file path, or an XML string of datasource connection information. If conn_info is a dict, it is the return result of DatasourceConnectionInfo.to_dict().
  • is_get_existed (bool) – If is_get_existed is True and the corresponding datasource already exists in the workspace, it is returned directly. If False, a new datasource is opened. For UDB datasources, the datasource already in the workspace is returned first regardless of whether is_get_existed is True or False. To determine whether a DatasourceConnectionInfo refers to the same datasource as one in the workspace, use DatasourceConnectionInfo.is_same.
Returns:

datasource object

Return type:

Datasource

iobjectspy.data.get_datasource(item)

Get the specified datasource object.

Specific reference:py:meth:Workspace.get_datasource

Parameters:item (str or int) – alias or serial number of the datasource
Returns:datasource object
Return type:Datasource
iobjectspy.data.close_datasource(item)

Close the specified datasource.

Specific reference Workspace.close_datasource()

Parameters:item (str or int) – alias or serial number of the datasource
Returns:Return True if closed successfully, otherwise return False
Return type:bool
iobjectspy.data.list_datasources()

Return all datasource objects in the current workspace.

Specific reference Workspace.datasources

Returns:all datasource objects in the current workspace
Return type:list[Datasource]
iobjectspy.data.create_datasource(conn_info)

Create a new datasource based on the specified datasource connection information.

Parameters:conn_info (str or dict or DatasourceConnectionInfo) – udb file path or datasource connection information. For details, please refer to DatasourceConnectionInfo.make. If conn_info is a str, it can be ':memory:', a udb file path, a udd file path, a dcf file path, or an XML string of datasource connection information. If conn_info is a dict, it is the return result of DatasourceConnectionInfo.to_dict().
Returns:datasource object
Return type:Datasource
iobjectspy.data.create_mem_datasource()

Create an in-memory data source

Returns:data source object
Return type:Datasource
iobjectspy.data.dataset_dim2_to_dim3(source, z_field_or_value, line_to_z_field=None, saved_fields=None, out_data=None, out_dataset_name=None)

Convert a two-dimensional dataset to a three-dimensional dataset, and the two-dimensional point, line and area dataset will be converted to three-dimensional point, line and area dataset respectively.

Parameters:
  • source (DatasetVector or str) – two-dimensional dataset, supporting point, line and area dataset
  • z_field_or_value (str or float) – The source field name of the z value or the specified z value. If it is a field, it must be a numeric field.
  • line_to_z_field (str) – When the input is a two-dimensional line dataset, it is used to specify the field name of the ending z value, and z_field_or_value is the name of the field of the starting z value. line_to_z_field must be a field name, the specified z value is not supported.
  • saved_fields (list[str] or str) – the names of the fields to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

resulting three-dimensional dataset or dataset name

Return type:

DatasetVector or str
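For example, a two-dimensional point dataset could be converted to three-dimensional points, taking z values from an elevation field (the paths and field names here are illustrative assumptions):

>>> result = dataset_dim2_to_dim3('E:/data.udb/point', 'elevation',
...                               saved_fields=['name'],
...                               out_data='E:/out.udb',
...                               out_dataset_name='point3d')
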

iobjectspy.data.dataset_dim3_to_dim2(source, out_z_field='Z', saved_fields=None, out_data=None, out_dataset_name=None)

Convert three-dimensional point, line and area dataset into two-dimensional point, line and area dataset.

Parameters:
  • source (DatasetVector or str) – 3D dataset, supports 3D point, line and area dataset
  • out_z_field (str) – A field that retains the Z value. If it is None or illegal, a valid field will be obtained to store the Z value of the object
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result two-dimensional dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_point_to_line(source, group_fields=None, order_fields=None, field_stats=None, out_data=None, out_dataset_name=None)

Connect the points of a two-dimensional point dataset into line objects, grouping them according to the grouping fields, and return a two-dimensional line dataset

Parameters:
  • source (DatasetVector or str) – two-dimensional point dataset
  • group_fields (list[str]) – The field names used for grouping in the two-dimensional point dataset. Only when the field values of the grouped field names are equal will the points be connected into a line.
  • order_fields (list[str] or str) – Sort fields, the points in the same group are sorted according to the ascending order of the field value of the sort field, and then connected into a line. If it is None, the SmID field is used for sorting by default.
  • field_stats (list[tuple(str,AttributeStatisticsMode)] or list[tuple(str,str)] or str) – field statistics, performed on the point attributes within each group. It is a list whose elements are 2-tuples: the first element of each tuple is the field name to be counted, and the second is the statistics type. Note that AttributeStatisticsMode.MAXINTERSECTAREA is not supported
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

Two-dimensional line dataset or dataset name

Return type:

DatasetVector or str
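
As a sketch, connecting GPS points into track lines; the dataset and field names ('gps_points', 'TRACK_ID', 'RECORD_TIME', 'SPEED') and the 'MEAN' statistics type string are assumptions for illustration:

```python
>>> # One line per TRACK_ID, points ordered by RECORD_TIME, keeping the
>>> # mean SPEED of each group as a statistic on the result lines.
>>> tracks = dataset_point_to_line(ds['gps_points'],
...                                group_fields=['TRACK_ID'],
...                                order_fields=['RECORD_TIME'],
...                                field_stats=[('SPEED', 'MEAN')],
...                                out_data=ds, out_dataset_name='tracks')
```
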

iobjectspy.data.dataset_line_to_point(source, mode='VERTEX', saved_fields=None, out_data=None, out_dataset_name=None)

Convert line dataset to point dataset

Parameters:
  • source (DatasetVector or str) – two-dimensional line dataset
  • mode (LineToPointMode or str) – the way to convert line objects to point objects
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result two-dimensional point dataset or dataset name

Return type:

DatasetVector or str
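
A brief sketch; the dataset name 'roads' and the saved field 'NAME' are illustrative:

```python
>>> # Extract every vertex of each line ('VERTEX' is the default mode);
>>> # other LineToPointMode values select different conversion behaviour.
>>> pts = dataset_line_to_point(ds['roads'], mode='VERTEX',
...                             saved_fields='NAME', out_data=ds)
```
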

iobjectspy.data.dataset_line_to_region(source, saved_fields=None, out_data=None, out_dataset_name=None)

Convert a line dataset to a polygon dataset. This method converts line objects directly to area objects; if the line objects are not connected end to end, the conversion may fail. A more reliable way to convert a line dataset to a polygon dataset is to build topological regions with topology_build_regions()

Parameters:
  • source (DatasetVector or str) – two-dimensional line dataset
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – datasource information where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result 2D surface dataset or dataset name

Return type:

DatasetVector or str
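
A sketch of a direct conversion; the dataset names are illustrative:

```python
>>> # Only lines that already form closed, end-to-end connected rings will
>>> # convert cleanly; otherwise prefer topology_build_regions().
>>> regions = dataset_line_to_region(ds['parcel_lines'], out_data=ds,
...                                  out_dataset_name='parcels')
```
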

iobjectspy.data.dataset_region_to_line(source, saved_fields=None, out_data=None, out_dataset_name=None)

Convert a 2D area dataset into a line dataset. This method will directly convert each area object into a line object. If you need to extract a line dataset that does not contain repeated lines, you can use pickup_border()

Parameters:
  • source (DatasetVector or str) – two-dimensional surface dataset
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result line dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_region_to_point(source, mode='INNER_POINT', saved_fields=None, out_data=None, out_dataset_name=None)

Convert 2D polygon dataset to point dataset

Parameters:
  • source (DatasetVector or str) – two-dimensional surface dataset
  • mode (RegionToPointMode or str) – the way to convert area object to point object
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – datasource information where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result two-dimensional point dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_field_to_text(source, field, saved_fields=None, out_data=None, out_dataset_name=None)

Convert point dataset to text dataset

Parameters:
  • source (DatasetVector or str) – input two-dimensional point dataset
  • field (str) – A field containing text information, used to construct the text information of a text geometric object.
  • saved_fields (list[str] or str) – the names of the fields that need to be reserved
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result text dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_text_to_field(source, out_field='Text')

Store the text information of the two-dimensional text dataset in the field. The text information of the text object will be stored in the specified out_field field.

Parameters:
  • source (DatasetVector or str) – The input two-dimensional text dataset.
  • out_field (str) – The name of the field that stores the text information. If the field name specified by out_field already exists, it must be a text field. If it does not exist, a new text field will be created.
Returns:

Return True if successful, otherwise return False.

Return type:

bool

iobjectspy.data.dataset_text_to_point(source, out_field='Text', saved_fields=None, out_data=None, out_dataset_name=None)

Convert a two-dimensional text dataset to a point dataset, and the text information of the text object will be stored in the specified out_field field

Parameters:
  • source (DatasetVector or str) – input two-dimensional text dataset
  • out_field (str) – The name of the field storing text information. If the field name specified by out_field already exists, it must be a text field. If it does not exist, a new text field will be created.
  • saved_fields (list[str] or str) – The names of the fields that need to be reserved.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

Two-dimensional point dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_field_to_point(source, x_field, y_field, z_field=None, saved_fields=None, out_data=None, out_dataset_name=None)

According to the fields in the dataset, a two-dimensional point dataset or a three-dimensional point dataset is constructed. If a valid z_field is specified, a three-dimensional point dataset will be obtained, otherwise a two-dimensional point dataset will be obtained

Parameters:
  • source (DatasetVector or str) – The dataset that provides the data, which can be an attribute table or a dataset such as point, line, area, etc.
  • x_field (str) – The source field of the x coordinate value. It must be valid.
  • y_field (str) – The source field of the y coordinate value. It must be valid.
  • z_field (str) – The source field of the z coordinate value, optional.
  • saved_fields (list[str] or str) – The names of the fields that need to be reserved.
  • out_data (Datasource or DatasourceConnectionInfo or str) – The datasource where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

2D point or 3D point dataset or dataset name

Return type:

DatasetVector or str
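
For example, building points from an attribute table with coordinate columns; the dataset and field names ('stations_table', 'LON', 'LAT', 'NAME') are illustrative:

```python
>>> # A 2D point dataset from LON/LAT columns; passing a valid z_field
>>> # (e.g. an elevation column) would yield a 3D point dataset instead.
>>> pts = dataset_field_to_point(ds['stations_table'], x_field='LON',
...                              y_field='LAT', saved_fields=['NAME'],
...                              out_data=ds, out_dataset_name='stations')
```
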

iobjectspy.data.dataset_network_to_line(source, saved_fields=None, out_data=None, out_dataset_name=None)

Convert the two-dimensional network dataset into a line dataset. The SmEdgeID, SmFNode and SmTNode field values of the network dataset will be stored in the EdgeID, FNode and TNode fields of the result dataset. If EdgeID, FNode or TNode is already occupied, a valid field will be obtained.

Parameters:
  • source (DatasetVector or str) – the converted two-dimensional network dataset
  • saved_fields (list[str] or str) – The names of the fields to be saved.
  • out_data (Datasource or DatasourceConnectionInfo or str) – datasource information where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result two-dimensional line dataset or dataset name

Return type:

DatasetVector or str

iobjectspy.data.dataset_network_to_point(source, saved_fields=None, out_data=None, out_dataset_name=None)

Convert the two-dimensional network dataset into a point dataset. The SmNodeID field values of the network dataset will be stored in the NodeID field of the result dataset. If NodeID is already occupied, a valid field will be obtained.

Parameters:
  • source (DatasetVector or str) – the converted two-dimensional network dataset
  • saved_fields (list[str] or str) – The names of the fields to be saved.
  • out_data (Datasource or DatasourceConnectionInfo or str) – datasource information where the result dataset is located
  • out_dataset_name (str) – result dataset name
Returns:

result two-dimensional point dataset or dataset name

Return type:

DatasetVector or str

class iobjectspy.data.Color(seq=(0, 0, 0))

Bases: tuple

To define an RGB color object, the user can specify the RGB color value by specifying a three-element tuple, or use a four-element tuple to specify the RGBA color value. By default, the Alpha value is 255.


Construct a Color through tuple

Parameters:seq (tuple[int,int,int] or tuple[int,int,int,int]) – the specified RGB or RGBA color value
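
As a brief illustration of the tuple semantics described above (a sketch, not an exhaustive reference):

```python
>>> semi_red = Color((255, 0, 0, 128))   # RGBA tuple: half-transparent red
>>> opaque_blue = Color((0, 0, 255))     # RGB tuple: Alpha defaults to 255
>>> opaque_blue.A                        # channels are read via R, G, B, A
255
```
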
A

int – Get the A value of Color

B

int – Get the B value of Color

G

int – Get the G value of Color

R

int – Get the R value of Color

static aliceblue()

Construct color value (240, 248, 255)

Returns:color value (240, 248, 255)
Return type:Color
static antiquewhite()

Construct color value (250, 235, 215)

Returns:color value (250, 235, 215)
Return type:Color
static aqua()

Construct color value (0, 255, 255)

Returns:color value (0, 255, 255)
Return type:Color
static aquamarine()

Construct color value (127, 255, 212)

Returns:color value (127, 255, 212)
Return type:Color
static azure()

Construct color value (240, 255, 255)

Returns:color value (240, 255, 255)
Return type:Color
static beige()

Construct color value (245, 245, 220)

Returns:color value (245, 245, 220)
Return type:Color
static bisque()

Construct color value (255,228,196)

Returns:color value (255,228,196)
Return type:Color
static black()

Construct color value (0, 0, 0)

Returns:color value (0, 0, 0)
Return type:Color
static blanchedalmond()

Construct color value (255,235,205)

Returns:color value (255,235,205)
Return type:Color
static blue()

Construct color value (0, 0, 255)

Returns:color value (0, 0, 255)
Return type:Color
static blueviolet()

Construct color value (138, 43, 226)

Returns:color value (138, 43, 226)
Return type:Color
static burlywood()

Construct color value (222, 184, 135)

Returns:color value (222, 184, 135)
Return type:Color
static cadetblue()

Construct color value (95, 158, 160)

Returns:color value (95, 158, 160)
Return type:Color
static chartreuse()

Construct color value (127, 255, 0)

Returns:color value (127, 255, 0)
Return type:Color
static chocolate()

Construct color value (210, 105, 30)

Returns:color value (210, 105, 30)
Return type:Color
static coral()

Construct color value (255, 127, 80)

Returns:color value (255, 127, 80)
Return type:Color
static cornflowerblue()

Construct color value (100, 149, 237)

Returns:color value (100, 149, 237)
Return type:Color
static cornsilk()

Construct color value (255, 248, 220)

Returns:color value (255, 248, 220)
Return type:Color
static crimson()

Construct color value (220, 20, 60)

Returns:color value (220, 20, 60)
Return type:Color
static cyan()

Construct color value (0, 255, 255)

Returns:color value (0, 255, 255)
Return type:Color
static darkblue()

Construct color value (0, 0, 139)

Returns:color value (0, 0, 139)
Return type:Color
static darkcyan()

Construct color value (0, 139, 139)

Returns:color value (0, 139, 139)
Return type:Color
static darkgoldenrod()

Construct color value (184, 134, 11)

Returns:color value (184, 134, 11)
Return type:Color
static darkgray()

Construct color value (64, 64, 64)

Returns:color value (64, 64, 64)
Return type:Color
static darkgreen()

Construct color value (0, 100, 0)

Returns:color value (0, 100, 0)
Return type:Color
static darkkhaki()

Construct color value (189, 183, 107)

Returns:color value (189, 183, 107)
Return type:Color
static darkmagena()

Construct color value (139, 0, 139)

Returns:color value (139, 0, 139)
Return type:Color
static darkolivegreen()

Construct color value (85, 107, 47)

Returns:color value (85, 107, 47)
Return type:Color
static darkorange()

Construct color value (255, 140, 0)

Returns:color value (255, 140, 0)
Return type:Color
static darkorchid()

Construct color value (153, 50, 204)

Returns:color value (153, 50, 204)
Return type:Color
static darkred()

Construct color value (139, 0, 0)

Returns:color value (139, 0, 0)
Return type:Color
static darksalmon()

Construct color value (233, 150, 122)

Returns:color value (233, 150, 122)
Return type:Color
static darkseagreen()

Construct color value (143, 188, 143)

Returns:color value (143, 188, 143)
Return type:Color
static darkslateblue()

Construct color value (72, 61, 139)

Returns:color value (72, 61, 139)
Return type:Color
static darkturquoise()

Construct color value (0, 206, 209)

Returns:color value (0, 206, 209)
Return type:Color
static darkviolet()

Construct color value (148, 0, 211)

Returns:color value (148, 0, 211)
Return type:Color
static deeppink()

Construct color value (255, 20, 147)

Returns:color value (255, 20, 147)
Return type:Color
static deepskyblue()

Construct color value (0, 191, 255)

Returns:color value (0, 191, 255)
Return type:Color
static dimgray()

Construct color value (105, 105, 105)

Returns:color value (105, 105, 105)
Return type:Color
static dodgerblue()

Construct color value (30, 144, 255)

Returns:color value (30, 144, 255)
Return type:Color
static firebrick()

Construct color value (178, 34, 34)

Returns:color value (178, 34, 34)
Return type:Color
static floralwhite()

Construct color value (255, 250, 240)

Returns:color value (255, 250, 240)
Return type:Color
static forestgreen()

Construct color value (34, 139, 34)

Returns:color value (34, 139, 34)
Return type:Color
static fuschia()

Construct color value (255, 0, 255)

Returns:color value (255, 0, 255)
Return type:Color
static gainsboro()

Construct color value (220, 220, 220)

Returns:color value (220, 220, 220)
Return type:Color
static ghostwhite()

Construct color value (248, 248, 255)

Returns:color value (248, 248, 255)
Return type:Color
static gold()

Construct color value (255, 215, 0)

Returns:color value (255, 215, 0)
Return type:Color
static goldenrod()

Construct color value (218, 165, 32)

Returns:color value (218, 165, 32)
Return type:Color
static gray()

Construct color value (128, 128, 128)

Returns:color value (128, 128, 128)
Return type:Color
static green()

Construct color value (0, 128, 0)

Returns:color value (0, 128, 0)
Return type:Color
static greenyellow()

Construct color value (173, 255, 47)

Returns:color value (173, 255, 47)
Return type:Color
static honeydew()

Construct color value (240, 255, 240)

Returns:color value (240, 255, 240)
Return type:Color
static hotpink()

Construct color value (255, 105, 180)

Returns:color value (255, 105, 180)
Return type:Color
static indianred()

Construct color value (205, 92, 92)

Returns:color value (205, 92, 92)
Return type:Color
static indigo()

Construct color value (75, 0, 130)

Returns:color value (75, 0, 130)
Return type:Color
static ivory()

Construct color value (255, 240, 240)

Returns:color value (255, 240, 240)
Return type:Color
static khaki()

Construct color value (240, 230, 140)

Returns:color value (240, 230, 140)
Return type:Color
static lavender()

Construct color value (230, 230, 250)

Returns:color value (230, 230, 250)
Return type:Color
static lavenderblush()

Construct color value (255, 240, 245)

Returns:color value (255, 240, 245)
Return type:Color
static lawngreen()

Construct color value (124, 252, 0)

Returns:color value (124, 252, 0)
Return type:Color
static lemonchiffon()

Construct color value (255, 250, 205)

Returns:color value (255, 250, 205)
Return type:Color
static lightblue()

Construct color value (173, 216, 230)

Returns:color value (173, 216, 230)
Return type:Color
static lightcoral()

Construct color value (240, 128, 128)

Returns:color value (240, 128, 128)
Return type:Color
static lightcyan()

Construct color value (224, 255, 255)

Returns:color value (224, 255, 255)
Return type:Color
static lightgoldenrodyellow()

Construct color value (250, 250, 210)

Returns:color value (250, 250, 210)
Return type:Color
static lightgray()

Construct color value (211, 211, 211)

Returns:color value (211, 211, 211)
Return type:Color
static lightgreen()

Construct color value (144, 238, 144)

Returns:color value (144, 238, 144)
Return type:Color
static lightpink()

Construct color value (255, 182, 193)

Returns:color value (255, 182, 193)
Return type:Color
static lightsalmon()

Construct color value (255, 160, 122)

Returns:color value (255, 160, 122)
Return type:Color
static lightseagreen()

Construct color value (32, 178, 170)

Returns:color value (32, 178, 170)
Return type:Color
static lightskyblue()

Construct color value (135, 206, 250)

Returns:color value (135, 206, 250)
Return type:Color
static lightslategray()

Construct color value (119, 136, 153)

Returns:color value (119, 136, 153)
Return type:Color
static lightsteelblue()

Construct color value (176, 196, 222)

Returns:color value (176, 196, 222)
Return type:Color
static lightyellow()

Construct color value (255, 255, 224)

Returns:color value (255, 255, 224)
Return type:Color
static lime()

Construct color value (0, 255, 0)

Returns:color value (0, 255, 0)
Return type:Color
static limegreen()

Construct color value (50, 205, 50)

Returns:color value (50, 205, 50)
Return type:Color
static linen()

Construct color value (250, 240, 230)

Returns:color value (250, 240, 230)
Return type:Color
static magenta()

Construct color value (255, 0, 255)

Returns:color value (255, 0, 255)
Return type:Color
static make(value)

Construct a Color object

Parameters:value (Color or str or tuple[int,int,int] or tuple[int,int,int,int]) – The value used to construct the Color object. If it is a str, it is a comma-separated string, for example: '0,255,232' or '0,255,234,54'
Returns:color object
Return type:Color
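
A short sketch of the accepted input forms (values illustrative):

```python
>>> Color.make('0,255,232')          # comma-separated RGB string
>>> Color.make('0,255,234,54')       # comma-separated RGBA string
>>> Color.make((0, 255, 232))        # RGB tuple
>>> Color.make(Color.red())          # an existing Color object
```
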
static maroon()

Construct color value (128, 0, 0)

Returns:color value (128, 0, 0)
Return type:Color
static medium_sea_green()

Construct color value (60, 179, 113)

Returns:color value (60, 179, 113)
Return type:Color
static mediumaquamarine()

Construct color value (102, 205, 170)

Returns:color value (102, 205, 170)
Return type:Color
static mediumblue()

Construct color value (0, 0, 205)

Returns:color value (0, 0, 205)
Return type:Color
static mediumorchid()

Construct color value (186, 85, 211)

Returns:color value (186, 85, 211)
Return type:Color
static mediumpurple()

Construct color value (147, 112, 219)

Returns:color value (147, 112, 219)
Return type:Color
static mediumslateblue()

Construct color value (123, 104, 238)

Returns:color value (123, 104, 238)
Return type:Color
static mediumspringgreen()

Construct color value (0, 250, 154)

Returns:color value (0, 250, 154)
Return type:Color
static mediumturquoise()

Construct color value (72, 209, 204)

Returns:color value (72, 209, 204)
Return type:Color
static mediumvioletred()

Construct color value (199, 21, 112)

Returns:color value (199, 21, 112)
Return type:Color
static midnightblue()

Construct color value (25, 25, 112)

Returns:color value (25, 25, 112)
Return type:Color
static mintcream()

Construct color value (245, 255, 250)

Returns:color value (245, 255, 250)
Return type:Color
static mistyrose()

Construct color value (255, 228, 225)

Returns:color value (255, 228, 225)
Return type:Color
static moccasin()

Construct color value (255, 228, 181)

Returns:color value (255, 228, 181)
Return type:Color
static navajowhite()

Construct color value (255, 222, 173)

Returns:color value (255, 222, 173)
Return type:Color
static navy()

Construct color value (0, 0, 128)

Returns:color value (0, 0, 128)
Return type:Color
static oldlace()

Construct color value (253, 245, 230)

Returns:color value (253, 245, 230)
Return type:Color
static olive()

Construct color value (128, 128, 0)

Returns:color value (128, 128, 0)
Return type:Color
static olivedrab()

Construct color value (107, 142, 45)

Returns:color value (107, 142, 45)
Return type:Color
static orange()

Construct color value (255, 165, 0)

Returns:color value (255, 165, 0)
Return type:Color
static orangered()

Construct color value (255, 69, 0)

Returns:color value (255, 69, 0)
Return type:Color
static orchid()

Construct color value (218, 112, 214)

Returns:color value (218, 112, 214)
Return type:Color
static pale_goldenrod()

Construct color value (238, 232, 170)

Returns:color value (238, 232, 170)
Return type:Color
static palegreen()

Construct color value (152, 251, 152)

Returns:color value (152, 251, 152)
Return type:Color
static paleturquoise()

Construct color value (175, 238, 238)

Returns:color value (175, 238, 238)
Return type:Color
static palevioletred()

Construct color value (219, 112, 147)

Returns:color value (219, 112, 147)
Return type:Color
static papayawhip()

Construct color value (255, 239, 213)

Returns:color value (255, 239, 213)
Return type:Color
static peachpuff()

Construct color value (255, 218, 155)

Returns:color value (255, 218, 155)
Return type:Color
static peru()

Construct color value (205, 133, 63)

Returns:color value (205, 133, 63)
Return type:Color
static pink()

Construct color value (255, 192, 203)

Returns:color value (255, 192, 203)
Return type:Color
static plum()

Construct color value (221, 160, 221)

Returns:color value (221, 160, 221)
Return type:Color
static powderblue()

Construct color value (176, 224, 230)

Returns:color value (176, 224, 230)
Return type:Color
static purple()

Construct color value (128, 0, 128)

Returns:color value (128, 0, 128)
Return type:Color
static red()

Construct color value (255, 0, 0)

Returns:color value (255, 0, 0)
Return type:Color
static rgb(red, green, blue, alpha=255)

Construct a Color object by specifying R, G, B, and A values

Parameters:
  • red (int) – Red value
  • green (int) – Green value
  • blue (int) – Blue value
  • alpha (int) – Alpha value
Returns:

color object

Return type:

Color

static rosybrown()

Construct color value (188, 143, 143)

Returns:color value (188, 143, 143)
Return type:Color
static royalblue()

Construct color value (65, 105, 225)

Returns:color value (65, 105, 225)
Return type:Color
static saddlebrown()

Construct color value (244, 164, 96)

Returns:color value (244, 164, 96)
Return type:Color
static sandybrown()

Construct color value (244, 144, 96)

Returns:color value (244, 144, 96)
Return type:Color
static seagreen()

Construct color value (46, 139, 87)

Returns:color value (46, 139, 87)
Return type:Color
static seashell()

Construct color value (255, 245, 238)

Returns:color value (255, 245, 238)
Return type:Color
static sienna()

Construct color value (160, 82, 45)

Returns:color value (160, 82, 45)
Return type:Color
static silver()

Construct color value (192, 192, 192)

Returns:color value (192, 192, 192)
Return type:Color
static skyblue()

Construct color value (135, 206, 235)

Returns:color value (135, 206, 235)
Return type:Color
static slateblue()

Construct color value (106, 90, 205)

Returns:color value (106, 90, 205)
Return type:Color
static slategray()

Construct color value (106, 90, 205)

Returns:color value (106, 90, 205)
Return type:Color
static snow()

Construct color value (255, 250, 250)

Returns:color value (255, 250, 250)
Return type:Color
static springgreen()

Construct color value (0, 255, 127)

Returns:color value (0, 255, 127)
Return type:Color
static steelblue()

Construct color value (70, 130, 180)

Returns:color value (70, 130, 180)
Return type:Color
static tan()

Construct color value (210, 180, 140)

Returns:color value (210, 180, 140)
Return type:Color
static teal()

Construct color value (0, 128, 128)

Returns:color value (0, 128, 128)
Return type:Color
static thistle()

Construct color value (216, 191, 216)

Returns:color value (216, 191, 216)
Return type:Color
static tomato()

Construct color value (253, 99, 71)

Returns:color value (253, 99, 71)
Return type:Color
static turquoise()

Construct color value (64, 224, 208)

Returns:color value (64, 224, 208)
Return type:Color
static violet()

Construct color value (238, 130, 238)

Returns:color value (238, 130, 238)
Return type:Color
static wheat()

Construct color value (245, 222, 179)

Returns:color value (245, 222, 179)
Return type:Color
static white()

Construct color value (255, 255, 255)

Returns:color value (255, 255, 255)
Return type:Color
static white_smoke()

Construct color value (245, 245, 245)

Returns:color value (245, 245, 245)
Return type:Color
static yellow()

Construct color value (255, 255, 0)

Returns:color value (255, 255, 0)
Return type:Color
static yellowgreen()

Construct color value (154, 205, 50)

Returns:color value (154, 205, 50)
Return type:Color
class iobjectspy.data.GeoStyle

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Geometric style class, used to define point symbols, line symbols, fill symbols and their related settings. For text objects, only the text style can be set, not the geometric style. Except for composite datasets (CAD datasets), other types of dataset do not store the style information of geometric objects. Filling is divided into normal fill mode and gradient fill mode. In normal fill mode, pictures or vector symbols can be used for filling; in gradient fill mode, there are four gradient types to choose from: linear gradient fill, radial gradient fill, conical gradient fill and four-corner gradient fill

fill_back_color

Color – The background color of the fill symbol. When the fill mode is gradient fill, this color is the gradient end color. The default value is Color(255, 255, 255, 255)

fill_fore_color

Color – The foreground color of the fill symbol. When the fill mode is gradient fill, this color is the gradient start color. The default value is Color(189, 235, 255, 255)

fill_gradient_angle

float – The rotation angle of the gradient fill, the unit is 0.1 degrees, and the counterclockwise direction is the positive direction.

fill_gradient_mode

FillGradientMode – The gradient type of the gradient fill style. For the definition of each gradient fill type, refer to FillGradientMode

fill_gradient_offset_ratio_x

float – Return the percentage of the horizontal offset of the gradient fill center point relative to the center point of the filled area. Let the coordinates of the center point of the filled area be (x0, y0), the coordinates of the fill center point be (x, y), the width of the filled area be a, and the horizontal offset percentage be dx; then x = x0 + a*dx/100. The percentage can be negative; when it is negative, the fill center point is offset in the negative direction of the X-axis relative to the center point of the filled area. This property is effective for radial gradient, conical gradient, four-corner gradient and linear gradient fills.

fill_gradient_offset_ratio_y

float – Return the percentage of the vertical offset of the gradient fill center point relative to the center point of the filled area. Let the coordinates of the center point of the filled area be (x0, y0), the coordinates of the fill center point be (x, y), the height of the filled area be b, and the vertical offset percentage be dy; then y = y0 + b*dy/100. The percentage can be negative; when it is negative, the fill center point is offset in the negative direction of the Y-axis relative to the center point of the filled area. This property is effective for radial gradient, conical gradient, four-corner gradient and linear gradient fills.

fill_opaque_rate

int – Return the opacity of the fill. Legal values are 0 to 100: 0 means completely transparent, and 100 means completely opaque. A value less than 0 is treated as 0, and a value greater than 100 is treated as 100.

fill_symbol_id

int – Return the code of the fill symbol. This code uniquely identifies the fill symbol of each normal fill style. The fill symbol can be user-defined, or a symbol from the system's built-in symbol library.

static from_xml(xml)

Construct GeoStyle object according to xml description information

Parameters:xml (str) – The xml information describing the GeoStyle. For details, refer to to_xml()
Returns:geometric object style
Return type:GeoStyle
is_fill_back_opaque

bool – Whether the current filled background is opaque. If the current filled background is opaque, it is True, otherwise it is False.

line_color

Color – Line symbol style or color of dot symbol.

static line_style(line_id=0, line_width=0.1, color=(0, 0, 0))

Object style for constructing a line object

Parameters:
  • line_id (int) – The code of the line symbol. This code is used to uniquely identify each linear symbol. Linear symbols can be customized by users, or you can use the system’s own symbol library.
  • line_width (float) – The width of the line symbol. The unit is millimeter and the accuracy is 0.1.
  • color (Color or tuple[int,int,int] or tuple[int,int,int,int]) – The color of the line symbol
Returns:

the object style of the line object

Return type:

GeoStyle
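
For example, a 0.5 mm gray line style might be built as follows (a sketch; line symbol id 0 is assumed to exist in the symbol library):

```python
>>> # color accepts a Color object or a plain RGB/RGBA tuple.
>>> style = GeoStyle.line_style(line_id=0, line_width=0.5,
...                             color=(128, 128, 128))
```
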

line_symbol_id

int – The code of the line symbol. This code uniquely identifies each line symbol. The line symbol can be user-defined, or a symbol from the system's built-in symbol library.

line_width

float – The width of the line symbol. The unit is millimeters and the precision is 0.1.

marker_angle

float – The rotation angle of the dot symbol, in degrees, accurate to 0.1 degrees, with the counterclockwise direction as the positive direction. This angle can also be used as the rotation angle of the fill symbol in the normal fill style.

marker_size

tuple[float,float] – The size of the dot symbol, in millimeters, accurate to 0.1 mm.

marker_symbol_id

int – The code of the dot symbol. This code uniquely identifies each dot symbol. The dot symbol can be user-defined, or a symbol from the system's built-in symbol library.

static point_style(marker_id=0, marker_angle=0.0, marker_size=(4, 4), color=(0, 0, 0))

Object style for constructing a point object

Parameters:
  • marker_id (int) – The code of the dot symbol. This code is used to uniquely identify each dot symbol. Point symbols can be customized by users, or you can use the system’s own symbol library. The ID value of the specified symbol must be an ID value that already exists in the symbol library.
  • marker_angle (float) – The rotation angle of the dot symbol. The unit is degree, accurate to 0.1 degree, and the counterclockwise direction is the positive direction. This angle can be used as the rotation angle of the fill symbol in the normal fill style.
  • marker_size (tuple[float,float]) – The size of the dot symbol, in millimeters, with an accuracy of 0.1 mm. Both values must be greater than or equal to 0. A value of 0 means the symbol is not displayed; a value less than 0 raises an exception.
  • color (Color or tuple[int,int,int] or tuple[int,int,int,int]) – The color of the dot symbol.
Returns:

the object style of the point object

Return type:

GeoStyle
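A minimal usage sketch in the style of the examples above (assuming GeoStyle is imported from iobjectspy.data, and that the color tuple is interpreted as RGB):

>>> style = GeoStyle.point_style(marker_id=0, marker_angle=45.0, marker_size=(4, 4), color=(255, 0, 0))

The returned GeoStyle can then be adjusted further through the setter methods below.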

set_fill_back_color(value)

Set the background color of the fill symbol. When the fill mode is gradient fill, this color is the end color of the gradient fill.

Parameters:value (Color or tuple[int,int,int] or tuple[int,int,int,int]) – Used to set the background color of the fill symbol.
Returns:object itself
Return type:GeoStyle
set_fill_back_opaque(value)

Set whether the current filled background is opaque.

Parameters:value (bool) – Whether the current fill background is opaque; True means opaque.
Returns:object itself
Return type:GeoStyle
set_fill_fore_color(value)

Set the foreground color of the fill symbol. When the fill mode is gradient fill, this color is the starting color of the gradient fill.

Parameters:value (Color or tuple[int,int,int] or tuple[int,int,int,int]) – used to set the foreground color of the fill symbol
Returns:object itself
Return type:GeoStyle
set_fill_gradient_angle(value)

Set the rotation angle of the gradient fill, in degrees; the counterclockwise direction is positive. For the definition of each gradient fill style, refer to FillGradientMode. Different gradient fills rotate with different visual effects, but all rotate counterclockwise around the center of the smallest bounding rectangle of the filled area:

  • Linear gradient
When the set angle is between 0 and 360 degrees, the line through the start point and end point rotates counterclockwise around the center of the smallest bounding rectangle, and the gradient rotates with it; it remains a linear gradient from the start of the line to its end. The gradient styles at special angles are:

-When the gradient fill angle is set to 0 or 360 degrees, the fill is a left-to-right linear gradient from the start color to the end color; in the figure, the start color is yellow and the end color is pink;

-When the angle is set to 180 degrees, the fill is the reverse of the 0-degree case, that is, a right-to-left linear gradient from the start color to the end color;

-When the angle is set to 90 degrees, the fill is a bottom-to-top linear gradient from the start color to the end color;

-When the angle is set to 270 degrees, the fill is the reverse of the 90-degree case, that is, a top-to-bottom linear gradient from the start color to the end color.

  • Radial gradient
When the gradient fill angle is set to any angle (within the normal range), the circle that defines the radial gradient rotates by that angle. Because the circle is symmetric about the center of the smallest bounding rectangle of the filled area, the result looks the same at every angle: a radial gradient from the center of the filled area to its boundary, from the foreground color to the background color.
  • Conical gradient

When the gradient angle is set to any angle between 0 and 360 degrees, all generatrices of the cone rotate counterclockwise around the center of the cone, i.e. the center of the smallest bounding rectangle of the filled area. In the example shown in the figure, the rotation angle is 90 degrees: every generatrix rotates from its starting position (the zero-rotation position) to the specified angle; for example, the generatrix through the start point rotates from the 0-degree position to the 90-degree position.

  • Four-corner gradient
According to the given gradient fill angle, the gradient squares rotate around the center of the filled area from their initial position (the default position with zero rotation angle). The gradient remains a gradient from the start color to the end color, from the inner square to the outer square.
Parameters:value (float) – to set the rotation angle of the gradient fill
Returns:object itself
Return type:GeoStyle
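To make the linear-gradient case concrete, the angle can be read as a counterclockwise rotation of the gradient direction. A plain-Python sketch of this interpretation (illustration only, not part of the iobjectspy API):

```python
import math

def linear_gradient_direction(angle_deg):
    """Unit vector along which a linear gradient runs, start color to end color.

    0 (or 360) degrees -> left to right; 90 -> bottom to top;
    180 -> right to left; 270 -> top to bottom (counterclockwise positive).
    """
    rad = math.radians(angle_deg % 360)
    return math.cos(rad), math.sin(rad)
```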
set_fill_gradient_mode(value)

Set the gradient type of the gradient fill style

Parameters:value (FillGradientMode or str) – The gradient type of the gradient fill style.
Returns:object itself
Return type:GeoStyle
set_fill_gradient_offset_ratio_x(value)

Set the horizontal offset percentage of the gradient fill center relative to the center of the filled area. Suppose the center of the filled area is (x0, y0), the fill center is (x, y), the width of the filled area is a, and the horizontal offset percentage is dx; then x = x0 + a*dx/100. The percentage can be negative, in which case the fill center is offset in the negative x direction relative to the center of the filled area. This method is effective for radial, conical, four-corner and linear gradient fills.

Parameters:value (float) – The value used to set the horizontal offset of the filling center point.
Returns:object itself
Return type:GeoStyle
set_fill_gradient_offset_ratio_y(value)

Set the vertical offset percentage of the gradient fill center relative to the center of the filled area. Suppose the center of the filled area is (x0, y0), the fill center is (x, y), the height of the filled area is b, and the vertical offset percentage is dy; then y = y0 + b*dy/100. The percentage can be negative, in which case the fill center is offset in the negative y direction relative to the center of the filled area. This method is effective for radial, conical, four-corner and linear gradient fills.

Parameters:value (float) – The value used to set the vertical offset of the filling center point.
Returns:object itself
Return type:GeoStyle
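The two offset formulas above can be combined into one computation. A plain-Python sketch (illustration only, not part of the iobjectspy API):

```python
def gradient_fill_center(x0, y0, width, height, dx_percent, dy_percent):
    """Fill center (x, y) from the filled area's center (x0, y0).

    x = x0 + width * dx/100 and y = y0 + height * dy/100; a negative
    percentage offsets the center in the negative axis direction.
    """
    return x0 + width * dx_percent / 100.0, y0 + height * dy_percent / 100.0
```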
set_fill_opaque_rate(value)

Set the fill opacity; legal values are 0 to 100. A value of 0 means fully transparent (no fill); a value of 100 means completely opaque. Values less than 0 are treated as 0, and values greater than 100 are treated as 100.

Parameters:value (int) – The integer value used to set the opacity of the fill.
Returns:object itself
Return type:GeoStyle
set_fill_symbol_id(value)

Set the code of the fill symbol. This code is used to uniquely identify the fill symbols of each common fill style. Filling symbols can be customized by users, or you can use the symbol library that comes with the system. The ID value of the specified filling symbol must be an ID value that already exists in the symbol library.

Parameters:value (int) – An integer is used to set the code of the filling symbol.
Returns:object itself
Return type:GeoStyle
set_line_color(value)

Set the color of line symbols or point symbols.

Parameters:value (Color or tuple[int,int,int] or tuple[int,int,int,int]) – A Color object used to set the color of line symbols or point symbols.
Returns:object itself
Return type:GeoStyle
set_line_symbol_id(value)

Set the code of the line symbol. This code uniquely identifies each line symbol. Line symbols can be customized by users, or a symbol from the system’s built-in symbol library can be used. The ID value of the specified line symbol must already exist in the symbol library.

Parameters:value (int) – An integer value used to set the encoding of the line symbol.
Returns:object itself
Return type:GeoStyle
set_line_width(value)

Set the width of the linear symbol. The unit is millimeter and the accuracy is 0.1.

Parameters:value (float) – The width of the line symbol, in millimeters.
Returns:object itself
Return type:GeoStyle
set_marker_angle(value)

Set the rotation angle of the dot symbol, in degrees, accurate to 0.1 degrees, and counterclockwise is the positive direction. This angle can be used as the rotation angle of the fill symbol in the normal fill style.

Parameters:value (float) – The rotation angle of the dot symbol.
Returns:object itself
Return type:GeoStyle
set_marker_size(width, height)

Set the size of the dot symbol, in millimeters, accurate to 0.1 mm. Both values must be greater than or equal to 0. A value of 0 means the symbol is not displayed; a value less than 0 raises an exception.

When styling a point layer whose dot symbol is a TrueType font, the symbol does not support different width and height values; the aspect ratio is always 1:1. If the user specifies unequal width and height values, the system sets both to the height value specified by the user.

Parameters:
  • width (float) – width
  • height (float) – height
Returns:

object itself

Return type:

GeoStyle

set_marker_symbol_id(value)

Set the code of the dot symbol. This code uniquely identifies each dot symbol. Point symbols can be customized by users, or a symbol from the system’s built-in symbol library can be used. The ID value of the specified point symbol must already exist in the symbol library.

Parameters:value (int) – The code of the point symbol.
Returns:object itself
Return type:GeoStyle
to_xml()

Return the XML string representing the GeoStyle object.

Return type:str
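Because every setter above returns the GeoStyle object itself, styles can be built by chaining calls. A hedged sketch (the no-argument GeoStyle() constructor and the 'LINEAR' gradient-mode name are assumptions, not confirmed by this page):

>>> style = GeoStyle().set_fill_gradient_mode('LINEAR') \
...                   .set_fill_fore_color((255, 255, 0)) \
...                   .set_fill_back_color((255, 192, 203)) \
...                   .set_fill_gradient_angle(90.0)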
iobjectspy.data.list_maps()

Return all map objects in the current workspace

Returns:all maps in the current workspace
Return type:list[Map]
iobjectspy.data.remove_map(index_or_name)

Delete the map with the specified serial number or name in this map collection object

Parameters:index_or_name (str or int) – the serial number or name of the map to be deleted
Returns:True if the deletion succeeds; otherwise False.
Return type:bool
iobjectspy.data.add_map(map_name, map_or_xml)

Add a map to the current workspace

Parameters:
  • map_name (str) – map name
  • map_or_xml (Map or str) – a Map object or the XML description of a map
Returns:

The serial number of the newly added map in this map collection object.

Return type:

int

iobjectspy.data.get_map(index_or_name)

Get the map object with the specified name or serial number

Parameters:index_or_name (int or str) – the specified map name or serial number
Returns:map object
Return type:Map
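A sketch of typical map management combining these functions (map_xml is a hypothetical variable holding a Map object or its XML description):

>>> index = add_map('World', map_xml)
>>> world = get_map('World')
>>> removed = remove_map('World')   # True if the map was deleted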
iobjectspy.data.zoom_geometry(geometry, center_point, scale_x, scale_y)

The scale transformation (zoom) of geometric objects supports point, line and area geometric objects. For any point on the geometric object:

Scale transformation in the x direction: result_point.x = point.x * scale_x + center_point.x * (1 - scale_x); scale transformation in the y direction: result_point.y = point.y * scale_y + center_point.y * (1 - scale_y)
Parameters:
  • geometry (GeoPoint or GeoLine or GeoRegion) – the geometric object to be transformed
  • center_point (Point2D) – zoom reference point, generally the center point of geometric objects
  • scale_x (float) – The scaling factor in the x direction. When the value is less than 1, the geometric object is reduced; when the value is greater than 1, the geometric object is enlarged; when it is equal to 1, the geometric object remains unchanged
  • scale_y (float) – The scaling factor in the y direction. When the value is less than 1, the geometric object is reduced; when the value is greater than 1, the geometric object is enlarged; when it is equal to 1, the geometric object remains unchanged
Returns:

Return True for success, False for failure

Return type:

bool
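The transform applied to every vertex can be written out directly. A plain-Python sketch of the formulas above (illustration only, not part of the iobjectspy API):

```python
def zoom_point(x, y, cx, cy, scale_x, scale_y):
    """Scale (x, y) about the reference point (cx, cy).

    scale < 1 shrinks, scale > 1 enlarges, scale == 1 leaves the point unchanged.
    """
    return x * scale_x + cx * (1 - scale_x), y * scale_y + cy * (1 - scale_y)
```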

iobjectspy.data.divide_polygon(polygon, divide_type='PART', orientation='NORTH', angle=90.0, divide_parts=3, divide_area=0.0, divide_area_unit='SQUAREMETER', remainder_area=None, prj=None)

Cuts polygon objects, typically used for national land parcel data; cutting can be done by area or by count. For example, a large parcel can be cut into 10-acre plots, or the whole parcel can be divided into 10 parts of equal area.

>>> ds = open_datasource('E:/data.udb')
>>> dt = ds['dltb']
>>> polygon = dt.get_geometries('SmID == 1')[0]
>>> # Cut into 5 equal parts
>>> divide_result = divide_polygon(polygon, 'PART', 'NORTH', angle=45, divide_parts=5, prj=dt.prj_coordsys)
>>> print(len(divide_result))
5
>>> # Cut by area: cut off a region object with an area of 500 square meters
>>> divide_result = divide_polygon(polygon, 'AREA', 'EAST', 0.0, divide_parts=1, divide_area=500,
...                                divide_area_unit='SQUAREMETER', prj=dt.prj_coordsys)
>>> print(len(divide_result))
2
>>> print(divide_result[0].area)
500.0
Parameters:
  • polygon (GeoRegion or Rectangle) – the two-dimensional polygon object to be divided; cannot be empty
  • divide_type (DividePolygonType or str) – the polygon cutting type; the default is 'PART'
  • orientation (DividePolygonOrientation or str) – the orientation of the cut; the default is 'NORTH'
  • angle (float) – the cutting azimuth, the clockwise angle from true north. If the azimuth is 0 or 180 degrees, the cutting orientation cannot be north or south; if the azimuth is 90 or 270 degrees, the cutting orientation cannot be east or west.
  • divide_parts (int) – the number of parts to cut. For cutting by area, the number of cuts cannot be greater than (area of the object before cutting / cutting area); for equal-part cutting, it is the number of parts after the final cut.
  • divide_area (float) – the cutting area. This parameter must be set when the cutting type is cutting by area.
  • divide_area_unit (AreaUnit or str) – the unit of the cutting area. This parameter must be set together with divide_area.
  • remainder_area (float) – when cutting by area, small leftover polygons may remain. This parameter merges leftover polygons into adjacent objects: if the leftover area is less than the specified value, it is merged. Valid only when the value is greater than 0; if it is less than or equal to 0, no merging is performed.
  • prj (PrjCoordSys or str) – the spatial reference of the polygon to be divided. Geographic coordinate systems are not supported: cutting by area (and equal-part cutting, which is converted to equal-area cutting) requires computing areas, which cannot be done directly in longitude/latitude. The data must be converted to a projected coordinate system to compute areas, and that projection may deform the data enough to make the final result incorrect.
Returns:

The area object array obtained after splitting

Return type:

list[GeoRegion]

class iobjectspy.data.CSGNode(other=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Base class of CSG nodes.

__add__(other)

:Boolean union; other can be a CSGNode, CSGEntity, or Point3D :when other is a CSGNode, constructs a new CSGBoolNode :when other is a CSGEntity, constructs a new CSGBoolNode :when other is a Point3D, translates the current node

__mul__(other)

:Boolean intersection; other can be a CSGNode, CSGEntity, Point3D, or Matrix :when other is a CSGNode, constructs a new CSGBoolNode :when other is a CSGEntity, constructs a new CSGBoolNode :when other is a Point3D, translates the current node :when other is a Matrix, applies the matrix transform to the current node

__sub__(other)

:Boolean difference; other can be a CSGNode or CSGEntity :constructs a new CSGBoolNode

clone()

Implemented by subclasses.

static from_json(str_json)

Parse from a JSON string.

matrix
rotate(_rotate)

:Rotate by the given rotation.

set_Matrix(value)
to_json()

Serialize to a JSON string.

type

type – Returns the CSGNodeType.

class iobjectspy.data.CSGEntity(other=None)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

__add__(other)

:Boolean union; other can be a CSGNode, CSGEntity, or Point3D :when other is a CSGNode, constructs and returns a CSGBoolNode :when other is a CSGEntity, constructs and returns a CSGBoolNode :when other is a Point3D, constructs and returns a translated SimpleNode

__mul__(other)

:Boolean intersection; other can be a CSGNode, CSGEntity, Point3D, or Matrix :when other is a CSGNode, constructs and returns a CSGBoolNode :when other is a CSGEntity, constructs and returns a CSGBoolNode :when other is a Point3D, constructs and returns a scaled SimpleNode :when other is a Matrix, constructs and returns a transformed SimpleNode

__sub__(other)

:Boolean difference; other can be a CSGNode or CSGEntity :when other is a CSGNode, constructs and returns a CSGBoolNode :when other is a CSGEntity, constructs and returns a CSGBoolNode

area
clone()

Clone. :return: a new instance object

copy_j_object()

Create a Java object for the current entity type. :return: the Java instance object

rotate(_rotate)

:Rotate by the given rotation.

type
volume
class iobjectspy.data.CSGBooleanNode(left=None, right=None, boolType=CSGBooleanType.BOOL_UNION)

Bases: iobjectspy._jsuperpy.data.geo.CSGNode

:Constructed from a left node and a right node; left and right can both be CSGNode or both be CSGEntity, or one can be a CSGNode and the other a CSGEntity

bool_type
clone()

Implemented by subclasses.

left_node
right_node
set_bool_type(value)
set_left_node(value)
set_right_node(value)
type()

type: Returns the CSGNodeType.

class iobjectspy.data.CSGSimpleNode(data=None)

Bases: iobjectspy._jsuperpy.data.geo.CSGNode

:Can be initialized with None, a CSGEntity, or another CSGSimpleNode

clone()

Implemented by subclasses.

get_entity()
set_entity(entity)
type()

type: Returns the CSGNodeType.

class iobjectspy.data.Box(_length=10, _width=10, _height=10)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Box (cuboid)

:param _length: length :param _width: width :param _height: height

height
length
set_height(value)
set_length(value)
set_width(value)
width
class iobjectspy.data.Sphere(dradius=10.0)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Sphere

:param dradius: radius

radius
set_radius(value)
class iobjectspy.data.Cylinder(top_radius=10, bottom_radius=10, height=50)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Cylinder (the top and bottom radii may differ)

:param top_radius: top face radius :param bottom_radius: bottom face radius :param height: height

bottom_radius
height
set_bottom_radius(value)
set_height(value)
set_top_radius(value)
top_radius
class iobjectspy.data.Cone(radius=10, height=50)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Cone

:param radius: bottom face radius :param height: height

height
radius
set_height(value)
set_radius(value)
class iobjectspy.data.Ellipsoid(semiAxis_x=10, semiAxis_y=10, semiAxis_z=10)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Ellipsoid

:param semiAxis_x: semi-axis along the x axis :param semiAxis_y: semi-axis along the y axis :param semiAxis_z: semi-axis along the z axis

semiAxis_X
semiAxis_Y
semiAxis_Z
set_semiAxis_X(value)
set_semiAxis_Y(value)
set_semiAxis_Z(value)
class iobjectspy.data.Torus(_ring_radius=10, _pipe_radius=1, _sweep_angle=360, _start_angle=0)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Torus

:param _ring_radius: ring radius :param _pipe_radius: pipe radius :param _sweep_angle: sweep angle :param _start_angle: start angle

pipe_radius
ring_radius
set_pipe_radius(value)
set_ring_radius(value)
set_start_angle(value)
set_sweep_angle(value)
start_angle
sweep_angle
class iobjectspy.data.TruncatedCone(_top_radius=10, _bottom_radius=20, _height=20, _top_offset=Point2D(0.000000, 0.000000))

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Truncated cone (frustum)

:param _top_radius: top radius :param _bottom_radius: bottom radius :param _height: height :param _top_offset: offset of the top face center (Point2D)

bottom_radius
height
set_bottom_radius(value)
set_height(value)
set_top_offset(value)
set_top_radius(value)
top_offset
top_radius
class iobjectspy.data.Table3D(_bottom_length=10, _bottom_length1=10, _top_length=5, _top_length1=5, _height=6)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Prism frustum

:param _bottom_length: bottom diagonal length 1 (coincident with the X axis) :param _bottom_length1: bottom diagonal length 2 (coincident with the Y axis) :param _top_length: top diagonal length 1 (parallel to the X axis) :param _top_length1: top diagonal length 2 (parallel to the Y axis) :param _height: height

bottom_length
bottom_length1
height
set_bottom_length(value)
set_bottom_length1(value)
set_height(value)
set_top_length(value)
set_top_length1(value)
top_length
top_length1
class iobjectspy.data.Wedge(_bottom_length=10, _bottom_Width=8, _top_length=8, _topWidth=0, _height=15, _top_offset=Point2D(1.000000, 4.000000))

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Wedge

:param _bottom_length: bottom length (along the X axis) :param _bottom_Width: bottom width (along the Y axis) :param _top_length: top length (parallel to the X axis) :param _topWidth: top width (parallel to the Y axis) :param _height: height :param _top_offset: top face offset (Point2D)

bottom_length
bottom_width
height
set_bottom_length(value)
set_bottom_width(value)
set_height(value)
set_top_length(value)
set_top_offset(value)
set_top_width(value)
top_length
top_offset
top_width
class iobjectspy.data.Pyramid(_length=10, _width=10, _height=10)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Pyramid

:param _length: length :param _width: width :param _height: height

height
length
set_height(value)
set_length(value)
set_width(value)
width
class iobjectspy.data.EllipticRing(_semi_major_axis=10, _semi_minor_axis=8, _pipe_padius=1)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Elliptic torus

:param _semi_major_axis: semi-major axis :param _semi_minor_axis: semi-minor axis :param _pipe_padius: pipe radius

pipe_radius
semi_major_axis
semi_minor_axis
set_pipe_radius(value)
set_semi_major_axis(value)
set_semi_minor_axis(value)
class iobjectspy.data.RectangularRing(_length=10, _width=8, _pipe_radius=1)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Rectangular torus

:param _length: length :param _width: width :param _pipe_radius: pipe radius

length
pipe_radius
set_length(value)
set_pipe_radius(value)
set_width(value)
width
class iobjectspy.data.SlopedCylinder(_bottom_radius=10, _top_radius=10, _height=50, _top_slope=Point2D(0.000000, 0.000000), _bottom_slope=Point2D(0.000000, 0.000000))

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Sloped cylinder (beveled ends)

:param _bottom_radius: bottom radius :param _top_radius: top radius :param _height: height :param _top_slope: top slope (angles to the X and Y axes) :param _bottom_slope: bottom slope (angles to the X and Y axes)

bottom_radius
bottom_slope
height
set_bottom_radius(value)
set_bottom_slope(value)
set_height(value)
set_top_radius(value)
set_top_slope(value)
top_radius
top_slope
class iobjectspy.data.BendingCylinder(_radius=3, _angle=30, _length=10)

Bases: iobjectspy._jsuperpy.data.geo.CSGEntity

Bent cylinder

:param _radius: radius :param _angle: bend angle :param _length: length

angle
length
radius
set_angle(value)
set_length(value)
set_radius(value)
class iobjectspy.data.GeoConstructiveSolid(csg_nodes=None)

Bases: iobjectspy._jsuperpy.data.geo.Geometry3D

Constructive solid geometry (CSG) object class

IsLonLat
csgNodes
set_IsLonLat(value)
set_csgNodes(value)
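The operator overloads documented above make CSG modeling compositional: entities combine into boolean nodes, and nodes feed a GeoConstructiveSolid. An illustrative sketch (passing a node list to the csg_nodes parameter is an assumption based on the constructor signature above):

>>> box = Box(10, 10, 10)
>>> hole = Cylinder(2, 2, 12)
>>> node = box - hole              # Boolean difference -> CSGBoolNode
>>> solid = GeoConstructiveSolid([node])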

iobjectspy.enums module

class iobjectspy.enums.PixelFormat

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the pixel format type constants for raster and image data storage.

A raster is essentially an array of cells, and the cell (or pixel) is the basic storage unit of raster data. There are two kinds of raster data in SuperMap: raster datasets (DatasetGrid) and image datasets (DatasetImage). Raster datasets are mostly used for raster analysis, so their cell values are attribute values of features, such as elevation or precipitation; image datasets are generally used for display or as a basemap, so their cell values are color values or color index values.

Variables:
  • PixelFormat.UNKONOWN – Unknown pixel format
  • PixelFormat.UBIT1 – Each pixel is represented by 1 bit. For raster datasets, it can represent two values of 0 and 1. For image datasets, it can represent two colors of black and white, corresponding to monochrome image data.
  • PixelFormat.UBIT4 – Each pixel is represented by 4 bits. For a raster dataset, it can represent the 16 integer values from 0 to 15; for an image dataset, it can represent 16 colors. These 16 colors are indexed colors defined in the dataset’s color table, corresponding to 16-color image data.
  • PixelFormat.UBIT8 – Each pixel is represented by 8 bits, that is, 1 byte. For a raster dataset, it can represent the 256 integer values from 0 to 255; for an image dataset, it can represent 256 colors. These 256 colors are indexed colors defined in the color table, corresponding to 256-color image data.
  • PixelFormat.BIT8 – Each pixel is represented by 8 bits, that is, 1 byte. For a raster dataset, it can represent the 256 integer values from -128 to 127.
  • PixelFormat.BIT16 – Each pixel is represented by 16 bits, that is, 2 bytes. For raster datasets, it can represent the 65536 integer values from -32768 to 32767; for image datasets, among the 16 bits, red, green, and blue are each represented by 5 bits and the remaining bit is unused, corresponding to color image data.
  • PixelFormat.UBIT16 – Each pixel is represented by 16 bits, that is, 2 bytes. For raster datasets, it can represent the 65536 integer values from 0 to 65535.
  • PixelFormat.RGB – Each pixel is represented by 24 bits, that is, 3 bytes. It is only available for image datasets. Among the 24 bits, red, green, and blue are each represented by 8 bits, corresponding to true-color image data.
  • PixelFormat.RGBA – Each pixel is represented by 32 bits, that is, 4 bytes. It is only available for image datasets. Among the 32 bits, red, green, blue and alpha are each represented by 8 bits, corresponding to enhanced true-color image data.
  • PixelFormat.BIT32 – Each pixel is represented by 32 bits, that is, 4 bytes. For raster datasets, it can represent the 4294967296 integer values from -2^31 to 2^31-1; for image datasets, among the 32 bits, red, green, blue and alpha are each represented by 8 bits, corresponding to enhanced true-color image data. This format supports DatasetGrid and DatasetImage (multi-band only).
  • PixelFormat.UBIT32 – Each pixel is represented by 32 bits, that is, 4 bytes. It can represent the 4294967296 integer values from 0 to 4294967295.
  • PixelFormat.BIT64 – Each pixel is represented by 64 bits, that is, 8 bytes. It can represent the 18446744073709551616 integer values from -2^63 to 2^63-1.
  • PixelFormat.SINGLE – Each pixel is represented by 4 bytes. It can represent single-precision floating-point numbers in the range -3.402823E+38 to 3.402823E+38.
  • PixelFormat.DOUBLE – Each pixel is represented by 8 bytes. It can represent double-precision floating-point numbers in the range -1.79769313486232E+308 to 1.79769313486232E+308.
BIT16 = 16
BIT32 = 320
BIT64 = 64
BIT8 = 80
DOUBLE = 6400
RGB = 24
RGBA = 32
SINGLE = 3200
UBIT1 = 1
UBIT16 = 160
UBIT32 = 321
UBIT4 = 4
UBIT8 = 8
UNKONOWN = 0
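The integer ranges quoted above follow directly from the bit width of each format. A small plain-Python helper illustrating the rule (not part of the iobjectspy API):

```python
def pixel_value_range(bits, signed):
    """Representable integer range for an n-bit pixel.

    Signed formats (BIT8/BIT16/BIT32/BIT64) span -2^(n-1) .. 2^(n-1)-1;
    unsigned formats (UBIT1/UBIT4/UBIT8/UBIT16/UBIT32) span 0 .. 2^n-1.
    """
    if signed:
        return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return 0, 2 ** bits - 1

# BIT8 covers -128..127, UBIT16 covers 0..65535
```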
class iobjectspy.enums.BlockSizeOption

Bases: iobjectspy._jsuperpy.enums.JEnum

This enumeration defines the type constants for the pixel block sizes of raster or image datasets:

Variables:
BS_1024 = 1024
BS_128 = 128
BS_256 = 256
BS_512 = 512
BS_64 = 64
class iobjectspy.enums.AreaUnit

Bases: iobjectspy._jsuperpy.enums.JEnum

Area unit type:

Variables:
ACRE = 14
ARE = 7
HECTARE = 6
MU = 9
QING = 8
SQUARECENTIMETER = 2
SQUAREDECIMETER = 3
SQUAREFOOT = 11
SQUAREINCH = 10
SQUAREKILOMETER = 5
SQUAREMETER = 4
SQUAREMILE = 13
SQUAREMILLIMETER = 1
SQUAREYARD = 12
class iobjectspy.enums.Unit

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines type constants that represent units.

Variables:
CENTIMETER = 100
DECIMETER = 1000
DEGREE = 1001745329
FOOT = 3048
INCH = 254
KILOMETER = 10000000
METER = 10000
MILE = 16090000
MILIMETER = 10
MINUTE = 1000029089
RADIAN = 1100000000
SECOND = 1000000485
YARD = 9144
class iobjectspy.enums.EngineType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for spatial database engine types. A spatial database engine sits on top of a conventional database management system: in addition to the capabilities of a conventional DBMS, it provides storage and management specifically for spatial data. SuperMap SDX+ is SuperMap’s spatial database technology and an important part of the data model of SuperMap GIS software. Various spatial geometry objects and image data can be stored through the SDX+ engine in a relational database, forming a spatial database that integrates spatial data and attribute data.

Variables:
  • EngineType.IMAGEPLUGINS – Image read-only engine type; the corresponding enumeration value is 5. Used for common image formats such as BMP, JPG and TIFF, SuperMap’s custom image format SIT, the 2D map cache configuration file format SCI, and so on. When loading a 2D map cache, set this engine type and use the DatasourceConnectionInfo.set_server() method to set the server parameter to the 2D map cache configuration file (SCI). For speed, MrSID and ECW files are opened read-only in composite-band mode; non-grayscale data is displayed as RGB or RGBA by default, and grayscale data is displayed as-is.
  • EngineType.ORACLEPLUS – Oracle engine type
  • EngineType.SQLPLUS – SQL Server engine type, only supported in Windows platform version
  • EngineType.DB2 – DB2 engine type
  • EngineType.KINGBASE – Kingbase engine type, for Kingbase datasource, does not support multi-band data
  • EngineType.MEMORY – Memory datasource.
  • EngineType.OGC – OGC engine type, for web datasources; the corresponding enumeration value is 23. Currently supported types are WMS, WFS, WCS and WMTS. The default reading order of the BoundingBox and TopLeftCorner tags in a WMTS service is (longitude, latitude), but some service providers supply coordinates as (latitude, longitude). The usual symptom is that local vector data and the published WMTS service data cannot be overlaid together; in that case, to ensure the coordinates are read correctly, edit the SuperMap.xml file (located in the Bin directory) accordingly.
  • EngineType.MYSQL – MYSQL engine type, support MySQL version 5.6.16 and above
  • EngineType.MONGODB – MongoDB engine type, currently supported authentication method is Mongodb-cr
  • EngineType.BEYONDB – BeyonDB engine type
  • EngineType.GBASE – GBase engine type
  • EngineType.HIGHGODB – HighGoDB engine type
  • EngineType.UDB – UDB engine type
  • EngineType.POSTGRESQL – PostgreSQL engine type
  • EngineType.GOOGLEMAPS – GoogleMaps engine type. The engine is read-only and cannot be created. This constant is only supported in the Windows 32-bit platform version, not in the Linux version
  • EngineType.SUPERMAPCLOUD – Supermap cloud service engine type. This engine is a read-only engine and cannot be created. This constant is only supported in the Windows 32-bit platform version, not in the Linux version.
  • EngineType.ISERVERREST – REST map service engine type. This engine is read-only and cannot be created. For map services published based on the REST protocol. This constant is only supported in the Windows 32-bit platform version, not in the Linux version.
  • EngineType.BAIDUMAPS – Baidu map service engine type
  • EngineType.BINGMAPS – Bing map service engine type
  • EngineType.GAODEMAPS – GaoDe map service engine type
  • EngineType.OPENSTREETMAPS – OpenStreetMap engine type. This constant is only supported in the Windows 32-bit platform version, not in the Linux version
  • EngineType.SCV – Vector cache engine type
  • EngineType.DM – The third-generation DM engine type
  • EngineType.ORACLESPATIAL – Oracle Spatial engine type
  • EngineType.SDE

    ArcSDE engine type:

    -Supports ArcSDE 9.2.0 and above. -Supports reading (not writing) the five data types of ArcSDE 9.2.0 and above: point, line, region, text and raster datasets. -The style of ArcSDE text is not read; the default field “TEXTSTRING” in which ArcSDE stores text must not be deleted, otherwise the text cannot be read. -Rasters with a bit depth of 2 bits cannot be read; all other bit depths are supported and can be displayed with stretching. -Multi-threading is not supported. -Using the SDE engine requires an ArcInfo license, and the three DLLs sde.dll, sg.dll and pe.dll must be copied from the bin directory of the ArcSDE installation to the Bin directory of the SuperMap product (the directory containing SuSDECI.dll and SuEngineSDE.sdx). -Supported platforms: Windows 32-bit, Windows 64-bit.

  • EngineType.ALTIBASE – Altibase engine type
  • EngineType.KDB – KDB engine type
  • EngineType.SRDB – The engine type of the relational database
  • EngineType.MYSQLPlus – MySQLPlus database engine type, essentially MySQL+Mongo
  • EngineType.VECTORFILE – Vector file engine type, for general vector formats such as SHP, TAB, AutoCAD, etc.; it supports editing and saving of vector files. If a format is supported through FME, the corresponding FME license is required; the FileGDBVector format is currently not supported without an FME license.
  • EngineType.PGGIS – PostgreSQL’s spatial data extension PostGIS engine type
  • EngineType.ES – Elasticsearch engine type
  • EngineType.SQLSPATIAL – SQLSpatial engine type
  • EngineType.UDBX – UDBX engine type
  • EngineType.TIBERO – Tibero engine type
  • EngineType.SHENTONG – ShenTong database engine type
  • EngineType.HWPOSTGRESQL – HUAWEI PostgreSQL engine type
  • EngineType.GANOS – Ali PolarDB PostgreSQL engine type
  • EngineType.XUGU – XUGU engine type
  • EngineType.ATLASDB – AtlasDB engine type
ALTIBASE = 2004
ATLASDB = 2059
BAIDUMAPS = 227
BEYONDB = 2001
BINGMAPS = 230
DB2 = 18
DM = 17
ES = 2011
GANOS = 2057
GAODEMAPS = 232
GBASE = 2002
GOOGLEMAPS = 223
HIGHGODB = 2003
HWPOSTGRESQL = 2056
IMAGEPLUGINS = 5
ISERVERREST = 225
KDB = 2005
KINGBASE = 19
MEMORY = 20
MONGODB = 401
MYSQL = 32
MYSQLPlus = 2007
OGC = 23
OPENSTREETMAPS = 228
ORACLEPLUS = 12
ORACLESPATIAL = 10
PGGIS = 2012
POSTGRESQL = 221
SCV = 229
SDE = 4
SHENTONG = 2055
SQLPLUS = 16
SQLSPATIAL = 2013
SRDB = 2006
SUPERMAPCLOUD = 224
TIBERO = 2014
UDB = 219
UDBX = 2054
VECTORFILE = 101
XUGU = 2058
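The creation restrictions noted above can be sketched with a plain enum.IntEnum. This is a local stand-in using a few of the numeric codes listed here, not the actual iobjectspy.enums.EngineType class, and the helper can_create is an invented name for illustration:

```python
from enum import IntEnum

# Local stand-in mirroring a few documented EngineType codes
# (values taken from the constant list above).
class EngineType(IntEnum):
    SDE = 4
    ORACLESPATIAL = 10
    UDB = 219
    SUPERMAPCLOUD = 224
    ISERVERREST = 225
    PGGIS = 2012
    UDBX = 2054

# Engines documented above as read-only (datasources cannot be
# created with them; SDE additionally does not support writing).
READ_ONLY_ENGINES = {EngineType.SUPERMAPCLOUD, EngineType.ISERVERREST, EngineType.SDE}

def can_create(engine):
    """True if the documentation allows creating a datasource with this engine."""
    return engine not in READ_ONLY_ENGINES

print(can_create(EngineType.UDBX))           # True  - file engine, creatable
print(can_create(EngineType.SUPERMAPCLOUD))  # False - read-only service engine
```

The IntEnum mirror also makes round-tripping raw codes easy, e.g. EngineType(219) yields the UDB member.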
class iobjectspy.enums.DatasetType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the dataset type constants. A dataset is generally a collection of related data stored together. By data type, datasets are divided into vector datasets, raster datasets and image datasets, as well as datasets designed for specific problems, such as topology datasets and network datasets. By the spatial characteristics of their features, vector datasets are further divided into point datasets, line datasets, region datasets, composite datasets, text datasets, pure attribute datasets, etc.

Variables:
  • DatasetType.UNKNOWN – Unknown type dataset
  • DatasetType.TABULAR – Pure attribute dataset. It is used to store and manage pure attribute data, which describes information such as the characteristics and shape of terrain and features, for example the length and width of a river. This dataset has no spatial geometry data, so a pure attribute dataset cannot be added to a map window as a layer for display.
  • DatasetType.POINT – Point dataset. The dataset class used to store point objects, such as the distribution of discrete points.
  • DatasetType.LINE – Line dataset. A dataset used to store line objects, such as the distribution of rivers, roads, and national borders.
  • DatasetType.REGION – Polygon dataset. A dataset used to store surface objects, such as the distribution of houses and administrative areas.
  • DatasetType.TEXT – Text dataset. A dataset used to store text objects, such as annotation text; a text dataset can store only text objects, not other geometric objects.
  • DatasetType.CAD – Composite dataset. A dataset that can store multiple kinds of geometric objects, that is, a mixed collection of points, lines, regions, texts and so on. Each object in a CAD dataset can have its own style, and the CAD dataset stores a style for every object.
  • DatasetType.LINKTABLE – Linked database table, that is, an external attribute table without system fields (fields starting with SM). It is used like an ordinary attribute dataset, but is read-only.
  • DatasetType.NETWORK – Network dataset. Network datasets are used to store data with network topology relationships, such as road traffic networks. Unlike point and line datasets, a network dataset contains not only network line objects but also network node objects, together with the spatial topological relationships between them. Based on a network dataset, various network analyses can be performed, such as path analysis, service area analysis, nearest facility search, location selection, bus transfer, and adjacent point / access point analysis.
  • DatasetType.NETWORK3D – Three-dimensional network dataset, a dataset used to store three-dimensional network objects.
  • DatasetType.LINEM – Route dataset. It consists of line objects whose spatial information carries a scale value (measure). It is usually used in linear referencing models or as result data of network analysis.
  • DatasetType.PARAMETRICLINE – Composite parametric line dataset, used to store the dataset of composite parametric line geometric objects.
  • DatasetType.PARAMETRICREGION – Composite parameterized surface dataset, used to store the dataset of composite parameterized surface geometry objects.
  • DatasetType.GRIDCOLLECTION – The dataset storing raster dataset collection objects. For a detailed description of raster dataset collection objects, please refer to :py:class:`DatasetGridCollection`.
  • DatasetType.IMAGECOLLECTION – The dataset storing image dataset collection objects. For a detailed description of image dataset collection objects, please refer to :py:class:`DatasetImageCollection`.
  • DatasetType.MODEL – Model dataset.
  • DatasetType.TEXTURE – Texture dataset, a sub-dataset of the model dataset.
  • DatasetType.IMAGE – Image dataset, such as image maps, multi-band images and physical maps. It has no attribute information; each cell stores a color value or a color index value (e.g. an RGB value).
  • DatasetType.WMS – WMS dataset, a type of DatasetImage. WMS (Web Map Service) produces maps from data with geospatial location information. The web map service returns a layer-level map image, where a map is defined as a visual representation of geographic data.
  • DatasetType.WCS – WCS dataset, a type of DatasetImage. WCS (Web Coverage Service) exchanges geospatial data containing geographic location values as “coverages” on the Internet, targeting spatial image data.
  • DatasetType.GRID – Raster dataset, such as elevation datasets and land use maps. Each cell stores an attribute value representing a feature (such as an elevation value).
  • DatasetType.VOLUME – Grid volume data collection, which expresses three-dimensional volume data in a slice sampling method, such as the signal strength of a mobile phone in a specified spatial range, smog pollution index, etc.
  • DatasetType.TOPOLOGY – Topological dataset. The topology dataset is in effect a container that provides comprehensive management of topology errors. It covers the key elements of topology error checking, such as the related datasets, topology rules, topology preprocessing, topological error generation, error location and modification, and automatic maintenance of dirty areas, providing a complete solution for topology error checking. A dirty area is a region that has not yet been topologically checked: if the user locally edits data that has already been checked, a new dirty area is generated in that local region.
  • DatasetType.POINT3D – 3D point dataset, used to store 3D point object dataset.
  • DatasetType.LINE3D – 3D line dataset, used to store 3D line object dataset.
  • DatasetType.REGION3D – Three-dimensional surface dataset, used to store three-dimensional surface object dataset.
  • DatasetType.POINTEPS – EPS (Tsinghua SunwayVision) point dataset, used to store EPS point objects.
  • DatasetType.LINEEPS – EPS line dataset, used to store EPS line objects.
  • DatasetType.REGIONEPS – EPS region dataset, used to store EPS region objects.
  • DatasetType.TEXTEPS – EPS text dataset, used to store EPS text objects.
  • DatasetType.VECTORCOLLECTION – Vector dataset collection, used to store multiple vector datasets, only supports PostgreSQL engine.
  • DatasetType.MOSAIC – mosaic dataset
CAD = 149
GRID = 83
GRIDCOLLECTION = 199
IMAGE = 81
IMAGECOLLECTION = 200
LINE = 3
LINE3D = 103
LINEEPS = 158
LINEM = 35
LINKTABLE = 153
MODEL = 203
MOSAIC = 206
NETWORK = 4
NETWORK3D = 205
PARAMETRICLINE = 8
PARAMETRICREGION = 9
POINT = 1
POINT3D = 101
POINTEPS = 157
REGION = 5
REGION3D = 105
REGIONEPS = 159
TABULAR = 0
TEXT = 7
TEXTEPS = 160
TEXTURE = 204
TOPOLOGY = 154
UNKNOWN = -1
VECTORCOLLECTION = 201
VOLUME = 89
WCS = 87
WMS = 86
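The division of dataset types into vector, raster and image families described above can be illustrated with a small enum mirror. This is a local stand-in using the numeric codes from the list above, not the actual iobjectspy.enums.DatasetType class, and is_vector is an invented helper:

```python
from enum import IntEnum

# Local stand-in mirroring a few documented DatasetType codes.
class DatasetType(IntEnum):
    UNKNOWN = -1
    TABULAR = 0
    POINT = 1
    LINE = 3
    REGION = 5
    TEXT = 7
    IMAGE = 81
    GRID = 83
    CAD = 149

# Per the descriptions above: point/line/region/text/CAD are vector
# dataset types; GRID is raster and IMAGE is imagery.
VECTOR_TYPES = {DatasetType.POINT, DatasetType.LINE, DatasetType.REGION,
                DatasetType.TEXT, DatasetType.CAD}

def is_vector(code):
    """Classify a raw dataset-type code as a vector dataset type or not."""
    return DatasetType(code) in VECTOR_TYPES

print(is_vector(5))   # True  - REGION
print(is_vector(83))  # False - GRID (raster)
```

Looking members up by raw code (DatasetType(5) is DatasetType.REGION) is convenient when reading type codes back from storage.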
class iobjectspy.enums.FieldType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines field type constants. Define a series of constants to represent fields that store different types of values.

Variables:
BOOLEAN = 1
BYTE = 2
CHAR = 18
DATETIME = 23
DOUBLE = 7
INT16 = 3
INT32 = 4
INT64 = 16
JSONB = 129
LONGBINARY = 11
SINGLE = 6
TEXT = 10
WTEXT = 127
class iobjectspy.enums.GeometryType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines a series of type constants of geometric objects.

Variables:
GEOARC = 24
GEOBOX = 1205
GEOBSPLINE = 29
GEOCARDINAL = 27
GEOCHORD = 23
GEOCIRCLE = 15
GEOCIRCLE3D = 1210
GEOCOMPOUND = 1000
GEOCONE = 1207
GEOCONSTRUCTIVESOLID = 1234
GEOCURVE = 28
GEOCYLINDER = 1206
GEOELLIPSE = 20
GEOELLIPSOID = 1212
GEOELLIPTICARC = 25
GEOHEMISPHERE = 1204
GEOLEGEND = 2011
GEOLINE = 3
GEOLINE3D = 103
GEOLINEEPS = 4001
GEOLINEM = 35
GEOMAP = 2001
GEOMAPBORDER = 2009
GEOMAPSCALE = 2005
GEOMODEL = 1201
GEOMODEL3D = 1218
GEOMULTIPOINT = 2
GEONORTHARROW = 2008
GEOPARAMETRICLINE = 16
GEOPARAMETRICLINECOMPOUND = 8
GEOPARAMETRICREGION = 17
GEOPARAMETRICREGIONCOMPOUND = 9
GEOPARTICLE = 1213
GEOPICTURE = 1101
GEOPICTURE3D = 1202
GEOPIE = 21
GEOPIE3D = 1209
GEOPIECYLINDER = 1211
GEOPLACEMARK = 108
GEOPOINT = 1
GEOPOINT3D = 101
GEOPOINTEPS = 4000
GEOPYRAMID = 1208
GEORECTANGLE = 12
GEOREGION = 5
GEOREGION3D = 105
GEOREGIONEPS = 4002
GEOROUNDRECTANGLE = 13
GEOSPHERE = 1203
GEOTEXT = 7
GEOTEXT3D = 107
GEOTEXTEPS = 4003
GEOUSERDEFINED = 1001
GRAPHICOBJECT = 3000
class iobjectspy.enums.WorkspaceType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants of workspace type.

SuperMap supports two types of file workspaces, the SXWU format and the SMWU format; SuperMap also supports database workspaces, including Oracle, SQL Server, DM, MySQL, PostgreSQL and MongoDB workspaces.

Variables:
  • WorkspaceType.DEFAULT – The default value, which indicates the type of workspace when the workspace is not saved.
  • WorkspaceType.ORACLE – Oracle workspace. The workspace is saved in the Oracle database.
  • WorkspaceType.SQL – SQL Server workspace. The workspace is saved in the SQL Server database. This constant is only supported in the Windows platform version, not in the Linux version.
  • WorkspaceType.DM – DM workspace. The workspace is stored in the DM database.
  • WorkspaceType.MYSQL – MYSQL workspace. The workspace is saved in the MySQL database.
  • WorkspaceType.PGSQL – PostgreSQL workspace. The workspace is saved in the PostgreSQL database.
  • WorkspaceType.MONGO – MongoDB workspace. The workspace is stored in the MongoDB database.
  • WorkspaceType.SXWU – SXWU workspace. Only a 6R-version workspace can be saved as a workspace file of type SXWU; when saving a 6R-version workspace, a file workspace can only be saved as SXWU or SMWU.
  • WorkspaceType.SMWU – SMWU workspace. Only a 6R-version workspace can be saved as a workspace file of type SMWU; when saving a 6R-version workspace, a file workspace can only be saved as SXWU or SMWU. This constant is only supported in the Windows platform version, not in the Linux version.
DEFAULT = 1
DM = 12
MONGO = 15
MYSQL = 13
ORACLE = 6
PGSQL = 14
SMWU = 9
SQL = 7
SXWU = 8
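As a small illustration, the file-workspace type could be inferred from a path's extension. The helper below is hypothetical (not part of iobjectspy) and only mirrors the SXWU/SMWU file types and numeric codes documented here:

```python
from enum import IntEnum
from pathlib import Path

# Local stand-in mirroring the documented WorkspaceType codes used here.
class WorkspaceType(IntEnum):
    DEFAULT = 1   # workspace not yet saved
    SXWU = 8
    SMWU = 9

def workspace_type_for(path):
    """Hypothetical helper: map a file extension to the file-workspace type.

    Only .sxwu and .smwu are file workspaces per the description above;
    anything else falls back to DEFAULT (unsaved).
    """
    ext = Path(path).suffix.lower()
    return {'.sxwu': WorkspaceType.SXWU, '.smwu': WorkspaceType.SMWU}.get(ext, WorkspaceType.DEFAULT)

print(workspace_type_for('E:/projects/city.smwu').name)  # SMWU
```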
class iobjectspy.enums.WorkspaceVersion

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the workspace version type constants.

Variables:
UGC60 = 20090106
UGC70 = 20120328
class iobjectspy.enums.EncodeType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the compression encoding method when the dataset is stored.

For vector datasets, four compression encoding methods are supported: single-byte, double-byte, three-byte and four-byte encoding. These four methods share the same compression mechanism but differ in compression ratio, and all of them are lossy. Note that point datasets, pure attribute datasets and CAD datasets cannot be compressed. For raster data, four compression encodings are available: DCT, SGL, LZW and COMPOUND. DCT and COMPOUND are lossy compression encodings, while SGL and LZW are lossless.

For image and raster datasets, choosing an appropriate compression encoding method according to its pixel format (PixelFormat) is very beneficial to improve the efficiency of system operation and save storage space. The following table lists reasonable encoding methods for different Pixel formats of image and raster datasets:

../_images/EncodeTypeRec.png
Variables:
  • EncodeType.NONE – Do not use encoding
  • EncodeType.BYTE – Single-byte encoding method. Use 1 byte to store a coordinate value. (Only applicable to line and area datasets)
  • EncodeType.INT16 – Double-byte encoding method. Use 2 bytes to store a coordinate value. (Only applicable to line and area datasets)
  • EncodeType.INT24 – Three-byte encoding method. Use 3 bytes to store a coordinate value. (Only applicable to line and area datasets)
  • EncodeType.INT32 – Four-byte encoding method. Use 4 bytes to store a coordinate value. (Only applicable to line and area datasets)
  • EncodeType.DCT – DCT (Discrete Cosine Transform) encoding, a transform coding method widely used in image compression. It offers a good balance between compression capacity, reconstructed image quality, range of application and algorithm complexity, which has made it the most widely used image compression technique. Its principle is to reduce, through a transform, the strong correlation present in the spatial-domain representation of the image, so that the signal can be expressed more compactly. The method has a high compression ratio and good performance, but the encoding is lossy. Since image datasets are generally not used for precise analysis, DCT is a suitable compression encoding for storing image datasets. (Applicable to image datasets)
  • EncodeType.SGL – SGL (SuperMap Grid LZW), a compressed storage format customized by SuperMap. It is essentially an improved, more efficient LZW encoding. Grid and DEM datasets in SuperMap are currently compressed and stored with SGL. This is a lossless compression. (Applicable to raster datasets)
  • EncodeType.LZW – LZW is a widely used dictionary compression method, first applied to compressing text data. LZW encoding replaces a string with a code; subsequent occurrences of the same string use the same code, so this method can compress both repetitive and non-repetitive data. It is well suited to compressing indexed-color images and is a lossless compression encoding. (Applicable to raster and image datasets)
  • EncodeType.PNG – PNG compression encoding method supports images with multiple bit depths and is a lossless compression method. (Applicable to image dataset)
  • EncodeType.COMPOUND – Composite encoding method, with a compression ratio close to DCT; it mainly addresses the block-boundary distortion caused by DCT compression. (Applicable to image datasets in RGB format)
BYTE = 1
COMPOUND = 17
DCT = 8
INT16 = 2
INT24 = 3
INT32 = 4
LZW = 11
NONE = 0
PNG = 12
SGL = 9
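The storage rules above (lossy byte encodings for line/region vector data, DCT for imagery, SGL for rasters, no compression for point/tabular/CAD data) can be sketched with a local stand-in enum using the codes listed here. default_encoding is an invented helper for illustration, not part of iobjectspy:

```python
from enum import IntEnum

# Local stand-in mirroring the documented EncodeType codes.
class EncodeType(IntEnum):
    NONE = 0
    BYTE = 1
    INT16 = 2
    INT24 = 3
    INT32 = 4
    DCT = 8
    SGL = 9
    LZW = 11
    PNG = 12
    COMPOUND = 17

# Encodings documented above as lossless (NONE is trivially lossless).
LOSSLESS = {EncodeType.NONE, EncodeType.SGL, EncodeType.LZW, EncodeType.PNG}

def default_encoding(dataset_kind):
    """Illustrative default per the rules above: DCT for imagery, SGL for
    rasters, INT32 for line/region vectors, and no compression otherwise
    (point, tabular and CAD datasets cannot be compressed)."""
    return {
        'image': EncodeType.DCT,
        'grid': EncodeType.SGL,
        'line': EncodeType.INT32,
        'region': EncodeType.INT32,
    }.get(dataset_kind, EncodeType.NONE)

print(default_encoding('image').name, default_encoding('image') in LOSSLESS)  # DCT False
```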
class iobjectspy.enums.CursorType

Bases: iobjectspy._jsuperpy.enums.JEnum

Cursor type:

Variables:
  • CursorType.DYNAMIC – Dynamic cursor type. Supports all editing operations, but is slow. A dynamic cursor can see additions, changes and deletions made by other users, and allows moving back and forth in the record set (except for bookmark operations the data provider does not support; bookmarks are mainly an ADO concept). This cursor type is very powerful, but it also consumes the most system resources. A dynamic cursor is aware of all changes to the recordset: its users can see the edits, additions and deletions that other users make to the dataset. If the data provider supports this cursor type, it refreshes the query's recordset dynamically by fetching data from the datasource at regular intervals, which undoubtedly requires a lot of resources.
  • CursorType.STATIC – Static cursor type. A static cursor is a static copy of a set of records that can be used to find data or generate reports; additions, changes and deletions made by other users are not visible. A static cursor is only a snapshot of the data; in other words, it cannot see edits that other users make to the recordset after it is created. This cursor type allows moving forward and backward. Because of its simpler functionality, it consumes fewer resources than the dynamic cursor (DYNAMIC).
DYNAMIC = 2
STATIC = 3
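The practical difference between the two cursor types can be illustrated with plain Python lists: a dynamic cursor behaves like a live reference to the underlying records, while a static cursor behaves like a snapshot taken when it is opened. This is an analogy only, not the iobjectspy API:

```python
# Underlying records of a dataset (toy stand-in).
records = [{'id': 1, 'name': 'river'}, {'id': 2, 'name': 'road'}]

dynamic_view = records                    # live reference: sees future edits
static_view = [dict(r) for r in records]  # copy at open time: a snapshot

# Another "user" inserts a record after both cursors were opened.
records.append({'id': 3, 'name': 'bridge'})

print(len(dynamic_view))  # 3 - the dynamic view reflects the insertion
print(len(static_view))   # 2 - the static snapshot does not
```

The snapshot costs an upfront copy but nothing afterwards, which mirrors why the static cursor is documented as cheaper than the dynamic one.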
class iobjectspy.enums.Charset

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the character set type constants of the vector dataset.

Variables:
ANSI = 0
ARABIC = 178
BALTIC = 186
CHINESEBIG5 = 136
CYRILLIC = 135
DEFAULT = 1
EASTEUROPE = 238
GB18030 = 134
GREEK = 161
HANGEUL = 129
HEBREW = 177
JOHAB = 130
KOREAN = 131
MAC = 77
OEM = 255
RUSSIAN = 204
SHIFTJIS = 128
SYMBOL = 2
THAI = 222
TURKISH = 162
UNICODE = 132
UTF7 = 7
UTF8 = 250
VIETNAMESE = 163
WINDOWS1252 = 137
XIA5 = 3
XIA5GERMAN = 4
XIA5NORWEGIAN = 6
XIA5SWEDISH = 5
class iobjectspy.enums.OverlayMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Overlay analysis mode type

  • Clipping (CLIP)

    Used to perform overlay analysis by clipping: the objects of the first dataset are clipped by the regions of the second dataset, and the parts falling outside those regions are deleted.

    • The type of the clipping dataset (the second dataset) must be region; the dataset to be clipped (the first dataset) can be point, line, or region.
    • In the clipped dataset, only objects that fall within the polygons of the clipping dataset are output to the result dataset.
    • The geographic coordinate systems of the clipping dataset, the clipped dataset, and the result dataset must be consistent.
    • Clip and intersect are the same in their spatial processing; the difference lies in how the attributes of the result record set are handled. Clip analysis only clips: the result record set has the same attribute-table structure as the first record set, and the field settings in the analysis parameters are ignored. The result of an intersect analysis can retain fields from both record sets according to the field settings.
    • None of the overlay analysis results include the system fields of the datasets.
    ../_images/OverlayClip.png
  • Erase (ERASE)

    Used to perform overlay analysis by erasing: the parts of the first dataset's objects that fall inside the regions of the second dataset are removed, and the remaining parts are output to the result dataset.

    • The type of the erasing dataset (the second dataset) must be region; the dataset to be erased (the first dataset) can be a point, line, or region dataset.
    • The polygons of the erasing dataset define the erase area: all features of the erased dataset that fall within these polygons are removed, and features falling outside them are output to the result dataset. This is the opposite of the clip operation.
    • The geographic coordinate systems of the erasing dataset, the erased dataset, and the result dataset must be consistent.
    ../_images/OverlayErase.png
  • Same (IDENTITY)

    Used to perform identity overlay analysis: the result dataset keeps all objects of the first dataset, and objects that intersect the second dataset are split at the intersections.

    • The identity operation first intersects the first dataset with the second dataset, then merges the intersection result with the first dataset. The type of the second dataset must be region, and the first dataset can be a point, line, or region dataset. If the first dataset is a point dataset, the result dataset keeps all objects of the first dataset; if it is a line dataset, the result keeps all objects of the first dataset, but objects intersecting the second dataset are broken at the intersections; if it is a region dataset, the result keeps all polygons within the boundary of the first dataset, and objects intersecting the second dataset are split into multiple objects at the intersections.
    • The identity operation is similar to the union operation. The difference is that union keeps all parts of both datasets, while identity keeps only the extent of the first dataset: the intersection plus the parts of the first dataset that do not intersect the second. The result attribute table of the identity operation comes from the attribute tables of both datasets.
    • The geographic coordinate systems of the two input datasets and the result dataset must be consistent.
    ../_images/OverlayIdentity.png
  • Intersect (INTERSECT)

    Performs intersection overlay analysis: the parts of the first dataset's objects not contained in the second dataset are cut away and deleted. That is, only the overlapping parts of the two datasets are output to the result dataset; the rest is excluded.

    • The dataset to be analyzed can be point, line or region type; the dataset used for the intersection overlay must be of region type. The feature objects (points, lines, regions) of the first dataset are split at their intersections with the polygons of the second dataset (except point objects), and the split results are output to the result dataset.
    • The spatial geometry of the result is the same as that of the clip operation, but the clip operation does no processing of the attribute table, while the intersect operation lets the user choose which attribute fields to retain.
    • The geographic coordinate systems of the two input datasets and the result dataset must be consistent.
    ../_images/OverlayIntersect.png
  • Symmetrical difference (XOR)

    Performs the symmetric difference operation on two region datasets; that is, the inverse of intersection.

    • The geographic coordinate system of the dataset used for symmetric difference analysis, the dataset to be analyzed by symmetric difference, and the result dataset must be consistent.
    • The symmetric difference operation is the exclusive-OR of the two datasets. For each region object, the parts that intersect geometry in the other dataset are removed and the remaining parts are kept. The attribute table of the result contains the non-system attribute fields of both input datasets.
    ../_images/OverlayXOR.png
  • UNION

    Used to perform a union overlay analysis on two region datasets: the result dataset keeps all objects of both datasets, and the intersecting parts are split at the intersections. Note:

    • Union merges two datasets: the merged layer keeps all features of both datasets, and the operation is limited to two region datasets.
    • After the union operation, the two region datasets are split into polygons at their intersections, and the geometry and attribute information of both datasets are output to the result dataset.
    • The geographic coordinate systems of the two input datasets and the result dataset must be consistent.
    ../_images/OverlayUnion.png
  • Update (UPDATE)

    Used to perform update overlay analysis on two region datasets. The update operation replaces the overlapping part of the dataset being updated with the objects of the updating dataset; it is an erase-and-paste process.

    • The geographic coordinate systems of the updating dataset, the dataset being updated, and the result dataset must be consistent.
    • Both the first and the second dataset must be region datasets. In the overlapping area, the result dataset retains the geometry and attribute information of the updating dataset.
    ../_images/OverlayUpdate.png
Variables:
CLIP = 1
ERASE = 2
IDENTITY = 3
INTERSECT = 4
UNION = 6
UPDATE = 7
XOR = 5
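Ignoring geometry splitting and attribute handling, the seven overlay modes behave like set operations on the two inputs. The sketch below uses object identifiers as stand-ins for geometry; it is an analogy only, since real overlay analysis also splits objects at intersections and merges attribute tables as described above:

```python
first = {'a', 'b', 'c'}   # objects of the first (point/line/region) dataset
second = {'b', 'c', 'd'}  # objects of the second (region) dataset

# Which input objects survive in the result, mode by mode.
overlay = {
    'CLIP': first & second,               # keep first's objects inside second
    'ERASE': first - second,              # remove first's objects inside second
    'IDENTITY': first,                    # keep first; split where it meets second
    'INTERSECT': first & second,          # keep only the overlap
    'XOR': first ^ second,                # the overlap removed from both
    'UNION': first | second,              # everything, split at intersections
    'UPDATE': (first - second) | second,  # second pasted over first
}

print(sorted(overlay['ERASE']))  # ['a']
print(sorted(overlay['XOR']))    # ['a', 'd']
```

Note how CLIP and INTERSECT coincide spatially here, matching the documentation's point that they differ only in attribute handling.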
class iobjectspy.enums.SpatialIndexType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the spatial index type constants.

A spatial index is a data structure used to improve the efficiency of spatial queries. SuperMap provides the R-tree index, quadtree index, tile (map frame) index and multi-level grid index. These indexes apply only to vector datasets.

A dataset can have only one index at a time, but the index can be switched: after an index has been created on a dataset, the old index must be deleted before a new one is created. While the dataset is being edited, the system automatically maintains the current index. Note that after data has been edited many times, the efficiency of the index degrades to varying degrees; the system determines whether the spatial index needs to be rebuilt:

-The current version of UDB and PostgreSQL datasources only supports R-tree indexes (RTREE), and DB2 datasources only support multi-level grid indexes (MULTI_LEVEL_GRID);
-Point datasets in a database do not support the quadtree (QTREE) or R-tree (RTREE) index;
-Network datasets do not support any type of spatial index;
-Composite datasets do not support the multi-level grid index;
-Route datasets do not support the tile index (TILE);
-Attribute datasets do not support any type of spatial index;
-For database datasources, an index can be created only when there are more than 1000 records.

Variables:
  • SpatialIndexType.NONE – No spatial index; suitable for very small amounts of data
  • SpatialIndexType.RTREE

    The R-tree index is a disk-based index structure, a natural extension of the (one-dimensional) B-tree to high-dimensional space. It integrates easily with existing database systems, supports many kinds of spatial query operations, has been widely used in practice, and is currently one of the most popular spatial index methods. The R-tree method designs bounding rectangles, each containing target objects that are spatially close to one another, and uses these rectangles as the spatial index; each rectangle holds pointers to the spatial objects it contains. Note:

    -This index is suitable for static data (browsing and querying).
    -This index supports concurrent operations on the data.

  • SpatialIndexType.QTREE – The quadtree is an important hierarchical data structure, mainly used to express spatial hierarchy in two-dimensional coordinates; it is the extension of the one-dimensional binary tree to two-dimensional space. A quadtree index divides a map into four equal parts, then divides each grid into four again, layer by layer, until no further subdivision is possible. The quadtree in SuperMap currently allows at most 13 levels. Based on the ordering rules of Hilbert codes, the quadtree determines the minimal range to which the indexed value of each object instance in the indexed class belongs.
  • SpatialIndexType.TILE – The tile (map frame) index. In SuperMap, spatial objects are classified according to an attribute field of the dataset or according to a given range, and the classified objects are managed through the index, improving query and retrieval speed.
  • SpatialIndexType.MULTI_LEVEL_GRID

    Multi-level grid index, also known as dynamic index. It combines the advantages of the R-tree and quadtree indexes, provides very good support for concurrent editing, and is broadly applicable. If it is unclear which spatial index suits the data, a multi-level grid index can be created for it. Grid layers are used to organize and manage the data: the basic method of a grid index is to divide the dataset into equal or unequal grid cells according to certain rules and to record which cells each geographic object occupies. Regular grids are commonly used in GIS. When the user performs a spatial query, the grid cells covered by the query object are computed first, and the objects in those cells can then be retrieved quickly, optimizing the query.

    In the current version, the index defines three grid levels: level 1, level 2 and level 3. Each level has its own division rule; the level-1 grid is the smallest, and the grids of each subsequent level are larger than those of the previous one. When a multi-level grid index is built, the grid sizes and the number of index levels are determined automatically by the system according to the data and its distribution, and need not be set by the user.

  • SpatialIndexType.PRIMARY

    Native index, the spatial index created by the database itself. In PostGIS, PostgreSQL's spatial data extension, it is the GiST (Generalized Search Tree) index; in SQLSpatial, SQL Server's spatial data extension, it is a multi-level grid index:

    -PostGIS's GiST index is a balanced, tree-structured access method that mainly uses the B-tree, R-tree and RD-tree index algorithms. Advantages: it is applicable to multidimensional data types, collection data types and other data types, and a multi-field GiST index will use index scans for any subset of the indexed fields in the query conditions. Disadvantages: creating a GiST index takes a long time and occupies a lot of space.
    -SQLSpatial's multi-level grid index can have up to four levels, each of which is divided in turn as a uniform grid. When creating the index, a high, medium or low grid density can be chosen, corresponding to 4*4, 8*8 and 16*16 cells respectively; the default is the medium grid density.

MULTI_LEVEL_GRID = 5
NONE = 1
PRIMARY = 7
QTREE = 3
RTREE = 2
TILE = 4
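The "basic method of a grid index" described for MULTI_LEVEL_GRID (divide space into cells, record which cells each object's extent occupies, and query by cell) can be sketched in a few lines. This is a single-level illustrative toy with an invented cell size and helper names, not the SuperMap implementation:

```python
from collections import defaultdict

CELL = 10.0  # side length of one grid cell (illustrative)

def cells(xmin, ymin, xmax, ymax):
    """Yield the (col, row) grid cells covered by a bounding box."""
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            yield (cx, cy)

index = defaultdict(set)  # cell -> ids of objects whose bbox touches it

def insert(obj_id, bbox):
    for cell in cells(*bbox):
        index[cell].add(obj_id)

def query(bbox):
    """Candidate objects in cells overlapping the query box
    (a real index would still run an exact geometry test afterwards)."""
    hits = set()
    for cell in cells(*bbox):
        hits |= index[cell]
    return hits

insert('road', (2, 2, 14, 4))      # spans two cells horizontally
insert('house', (25, 25, 27, 27))  # a single far-away cell
print(query((0, 0, 9, 9)))         # {'road'}
```

A multi-level index repeats this idea with several cell sizes so that both large and small objects land in a small number of cells.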
class iobjectspy.enums.DissolveType

Bases: iobjectspy._jsuperpy.enums.JEnum

Fusion type constant

Variables:
  • DissolveType.ONLYMULTIPART – Combine. Objects with the same dissolve-field value are combined into one complex object.
  • DissolveType.SINGLE – Dissolve. Objects with the same dissolve-field value that are topologically adjacent are merged into one simple object.
  • DissolveType.MULTIPART – Combine after dissolving. Objects with the same dissolve-field value that are topologically adjacent are first merged into simple objects, and then non-adjacent objects with the same dissolve-field value are combined into one complex object.
MULTIPART = 3
ONLYMULTIPART = 1
SINGLE = 2
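The difference between ONLYMULTIPART and SINGLE can be sketched in plain Python (an illustration of the grouping logic only, not the iobjectspy API; the feature IDs, field values, and adjacency list are made up):

```python
from collections import defaultdict

def combine_by_field(features):
    """ONLYMULTIPART-style grouping: features that share a dissolve-field
    value form one multi-part result, regardless of adjacency."""
    groups = defaultdict(list)
    for fid, value in features:
        groups[value].append(fid)
    return dict(groups)

def dissolve_single(features, adjacent):
    """SINGLE-style dissolve: merge features that share a dissolve-field
    value AND are topologically adjacent (union-find over adjacency pairs)."""
    parent = {fid: fid for fid, _ in features}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    value = dict(features)
    for a, b in adjacent:
        if value[a] == value[b]:        # merge only within the same field value
            parent[find(a)] = find(b)
    groups = defaultdict(list)
    for fid, _ in features:
        groups[find(fid)].append(fid)
    return sorted(sorted(g) for g in groups.values())

# Features 1, 2 and 4 share value "A", but only 1 and 2 are adjacent.
features = [(1, "A"), (2, "A"), (3, "B"), (4, "A")]
adjacent = [(1, 2), (2, 3)]             # (2, 3) have different values: no merge
```

ONLYMULTIPART groups 1, 2 and 4 together by value alone, while SINGLE merges only the adjacent pair 1 and 2 and leaves 4 as its own object.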
class iobjectspy.enums.TextAlignment

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for text alignment types.

Specify the alignment of each sub-object in the text. The position of each child object of the text object is determined by the anchor point of the text and the alignment of the text. When the anchor point of the text sub-object is fixed, the alignment determines the relative position of the text sub-object and the anchor point, thereby determining the position of the text sub-object.

Variables:
  • TextAlignment.TOPLEFT – Align the upper left corner. When the alignment of the text is the upper left corner alignment, the upper left corner of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.TOPCENTER – Top center alignment. When the alignment of the text is the top center alignment, the midpoint of the upper line of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.TOPRIGHT – Align the upper right corner. When the alignment of the text is the upper right corner alignment, the upper right corner of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BASELINELEFT – Align the baseline to the left. When the alignment of the text is the baseline left alignment, the left end of the baseline of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BASELINECENTER – The baseline is aligned in the center. When the alignment of the text is baseline center alignment, the midpoint of the baseline of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BASELINERIGHT – Align the baseline to the right. When the alignment of the text is the baseline right alignment, the right end of the baseline of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BOTTOMLEFT – Align the bottom left corner. When the alignment of the text is the lower left corner alignment, the lower left corner of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BOTTOMCENTER – Align the bottom to the center. When the alignment of the text is bottom line centered, the midpoint of the bottom line of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.BOTTOMRIGHT – Align the bottom right corner. When the alignment of the text is the lower right corner alignment, the lower right corner of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.MIDDLELEFT – Align left center. When the alignment of the text is left-center alignment, the midpoint of the left line of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.MIDDLECENTER – Center alignment. When the alignment of the text is center alignment, the center point of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
  • TextAlignment.MIDDLERIGHT – Right center alignment. When the alignment of the text is right-center alignment, the midpoint of the right line of the smallest enclosing rectangle of the text sub-object is at the anchor point of the text sub-object
BASELINECENTER = 4
BASELINELEFT = 3
BASELINERIGHT = 5
BOTTOMCENTER = 7
BOTTOMLEFT = 6
BOTTOMRIGHT = 8
MIDDLECENTER = 10
MIDDLELEFT = 9
MIDDLERIGHT = 11
TOPCENTER = 1
TOPLEFT = 0
TOPRIGHT = 2
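The anchor/alignment relationship described above can be sketched for the nine non-baseline alignments (the baseline modes additionally need font metrics, which are omitted here). This is plain Python geometry, not the iobjectspy API, and it assumes screen coordinates with y increasing downward:

```python
def enclosing_rect(anchor, width, height, alignment):
    """Return (left, top, right, bottom) of the text sub-object's minimum
    enclosing rectangle, positioned so that the point named by `alignment`
    coincides with the anchor point."""
    ax, ay = anchor
    horiz, vert = alignment                      # e.g. ("LEFT", "TOP")
    left = {"LEFT": ax, "CENTER": ax - width / 2, "RIGHT": ax - width}[horiz]
    top = {"TOP": ay, "MIDDLE": ay - height / 2, "BOTTOM": ay - height}[vert]
    return (left, top, left + width, top + height)

# TOPLEFT: the rectangle's upper-left corner sits on the anchor.
# MIDDLECENTER: the rectangle's center sits on the anchor.
```

With a fixed anchor, changing the alignment simply shifts the rectangle by 0, half, or all of its width and height, which is exactly the "relative position" the class description refers to.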
class iobjectspy.enums.StringAlignment

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for multi-line text layout types.

Variables:
CENTER = 16
DISTRIBUTED = 144
LEFT = 0
RIGHT = 32
class iobjectspy.enums.ColorGradientType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the color gradient type.

The color gradient is the gradual mixing of multiple colors, which can be a gradient of two colors from the start color to the end color, or a gradient with multiple intermediate colors between the start color and the end color. This color gradient type can be used in the color scheme settings of thematic map objects, such as unique values map, range thematic map, statistical map, label map, grid range map and grid unique values map.

Variables:
BLACKWHITE = 0
BLUEBLACK = 9
BLUERED = 18
BLUEWHITE = 3
CYANBLACK = 12
CYANBLUE = 21
CYANGREEN = 22
CYANWHITE = 6
GREENBLACK = 8
GREENBLUE = 16
GREENORANGEVIOLET = 24
GREENRED = 17
GREENWHITE = 2
PINKBLACK = 11
PINKBLUE = 20
PINKRED = 19
PINKWHITE = 5
RAINBOW = 23
REDBLACK = 7
REDWHITE = 1
SPECTRUM = 26
TERRAIN = 25
YELLOWBLACK = 10
YELLOWBLUE = 15
YELLOWGREEN = 14
YELLOWRED = 13
YELLOWWHITE = 4
class iobjectspy.enums.Buffer3DJoinType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the corner join style used when lofting a 3D buffer.

Variables:
  • Buffer3DJoinType.SQUARE – sharp corner join style
  • Buffer3DJoinType.ROUND – rounded corner join style
  • Buffer3DJoinType.MITER – bevel join style

MITER = 2
ROUND = 1
SQUARE = 0
class iobjectspy.enums.SpatialQueryMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the spatial query operation mode type. Spatial query is a query method that builds filter conditions from the spatial relationships between geometric objects. For example, a spatial query can find the objects contained in a region, or the objects disjoint from or adjacent to a given object.

Variables:
  • SpatialQueryMode.NONE – No spatial query
  • SpatialQueryMode.IDENTITY

    Identical spatial query mode. Returns the objects in the searched layer that are identical to the search object. Note: the search object and the searched object must be of the same type; their intersection is not empty, and the intersection of the boundary and interior of either object with the exterior of the other is empty. Applicable object types:

    -Search object: point, line, area;
    -Searched object: point, line, area.

    As shown:

    ../_images/SQIdentical.png
  • SpatialQueryMode.DISJOINT

    Disjoint spatial query mode. Returns the objects in the searched layer that are disjoint from the search object, that is, objects that have no intersection with it. Applicable object types:

    -Search object: point, line, area;
    -Searched object: point, line, area.

    As shown:
    ../_images/SQDsjoint.png
  • SpatialQueryMode.INTERSECT

    Intersect spatial query mode. Returns all objects in the searched layer that intersect the search object, including objects that are wholly or partly contained in it and objects that wholly or partly contain it. Applicable object types:

    -Search object: point, line, area;
    -Searched object: point, line, area.

    As shown:

    ../_images/SQIntersect.png
  • SpatialQueryMode.TOUCH

    Touch spatial query mode. Returns the objects in the searched layer whose boundary touches the boundary of the search object. Note: the intersection of the interiors of the search object and the searched object is empty. The object type for which this relationship is not applicable:

    -Point-to-point spatial queries.

    As shown:

    ../_images/SQTouch.png
  • SpatialQueryMode.OVERLAP

    Overlap spatial query mode. Returns the objects in the searched layer that partially overlap the search object. Applicable object types:

    -Line/line and area/area. The two geometric objects must have the same dimension, and the dimension of their intersection must equal that common dimension.

    Note: A point cannot partially overlap any geometric object.

    As shown:

    ../_images/SQOverlap.png
  • SpatialQueryMode.CROSS

    Cross spatial query mode. Returns all objects (lines or areas) in the searched layer that cross the search object (a line). Note: the intersection of the interiors of the search object and the searched object must not be empty, and one of the two objects in a cross relation must be a line object. Applicable object types:

    -Search object: line;
    -Searched object: line, area.

    As shown:

    ../_images/SQCross.png
  • SpatialQueryMode.WITHIN

    Within spatial query mode. Returns the objects in the searched layer that completely contain the search object: a returned area object must contain the entire search object (edge contact allowed); a returned line object must completely contain the search object; a returned point object must coincide with the search object. This mode is the opposite of the CONTAIN query mode. Applicable object types:

    -Search object: point, line, area;
    -Searched object: point, line, area.

    As shown:

    ../_images/SQWithin.png
  • SpatialQueryMode.CONTAIN

    Contain spatial query mode. Returns the objects in the searched layer that are completely contained by the search object. Note: the intersection of the boundaries of the search object and the searched object may be non-empty; there is no containment when querying lines or areas with a point, or areas with a line. Applicable object types:

    -Search object: point, line, area;
    -Searched object: point, line, area.

    As shown:

    ../_images/SQContain.png
  • SpatialQueryMode.INNERINTERSECT – Inner-intersect query mode. Returns all objects that intersect the search object but do not merely touch it; that is, the results of the TOUCH operator are excluded from the results of the INTERSECT operator.
CONTAIN = 7
CROSS = 5
DISJOINT = 1
IDENTITY = 0
INNERINTERSECT = 13
INTERSECT = 2
NONE = -1
OVERLAP = 4
TOUCH = 3
WITHIN = 6
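Several of these predicates can be illustrated on axis-aligned rectangles. This is a simplification (real spatial queries operate on full geometries, not bounding boxes) and plain Python, not the iobjectspy API; rectangles are (xmin, ymin, xmax, ymax):

```python
def disjoint(a, b):
    """DISJOINT: the rectangles share no point at all."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1

def intersects(a, b):
    """INTERSECT: any shared point counts, including a shared edge."""
    return not disjoint(a, b)

def contains(a, b):
    """CONTAIN: a completely contains b (edge contact allowed)."""
    return a[0] <= b[0] and a[1] <= b[1] and b[2] <= a[2] and b[3] <= a[3]

def within(a, b):
    """WITHIN: a is completely contained by b - the inverse of CONTAIN."""
    return contains(b, a)

r1, r2, r3 = (0, 0, 10, 10), (2, 2, 4, 4), (20, 20, 30, 30)
```

Note how WITHIN and CONTAIN are each other's inverse, and DISJOINT is exactly the negation of INTERSECT, matching the mode descriptions above.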
class iobjectspy.enums.StatisticsType

Bases: iobjectspy._jsuperpy.enums.JEnum

Field statistics type constant

Variables:
FIRST = 5
LAST = 6
MAX = 1
MEAN = 4
MIN = 2
SUM = 3
class iobjectspy.enums.JoinType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the join type between two tables.

It is used when querying across two joined tables and determines which records appear in the query results.

Variables:
  • JoinType.INNERJOIN – Inner join: a record enters the query result set only if related records exist in both tables.
  • JoinType.LEFTJOIN – Left join: all records of the left table enter the query result set; where the right table has no related record, the corresponding field values are displayed as empty.
INNERJOIN = 0
LEFTJOIN = 1
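The two join types can be sketched in plain Python over lists of records (an illustration of the semantics only, not the iobjectspy query API; the table contents and the `id` key column are made up):

```python
def join(left, right, key, how):
    """INNERJOIN keeps a left record only when the right table has a related
    record; LEFTJOIN keeps every left record and fills the missing right
    fields with None (displayed as empty)."""
    right_cols = {c for row in right for c in row if c != key}
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    out = []
    for lrow in left:
        matches = index.get(lrow[key])
        if matches:
            out.extend({**lrow, **r} for r in matches)
        elif how == "LEFTJOIN":
            out.append({**lrow, **{c: None for c in right_cols}})
    return out

regions = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
stats = [{"id": 1, "pop": 10}]
```

With these tables, INNERJOIN drops region 2 (no matching stat record), while LEFTJOIN keeps it with an empty `pop` value.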
class iobjectspy.enums.BufferEndType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the buffer endpoint type.

It is used to distinguish whether the endpoint of the line object buffer analysis is round or flat.

Variables:
  • BufferEndType.ROUND – Round-end buffer. A round-end buffer closes each end of the line segment with a semicircular arc when the buffer is generated.
  • BufferEndType.FLAT – Flat-end buffer. A flat-end buffer closes each end of the line segment with a straight line perpendicular to the segment when the buffer is generated.
FLAT = 2
ROUND = 1
class iobjectspy.enums.BufferRadiusUnit

Bases: iobjectspy._jsuperpy.enums.JEnum

This enumeration defines the constants of the buffer analysis radius unit type

Variables:
CENTIMETER = 100
DECIMETER = 1000
FOOT = 3048
INCH = 254
KILOMETER = 10000000
METER = 10000
MILE = 16090000
MILIMETER = 10
YARD = 9144
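The numeric codes above appear to encode each unit's length in tenths of a millimetre (METER = 10000 → 1000.0 mm, INCH = 254 → 25.4 mm, FOOT = 3048 → 304.8 mm); that reading is an inference from the listed values, not documented behaviour. Under that assumption, a radius can be converted between units by the ratio of the codes:

```python
# Enum values copied from the listing above; assumed (not documented) to be
# each unit's length expressed in tenths of a millimetre.
UNIT = {
    "MILIMETER": 10, "CENTIMETER": 100, "DECIMETER": 1000, "METER": 10000,
    "KILOMETER": 10000000, "INCH": 254, "FOOT": 3048, "YARD": 9144,
    "MILE": 16090000,
}

def convert(radius, src, dst):
    """Convert a buffer radius from unit `src` to unit `dst`."""
    return radius * UNIT[src] / UNIT[dst]
```

The encoding is self-consistent: 12 inches convert to exactly 1 foot, and 1 kilometre to 1000 metres.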
class iobjectspy.enums.StatisticMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the field statistics method type. SuperMap provides six statistics for a single field: the maximum, minimum, average, sum, standard deviation, and variance.

Variables:
AVERAGE = 3
MAX = 1
MIN = 2
STDDEVIATION = 5
SUM = 4
VARIANCE = 6
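The six statistics can be computed in plain Python as below (this class does not state whether variance is population or sample variance; population variance is assumed here):

```python
import math

def field_statistics(values):
    """Compute the six statistics listed above for one field's values."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n   # population variance
    return {
        "MAX": max(values), "MIN": min(values), "AVERAGE": mean,
        "SUM": sum(values), "VARIANCE": variance,
        "STDDEVIATION": math.sqrt(variance),
    }

stats = field_statistics([1, 2, 3, 4])
```

Standard deviation is simply the square root of the variance, so the two constants always describe the same spread on different scales.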
class iobjectspy.enums.PrjCoordSysType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines projected coordinate system (PCS) type constants.

PCS_ADINDAN_UTM_37N = 20137
PCS_ADINDAN_UTM_38N = 20138
PCS_AFGOOYE_UTM_38N = 20538
PCS_AFGOOYE_UTM_39N = 20539
PCS_AGD_1966_AMG_48 = 20248
PCS_AGD_1966_AMG_49 = 20249
PCS_AGD_1966_AMG_50 = 20250
PCS_AGD_1966_AMG_51 = 20251
PCS_AGD_1966_AMG_52 = 20252
PCS_AGD_1966_AMG_53 = 20253
PCS_AGD_1966_AMG_54 = 20254
PCS_AGD_1966_AMG_55 = 20255
PCS_AGD_1966_AMG_56 = 20256
PCS_AGD_1966_AMG_57 = 20257
PCS_AGD_1966_AMG_58 = 20258
PCS_AGD_1984_AMG_48 = 20348
PCS_AGD_1984_AMG_49 = 20349
PCS_AGD_1984_AMG_50 = 20350
PCS_AGD_1984_AMG_51 = 20351
PCS_AGD_1984_AMG_52 = 20352
PCS_AGD_1984_AMG_53 = 20353
PCS_AGD_1984_AMG_54 = 20354
PCS_AGD_1984_AMG_55 = 20355
PCS_AGD_1984_AMG_56 = 20356
PCS_AGD_1984_AMG_57 = 20357
PCS_AGD_1984_AMG_58 = 20358
PCS_AIN_EL_ABD_BAHRAIN_GRID = 20499
PCS_AIN_EL_ABD_UTM_37N = 20437
PCS_AIN_EL_ABD_UTM_38N = 20438
PCS_AIN_EL_ABD_UTM_39N = 20439
PCS_AMERSFOORT_RD_NEW = 28992
PCS_ARATU_UTM_22S = 20822
PCS_ARATU_UTM_23S = 20823
PCS_ARATU_UTM_24S = 20824
PCS_ATF_NORD_DE_GUERRE = 27500
PCS_ATS_1977_UTM_19N = 2219
PCS_ATS_1977_UTM_20N = 2220
PCS_AZORES_CENTRAL_1948_UTM_ZONE_26N = 2189
PCS_AZORES_OCCIDENTAL_1939_UTM_ZONE_25N = 2188
PCS_AZORES_ORIENTAL_1940_UTM_ZONE_26N = 2190
PCS_BATAVIA_UTM_48S = 21148
PCS_BATAVIA_UTM_49S = 21149
PCS_BATAVIA_UTM_50S = 21150
PCS_BEIJING_1954_3_DEGREE_GK_25 = 2401
PCS_BEIJING_1954_3_DEGREE_GK_25N = 2422
PCS_BEIJING_1954_3_DEGREE_GK_26 = 2402
PCS_BEIJING_1954_3_DEGREE_GK_26N = 2423
PCS_BEIJING_1954_3_DEGREE_GK_27 = 2403
PCS_BEIJING_1954_3_DEGREE_GK_27N = 2424
PCS_BEIJING_1954_3_DEGREE_GK_28 = 2404
PCS_BEIJING_1954_3_DEGREE_GK_28N = 2425
PCS_BEIJING_1954_3_DEGREE_GK_29 = 2405
PCS_BEIJING_1954_3_DEGREE_GK_29N = 2426
PCS_BEIJING_1954_3_DEGREE_GK_30 = 2406
PCS_BEIJING_1954_3_DEGREE_GK_30N = 2427
PCS_BEIJING_1954_3_DEGREE_GK_31 = 2407
PCS_BEIJING_1954_3_DEGREE_GK_31N = 2428
PCS_BEIJING_1954_3_DEGREE_GK_32 = 2408
PCS_BEIJING_1954_3_DEGREE_GK_32N = 2429
PCS_BEIJING_1954_3_DEGREE_GK_33 = 2409
PCS_BEIJING_1954_3_DEGREE_GK_33N = 2430
PCS_BEIJING_1954_3_DEGREE_GK_34 = 2410
PCS_BEIJING_1954_3_DEGREE_GK_34N = 2431
PCS_BEIJING_1954_3_DEGREE_GK_35 = 2411
PCS_BEIJING_1954_3_DEGREE_GK_35N = 2432
PCS_BEIJING_1954_3_DEGREE_GK_36 = 2412
PCS_BEIJING_1954_3_DEGREE_GK_36N = 2433
PCS_BEIJING_1954_3_DEGREE_GK_37 = 2413
PCS_BEIJING_1954_3_DEGREE_GK_37N = 2434
PCS_BEIJING_1954_3_DEGREE_GK_38 = 2414
PCS_BEIJING_1954_3_DEGREE_GK_38N = 2435
PCS_BEIJING_1954_3_DEGREE_GK_39 = 2415
PCS_BEIJING_1954_3_DEGREE_GK_39N = 2436
PCS_BEIJING_1954_3_DEGREE_GK_40 = 2416
PCS_BEIJING_1954_3_DEGREE_GK_40N = 2437
PCS_BEIJING_1954_3_DEGREE_GK_41 = 2417
PCS_BEIJING_1954_3_DEGREE_GK_41N = 2438
PCS_BEIJING_1954_3_DEGREE_GK_42 = 2418
PCS_BEIJING_1954_3_DEGREE_GK_42N = 2439
PCS_BEIJING_1954_3_DEGREE_GK_43 = 2419
PCS_BEIJING_1954_3_DEGREE_GK_43N = 2440
PCS_BEIJING_1954_3_DEGREE_GK_44 = 2420
PCS_BEIJING_1954_3_DEGREE_GK_44N = 2441
PCS_BEIJING_1954_3_DEGREE_GK_45 = 2421
PCS_BEIJING_1954_3_DEGREE_GK_45N = 2442
PCS_BEIJING_1954_GK_13 = 21413
PCS_BEIJING_1954_GK_13N = 21473
PCS_BEIJING_1954_GK_14 = 21414
PCS_BEIJING_1954_GK_14N = 21474
PCS_BEIJING_1954_GK_15 = 21415
PCS_BEIJING_1954_GK_15N = 21475
PCS_BEIJING_1954_GK_16 = 21416
PCS_BEIJING_1954_GK_16N = 21476
PCS_BEIJING_1954_GK_17 = 21417
PCS_BEIJING_1954_GK_17N = 21477
PCS_BEIJING_1954_GK_18 = 21418
PCS_BEIJING_1954_GK_18N = 21478
PCS_BEIJING_1954_GK_19 = 21419
PCS_BEIJING_1954_GK_19N = 21479
PCS_BEIJING_1954_GK_20 = 21420
PCS_BEIJING_1954_GK_20N = 21480
PCS_BEIJING_1954_GK_21 = 21421
PCS_BEIJING_1954_GK_21N = 21481
PCS_BEIJING_1954_GK_22 = 21422
PCS_BEIJING_1954_GK_22N = 21482
PCS_BEIJING_1954_GK_23 = 21423
PCS_BEIJING_1954_GK_23N = 21483
PCS_BELGE_LAMBERT_1950 = 21500
PCS_BOGOTA_COLOMBIA_BOGOTA = 21892
PCS_BOGOTA_COLOMBIA_EAST = 21894
PCS_BOGOTA_COLOMBIA_E_CENTRAL = 21893
PCS_BOGOTA_COLOMBIA_WEST = 21891
PCS_BOGOTA_UTM_17N = 21817
PCS_BOGOTA_UTM_18N = 21818
PCS_CAMACUPA_UTM_32S = 22032
PCS_CAMACUPA_UTM_33S = 22033
PCS_CARTHAGE_NORD_TUNISIE = 22391
PCS_CARTHAGE_SUD_TUNISIE = 22392
PCS_CARTHAGE_UTM_32N = 22332
PCS_CHINA_2000_3_DEGREE_GK_25 = 21625
PCS_CHINA_2000_3_DEGREE_GK_25N = 21675
PCS_CHINA_2000_3_DEGREE_GK_26 = 21626
PCS_CHINA_2000_3_DEGREE_GK_26N = 21676
PCS_CHINA_2000_3_DEGREE_GK_27 = 21627
PCS_CHINA_2000_3_DEGREE_GK_27N = 21677
PCS_CHINA_2000_3_DEGREE_GK_28 = 21628
PCS_CHINA_2000_3_DEGREE_GK_28N = 21678
PCS_CHINA_2000_3_DEGREE_GK_29 = 21629
PCS_CHINA_2000_3_DEGREE_GK_29N = 21679
PCS_CHINA_2000_3_DEGREE_GK_30 = 21630
PCS_CHINA_2000_3_DEGREE_GK_30N = 21680
PCS_CHINA_2000_3_DEGREE_GK_31 = 21631
PCS_CHINA_2000_3_DEGREE_GK_31N = 21681
PCS_CHINA_2000_3_DEGREE_GK_32 = 21632
PCS_CHINA_2000_3_DEGREE_GK_32N = 21682
PCS_CHINA_2000_3_DEGREE_GK_33 = 21633
PCS_CHINA_2000_3_DEGREE_GK_33N = 21683
PCS_CHINA_2000_3_DEGREE_GK_34 = 21634
PCS_CHINA_2000_3_DEGREE_GK_34N = 21684
PCS_CHINA_2000_3_DEGREE_GK_35 = 21635
PCS_CHINA_2000_3_DEGREE_GK_35N = 21685
PCS_CHINA_2000_3_DEGREE_GK_36 = 21636
PCS_CHINA_2000_3_DEGREE_GK_36N = 21686
PCS_CHINA_2000_3_DEGREE_GK_37 = 21637
PCS_CHINA_2000_3_DEGREE_GK_37N = 21687
PCS_CHINA_2000_3_DEGREE_GK_38 = 21638
PCS_CHINA_2000_3_DEGREE_GK_38N = 21688
PCS_CHINA_2000_3_DEGREE_GK_39 = 21639
PCS_CHINA_2000_3_DEGREE_GK_39N = 21689
PCS_CHINA_2000_3_DEGREE_GK_40 = 21640
PCS_CHINA_2000_3_DEGREE_GK_40N = 21690
PCS_CHINA_2000_3_DEGREE_GK_41 = 21641
PCS_CHINA_2000_3_DEGREE_GK_41N = 21691
PCS_CHINA_2000_3_DEGREE_GK_42 = 21642
PCS_CHINA_2000_3_DEGREE_GK_42N = 21692
PCS_CHINA_2000_3_DEGREE_GK_43 = 21643
PCS_CHINA_2000_3_DEGREE_GK_43N = 21693
PCS_CHINA_2000_3_DEGREE_GK_44 = 21644
PCS_CHINA_2000_3_DEGREE_GK_44N = 21694
PCS_CHINA_2000_3_DEGREE_GK_45 = 21645
PCS_CHINA_2000_3_DEGREE_GK_45N = 21695
PCS_CHINA_2000_GK_13 = 21513
PCS_CHINA_2000_GK_13N = 21573
PCS_CHINA_2000_GK_14 = 21514
PCS_CHINA_2000_GK_14N = 21574
PCS_CHINA_2000_GK_15 = 21515
PCS_CHINA_2000_GK_15N = 21575
PCS_CHINA_2000_GK_16 = 21516
PCS_CHINA_2000_GK_16N = 21576
PCS_CHINA_2000_GK_17 = 21517
PCS_CHINA_2000_GK_17N = 21577
PCS_CHINA_2000_GK_18 = 21518
PCS_CHINA_2000_GK_18N = 21578
PCS_CHINA_2000_GK_19 = 21519
PCS_CHINA_2000_GK_19N = 21579
PCS_CHINA_2000_GK_20 = 21520
PCS_CHINA_2000_GK_20N = 21580
PCS_CHINA_2000_GK_21 = 21521
PCS_CHINA_2000_GK_21N = 21581
PCS_CHINA_2000_GK_22 = 21522
PCS_CHINA_2000_GK_22N = 21582
PCS_CHINA_2000_GK_23 = 21523
PCS_CHINA_2000_GK_23N = 21583
PCS_CORREGO_ALEGRE_UTM_23S = 22523
PCS_CORREGO_ALEGRE_UTM_24S = 22524
PCS_C_INCHAUSARGENTINA_1 = 22191
PCS_C_INCHAUSARGENTINA_2 = 22192
PCS_C_INCHAUSARGENTINA_3 = 22193
PCS_C_INCHAUSARGENTINA_4 = 22194
PCS_C_INCHAUSARGENTINA_5 = 22195
PCS_C_INCHAUSARGENTINA_6 = 22196
PCS_C_INCHAUSARGENTINA_7 = 22197
PCS_DATUM73_MODIFIED_PORTUGUESE_GRID = 27493
PCS_DATUM73_MODIFIED_PORTUGUESE_NATIONAL_GRID = 27492
PCS_DATUM_73_UTM_ZONE_29N = 27429
PCS_DEALUL_PISCULUI_1933_STEREO_33 = 31600
PCS_DEALUL_PISCULUI_1970_STEREO_EALUL_PISCULUI_1970_STEREO_70 = 31700
PCS_DHDN_GERMANY_1 = 31491
PCS_DHDN_GERMANY_2 = 31492
PCS_DHDN_GERMANY_3 = 31493
PCS_DHDN_GERMANY_4 = 31494
PCS_DHDN_GERMANY_5 = 31495
PCS_DOUALA_UTM_32N = 22832
PCS_EARTH_LONGITUDE_LATITUDE = 1
PCS_ED50_CENTRAL_GROUP = 61002
PCS_ED50_OCCIDENTAL_GROUP = 61003
PCS_ED50_ORIENTAL_GROUP = 61001
PCS_ED_1950_UTM_28N = 23028
PCS_ED_1950_UTM_29N = 23029
PCS_ED_1950_UTM_30N = 23030
PCS_ED_1950_UTM_31N = 23031
PCS_ED_1950_UTM_32N = 23032
PCS_ED_1950_UTM_33N = 23033
PCS_ED_1950_UTM_34N = 23034
PCS_ED_1950_UTM_35N = 23035
PCS_ED_1950_UTM_36N = 23036
PCS_ED_1950_UTM_37N = 23037
PCS_ED_1950_UTM_38N = 23038
PCS_EGYPT_EXT_PURPLE_BELT = 22994
PCS_EGYPT_PURPLE_BELT = 22993
PCS_EGYPT_RED_BELT = 22992
PCS_ETRS89_PORTUGAL_TM06 = 3763
PCS_ETRS_1989_UTM_28N = 25828
PCS_ETRS_1989_UTM_29N = 25829
PCS_ETRS_1989_UTM_30N = 25830
PCS_ETRS_1989_UTM_31N = 25831
PCS_ETRS_1989_UTM_32N = 25832
PCS_ETRS_1989_UTM_33N = 25833
PCS_ETRS_1989_UTM_34N = 25834
PCS_ETRS_1989_UTM_35N = 25835
PCS_ETRS_1989_UTM_36N = 25836
PCS_ETRS_1989_UTM_37N = 25837
PCS_ETRS_1989_UTM_38N = 25838
PCS_FAHUD_UTM_39N = 23239
PCS_FAHUD_UTM_40N = 23240
PCS_GAROUA_UTM_33N = 23433
PCS_GDA_1994_MGA_48 = 28348
PCS_GDA_1994_MGA_49 = 28349
PCS_GDA_1994_MGA_50 = 28350
PCS_GDA_1994_MGA_51 = 28351
PCS_GDA_1994_MGA_52 = 28352
PCS_GDA_1994_MGA_53 = 28353
PCS_GDA_1994_MGA_54 = 28354
PCS_GDA_1994_MGA_55 = 28355
PCS_GDA_1994_MGA_56 = 28356
PCS_GDA_1994_MGA_57 = 28357
PCS_GDA_1994_MGA_58 = 28358
PCS_GGRS_1987_GREEK_GRID = 2100
PCS_ID_1974_UTM_46N = 23846
PCS_ID_1974_UTM_46S = 23886
PCS_ID_1974_UTM_47N = 23847
PCS_ID_1974_UTM_47S = 23887
PCS_ID_1974_UTM_48N = 23848
PCS_ID_1974_UTM_48S = 23888
PCS_ID_1974_UTM_49N = 23849
PCS_ID_1974_UTM_49S = 23889
PCS_ID_1974_UTM_50N = 23850
PCS_ID_1974_UTM_50S = 23890
PCS_ID_1974_UTM_51N = 23851
PCS_ID_1974_UTM_51S = 23891
PCS_ID_1974_UTM_52N = 23852
PCS_ID_1974_UTM_52S = 23892
PCS_ID_1974_UTM_53N = 23853
PCS_ID_1974_UTM_53S = 23893
PCS_ID_1974_UTM_54S = 23894
PCS_INDIAN_1954_UTM_47N = 23947
PCS_INDIAN_1954_UTM_48N = 23948
PCS_INDIAN_1975_UTM_47N = 24047
PCS_INDIAN_1975_UTM_48N = 24048
PCS_JAD_1969_JAMAICA_GRID = 24200
PCS_JAMAICA_1875_OLD_GRID = 24100
PCS_JAPAN_PLATE_ZONE_I = 32786
PCS_JAPAN_PLATE_ZONE_II = 32787
PCS_JAPAN_PLATE_ZONE_III = 32788
PCS_JAPAN_PLATE_ZONE_IV = 32789
PCS_JAPAN_PLATE_ZONE_IX = 32794
PCS_JAPAN_PLATE_ZONE_V = 32790
PCS_JAPAN_PLATE_ZONE_VI = 32791
PCS_JAPAN_PLATE_ZONE_VII = 32792
PCS_JAPAN_PLATE_ZONE_VIII = 32793
PCS_JAPAN_PLATE_ZONE_X = 32795
PCS_JAPAN_PLATE_ZONE_XI = 32796
PCS_JAPAN_PLATE_ZONE_XII = 32797
PCS_JAPAN_PLATE_ZONE_XIII = 32798
PCS_JAPAN_PLATE_ZONE_XIV = 32800
PCS_JAPAN_PLATE_ZONE_XIX = 32805
PCS_JAPAN_PLATE_ZONE_XV = 32801
PCS_JAPAN_PLATE_ZONE_XVI = 32802
PCS_JAPAN_PLATE_ZONE_XVII = 32803
PCS_JAPAN_PLATE_ZONE_XVIII = 32804
PCS_JAPAN_UTM_51 = 32806
PCS_JAPAN_UTM_52 = 32807
PCS_JAPAN_UTM_53 = 32808
PCS_JAPAN_UTM_54 = 32809
PCS_JAPAN_UTM_55 = 32810
PCS_JAPAN_UTM_56 = 32811
PCS_KALIANPUR_INDIA_0 = 24370
PCS_KALIANPUR_INDIA_I = 24371
PCS_KALIANPUR_INDIA_IIA = 24372
PCS_KALIANPUR_INDIA_IIB = 24382
PCS_KALIANPUR_INDIA_IIIA = 24373
PCS_KALIANPUR_INDIA_IIIB = 24383
PCS_KALIANPUR_INDIA_IVA = 24374
PCS_KALIANPUR_INDIA_IVB = 24384
PCS_KERTAU_MALAYA_METERS = 23110
PCS_KERTAU_UTM_47N = 24547
PCS_KERTAU_UTM_48N = 24548
PCS_KKJ_FINLAND_1 = 2391
PCS_KKJ_FINLAND_2 = 2392
PCS_KKJ_FINLAND_3 = 2393
PCS_KKJ_FINLAND_4 = 2394
PCS_KOC_LAMBERT = 24600
PCS_KUDAMS_KTM = 31900
PCS_LA_CANOA_UTM_20N = 24720
PCS_LA_CANOA_UTM_21N = 24721
PCS_LEIGON_GHANA_GRID = 25000
PCS_LISBON_1890_PORTUGAL_BONNE = 61008
PCS_LISBON_PORTUGUESE_GRID = 20700
PCS_LISBON_PORTUGUESE_OFFICIAL_GRID = 20791
PCS_LOME_UTM_31N = 25231
PCS_LUZON_PHILIPPINES_I = 25391
PCS_LUZON_PHILIPPINES_II = 25392
PCS_LUZON_PHILIPPINES_III = 25393
PCS_LUZON_PHILIPPINES_IV = 25394
PCS_LUZON_PHILIPPINES_V = 25395
PCS_Lisboa_Hayford_Gauss_IGeoE = 53700
PCS_Lisboa_Hayford_Gauss_IPCC = 53791
PCS_MADEIRA_1936_UTM_ZONE_28N = 2191
PCS_MALONGO_1987_UTM_32S = 25932
PCS_MASSAWA_UTM_37N = 26237
PCS_MERCHICH_NORD_MAROC = 26191
PCS_MERCHICH_SAHARA = 26193
PCS_MERCHICH_SUD_MAROC = 26192
PCS_MGI_FERRO_AUSTRIA_CENTRAL = 31292
PCS_MGI_FERRO_AUSTRIA_EAST = 31293
PCS_MGI_FERRO_AUSTRIA_WEST = 31291
PCS_MHAST_UTM_32S = 26432
PCS_MINNA_NIGERIA_EAST_BELT = 26393
PCS_MINNA_NIGERIA_MID_BELT = 26392
PCS_MINNA_NIGERIA_WEST_BELT = 26391
PCS_MINNA_UTM_31N = 26331
PCS_MINNA_UTM_32N = 26332
PCS_MONTE_MARIO_ROME_ITALY_1 = 26591
PCS_MONTE_MARIO_ROME_ITALY_2 = 26592
PCS_MPORALOKO_UTM_32N = 26632
PCS_MPORALOKO_UTM_32S = 26692
PCS_NAD_1927_AK_1 = 26731
PCS_NAD_1927_AK_10 = 26740
PCS_NAD_1927_AK_2 = 26732
PCS_NAD_1927_AK_3 = 26733
PCS_NAD_1927_AK_4 = 26734
PCS_NAD_1927_AK_5 = 26735
PCS_NAD_1927_AK_6 = 26736
PCS_NAD_1927_AK_7 = 26737
PCS_NAD_1927_AK_8 = 26738
PCS_NAD_1927_AK_9 = 26739
PCS_NAD_1927_AL_E = 26729
PCS_NAD_1927_AL_W = 26730
PCS_NAD_1927_AR_N = 26751
PCS_NAD_1927_AR_S = 26752
PCS_NAD_1927_AZ_C = 26749
PCS_NAD_1927_AZ_E = 26748
PCS_NAD_1927_AZ_W = 26750
PCS_NAD_1927_BLM_14N = 32074
PCS_NAD_1927_BLM_15N = 32075
PCS_NAD_1927_BLM_16N = 32076
PCS_NAD_1927_BLM_17N = 32077
PCS_NAD_1927_CA_I = 26741
PCS_NAD_1927_CA_II = 26742
PCS_NAD_1927_CA_III = 26743
PCS_NAD_1927_CA_IV = 26744
PCS_NAD_1927_CA_V = 26745
PCS_NAD_1927_CA_VI = 26746
PCS_NAD_1927_CA_VII = 26747
PCS_NAD_1927_CO_C = 26754
PCS_NAD_1927_CO_N = 26753
PCS_NAD_1927_CO_S = 26755
PCS_NAD_1927_CT = 26756
PCS_NAD_1927_DE = 26757
PCS_NAD_1927_FL_E = 26758
PCS_NAD_1927_FL_N = 26760
PCS_NAD_1927_FL_W = 26759
PCS_NAD_1927_GA_E = 26766
PCS_NAD_1927_GA_W = 26767
PCS_NAD_1927_GU = 65061
PCS_NAD_1927_HI_1 = 26761
PCS_NAD_1927_HI_2 = 26762
PCS_NAD_1927_HI_3 = 26763
PCS_NAD_1927_HI_4 = 26764
PCS_NAD_1927_HI_5 = 26765
PCS_NAD_1927_IA_N = 26775
PCS_NAD_1927_IA_S = 26776
PCS_NAD_1927_ID_C = 26769
PCS_NAD_1927_ID_E = 26768
PCS_NAD_1927_ID_W = 26770
PCS_NAD_1927_IL_E = 26771
PCS_NAD_1927_IL_W = 26772
PCS_NAD_1927_IN_E = 26773
PCS_NAD_1927_IN_W = 26774
PCS_NAD_1927_KS_N = 26777
PCS_NAD_1927_KS_S = 26778
PCS_NAD_1927_KY_N = 26779
PCS_NAD_1927_KY_S = 26780
PCS_NAD_1927_LA_N = 26781
PCS_NAD_1927_LA_S = 26782
PCS_NAD_1927_MA_I = 26787
PCS_NAD_1927_MA_M = 26786
PCS_NAD_1927_MD = 26785
PCS_NAD_1927_ME_E = 26783
PCS_NAD_1927_ME_W = 26784
PCS_NAD_1927_MI_C = 26789
PCS_NAD_1927_MI_N = 26788
PCS_NAD_1927_MI_S = 26790
PCS_NAD_1927_MN_C = 26792
PCS_NAD_1927_MN_N = 26791
PCS_NAD_1927_MN_S = 26793
PCS_NAD_1927_MO_C = 26797
PCS_NAD_1927_MO_E = 26796
PCS_NAD_1927_MO_W = 26798
PCS_NAD_1927_MS_E = 26794
PCS_NAD_1927_MS_W = 26795
PCS_NAD_1927_MT_C = 32002
PCS_NAD_1927_MT_N = 32001
PCS_NAD_1927_MT_S = 32003
PCS_NAD_1927_NC = 32019
PCS_NAD_1927_ND_N = 32020
PCS_NAD_1927_ND_S = 32021
PCS_NAD_1927_NE_N = 32005
PCS_NAD_1927_NE_S = 32006
PCS_NAD_1927_NH = 32010
PCS_NAD_1927_NJ = 32011
PCS_NAD_1927_NM_C = 32013
PCS_NAD_1927_NM_E = 32012
PCS_NAD_1927_NM_W = 32014
PCS_NAD_1927_NV_C = 32008
PCS_NAD_1927_NV_E = 32007
PCS_NAD_1927_NV_W = 32009
PCS_NAD_1927_NY_C = 32016
PCS_NAD_1927_NY_E = 32015
PCS_NAD_1927_NY_LI = 32018
PCS_NAD_1927_NY_W = 32017
PCS_NAD_1927_OH_N = 32022
PCS_NAD_1927_OH_S = 32023
PCS_NAD_1927_OK_N = 32024
PCS_NAD_1927_OK_S = 32025
PCS_NAD_1927_OR_N = 32026
PCS_NAD_1927_OR_S = 32027
PCS_NAD_1927_PA_N = 32028
PCS_NAD_1927_PA_S = 32029
PCS_NAD_1927_PR = 32059
PCS_NAD_1927_RI = 32030
PCS_NAD_1927_SC_N = 32031
PCS_NAD_1927_SC_S = 32033
PCS_NAD_1927_SD_N = 32034
PCS_NAD_1927_SD_S = 32035
PCS_NAD_1927_TN = 32036
PCS_NAD_1927_TX_C = 32039
PCS_NAD_1927_TX_N = 32037
PCS_NAD_1927_TX_NC = 32038
PCS_NAD_1927_TX_S = 32041
PCS_NAD_1927_TX_SC = 32040
PCS_NAD_1927_UTM_10N = 26710
PCS_NAD_1927_UTM_11N = 26711
PCS_NAD_1927_UTM_12N = 26712
PCS_NAD_1927_UTM_13N = 26713
PCS_NAD_1927_UTM_14N = 26714
PCS_NAD_1927_UTM_15N = 26715
PCS_NAD_1927_UTM_16N = 26716
PCS_NAD_1927_UTM_17N = 26717
PCS_NAD_1927_UTM_18N = 26718
PCS_NAD_1927_UTM_19N = 26719
PCS_NAD_1927_UTM_20N = 26720
PCS_NAD_1927_UTM_21N = 26721
PCS_NAD_1927_UTM_22N = 26722
PCS_NAD_1927_UTM_3N = 26703
PCS_NAD_1927_UTM_4N = 26704
PCS_NAD_1927_UTM_5N = 26705
PCS_NAD_1927_UTM_6N = 26706
PCS_NAD_1927_UTM_7N = 26707
PCS_NAD_1927_UTM_8N = 26708
PCS_NAD_1927_UTM_9N = 26709
PCS_NAD_1927_UT_C = 32043
PCS_NAD_1927_UT_N = 32042
PCS_NAD_1927_UT_S = 32044
PCS_NAD_1927_VA_N = 32046
PCS_NAD_1927_VA_S = 32047
PCS_NAD_1927_VI = 32060
PCS_NAD_1927_VT = 32045
PCS_NAD_1927_WA_N = 32048
PCS_NAD_1927_WA_S = 32049
PCS_NAD_1927_WI_C = 32053
PCS_NAD_1927_WI_N = 32052
PCS_NAD_1927_WI_S = 32054
PCS_NAD_1927_WV_N = 32050
PCS_NAD_1927_WV_S = 32051
PCS_NAD_1927_WY_E = 32055
PCS_NAD_1927_WY_EC = 32056
PCS_NAD_1927_WY_W = 32058
PCS_NAD_1927_WY_WC = 32057
PCS_NAD_1983_AK_1 = 26931
PCS_NAD_1983_AK_10 = 26940
PCS_NAD_1983_AK_2 = 26932
PCS_NAD_1983_AK_3 = 26933
PCS_NAD_1983_AK_4 = 26934
PCS_NAD_1983_AK_5 = 26935
PCS_NAD_1983_AK_6 = 26936
PCS_NAD_1983_AK_7 = 26937
PCS_NAD_1983_AK_8 = 26938
PCS_NAD_1983_AK_9 = 26939
PCS_NAD_1983_AL_E = 26929
PCS_NAD_1983_AL_W = 26930
PCS_NAD_1983_AR_N = 26951
PCS_NAD_1983_AR_S = 26952
PCS_NAD_1983_AZ_C = 26949
PCS_NAD_1983_AZ_E = 26948
PCS_NAD_1983_AZ_W = 26950
PCS_NAD_1983_CA_I = 26941
PCS_NAD_1983_CA_II = 26942
PCS_NAD_1983_CA_III = 26943
PCS_NAD_1983_CA_IV = 26944
PCS_NAD_1983_CA_V = 26945
PCS_NAD_1983_CA_VI = 26946
PCS_NAD_1983_CO_C = 26954
PCS_NAD_1983_CO_N = 26953
PCS_NAD_1983_CO_S = 26955
PCS_NAD_1983_CT = 26956
PCS_NAD_1983_DE = 26957
PCS_NAD_1983_FL_E = 26958
PCS_NAD_1983_FL_N = 26960
PCS_NAD_1983_FL_W = 26959
PCS_NAD_1983_GA_E = 26966
PCS_NAD_1983_GA_W = 26967
PCS_NAD_1983_GU = 65161
PCS_NAD_1983_HI_1 = 26961
PCS_NAD_1983_HI_2 = 26962
PCS_NAD_1983_HI_3 = 26963
PCS_NAD_1983_HI_4 = 26964
PCS_NAD_1983_HI_5 = 26965
PCS_NAD_1983_IA_N = 26975
PCS_NAD_1983_IA_S = 26976
PCS_NAD_1983_ID_C = 26969
PCS_NAD_1983_ID_E = 26968
PCS_NAD_1983_ID_W = 26970
PCS_NAD_1983_IL_E = 26971
PCS_NAD_1983_IL_W = 26972
PCS_NAD_1983_IN_E = 26973
PCS_NAD_1983_IN_W = 26974
PCS_NAD_1983_KS_N = 26977
PCS_NAD_1983_KS_S = 26978
PCS_NAD_1983_KY_N = 26979
PCS_NAD_1983_KY_S = 26980
PCS_NAD_1983_LA_N = 26981
PCS_NAD_1983_LA_S = 26982
PCS_NAD_1983_MA_I = 26987
PCS_NAD_1983_MA_M = 26986
PCS_NAD_1983_MD = 26985
PCS_NAD_1983_ME_E = 26983
PCS_NAD_1983_ME_W = 26984
PCS_NAD_1983_MI_C = 26989
PCS_NAD_1983_MI_N = 26988
PCS_NAD_1983_MI_S = 26990
PCS_NAD_1983_MN_C = 26992
PCS_NAD_1983_MN_N = 26991
PCS_NAD_1983_MN_S = 26993
PCS_NAD_1983_MO_C = 26997
PCS_NAD_1983_MO_E = 26996
PCS_NAD_1983_MO_W = 26998
PCS_NAD_1983_MS_E = 26994
PCS_NAD_1983_MS_W = 26995
PCS_NAD_1983_MT = 32100
PCS_NAD_1983_NC = 32119
PCS_NAD_1983_ND_N = 32120
PCS_NAD_1983_ND_S = 32121
PCS_NAD_1983_NE = 32104
PCS_NAD_1983_NH = 32110
PCS_NAD_1983_NJ = 32111
PCS_NAD_1983_NM_C = 32113
PCS_NAD_1983_NM_E = 32112
PCS_NAD_1983_NM_W = 32114
PCS_NAD_1983_NV_C = 32108
PCS_NAD_1983_NV_E = 32107
PCS_NAD_1983_NV_W = 32109
PCS_NAD_1983_NY_C = 32116
PCS_NAD_1983_NY_E = 32115
PCS_NAD_1983_NY_LI = 32118
PCS_NAD_1983_NY_W = 32117
PCS_NAD_1983_OH_N = 32122
PCS_NAD_1983_OH_S = 32123
PCS_NAD_1983_OK_N = 32124
PCS_NAD_1983_OK_S = 32125
PCS_NAD_1983_OR_N = 32126
PCS_NAD_1983_OR_S = 32127
PCS_NAD_1983_PA_N = 32128
PCS_NAD_1983_PA_S = 32129
PCS_NAD_1983_PR_VI = 32161
PCS_NAD_1983_RI = 32130
PCS_NAD_1983_SC = 32133
PCS_NAD_1983_SD_N = 32134
PCS_NAD_1983_SD_S = 32135
PCS_NAD_1983_TN = 32136
PCS_NAD_1983_TX_C = 32139
PCS_NAD_1983_TX_N = 32137
PCS_NAD_1983_TX_NC = 32138
PCS_NAD_1983_TX_S = 32141
PCS_NAD_1983_TX_SC = 32140
PCS_NAD_1983_UTM_10N = 26910
PCS_NAD_1983_UTM_11N = 26911
PCS_NAD_1983_UTM_12N = 26912
PCS_NAD_1983_UTM_13N = 26913
PCS_NAD_1983_UTM_14N = 26914
PCS_NAD_1983_UTM_15N = 26915
PCS_NAD_1983_UTM_16N = 26916
PCS_NAD_1983_UTM_17N = 26917
PCS_NAD_1983_UTM_18N = 26918
PCS_NAD_1983_UTM_19N = 26919
PCS_NAD_1983_UTM_20N = 26920
PCS_NAD_1983_UTM_21N = 26921
PCS_NAD_1983_UTM_22N = 26922
PCS_NAD_1983_UTM_23N = 26923
PCS_NAD_1983_UTM_3N = 26903
PCS_NAD_1983_UTM_4N = 26904
PCS_NAD_1983_UTM_5N = 26905
PCS_NAD_1983_UTM_6N = 26906
PCS_NAD_1983_UTM_7N = 26907
PCS_NAD_1983_UTM_8N = 26908
PCS_NAD_1983_UTM_9N = 26909
PCS_NAD_1983_UT_C = 32143
PCS_NAD_1983_UT_N = 32142
PCS_NAD_1983_UT_S = 32144
PCS_NAD_1983_VA_N = 32146
PCS_NAD_1983_VA_S = 32147
PCS_NAD_1983_VT = 32145
PCS_NAD_1983_WA_N = 32148
PCS_NAD_1983_WA_S = 32149
PCS_NAD_1983_WI_C = 32153
PCS_NAD_1983_WI_N = 32152
PCS_NAD_1983_WI_S = 32154
PCS_NAD_1983_WV_N = 32150
PCS_NAD_1983_WV_S = 32151
PCS_NAD_1983_WY_E = 32155
PCS_NAD_1983_WY_EC = 32156
PCS_NAD_1983_WY_W = 32158
PCS_NAD_1983_WY_WC = 32157
PCS_NAHRWAN_1967_UTM_38N = 27038
PCS_NAHRWAN_1967_UTM_39N = 27039
PCS_NAHRWAN_1967_UTM_40N = 27040
PCS_NAPARIMA_1972_UTM_20N = 27120
PCS_NGN_UTM_38N = 31838
PCS_NGN_UTM_39N = 31839
PCS_NON_EARTH = 0
PCS_NORD_SAHARA_UTM_29N = 30729
PCS_NORD_SAHARA_UTM_30N = 30730
PCS_NORD_SAHARA_UTM_31N = 30731
PCS_NORD_SAHARA_UTM_32N = 30732
PCS_NTF_CENTRE_FRANCE = 27592
PCS_NTF_CORSE = 27594
PCS_NTF_FRANCE_I = 27581
PCS_NTF_FRANCE_II = 27582
PCS_NTF_FRANCE_III = 27583
PCS_NTF_FRANCE_IV = 27584
PCS_NTF_NORD_FRANCE = 27591
PCS_NTF_SUD_FRANCE = 27593
PCS_NZGD_1949_NORTH_ISLAND = 27291
PCS_NZGD_1949_SOUTH_ISLAND = 27292
PCS_OSGB_1936_BRITISH_GRID = 27700
PCS_POINTE_NOIRE_UTM_32S = 28232
PCS_PSAD_1956_PERU_CENTRAL = 24892
PCS_PSAD_1956_PERU_EAST = 24893
PCS_PSAD_1956_PERU_WEST = 24891
PCS_PSAD_1956_UTM_17S = 24877
PCS_PSAD_1956_UTM_18N = 24818
PCS_PSAD_1956_UTM_18S = 24878
PCS_PSAD_1956_UTM_19N = 24819
PCS_PSAD_1956_UTM_19S = 24879
PCS_PSAD_1956_UTM_20N = 24820
PCS_PSAD_1956_UTM_20S = 24880
PCS_PSAD_1956_UTM_21N = 24821
PCS_PTRA08_UTM25_ITRF93 = 61007
PCS_PTRA08_UTM26_ITRF93 = 61006
PCS_PTRA08_UTM28_ITRF93 = 61004
PCS_PULKOVO_1942_GK_10 = 28410
PCS_PULKOVO_1942_GK_10N = 28470
PCS_PULKOVO_1942_GK_11 = 28411
PCS_PULKOVO_1942_GK_11N = 28471
PCS_PULKOVO_1942_GK_12 = 28412
PCS_PULKOVO_1942_GK_12N = 28472
PCS_PULKOVO_1942_GK_13 = 28413
PCS_PULKOVO_1942_GK_13N = 28473
PCS_PULKOVO_1942_GK_14 = 28414
PCS_PULKOVO_1942_GK_14N = 28474
PCS_PULKOVO_1942_GK_15 = 28415
PCS_PULKOVO_1942_GK_15N = 28475
PCS_PULKOVO_1942_GK_16 = 28416
PCS_PULKOVO_1942_GK_16N = 28476
PCS_PULKOVO_1942_GK_17 = 28417
PCS_PULKOVO_1942_GK_17N = 28477
PCS_PULKOVO_1942_GK_18 = 28418
PCS_PULKOVO_1942_GK_18N = 28478
PCS_PULKOVO_1942_GK_19 = 28419
PCS_PULKOVO_1942_GK_19N = 28479
PCS_PULKOVO_1942_GK_20 = 28420
PCS_PULKOVO_1942_GK_20N = 28480
PCS_PULKOVO_1942_GK_21 = 28421
PCS_PULKOVO_1942_GK_21N = 28481
PCS_PULKOVO_1942_GK_22 = 28422
PCS_PULKOVO_1942_GK_22N = 28482
PCS_PULKOVO_1942_GK_23 = 28423
PCS_PULKOVO_1942_GK_23N = 28483
PCS_PULKOVO_1942_GK_24 = 28424
PCS_PULKOVO_1942_GK_24N = 28484
PCS_PULKOVO_1942_GK_25 = 28425
PCS_PULKOVO_1942_GK_25N = 28485
PCS_PULKOVO_1942_GK_26 = 28426
PCS_PULKOVO_1942_GK_26N = 28486
PCS_PULKOVO_1942_GK_27 = 28427
PCS_PULKOVO_1942_GK_27N = 28487
PCS_PULKOVO_1942_GK_28 = 28428
PCS_PULKOVO_1942_GK_28N = 28488
PCS_PULKOVO_1942_GK_29 = 28429
PCS_PULKOVO_1942_GK_29N = 28489
PCS_PULKOVO_1942_GK_30 = 28430
PCS_PULKOVO_1942_GK_30N = 28490
PCS_PULKOVO_1942_GK_31 = 28431
PCS_PULKOVO_1942_GK_31N = 28491
PCS_PULKOVO_1942_GK_32 = 28432
PCS_PULKOVO_1942_GK_32N = 28492
PCS_PULKOVO_1942_GK_4 = 28404
PCS_PULKOVO_1942_GK_4N = 28464
PCS_PULKOVO_1942_GK_5 = 28405
PCS_PULKOVO_1942_GK_5N = 28465
PCS_PULKOVO_1942_GK_6 = 28406
PCS_PULKOVO_1942_GK_6N = 28466
PCS_PULKOVO_1942_GK_7 = 28407
PCS_PULKOVO_1942_GK_7N = 28467
PCS_PULKOVO_1942_GK_8 = 28408
PCS_PULKOVO_1942_GK_8N = 28468
PCS_PULKOVO_1942_GK_9 = 28409
PCS_PULKOVO_1942_GK_9N = 28469
PCS_PULKOVO_1995_GK_10 = 20010
PCS_PULKOVO_1995_GK_10N = 20070
PCS_PULKOVO_1995_GK_11 = 20011
PCS_PULKOVO_1995_GK_11N = 20071
PCS_PULKOVO_1995_GK_12 = 20012
PCS_PULKOVO_1995_GK_12N = 20072
PCS_PULKOVO_1995_GK_13 = 20013
PCS_PULKOVO_1995_GK_13N = 20073
PCS_PULKOVO_1995_GK_14 = 20014
PCS_PULKOVO_1995_GK_14N = 20074
PCS_PULKOVO_1995_GK_15 = 20015
PCS_PULKOVO_1995_GK_15N = 20075
PCS_PULKOVO_1995_GK_16 = 20016
PCS_PULKOVO_1995_GK_16N = 20076
PCS_PULKOVO_1995_GK_17 = 20017
PCS_PULKOVO_1995_GK_17N = 20077
PCS_PULKOVO_1995_GK_18 = 20018
PCS_PULKOVO_1995_GK_18N = 20078
PCS_PULKOVO_1995_GK_19 = 20019
PCS_PULKOVO_1995_GK_19N = 20079
PCS_PULKOVO_1995_GK_20 = 20020
PCS_PULKOVO_1995_GK_20N = 20080
PCS_PULKOVO_1995_GK_21 = 20021
PCS_PULKOVO_1995_GK_21N = 20081
PCS_PULKOVO_1995_GK_22 = 20022
PCS_PULKOVO_1995_GK_22N = 20082
PCS_PULKOVO_1995_GK_23 = 20023
PCS_PULKOVO_1995_GK_23N = 20083
PCS_PULKOVO_1995_GK_24 = 20024
PCS_PULKOVO_1995_GK_24N = 20084
PCS_PULKOVO_1995_GK_25 = 20025
PCS_PULKOVO_1995_GK_25N = 20085
PCS_PULKOVO_1995_GK_26 = 20026
PCS_PULKOVO_1995_GK_26N = 20086
PCS_PULKOVO_1995_GK_27 = 20027
PCS_PULKOVO_1995_GK_27N = 20087
PCS_PULKOVO_1995_GK_28 = 20028
PCS_PULKOVO_1995_GK_28N = 20088
PCS_PULKOVO_1995_GK_29 = 20029
PCS_PULKOVO_1995_GK_29N = 20089
PCS_PULKOVO_1995_GK_30 = 20030
PCS_PULKOVO_1995_GK_30N = 20090
PCS_PULKOVO_1995_GK_31 = 20031
PCS_PULKOVO_1995_GK_31N = 20091
PCS_PULKOVO_1995_GK_32 = 20032
PCS_PULKOVO_1995_GK_32N = 20092
PCS_PULKOVO_1995_GK_4 = 20004
PCS_PULKOVO_1995_GK_4N = 20064
PCS_PULKOVO_1995_GK_5 = 20005
PCS_PULKOVO_1995_GK_5N = 20065
PCS_PULKOVO_1995_GK_6 = 20006
PCS_PULKOVO_1995_GK_6N = 20066
PCS_PULKOVO_1995_GK_7 = 20007
PCS_PULKOVO_1995_GK_7N = 20067
PCS_PULKOVO_1995_GK_8 = 20008
PCS_PULKOVO_1995_GK_8N = 20068
PCS_PULKOVO_1995_GK_9 = 20009
PCS_PULKOVO_1995_GK_9N = 20069
PCS_QATAR_GRID = 28600
PCS_RT38_STOCKHOLM_SWEDISH_GRID = 30800
PCS_SAD_1969_UTM_17S = 29177
PCS_SAD_1969_UTM_18N = 29118
PCS_SAD_1969_UTM_18S = 29178
PCS_SAD_1969_UTM_19N = 29119
PCS_SAD_1969_UTM_19S = 29179
PCS_SAD_1969_UTM_20N = 29120
PCS_SAD_1969_UTM_20S = 29180
PCS_SAD_1969_UTM_21N = 29121
PCS_SAD_1969_UTM_21S = 29181
PCS_SAD_1969_UTM_22N = 29122
PCS_SAD_1969_UTM_22S = 29182
PCS_SAD_1969_UTM_23S = 29183
PCS_SAD_1969_UTM_24S = 29184
PCS_SAD_1969_UTM_25S = 29185
PCS_SAPPER_HILL_UTM_20S = 29220
PCS_SAPPER_HILL_UTM_21S = 29221
PCS_SCHWARZECK_UTM_33S = 29333
PCS_SPHERE_BEHRMANN = 53017
PCS_SPHERE_BONNE = 53024
PCS_SPHERE_CASSINI = 53028
PCS_SPHERE_ECKERT_I = 53015
PCS_SPHERE_ECKERT_II = 53014
PCS_SPHERE_ECKERT_III = 53013
PCS_SPHERE_ECKERT_IV = 53012
PCS_SPHERE_ECKERT_V = 53011
PCS_SPHERE_ECKERT_VI = 53010
PCS_SPHERE_EQUIDISTANT_CONIC = 53027
PCS_SPHERE_EQUIDISTANT_CYLINDRICAL = 53002
PCS_SPHERE_GALL_STEREOGRAPHIC = 53016
PCS_SPHERE_LOXIMUTHAL = 53023
PCS_SPHERE_MERCATOR = 53004
PCS_SPHERE_MILLER_CYLINDRICAL = 53003
PCS_SPHERE_MOLLWEIDE = 53009
PCS_SPHERE_PLATE_CARREE = 53001
PCS_SPHERE_POLYCONIC = 53021
PCS_SPHERE_QUARTIC_AUTHALIC = 53022
PCS_SPHERE_ROBINSON = 53030
PCS_SPHERE_SINUSOIDAL = 53008
PCS_SPHERE_STEREOGRAPHIC = 53026
PCS_SPHERE_TWO_POINT_EQUIDISTANT = 53031
PCS_SPHERE_VAN_DER_GRINTEN_I = 53029
PCS_SPHERE_WINKEL_I = 53018
PCS_SPHERE_WINKEL_II = 53019
PCS_SUDAN_UTM_35N = 29635
PCS_SUDAN_UTM_36N = 29636
PCS_TANANARIVE_UTM_38S = 29738
PCS_TANANARIVE_UTM_39S = 29739
PCS_TC_1948_UTM_39N = 30339
PCS_TC_1948_UTM_40N = 30340
PCS_TIMBALAI_1948_RSO_BORNEO = 23130
PCS_TIMBALAI_1948_UTM_49N = 29849
PCS_TIMBALAI_1948_UTM_50N = 29850
PCS_TM65_IRISH_GRID = 29900
PCS_TOKYO_PLATE_ZONE_I = 32761
PCS_TOKYO_PLATE_ZONE_II = 32762
PCS_TOKYO_PLATE_ZONE_III = 32763
PCS_TOKYO_PLATE_ZONE_IV = 32764
PCS_TOKYO_PLATE_ZONE_IX = 32769
PCS_TOKYO_PLATE_ZONE_V = 32765
PCS_TOKYO_PLATE_ZONE_VI = 32766
PCS_TOKYO_PLATE_ZONE_VII = 32767
PCS_TOKYO_PLATE_ZONE_VIII = 32768
PCS_TOKYO_PLATE_ZONE_X = 32770
PCS_TOKYO_PLATE_ZONE_XI = 32771
PCS_TOKYO_PLATE_ZONE_XII = 32772
PCS_TOKYO_PLATE_ZONE_XIII = 32773
PCS_TOKYO_PLATE_ZONE_XIV = 32774
PCS_TOKYO_PLATE_ZONE_XIX = 32779
PCS_TOKYO_PLATE_ZONE_XV = 32775
PCS_TOKYO_PLATE_ZONE_XVI = 32776
PCS_TOKYO_PLATE_ZONE_XVII = 32777
PCS_TOKYO_PLATE_ZONE_XVIII = 32778
PCS_TOKYO_UTM_51 = 32780
PCS_TOKYO_UTM_52 = 32781
PCS_TOKYO_UTM_53 = 32782
PCS_TOKYO_UTM_54 = 32783
PCS_TOKYO_UTM_55 = 32784
PCS_TOKYO_UTM_56 = 32785
PCS_USER_DEFINED = -1
PCS_VOIROL_N_ALGERIE_ANCIENNE = 30491
PCS_VOIROL_S_ALGERIE_ANCIENNE = 30492
PCS_VOIROL_UNIFIE_N_ALGERIE = 30591
PCS_VOIROL_UNIFIE_S_ALGERIE = 30592
PCS_WGS_1972_UTM_10N = 32210
PCS_WGS_1972_UTM_10S = 32310
PCS_WGS_1972_UTM_11N = 32211
PCS_WGS_1972_UTM_11S = 32311
PCS_WGS_1972_UTM_12N = 32212
PCS_WGS_1972_UTM_12S = 32312
PCS_WGS_1972_UTM_13N = 32213
PCS_WGS_1972_UTM_13S = 32313
PCS_WGS_1972_UTM_14N = 32214
PCS_WGS_1972_UTM_14S = 32314
PCS_WGS_1972_UTM_15N = 32215
PCS_WGS_1972_UTM_15S = 32315
PCS_WGS_1972_UTM_16N = 32216
PCS_WGS_1972_UTM_16S = 32316
PCS_WGS_1972_UTM_17N = 32217
PCS_WGS_1972_UTM_17S = 32317
PCS_WGS_1972_UTM_18N = 32218
PCS_WGS_1972_UTM_18S = 32318
PCS_WGS_1972_UTM_19N = 32219
PCS_WGS_1972_UTM_19S = 32319
PCS_WGS_1972_UTM_1N = 32201
PCS_WGS_1972_UTM_1S = 32301
PCS_WGS_1972_UTM_20N = 32220
PCS_WGS_1972_UTM_20S = 32320
PCS_WGS_1972_UTM_21N = 32221
PCS_WGS_1972_UTM_21S = 32321
PCS_WGS_1972_UTM_22N = 32222
PCS_WGS_1972_UTM_22S = 32322
PCS_WGS_1972_UTM_23N = 32223
PCS_WGS_1972_UTM_23S = 32323
PCS_WGS_1972_UTM_24N = 32224
PCS_WGS_1972_UTM_24S = 32324
PCS_WGS_1972_UTM_25N = 32225
PCS_WGS_1972_UTM_25S = 32325
PCS_WGS_1972_UTM_26N = 32226
PCS_WGS_1972_UTM_26S = 32326
PCS_WGS_1972_UTM_27N = 32227
PCS_WGS_1972_UTM_27S = 32327
PCS_WGS_1972_UTM_28N = 32228
PCS_WGS_1972_UTM_28S = 32328
PCS_WGS_1972_UTM_29N = 32229
PCS_WGS_1972_UTM_29S = 32329
PCS_WGS_1972_UTM_2N = 32202
PCS_WGS_1972_UTM_2S = 32302
PCS_WGS_1972_UTM_30N = 32230
PCS_WGS_1972_UTM_30S = 32330
PCS_WGS_1972_UTM_31N = 32231
PCS_WGS_1972_UTM_31S = 32331
PCS_WGS_1972_UTM_32N = 32232
PCS_WGS_1972_UTM_32S = 32332
PCS_WGS_1972_UTM_33N = 32233
PCS_WGS_1972_UTM_33S = 32333
PCS_WGS_1972_UTM_34N = 32234
PCS_WGS_1972_UTM_34S = 32334
PCS_WGS_1972_UTM_35N = 32235
PCS_WGS_1972_UTM_35S = 32335
PCS_WGS_1972_UTM_36N = 32236
PCS_WGS_1972_UTM_36S = 32336
PCS_WGS_1972_UTM_37N = 32237
PCS_WGS_1972_UTM_37S = 32337
PCS_WGS_1972_UTM_38N = 32238
PCS_WGS_1972_UTM_38S = 32338
PCS_WGS_1972_UTM_39N = 32239
PCS_WGS_1972_UTM_39S = 32339
PCS_WGS_1972_UTM_3N = 32203
PCS_WGS_1972_UTM_3S = 32303
PCS_WGS_1972_UTM_40N = 32240
PCS_WGS_1972_UTM_40S = 32340
PCS_WGS_1972_UTM_41N = 32241
PCS_WGS_1972_UTM_41S = 32341
PCS_WGS_1972_UTM_42N = 32242
PCS_WGS_1972_UTM_42S = 32342
PCS_WGS_1972_UTM_43N = 32243
PCS_WGS_1972_UTM_43S = 32343
PCS_WGS_1972_UTM_44N = 32244
PCS_WGS_1972_UTM_44S = 32344
PCS_WGS_1972_UTM_45N = 32245
PCS_WGS_1972_UTM_45S = 32345
PCS_WGS_1972_UTM_46N = 32246
PCS_WGS_1972_UTM_46S = 32346
PCS_WGS_1972_UTM_47N = 32247
PCS_WGS_1972_UTM_47S = 32347
PCS_WGS_1972_UTM_48N = 32248
PCS_WGS_1972_UTM_48S = 32348
PCS_WGS_1972_UTM_49N = 32249
PCS_WGS_1972_UTM_49S = 32349
PCS_WGS_1972_UTM_4N = 32204
PCS_WGS_1972_UTM_4S = 32304
PCS_WGS_1972_UTM_50N = 32250
PCS_WGS_1972_UTM_50S = 32350
PCS_WGS_1972_UTM_51N = 32251
PCS_WGS_1972_UTM_51S = 32351
PCS_WGS_1972_UTM_52N = 32252
PCS_WGS_1972_UTM_52S = 32352
PCS_WGS_1972_UTM_53N = 32253
PCS_WGS_1972_UTM_53S = 32353
PCS_WGS_1972_UTM_54N = 32254
PCS_WGS_1972_UTM_54S = 32354
PCS_WGS_1972_UTM_55N = 32255
PCS_WGS_1972_UTM_55S = 32355
PCS_WGS_1972_UTM_56N = 32256
PCS_WGS_1972_UTM_56S = 32356
PCS_WGS_1972_UTM_57N = 32257
PCS_WGS_1972_UTM_57S = 32357
PCS_WGS_1972_UTM_58N = 32258
PCS_WGS_1972_UTM_58S = 32358
PCS_WGS_1972_UTM_59N = 32259
PCS_WGS_1972_UTM_59S = 32359
PCS_WGS_1972_UTM_5N = 32205
PCS_WGS_1972_UTM_5S = 32305
PCS_WGS_1972_UTM_60N = 32260
PCS_WGS_1972_UTM_60S = 32360
PCS_WGS_1972_UTM_6N = 32206
PCS_WGS_1972_UTM_6S = 32306
PCS_WGS_1972_UTM_7N = 32207
PCS_WGS_1972_UTM_7S = 32307
PCS_WGS_1972_UTM_8N = 32208
PCS_WGS_1972_UTM_8S = 32308
PCS_WGS_1972_UTM_9N = 32209
PCS_WGS_1972_UTM_9S = 32309
PCS_WGS_1984_UTM_10N = 32610
PCS_WGS_1984_UTM_10S = 32710
PCS_WGS_1984_UTM_11N = 32611
PCS_WGS_1984_UTM_11S = 32711
PCS_WGS_1984_UTM_12N = 32612
PCS_WGS_1984_UTM_12S = 32712
PCS_WGS_1984_UTM_13N = 32613
PCS_WGS_1984_UTM_13S = 32713
PCS_WGS_1984_UTM_14N = 32614
PCS_WGS_1984_UTM_14S = 32714
PCS_WGS_1984_UTM_15N = 32615
PCS_WGS_1984_UTM_15S = 32715
PCS_WGS_1984_UTM_16N = 32616
PCS_WGS_1984_UTM_16S = 32716
PCS_WGS_1984_UTM_17N = 32617
PCS_WGS_1984_UTM_17S = 32717
PCS_WGS_1984_UTM_18N = 32618
PCS_WGS_1984_UTM_18S = 32718
PCS_WGS_1984_UTM_19N = 32619
PCS_WGS_1984_UTM_19S = 32719
PCS_WGS_1984_UTM_1N = 32601
PCS_WGS_1984_UTM_1S = 32701
PCS_WGS_1984_UTM_20N = 32620
PCS_WGS_1984_UTM_20S = 32720
PCS_WGS_1984_UTM_21N = 32621
PCS_WGS_1984_UTM_21S = 32721
PCS_WGS_1984_UTM_22N = 32622
PCS_WGS_1984_UTM_22S = 32722
PCS_WGS_1984_UTM_23N = 32623
PCS_WGS_1984_UTM_23S = 32723
PCS_WGS_1984_UTM_24N = 32624
PCS_WGS_1984_UTM_24S = 32724
PCS_WGS_1984_UTM_25N = 32625
PCS_WGS_1984_UTM_25S = 32725
PCS_WGS_1984_UTM_26N = 32626
PCS_WGS_1984_UTM_26S = 32726
PCS_WGS_1984_UTM_27N = 32627
PCS_WGS_1984_UTM_27S = 32727
PCS_WGS_1984_UTM_28N = 32628
PCS_WGS_1984_UTM_28S = 32728
PCS_WGS_1984_UTM_29N = 32629
PCS_WGS_1984_UTM_29S = 32729
PCS_WGS_1984_UTM_2N = 32602
PCS_WGS_1984_UTM_2S = 32702
PCS_WGS_1984_UTM_30N = 32630
PCS_WGS_1984_UTM_30S = 32730
PCS_WGS_1984_UTM_31N = 32631
PCS_WGS_1984_UTM_31S = 32731
PCS_WGS_1984_UTM_32N = 32632
PCS_WGS_1984_UTM_32S = 32732
PCS_WGS_1984_UTM_33N = 32633
PCS_WGS_1984_UTM_33S = 32733
PCS_WGS_1984_UTM_34N = 32634
PCS_WGS_1984_UTM_34S = 32734
PCS_WGS_1984_UTM_35N = 32635
PCS_WGS_1984_UTM_35S = 32735
PCS_WGS_1984_UTM_36N = 32636
PCS_WGS_1984_UTM_36S = 32736
PCS_WGS_1984_UTM_37N = 32637
PCS_WGS_1984_UTM_37S = 32737
PCS_WGS_1984_UTM_38N = 32638
PCS_WGS_1984_UTM_38S = 32738
PCS_WGS_1984_UTM_39N = 32639
PCS_WGS_1984_UTM_39S = 32739
PCS_WGS_1984_UTM_3N = 32603
PCS_WGS_1984_UTM_3S = 32703
PCS_WGS_1984_UTM_40N = 32640
PCS_WGS_1984_UTM_40S = 32740
PCS_WGS_1984_UTM_41N = 32641
PCS_WGS_1984_UTM_41S = 32741
PCS_WGS_1984_UTM_42N = 32642
PCS_WGS_1984_UTM_42S = 32742
PCS_WGS_1984_UTM_43N = 32643
PCS_WGS_1984_UTM_43S = 32743
PCS_WGS_1984_UTM_44N = 32644
PCS_WGS_1984_UTM_44S = 32744
PCS_WGS_1984_UTM_45N = 32645
PCS_WGS_1984_UTM_45S = 32745
PCS_WGS_1984_UTM_46N = 32646
PCS_WGS_1984_UTM_46S = 32746
PCS_WGS_1984_UTM_47N = 32647
PCS_WGS_1984_UTM_47S = 32747
PCS_WGS_1984_UTM_48N = 32648
PCS_WGS_1984_UTM_48S = 32748
PCS_WGS_1984_UTM_49N = 32649
PCS_WGS_1984_UTM_49S = 32749
PCS_WGS_1984_UTM_4N = 32604
PCS_WGS_1984_UTM_4S = 32704
PCS_WGS_1984_UTM_50N = 32650
PCS_WGS_1984_UTM_50S = 32750
PCS_WGS_1984_UTM_51N = 32651
PCS_WGS_1984_UTM_51S = 32751
PCS_WGS_1984_UTM_52N = 32652
PCS_WGS_1984_UTM_52S = 32752
PCS_WGS_1984_UTM_53N = 32653
PCS_WGS_1984_UTM_53S = 32753
PCS_WGS_1984_UTM_54N = 32654
PCS_WGS_1984_UTM_54S = 32754
PCS_WGS_1984_UTM_55N = 32655
PCS_WGS_1984_UTM_55S = 32755
PCS_WGS_1984_UTM_56N = 32656
PCS_WGS_1984_UTM_56S = 32756
PCS_WGS_1984_UTM_57N = 32657
PCS_WGS_1984_UTM_57S = 32757
PCS_WGS_1984_UTM_58N = 32658
PCS_WGS_1984_UTM_58S = 32758
PCS_WGS_1984_UTM_59N = 32659
PCS_WGS_1984_UTM_59S = 32759
PCS_WGS_1984_UTM_5N = 32605
PCS_WGS_1984_UTM_5S = 32705
PCS_WGS_1984_UTM_60N = 32660
PCS_WGS_1984_UTM_60S = 32760
PCS_WGS_1984_UTM_6N = 32606
PCS_WGS_1984_UTM_6S = 32706
PCS_WGS_1984_UTM_7N = 32607
PCS_WGS_1984_UTM_7S = 32707
PCS_WGS_1984_UTM_8N = 32608
PCS_WGS_1984_UTM_8S = 32708
PCS_WGS_1984_UTM_9N = 32609
PCS_WGS_1984_UTM_9S = 32709
PCS_WGS_1984_WORLD_MERCATOR = 3395
PCS_WORLD_BEHRMANN = 54017
PCS_WORLD_BONNE = 54024
PCS_WORLD_CASSINI = 54028
PCS_WORLD_ECKERT_I = 54015
PCS_WORLD_ECKERT_II = 54014
PCS_WORLD_ECKERT_III = 54013
PCS_WORLD_ECKERT_IV = 54012
PCS_WORLD_ECKERT_V = 54011
PCS_WORLD_ECKERT_VI = 54010
PCS_WORLD_EQUIDISTANT_CONIC = 54027
PCS_WORLD_EQUIDISTANT_CYLINDRICAL = 54002
PCS_WORLD_GALL_STEREOGRAPHIC = 54016
PCS_WORLD_HOTINE = 54025
PCS_WORLD_LOXIMUTHAL = 54023
PCS_WORLD_MERCATOR = 54004
PCS_WORLD_MILLER_CYLINDRICAL = 54003
PCS_WORLD_MOLLWEIDE = 54009
PCS_WORLD_PLATE_CARREE = 54001
PCS_WORLD_POLYCONIC = 54021
PCS_WORLD_QUARTIC_AUTHALIC = 54022
PCS_WORLD_ROBINSON = 54030
PCS_WORLD_SINUSOIDAL = 54008
PCS_WORLD_STEREOGRAPHIC = 54026
PCS_WORLD_TWO_POINT_EQUIDISTANT = 54031
PCS_WORLD_VAN_DER_GRINTEN_I = 54029
PCS_WORLD_WINKEL_I = 54018
PCS_WORLD_WINKEL_II = 54019
PCS_XIAN_1980_3_DEGREE_GK_25 = 2349
PCS_XIAN_1980_3_DEGREE_GK_25N = 2370
PCS_XIAN_1980_3_DEGREE_GK_26 = 2350
PCS_XIAN_1980_3_DEGREE_GK_26N = 2371
PCS_XIAN_1980_3_DEGREE_GK_27 = 2351
PCS_XIAN_1980_3_DEGREE_GK_27N = 2372
PCS_XIAN_1980_3_DEGREE_GK_28 = 2352
PCS_XIAN_1980_3_DEGREE_GK_28N = 2373
PCS_XIAN_1980_3_DEGREE_GK_29 = 2353
PCS_XIAN_1980_3_DEGREE_GK_29N = 2374
PCS_XIAN_1980_3_DEGREE_GK_30 = 2354
PCS_XIAN_1980_3_DEGREE_GK_30N = 2375
PCS_XIAN_1980_3_DEGREE_GK_31 = 2355
PCS_XIAN_1980_3_DEGREE_GK_31N = 2376
PCS_XIAN_1980_3_DEGREE_GK_32 = 2356
PCS_XIAN_1980_3_DEGREE_GK_32N = 2377
PCS_XIAN_1980_3_DEGREE_GK_33 = 2357
PCS_XIAN_1980_3_DEGREE_GK_33N = 2378
PCS_XIAN_1980_3_DEGREE_GK_34 = 2358
PCS_XIAN_1980_3_DEGREE_GK_34N = 2379
PCS_XIAN_1980_3_DEGREE_GK_35 = 2359
PCS_XIAN_1980_3_DEGREE_GK_35N = 2380
PCS_XIAN_1980_3_DEGREE_GK_36 = 2360
PCS_XIAN_1980_3_DEGREE_GK_36N = 2381
PCS_XIAN_1980_3_DEGREE_GK_37 = 2361
PCS_XIAN_1980_3_DEGREE_GK_37N = 2382
PCS_XIAN_1980_3_DEGREE_GK_38 = 2362
PCS_XIAN_1980_3_DEGREE_GK_38N = 2383
PCS_XIAN_1980_3_DEGREE_GK_39 = 2363
PCS_XIAN_1980_3_DEGREE_GK_39N = 2384
PCS_XIAN_1980_3_DEGREE_GK_40 = 2364
PCS_XIAN_1980_3_DEGREE_GK_40N = 2385
PCS_XIAN_1980_3_DEGREE_GK_41 = 2365
PCS_XIAN_1980_3_DEGREE_GK_41N = 2386
PCS_XIAN_1980_3_DEGREE_GK_42 = 2366
PCS_XIAN_1980_3_DEGREE_GK_42N = 2387
PCS_XIAN_1980_3_DEGREE_GK_43 = 2367
PCS_XIAN_1980_3_DEGREE_GK_43N = 2388
PCS_XIAN_1980_3_DEGREE_GK_44 = 2368
PCS_XIAN_1980_3_DEGREE_GK_44N = 2389
PCS_XIAN_1980_3_DEGREE_GK_45 = 2369
PCS_XIAN_1980_3_DEGREE_GK_45N = 2390
PCS_XIAN_1980_GK_13 = 2327
PCS_XIAN_1980_GK_13N = 2338
PCS_XIAN_1980_GK_14 = 2328
PCS_XIAN_1980_GK_14N = 2339
PCS_XIAN_1980_GK_15 = 2329
PCS_XIAN_1980_GK_15N = 2340
PCS_XIAN_1980_GK_16 = 2330
PCS_XIAN_1980_GK_16N = 2341
PCS_XIAN_1980_GK_17 = 2331
PCS_XIAN_1980_GK_17N = 2342
PCS_XIAN_1980_GK_18 = 2332
PCS_XIAN_1980_GK_18N = 2343
PCS_XIAN_1980_GK_19 = 2333
PCS_XIAN_1980_GK_19N = 2344
PCS_XIAN_1980_GK_20 = 2334
PCS_XIAN_1980_GK_20N = 2345
PCS_XIAN_1980_GK_21 = 2335
PCS_XIAN_1980_GK_21N = 2346
PCS_XIAN_1980_GK_22 = 2336
PCS_XIAN_1980_GK_22N = 2347
PCS_XIAN_1980_GK_23 = 2337
PCS_XIAN_1980_GK_23N = 2348
PCS_YOFF_1972_UTM_28N = 31028
PCS_ZANDERIJ_1972_UTM_21N = 31121
class iobjectspy.enums.ImportMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for import mode types. It controls how an import behaves when the name set for the target object (dataset, etc.) already exists, that is, when the target name conflicts with an existing one.

Variables:
  • ImportMode.NONE – If there is a name conflict, automatically rename the target object before importing.
  • ImportMode.OVERWRITE – If there is a name conflict, forcibly overwrite the existing object.
  • ImportMode.APPEND – If there is a name conflict, append the imported data to the existing dataset.
APPEND = 2
NONE = 0
OVERWRITE = 1
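The three conflict-handling behaviors above can be sketched as follows. This is a minimal stand-in, not the iobjectspy implementation: the `ImportMode` enum here is redefined locally, and `resolve_name` is a hypothetical helper illustrating the renaming logic of `NONE` versus the in-place behavior of `OVERWRITE` and `APPEND`.

```python
from enum import IntEnum

class ImportMode(IntEnum):
    """Local stand-in mirroring the documented constant values."""
    NONE = 0       # auto-rename the target on name conflict
    OVERWRITE = 1  # forcibly overwrite the existing object
    APPEND = 2     # append records to the existing dataset

def resolve_name(existing, name, mode):
    """Return the dataset name an import would write to (sketch only)."""
    if name not in existing:
        return name
    if mode is ImportMode.NONE:
        # Auto-rename: append a numeric suffix until the name is free.
        i = 1
        while f"{name}_{i}" in existing:
            i += 1
        return f"{name}_{i}"
    # OVERWRITE and APPEND both operate on the existing name.
    return name
```

For example, importing into a datasource that already holds a dataset named `roads` with `ImportMode.NONE` would target `roads_1`, while `OVERWRITE` and `APPEND` keep targeting `roads`.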
class iobjectspy.enums.IgnoreMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the ignored color value mode.
IGNOREBORDER = 2
IGNORENONE = 0
IGNORESIGNAL = 1
class iobjectspy.enums.MultiBandImportMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the multi-band import mode, which controls how multi-band data is imported.

Variables:
  • MultiBandImportMode.SINGLEBAND – Import multi-band data as multiple single-band datasets
  • MultiBandImportMode.MULTIBAND – Import multi-band data as one multi-band dataset
  • MultiBandImportMode.COMPOSITE

    Import multi-band data as a single-band composite dataset. Currently, this mode is suitable for the following two situations:

    - Three-band 8-bit data is imported as an RGB single-band 24-bit dataset;
    - Four-band 8-bit data is imported as an RGBA single-band 32-bit dataset.

COMPOSITE = 2
MULTIBAND = 1
SINGLEBAND = 0
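The mapping from an n-band input to output datasets under the three modes can be sketched like this. It is an illustration of the rules stated above, not iobjectspy code; the `plan_import` helper and its mode strings are hypothetical.

```python
def plan_import(band_count, bits_per_band, mode):
    """Sketch: describe the output dataset(s) for each import mode."""
    if mode == "SINGLEBAND":
        # One single-band dataset per input band.
        return [{"bands": 1}] * band_count
    if mode == "MULTIBAND":
        # A single dataset holding all bands.
        return [{"bands": band_count}]
    if mode == "COMPOSITE":
        # Per the docs: 3x8-bit -> RGB 24-bit, 4x8-bit -> RGBA 32-bit,
        # each as one single-band dataset.
        if bits_per_band == 8 and band_count in (3, 4):
            return [{"bands": 1, "bits": band_count * 8}]
        raise ValueError("COMPOSITE supports only 3- or 4-band 8-bit data")
    raise ValueError(f"unknown mode: {mode}")
```

So a 4-band 8-bit image under `COMPOSITE` yields one RGBA 32-bit dataset, while under `SINGLEBAND` it yields four separate single-band datasets.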
class iobjectspy.enums.CADVersion

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines AutoCAD version type constants, providing the supported AutoCAD versions.
CAD12 = 12
CAD13 = 13
CAD14 = 14
CAD2000 = 2000
CAD2004 = 2004
CAD2007 = 2007
class iobjectspy.enums.TopologyRule

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants of topology rule types.

This class is mainly used as a parameter for topology checks on point, line, and region data. For a given topology rule, the check returns the objects that violate it.

Variables:
  • TopologyRule.REGION_NO_OVERLAP – No overlap within regions; used for topology checks on region data. Checks a region dataset (or region record set) for overlapping region objects. This rule is mostly used where an area cannot belong to two objects at the same time, such as administrative divisions: adjacent divisions must not overlap, and every division must be clearly delimited. Other such data include land-use parcels, ZIP code coverage zones, and voting districts. Overlapping parts are written to the result dataset as errors. Error dataset type: region. Note: only checks a single dataset or record set against itself.
  • TopologyRule.REGION_NO_GAPS

    No gaps within regions; used for topology checks on region data. Returns region objects with gaps between adjacent regions in a region dataset (or region record set). This rule is mostly used to check for gaps within a single region or between adjacent regions. For data such as land-use maps, which must not contain patches with an undefined land-use type, this rule can be used for inspection.

    note:

    - Only checks a single dataset or record set against itself.
    - If the checked region dataset (or record set) contains self-intersecting region objects, the check may fail or give wrong results. It is suggested to check the REGION_NO_SELF_INTERSECTION rule first, or fix the self-intersecting regions manually, and check the no-gaps rule only after confirming there are no self-intersecting objects.

  • TopologyRule.REGION_NO_OVERLAP_WITH – No overlap between regions of two datasets; used for topology checks on region data. Checks for all objects in the first region dataset (or region record set) that overlap with objects in the second region dataset (or region record set). For example, water-area data overlaid with dry-land data can be checked with this rule. Overlapping parts are written to the result dataset as errors. Error dataset type: region.
  • TopologyRule.REGION_COVERED_BY_REGION_CLASS – Region covered by multiple regions; used for topology checks on region data. Checks for objects in the first region dataset (or region record set) that are not covered by the second region dataset (or region record set). For example, each province polygon in county-boundary data AREA1 must be completely covered by that province's polygons in county-boundary data AREA2. Uncovered parts are written to the result dataset as errors. Error dataset type: region.
  • TopologyRule.REGION_COVERED_BY_REGION – Region contained by a region; used for topology checks on region data. Checks for objects in the first region dataset (or region record set) that are not contained by any object in the second region dataset (or region record set); that is, each region in the first data must be completely contained by some region in the second data. Regions that are not contained are written to the result dataset as errors. Error dataset type: region.
  • TopologyRule.REGION_BOUNDARY_COVERED_BY_LINE – Region boundary covered by multiple lines; used for topology checks on region data. Checks for boundaries of objects in the region dataset (or region record set) that are not covered by lines in the line dataset (or line record set). Typically used to check administrative or parcel boundaries against line data that stores boundary-line attributes: some boundary attributes cannot be stored in the region data, so dedicated boundary-line data stores the attributes of the region boundaries, and the lines are required to coincide exactly with those boundaries. Uncovered boundaries are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.REGION_BOUNDARY_COVERED_BY_REGION_BOUNDARY – Region boundary covered by region boundary; used for topology checks on region data. Checks for boundaries in one region dataset (or region record set) that are not covered by the boundaries of one or more objects in another region dataset (or region record set). Uncovered boundaries are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.REGION_CONTAIN_POINT – Region contains point; used for topology checks on region data. Checks for region objects in the region dataset (or region record set) that do not contain any point object in the point dataset (or point record set). For example, checking provincial data against provincial-capital data: each province must contain a provincial capital. Regions containing no point are written to the result dataset as errors. Error dataset type: region.
  • TopologyRule.LINE_NO_INTERSECTION – No intersection within lines; used for topology checks on line data. Checks a line dataset (or line record set) for intersecting line objects (excluding endpoint-to-interior and endpoint-to-endpoint contact). Intersection points are written to the result dataset as errors. Error dataset type: point. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_NO_OVERLAP – No overlap within lines; used for topology checks on line data. Checks a line dataset (or line record set) for overlapping line objects. Overlapping parts between objects are written to the result dataset as errors. Error dataset type: line. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_NO_DANGLES – No dangling lines; used for topology checks on line data. Checks a line dataset (or line record set) for objects with dangles, including overshoots and undershoots. Dangling points are written to the result dataset as errors. Error dataset type: point. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_NO_PSEUDO_NODES – No pseudo nodes within lines; used for topology checks on line data. Returns line objects containing pseudo nodes in a line dataset (or line record set). Pseudo nodes are written to the result dataset as errors. Error dataset type: point. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_NO_OVERLAP_WITH – No overlap between lines of two datasets; used for topology checks on line data. Checks for all objects in the first line dataset (or line record set) that overlap with objects in the second line dataset (or line record set). For example, roads and railways in a traffic network cannot overlap. Overlapping parts are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.LINE_NO_INTERSECT_OR_INTERIOR_TOUCH – No intersection or interior touch within lines; used for topology checks on line data. Returns line objects in a line dataset (or line record set) that intersect or touch the interior of other line objects, that is, all intersections except endpoint-to-endpoint contact. Intersection points are written to the result dataset as errors. Error dataset type: point. Note: every intersection point in the line dataset (or line record set) must be a line endpoint, i.e. intersecting arcs must be split, otherwise this rule is violated (self-intersection is not checked).
  • TopologyRule.LINE_NO_SELF_OVERLAP – No self-overlap within lines; used for topology checks on line data. Checks a line dataset (or line record set) for line objects that overlap themselves (the intersection is a line). Self-overlapping parts (lines) are written to the result dataset as errors. Error dataset type: line. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_NO_SELF_INTERSECT – No self-intersection within lines; used for topology checks on line data. Checks a line dataset (or line record set) for self-intersecting line objects (including self-overlapping ones). Intersection points are written to the result dataset as errors. Error dataset type: point. Note: only checks a single dataset or record set against itself.
  • TopologyRule.LINE_BE_COVERED_BY_LINE_CLASS – Line completely covered by multiple lines; used for topology checks on line data. Checks for parts of objects in the first line dataset (or line record set) that are not covered by objects in the second line dataset (or line record set). Uncovered parts are written to the result dataset as errors. Error dataset type: line. Note: each object in the first line dataset (or line record set) must be covered by one or more line objects in the other line dataset (or line record set). For example, a bus route in Line1 must be covered by a series of connected streets in Line2.
  • TopologyRule.LINE_COVERED_BY_REGION_BOUNDARY – Line covered by region boundary; used for topology checks on line data. Checks for objects in the line dataset (or line record set) that do not coincide with the boundary of an object in the region dataset (or region record set); a line may be covered by the boundaries of multiple regions. Parts not covered by a boundary are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.LINE_END_POINT_COVERED_BY_POINT – Line endpoint covered by point; used for topology checks on line data. Checks for endpoints in the line dataset (or line record set) that do not coincide with any point in the point dataset (or point record set). Uncovered endpoints are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.POINT_COVERED_BY_LINE – Point must be on a line; used for topology checks on point data. Returns objects in the point dataset (or point record set) that are not covered by any object in the line dataset (or line record set), such as toll stations on a highway. Uncovered points are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.POINT_COVERED_BY_REGION_BOUNDARY – Point must be on a region boundary; used for topology checks on point data. Checks for objects in the point dataset (or point record set) that are not on the boundary of any object in the region dataset (or region record set). Points not on a region boundary are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.POINT_CONTAINED_BY_REGION – Point completely contained by a region; used for topology checks on point data. Checks for point objects in the point dataset (or point record set) that are not contained by any object in the region dataset (or region record set). Points not inside a region are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.POINT_BECOVERED_BY_LINE_END_POINT – Point must be covered by a line endpoint; used for topology checks on point data. Returns objects in the point dataset (or point record set) that are not covered by the endpoint of any object in the line dataset (or line record set).
  • TopologyRule.NO_MULTIPART – No complex objects. Checks a dataset or record set for complex objects that contain sub-objects. Applies to region and line data. Complex objects are written to the result dataset as errors. Error dataset type: line or region.
  • TopologyRule.POINT_NO_IDENTICAL – No duplicate points; used for topology checks on point data. Checks for duplicate point objects in the point dataset; all coincident points are written to the result dataset as errors. Error dataset type: point. Note: only checks a single dataset or record set against itself.
  • TopologyRule.POINT_NO_CONTAINED_BY_REGION – Point not contained by a region. Checks for point objects in the point dataset (or point record set) that are contained within an object in the region dataset (or region record set). Points contained by a region are written to the result dataset as errors. Error dataset type: point. Note: a point located on a region boundary does not violate this rule.
  • TopologyRule.LINE_NO_INTERSECTION_WITH_REGION – Line must not intersect or be contained by a region. Checks for line objects in the line dataset (or line record set) that intersect with, or are contained by, region objects in the region dataset (or region record set). Line-region intersections are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.REGION_NO_OVERLAP_ON_BOUNDARY – No overlap on region boundaries; used for topology checks on region data. Checks for boundaries of region objects in one region dataset (or record set) that overlap with boundaries of objects in another region dataset (or record set). Overlapping boundary parts are written to the result dataset as errors. Error dataset type: line.
  • TopologyRule.REGION_NO_SELF_INTERSECTION – No self-intersection within regions; used for topology checks on region data. Checks for self-intersecting objects in the region data. Self-intersection points of region objects are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.LINE_NO_INTERSECTION_WITH – Line must not intersect line; that is, objects in two line datasets cannot intersect. Checks for objects in the first line dataset (or line record set) that intersect with objects in the second line dataset (or line record set). Intersection points are written to the result dataset as errors. Error dataset type: point.
  • TopologyRule.VERTEX_DISTANCE_GREATER_THAN_TOLERANCE – Vertex distance must be greater than the tolerance. Checks whether the distance between vertices within one point, line, or region dataset, or between two such datasets, is less than the tolerance. Vertices whose distance is not greater than the tolerance are written to the result dataset as errors. Error dataset type: point. Note: two coincident vertices (distance 0) are not treated as a topology error.
  • TopologyRule.LINE_EXIST_INTERSECT_VERTEX – There must be an intersection at the intersection of the line segments. A node must exist at the intersection of a line segment and a line segment in a dataset of line or area type or between two dataset, and this node must exist in at least one of the two intersecting line segments. If it is not satisfied, calculate the intersection point as an error and generate it into the result dataset. Error dataset type: point. Note: The fact that the endpoints of two line segments meet does not violate the rules.
  • TopologyRule.VERTEX_MATCH_WITH_EACH_OTHER – The nodes must match each other, that is, there are vertical foot points on the line segment within the tolerance range. Check the line and area type dataset inside or between two dataset, point dataset and line dataset, point dataset and area data, within the tolerance range of current node P, there should be a node on line segment L Q is matched, that is, Q is within the tolerance range of P. If it is not satisfied, then calculate the “vertical foot” point A from P to L (that is, A matches P) and generate it as an error in the result dataset. Error dataset type: point.
  • TopologyRule.NO_REDUNDANT_VERTEX – There are no redundant nodes on the line or surface boundary. Check whether there are objects with redundant nodes in the line dataset or polygon dataset. If there are other collinear nodes between two nodes on the boundary of a line object or a region object, these collinear nodes are redundant nodes. Redundant nodes will be generated as errors in the result dataset, error data type: point
  • TopologyRule.LINE_NO_SHARP_ANGLE – No discount in the line. Check whether there is a discount on the line object in the line dataset (or record set). If the two included angles formed by four consecutive nodes on a line are less than the given sharp angle tolerance , the line segment is considered to be discounted here. The first turning point that produces a sharp corner is generated as an error in the result dataset, the error data type: point. Note: When using the topology_validate() method to check the rule, the tolerance parameter of the method is used to set the sharp angle
  • TopologyRule.LINE_NO_SMALL_DANGLES – There is no short dangling line in the line, which is used to check the topology of the line data. Check whether the line object in the line dataset (or record set) is a short hanging line. A line object whose length is less than the suspension tolerance is a short suspension The end point of the short suspension line is generated as an error in the result dataset, and the error data type: point. Note: When using the topology_validate() method to check this rule, the tolerance parameter of this method is used to set the short suspension tolerance.
  • TopologyRule.LINE_NO_EXTENDED_DANGLES – There is no long hanging wire in the line, which is used to check the topology of the line data. Check whether the line object in the line dataset (or record set) is a long hanging line. If a suspension line extends a specified length (suspension line tolerance) according to its traveling direction and then has an intersection with a certain arc, the line object is a long suspension line. The long suspension line needs to extend the end point of one end as an error generated in the result dataset, the error data type: point. Note: When using the topology_validate() method to check this rule, the tolerance parameter of this method is used to set the long suspension tolerance.
  • TopologyRule.REGION_NO_ACUTE_ANGLE – There are no acute angles in the surface, which is used to check the topology of the surface data. Check whether there are sharp angles in the surface object in the surface dataset (or record set). If the included angle formed by three consecutive nodes on the surface boundary line is less than the given acute angle tolerance, the included angle is considered to be an acute angle. The second node that produces the acute angle is generated as an error in the result dataset, and the error data type: point. Note: When using the topology_validate() method to check this rule, the tolerance parameter of this method is used to set the acute angle tolerance.
LINE_BE_COVERED_BY_LINE_CLASS = 16
LINE_COVERED_BY_REGION_BOUNDARY = 17
LINE_END_POINT_COVERED_BY_POINT = 18
LINE_EXIST_INTERSECT_VERTEX = 31
LINE_NO_DANGLES = 10
LINE_NO_EXTENDED_DANGLES = 36
LINE_NO_INTERSECTION = 8
LINE_NO_INTERSECTION_WITH = 29
LINE_NO_INTERSECTION_WITH_REGION = 26
LINE_NO_INTERSECT_OR_INTERIOR_TOUCH = 13
LINE_NO_OVERLAP = 9
LINE_NO_OVERLAP_WITH = 12
LINE_NO_PSEUDO_NODES = 11
LINE_NO_SELF_INTERSECT = 15
LINE_NO_SELF_OVERLAP = 14
LINE_NO_SHARP_ANGLE = 34
LINE_NO_SMALL_DANGLES = 35
NO_MULTIPART = 23
NO_REDUNDANT_VERTEX = 33
POINT_BECOVERED_BY_LINE_END_POINT = 22
POINT_CONTAINED_BY_REGION = 21
POINT_COVERED_BY_LINE = 19
POINT_COVERED_BY_REGION_BOUNDARY = 20
POINT_NO_CONTAINED_BY_REGION = 25
POINT_NO_IDENTICAL = 24
REGION_BOUNDARY_COVERED_BY_LINE = 5
REGION_BOUNDARY_COVERED_BY_REGION_BOUNDARY = 6
REGION_CONTAIN_POINT = 7
REGION_COVERED_BY_REGION = 4
REGION_COVERED_BY_REGION_CLASS = 3
REGION_NO_ACUTE_ANGLE = 37
REGION_NO_GAPS = 1
REGION_NO_OVERLAP = 0
REGION_NO_OVERLAP_ON_BOUNDARY = 27
REGION_NO_OVERLAP_WITH = 2
REGION_NO_SELF_INTERSECTION = 28
VERTEX_DISTANCE_GREATER_THAN_TOLERANCE = 30
VERTEX_MATCH_WITH_EACH_OTHER = 32
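As a plain-Python illustration of what the VERTEX_DISTANCE_GREATER_THAN_TOLERANCE rule checks (independent of the iobjectspy API; the function name and data below are illustrative only), the sketch flags vertex pairs that lie closer than the tolerance while skipping exactly coincident vertices, mirroring the note that a distance of 0 is not a topology error:

```python
from itertools import combinations
from math import hypot

def vertex_distance_errors(vertices, tolerance):
    """Return vertex pairs whose distance is below the tolerance.

    Exactly coincident vertices (distance 0) are skipped, matching the
    rule's note that overlapping vertices are not topology errors.
    """
    errors = []
    for (x1, y1), (x2, y2) in combinations(vertices, 2):
        d = hypot(x2 - x1, y2 - y1)
        if 0 < d < tolerance:
            errors.append(((x1, y1), (x2, y2)))
    return errors

pts = [(0.0, 0.0), (0.0, 0.0), (0.005, 0.0), (1.0, 1.0)]
print(vertex_distance_errors(pts, 0.01))
```

The flagged pairs would become the point objects of the error dataset in an actual topology check.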
class iobjectspy.enums.GeoSpatialRefType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the spatial coordinate system type.

The spatial coordinate system type distinguishes between the planar coordinate system, the geographic coordinate system, and the projected coordinate system. The geographic coordinate system is also called the longitude-latitude coordinate system.

Variables:
  • GeoSpatialRefType.SPATIALREF_NONEARTH – Plane coordinate system. When the coordinate system is a plane coordinate system, projection conversion cannot be performed.
  • GeoSpatialRefType.SPATIALREF_EARTH_LONGITUDE_LATITUDE – Geographic coordinate system. A geographic coordinate system consists of a geodetic datum, a prime meridian, and a coordinate unit. In a geographic coordinate system, the unit can be degrees, minutes, and seconds; the east-west (horizontal) direction ranges from -180 to 180 degrees, and the north-south (vertical) direction ranges from -90 to 90 degrees.
  • GeoSpatialRefType.SPATIALREF_EARTH_PROJECTION – Projected coordinate system. A projected coordinate system is composed of a map projection method, projection parameters, a coordinate unit, and a geographic coordinate system. SuperMap provides many predefined projected coordinate systems that users can use directly; users can also define their own.
SPATIALREF_EARTH_LONGITUDE_LATITUDE = 1
SPATIALREF_EARTH_PROJECTION = 2
SPATIALREF_NONEARTH = 0
class iobjectspy.enums.GeoCoordSysType

Bases: iobjectspy._jsuperpy.enums.JEnum

An enumeration.

GCS_ADINDAN = 4201
GCS_AFGOOYE = 4205
GCS_AGADEZ = 4206
GCS_AGD_1966 = 4202
GCS_AGD_1984 = 4203
GCS_AIN_EL_ABD_1970 = 4204
GCS_AIRY_1830 = 4001
GCS_AIRY_MOD = 4002
GCS_ALASKAN_ISLANDS = 37260
GCS_AMERSFOORT = 4289
GCS_ANNA_1_1965 = 37231
GCS_ANTIGUA_ISLAND_1943 = 37236
GCS_ARATU = 4208
GCS_ARC_1950 = 4209
GCS_ARC_1960 = 4210
GCS_ASCENSION_ISLAND_1958 = 37237
GCS_ASTRO_1952 = 37214
GCS_ATF_PARIS = 4901
GCS_ATS_1977 = 4122
GCS_AUSTRALIAN = 4003
GCS_AYABELLE = 37208
GCS_AZORES_CENTRAL_1948 = 4183
GCS_AZORES_OCCIDENTAL_1939 = 4182
GCS_AZORES_ORIENTAL_1940 = 4184
GCS_BARBADOS = 4212
GCS_BATAVIA = 4211
GCS_BATAVIA_JAKARTA = 4813
GCS_BEACON_E_1945 = 37212
GCS_BEDUARAM = 4213
GCS_BEIJING_1954 = 4214
GCS_BELGE_1950 = 4215
GCS_BELGE_1950_BRUSSELS = 4809
GCS_BELGE_1972 = 4313
GCS_BELLEVUE = 37215
GCS_BERMUDA_1957 = 4216
GCS_BERN_1898 = 4217
GCS_BERN_1898_BERN = 4801
GCS_BERN_1938 = 4306
GCS_BESSEL_1841 = 4004
GCS_BESSEL_MOD = 4005
GCS_BESSEL_NAMIBIA = 4006
GCS_BISSAU = 37209
GCS_BOGOTA = 4218
GCS_BOGOTA_BOGOTA = 4802
GCS_BUKIT_RIMPAH = 4219
GCS_CACANAVERAL = 37239
GCS_CAMACUPA = 4220
GCS_CAMPO_INCHAUSPE = 4221
GCS_CAMP_AREA = 37253
GCS_CANTON_1966 = 37216
GCS_CAPE = 4222
GCS_CARTHAGE = 4223
GCS_CARTHAGE_DEGREE = 37223
GCS_CHATHAM_ISLAND_1971 = 37217
GCS_CHINA_2000 = 37313
GCS_CHUA = 4224
GCS_CLARKE_1858 = 4007
GCS_CLARKE_1866 = 4008
GCS_CLARKE_1866_MICH = 4009
GCS_CLARKE_1880 = 4034
GCS_CLARKE_1880_ARC = 4013
GCS_CLARKE_1880_BENOIT = 4010
GCS_CLARKE_1880_IGN = 4011
GCS_CLARKE_1880_RGS = 4012
GCS_CLARKE_1880_SGA = 4014
GCS_CONAKRY_1905 = 4315
GCS_CORREGO_ALEGRE = 4225
GCS_COTE_D_IVOIRE = 4226
GCS_DABOLA = 37210
GCS_DATUM_73 = 4274
GCS_DEALUL_PISCULUI_1933 = 4316
GCS_DEALUL_PISCULUI_1970 = 4317
GCS_DECEPTION_ISLAND = 37254
GCS_DEIR_EZ_ZOR = 4227
GCS_DHDNB = 4314
GCS_DOS_1968 = 37218
GCS_DOS_71_4 = 37238
GCS_DOUALA = 4228
GCS_EASTER_ISLAND_1967 = 37219
GCS_ED_1950 = 4230
GCS_ED_1987 = 4231
GCS_EGYPT_1907 = 4229
GCS_ETRS_1989 = 4258
GCS_EUROPEAN_1979 = 37201
GCS_EVEREST_1830 = 4015
GCS_EVEREST_BANGLADESH = 37202
GCS_EVEREST_DEF_1967 = 4016
GCS_EVEREST_DEF_1975 = 4017
GCS_EVEREST_INDIA_NEPAL = 37203
GCS_EVEREST_MOD = 4018
GCS_EVEREST_MOD_1969 = 37006
GCS_FAHUD = 4232
GCS_FISCHER_1960 = 37002
GCS_FISCHER_1968 = 37003
GCS_FISCHER_MOD = 37004
GCS_FORT_THOMAS_1955 = 37240
GCS_GANDAJIKA_1970 = 4233
GCS_GAN_1970 = 37232
GCS_GAROUA = 4234
GCS_GDA_1994 = 4283
GCS_GEM_10C = 4031
GCS_GGRS_1987 = 4121
GCS_GRACIOSA_1948 = 37241
GCS_GREEK = 4120
GCS_GREEK_ATHENS = 4815
GCS_GRS_1967 = 4036
GCS_GRS_1980 = 4019
GCS_GUAM_1963 = 37220
GCS_GUNUNG_SEGARA = 37255
GCS_GUX_1 = 37221
GCS_GUYANE_FRANCAISE = 4235
GCS_HELMERT_1906 = 4020
GCS_HERAT_NORTH = 4255
GCS_HITO_XVIII_1963 = 4254
GCS_HJORSEY_1955 = 37204
GCS_HONG_KONG_1963 = 37205
GCS_HOUGH_1960 = 37005
GCS_HUNGARIAN_1972 = 4237
GCS_HU_TZU_SHAN = 4236
GCS_INDIAN_1954 = 4239
GCS_INDIAN_1960 = 37256
GCS_INDIAN_1975 = 4240
GCS_INDONESIAN = 4021
GCS_INDONESIAN_1974 = 4238
GCS_INTERNATIONAL_1924 = 4022
GCS_INTERNATIONAL_1967 = 4023
GCS_ISTS_061_1968 = 37242
GCS_ISTS_073_1969 = 37233
GCS_ITRF_1993 = 4915
GCS_JAMAICA_1875 = 4241
GCS_JAMAICA_1969 = 4242
GCS_JAPAN_2000 = 37301
GCS_JOHNSTON_ISLAND_1961 = 37222
GCS_KALIANPUR = 4243
GCS_KANDAWALA = 4244
GCS_KERGUELEN_ISLAND_1949 = 37234
GCS_KERTAU = 4245
GCS_KKJ = 4123
GCS_KOC_ = 4246
GCS_KRASOVSKY_1940 = 4024
GCS_KUDAMS = 4319
GCS_KUSAIE_1951 = 37259
GCS_LAKE = 4249
GCS_LA_CANOA = 4247
GCS_LC5_1961 = 37243
GCS_LEIGON = 4250
GCS_LIBERIA_1964 = 4251
GCS_LISBON = 4207
GCS_LISBON_1890 = 4904
GCS_LISBON_LISBON = 4803
GCS_LOMA_QUINTANA = 4288
GCS_LOME = 4252
GCS_LUZON_1911 = 4253
GCS_MADEIRA_1936 = 4185
GCS_MAHE_1971 = 4256
GCS_MAKASSAR = 4257
GCS_MAKASSAR_JAKARTA = 4804
GCS_MALONGO_1987 = 4259
GCS_MANOCA = 4260
GCS_MASSAWA = 4262
GCS_MERCHICH = 4261
GCS_MGI_ = 4312
GCS_MGI_FERRO = 4805
GCS_MHAST = 4264
GCS_MIDWAY_1961 = 37224
GCS_MINNA = 4263
GCS_MONTE_MARIO = 4265
GCS_MONTE_MARIO_ROME = 4806
GCS_MONTSERRAT_ISLAND_1958 = 37244
GCS_MPORALOKO = 4266
GCS_NAD_1927 = 4267
GCS_NAD_1983 = 4269
GCS_NAD_MICH = 4268
GCS_NAHRWAN_1967 = 4270
GCS_NAPARIMA_1972 = 4271
GCS_NDG_PARIS = 4902
GCS_NGN = 4318
GCS_NGO_1948_ = 4273
GCS_NORD_SAHARA_1959 = 4307
GCS_NSWC_9Z_2_ = 4276
GCS_NTF_ = 4275
GCS_NTF_PARIS = 4807
GCS_NWL_9D = 4025
GCS_NZGD_1949 = 4272
GCS_OBSERV_METEOR_1939 = 37245
GCS_OLD_HAWAIIAN = 37225
GCS_OMAN = 37206
GCS_OSGB_1936 = 4277
GCS_OSGB_1970_SN = 4278
GCS_OSU_86F = 4032
GCS_OSU_91A = 4033
GCS_OS_SN_1980 = 4279
GCS_PADANG_1884 = 4280
GCS_PADANG_1884_JAKARTA = 4808
GCS_PALESTINE_1923 = 4281
GCS_PICO_DE_LAS_NIEVES = 37246
GCS_PITCAIRN_1967 = 37226
GCS_PLESSIS_1817 = 4027
GCS_POINT58 = 37211
GCS_POINTE_NOIRE = 4282
GCS_PORTO_SANTO_1936 = 37247
GCS_PSAD_1956 = 4248
GCS_PUERTO_RICO = 37248
GCS_PULKOVO_1942 = 4284
GCS_PULKOVO_1995 = 4200
GCS_QATAR = 4285
GCS_QATAR_1948 = 4286
GCS_QORNOQ = 4287
GCS_REUNION = 37235
GCS_RT38_ = 4308
GCS_RT38_STOCKHOLM = 4814
GCS_S42_HUNGARY = 37257
GCS_SAD_1969 = 4291
GCS_SAMOA_1962 = 37252
GCS_SANTO_DOS_1965 = 37227
GCS_SAO_BRAZ = 37249
GCS_SAPPER_HILL_1943 = 4292
GCS_SCHWARZECK = 4293
GCS_SEGORA = 4294
GCS_SELVAGEM_GRANDE_1938 = 37250
GCS_SERINDUNG = 4295
GCS_SPHERE = 4035
GCS_SPHERE_AI = 37008
GCS_STRUVE_1860 = 4028
GCS_SUDAN = 4296
GCS_S_ASIA_SINGAPORE = 37207
GCS_S_JTSK = 37258
GCS_TANANARIVE_1925 = 4297
GCS_TANANARIVE_1925_PARIS = 4810
GCS_TERN_ISLAND_1961 = 37213
GCS_TIMBALAI_1948 = 4298
GCS_TM65 = 4299
GCS_TM75 = 4300
GCS_TOKYO = 4301
GCS_TRINIDAD_1903 = 4302
GCS_TRISTAN_1968 = 37251
GCS_TRUCIAL_COAST_1948 = 4303
GCS_USER_DEFINE = -1
GCS_VITI_LEVU_1916 = 37228
GCS_VOIROL_1875 = 4304
GCS_VOIROL_1875_PARIS = 4811
GCS_VOIROL_UNIFIE_1960 = 4305
GCS_VOIROL_UNIFIE_1960_PARIS = 4812
GCS_WAKE_ENIWETOK_1960 = 37229
GCS_WAKE_ISLAND_1952 = 37230
GCS_WALBECK = 37007
GCS_WAR_OFFICE = 4029
GCS_WGS_1966 = 37001
GCS_WGS_1972 = 4322
GCS_WGS_1972_BE = 4324
GCS_WGS_1984 = 4326
GCS_XIAN_1980 = 37312
GCS_YACARE = 4309
GCS_YOFF = 4310
GCS_ZANDERIJ = 4311
class iobjectspy.enums.ProjectionType

Bases: iobjectspy._jsuperpy.enums.JEnum

An enumeration.

PRJ_ALBERS = 43007
PRJ_BAIDU_MERCATOR = 43048
PRJ_BEHRMANN = 43017
PRJ_BONNE = 43024
PRJ_BONNE_SOUTH_ORIENTATED = 43046
PRJ_CASSINI = 43028
PRJ_CHINA_AZIMUTHAL = 43037
PRJ_CONFORMAL_AZIMUTHAL = 43034
PRJ_ECKERT_I = 43015
PRJ_ECKERT_II = 43014
PRJ_ECKERT_III = 43013
PRJ_ECKERT_IV = 43012
PRJ_ECKERT_V = 43011
PRJ_ECKERT_VI = 43010
PRJ_EQUALAREA_CYLINDRICAL = 43041
PRJ_EQUIDISTANT_AZIMUTHAL = 43032
PRJ_EQUIDISTANT_CONIC = 43027
PRJ_EQUIDISTANT_CYLINDRICAL = 43002
PRJ_GALL_STEREOGRAPHIC = 43016
PRJ_GAUSS_KRUGER = 43005
PRJ_GNOMONIC = 43036
PRJ_HOTINE = 43025
PRJ_HOTINE_AZIMUTH_NATORIGIN = 43042
PRJ_HOTINE_OBLIQUE_MERCATOR = 43044
PRJ_LAMBERT_AZIMUTHAL_EQUAL_AREA = 43033
PRJ_LAMBERT_CONFORMAL_CONIC = 43020
PRJ_LOXIMUTHAL = 43023
PRJ_MERCATOR = 43004
PRJ_MILLER_CYLINDRICAL = 43003
PRJ_MOLLWEIDE = 43009
PRJ_NONPROJECTION = 43000
PRJ_OBLIQUE_MERCATOR = 43043
PRJ_OBLIQUE_STEREOGRAPHIC = 43047
PRJ_ORTHO_GRAPHIC = 43035
PRJ_PLATE_CARREE = 43001
PRJ_POLYCONIC = 43021
PRJ_QUARTIC_AUTHALIC = 43022
PRJ_RECTIFIED_SKEWED_ORTHOMORPHIC = 43049
PRJ_ROBINSON = 43030
PRJ_SANSON = 43040
PRJ_SINUSOIDAL = 43008
PRJ_SPHERE_MERCATOR = 43045
PRJ_STEREOGRAPHIC = 43026
PRJ_TRANSVERSE_MERCATOR = 43006
PRJ_TWO_POINT_EQUIDISTANT = 43031
PRJ_VAN_DER_GRINTEN_I = 43029
PRJ_WINKEL_I = 43018
PRJ_WINKEL_II = 43019
class iobjectspy.enums.GeoPrimeMeridianType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants of the prime meridian type.

Variables:
PRIMEMERIDIAN_ATHENS = 8912
PRIMEMERIDIAN_BERN = 8907
PRIMEMERIDIAN_BOGOTA = 8904
PRIMEMERIDIAN_BRUSSELS = 8910
PRIMEMERIDIAN_FERRO = 8909
PRIMEMERIDIAN_GREENWICH = 8901
PRIMEMERIDIAN_JAKARTA = 8908
PRIMEMERIDIAN_LISBON = 8902
PRIMEMERIDIAN_MADRID = 8905
PRIMEMERIDIAN_PARIS = 8903
PRIMEMERIDIAN_ROME = 8906
PRIMEMERIDIAN_STOCKHOLM = 8911
PRIMEMERIDIAN_USER_DEFINED = -1
class iobjectspy.enums.GeoSpheroidType

Bases: iobjectspy._jsuperpy.enums.JEnum

An enumeration.

SPHEROID_AIRY_1830 = 7001
SPHEROID_AIRY_MOD = 7002
SPHEROID_ATS_1977 = 7041
SPHEROID_AUSTRALIAN = 7003
SPHEROID_BESSEL_1841 = 7004
SPHEROID_BESSEL_MOD = 7005
SPHEROID_BESSEL_NAMIBIA = 7006
SPHEROID_CHINA_2000 = 7044
SPHEROID_CLARKE_1858 = 7007
SPHEROID_CLARKE_1866 = 7008
SPHEROID_CLARKE_1866_MICH = 7009
SPHEROID_CLARKE_1880 = 7034
SPHEROID_CLARKE_1880_ARC = 7013
SPHEROID_CLARKE_1880_BENOIT = 7010
SPHEROID_CLARKE_1880_IGN = 7011
SPHEROID_CLARKE_1880_RGS = 7012
SPHEROID_CLARKE_1880_SGA = 7014
SPHEROID_EVEREST_1830 = 7015
SPHEROID_EVEREST_DEF_1967 = 7016
SPHEROID_EVEREST_DEF_1975 = 7017
SPHEROID_EVEREST_MOD = 7018
SPHEROID_EVEREST_MOD_1969 = 40006
SPHEROID_FISCHER_1960 = 40002
SPHEROID_FISCHER_1968 = 40003
SPHEROID_FISCHER_MOD = 40004
SPHEROID_GEM_10C = 7031
SPHEROID_GRS_1967 = 7036
SPHEROID_GRS_1980 = 7019
SPHEROID_HELMERT_1906 = 7020
SPHEROID_HOUGH_1960 = 40005
SPHEROID_INDONESIAN = 7021
SPHEROID_INTERNATIONAL_1924 = 7022
SPHEROID_INTERNATIONAL_1967 = 7023
SPHEROID_INTERNATIONAL_1975 = 40023
SPHEROID_KRASOVSKY_1940 = 7024
SPHEROID_NWL_10D = 7026
SPHEROID_NWL_9D = 7025
SPHEROID_OSU_86F = 7032
SPHEROID_OSU_91A = 7033
SPHEROID_PLESSIS_1817 = 7027
SPHEROID_SPHERE = 7035
SPHEROID_SPHERE_AI = 40008
SPHEROID_STRUVE_1860 = 7028
SPHEROID_USER_DEFINED = -1
SPHEROID_WALBECK = 40007
SPHEROID_WAR_OFFICE = 7029
SPHEROID_WGS_1966 = 40001
SPHEROID_WGS_1972 = 7043
SPHEROID_WGS_1984 = 7030
class iobjectspy.enums.GeoDatumType

Bases: iobjectspy._jsuperpy.enums.JEnum

An enumeration.

DATUM_ADINDAN = 6201
DATUM_AFGOOYE = 6205
DATUM_AGADEZ = 6206
DATUM_AGD_1966 = 6202
DATUM_AGD_1984 = 6203
DATUM_AIN_EL_ABD_1970 = 6204
DATUM_AIRY_1830 = 6001
DATUM_AIRY_MOD = 6002
DATUM_ALASKAN_ISLANDS = 39260
DATUM_AMERSFOORT = 6289
DATUM_ANNA_1_1965 = 39231
DATUM_ANTIGUA_ISLAND_1943 = 39236
DATUM_ARATU = 6208
DATUM_ARC_1950 = 6209
DATUM_ARC_1960 = 6210
DATUM_ASCENSION_ISLAND_1958 = 39237
DATUM_ASTRO_1952 = 39214
DATUM_ATF = 6901
DATUM_ATS_1977 = 6122
DATUM_AUSTRALIAN = 6003
DATUM_AYABELLE = 39208
DATUM_BARBADOS = 6212
DATUM_BATAVIA = 6211
DATUM_BEACON_E_1945 = 39212
DATUM_BEDUARAM = 6213
DATUM_BEIJING_1954 = 6214
DATUM_BELGE_1950 = 6215
DATUM_BELGE_1972 = 6313
DATUM_BELLEVUE = 39215
DATUM_BERMUDA_1957 = 6216
DATUM_BERN_1898 = 6217
DATUM_BERN_1938 = 6306
DATUM_BESSEL_1841 = 6004
DATUM_BESSEL_MOD = 6005
DATUM_BESSEL_NAMIBIA = 6006
DATUM_BISSAU = 39209
DATUM_BOGOTA = 6218
DATUM_BUKIT_RIMPAH = 6219
DATUM_CACANAVERAL = 39239
DATUM_CAMACUPA = 6220
DATUM_CAMPO_INCHAUSPE = 6221
DATUM_CAMP_AREA = 39253
DATUM_CANTON_1966 = 39216
DATUM_CAPE = 6222
DATUM_CARTHAGE = 6223
DATUM_CHATHAM_ISLAND_1971 = 39217
DATUM_CHINA_2000 = 39313
DATUM_CHUA = 6224
DATUM_CLARKE_1858 = 6007
DATUM_CLARKE_1866 = 6008
DATUM_CLARKE_1866_MICH = 6009
DATUM_CLARKE_1880 = 6034
DATUM_CLARKE_1880_ARC = 6013
DATUM_CLARKE_1880_BENOIT = 6010
DATUM_CLARKE_1880_IGN = 6011
DATUM_CLARKE_1880_RGS = 6012
DATUM_CLARKE_1880_SGA = 6014
DATUM_CONAKRY_1905 = 6315
DATUM_CORREGO_ALEGRE = 6225
DATUM_COTE_D_IVOIRE = 6226
DATUM_DABOLA = 39210
DATUM_DATUM_73 = 6274
DATUM_DEALUL_PISCULUI_1933 = 6316
DATUM_DEALUL_PISCULUI_1970 = 6317
DATUM_DECEPTION_ISLAND = 39254
DATUM_DEIR_EZ_ZOR = 6227
DATUM_DHDN = 6314
DATUM_DOS_1968 = 39218
DATUM_DOS_71_4 = 39238
DATUM_DOUALA = 6228
DATUM_EASTER_ISLAND_1967 = 39219
DATUM_ED_1950 = 6230
DATUM_ED_1987 = 6231
DATUM_EGYPT_1907 = 6229
DATUM_ETRS_1989 = 6258
DATUM_EUROPEAN_1979 = 39201
DATUM_EVEREST_1830 = 6015
DATUM_EVEREST_BANGLADESH = 39202
DATUM_EVEREST_DEF_1967 = 6016
DATUM_EVEREST_DEF_1975 = 6017
DATUM_EVEREST_INDIA_NEPAL = 39203
DATUM_EVEREST_MOD = 6018
DATUM_EVEREST_MOD_1969 = 39006
DATUM_FAHUD = 6232
DATUM_FISCHER_1960 = 39002
DATUM_FISCHER_1968 = 39003
DATUM_FISCHER_MOD = 39004
DATUM_FORT_THOMAS_1955 = 39240
DATUM_GANDAJIKA_1970 = 6233
DATUM_GAN_1970 = 39232
DATUM_GAROUA = 6234
DATUM_GDA_1994 = 6283
DATUM_GEM_10C = 6031
DATUM_GGRS_1987 = 6121
DATUM_GRACIOSA_1948 = 39241
DATUM_GREEK = 6120
DATUM_GRS_1967 = 6036
DATUM_GRS_1980 = 6019
DATUM_GUAM_1963 = 39220
DATUM_GUNUNG_SEGARA = 39255
DATUM_GUX_1 = 39221
DATUM_GUYANE_FRANCAISE = 6235
DATUM_HELMERT_1906 = 6020
DATUM_HERAT_NORTH = 6255
DATUM_HITO_XVIII_1963 = 6254
DATUM_HJORSEY_1955 = 39204
DATUM_HONG_KONG_1963 = 39205
DATUM_HOUGH_1960 = 39005
DATUM_HUNGARIAN_1972 = 6237
DATUM_HU_TZU_SHAN = 6236
DATUM_INDIAN_1954 = 6239
DATUM_INDIAN_1960 = 39256
DATUM_INDIAN_1975 = 6240
DATUM_INDONESIAN = 6021
DATUM_INDONESIAN_1974 = 6238
DATUM_INTERNATIONAL_1924 = 6022
DATUM_INTERNATIONAL_1967 = 6023
DATUM_ISTS_061_1968 = 39242
DATUM_ISTS_073_1969 = 39233
DATUM_JAMAICA_1875 = 6241
DATUM_JAMAICA_1969 = 6242
DATUM_JAPAN_2000 = 39301
DATUM_JOHNSTON_ISLAND_1961 = 39222
DATUM_KALIANPUR = 6243
DATUM_KANDAWALA = 6244
DATUM_KERGUELEN_ISLAND_1949 = 39234
DATUM_KERTAU = 6245
DATUM_KKJ = 6123
DATUM_KOC = 6246
DATUM_KRASOVSKY_1940 = 6024
DATUM_KUDAMS = 6319
DATUM_KUSAIE_1951 = 39259
DATUM_LAKE = 6249
DATUM_LA_CANOA = 6247
DATUM_LC5_1961 = 39243
DATUM_LEIGON = 6250
DATUM_LIBERIA_1964 = 6251
DATUM_LISBON = 6207
DATUM_LOMA_QUINTANA = 6288
DATUM_LOME = 6252
DATUM_LUZON_1911 = 6253
DATUM_MAHE_1971 = 6256
DATUM_MAKASSAR = 6257
DATUM_MALONGO_1987 = 6259
DATUM_MANOCA = 6260
DATUM_MASSAWA = 6262
DATUM_MERCHICH = 6261
DATUM_MGI = 6312
DATUM_MHAST = 6264
DATUM_MIDWAY_1961 = 39224
DATUM_MINNA = 6263
DATUM_MONTE_MARIO = 6265
DATUM_MONTSERRAT_ISLAND_1958 = 39244
DATUM_MPORALOKO = 6266
DATUM_NAD_1927 = 6267
DATUM_NAD_1983 = 6269
DATUM_NAD_MICH = 6268
DATUM_NAHRWAN_1967 = 6270
DATUM_NAPARIMA_1972 = 6271
DATUM_NDG = 6902
DATUM_NGN = 6318
DATUM_NGO_1948 = 6273
DATUM_NORD_SAHARA_1959 = 6307
DATUM_NSWC_9Z_2 = 6276
DATUM_NTF = 6275
DATUM_NWL_9D = 6025
DATUM_NZGD_1949 = 6272
DATUM_OBSERV_METEOR_1939 = 39245
DATUM_OLD_HAWAIIAN = 39225
DATUM_OMAN = 39206
DATUM_OSGB_1936 = 6277
DATUM_OSGB_1970_SN = 6278
DATUM_OSU_86F = 6032
DATUM_OSU_91A = 6033
DATUM_OS_SN_1980 = 6279
DATUM_PADANG_1884 = 6280
DATUM_PALESTINE_1923 = 6281
DATUM_PICO_DE_LAS_NIEVES = 39246
DATUM_PITCAIRN_1967 = 39226
DATUM_PLESSIS_1817 = 6027
DATUM_POINT58 = 39211
DATUM_POINTE_NOIRE = 6282
DATUM_PORTO_SANTO_1936 = 39247
DATUM_PSAD_1956 = 6248
DATUM_PUERTO_RICO = 39248
DATUM_PULKOVO_1942 = 6284
DATUM_PULKOVO_1995 = 6200
DATUM_QATAR = 6285
DATUM_QATAR_1948 = 6286
DATUM_QORNOQ = 6287
DATUM_REUNION = 39235
DATUM_S42_HUNGARY = 39257
DATUM_SAD_1969 = 6291
DATUM_SAMOA_1962 = 39252
DATUM_SANTO_DOS_1965 = 39227
DATUM_SAO_BRAZ = 39249
DATUM_SAPPER_HILL_1943 = 6292
DATUM_SCHWARZECK = 6293
DATUM_SEGORA = 6294
DATUM_SELVAGEM_GRANDE_1938 = 39250
DATUM_SERINDUNG = 6295
DATUM_SPHERE = 6035
DATUM_SPHERE_AI = 39008
DATUM_STOCKHOLM_1938 = 6308
DATUM_STRUVE_1860 = 6028
DATUM_SUDAN = 6296
DATUM_S_ASIA_SINGAPORE = 39207
DATUM_S_JTSK = 39258
DATUM_TANANARIVE_1925 = 6297
DATUM_TERN_ISLAND_1961 = 39213
DATUM_TIMBALAI_1948 = 6298
DATUM_TM65 = 6299
DATUM_TM75 = 6300
DATUM_TOKYO = 6301
DATUM_TRINIDAD_1903 = 6302
DATUM_TRISTAN_1968 = 39251
DATUM_TRUCIAL_COAST_1948 = 6303
DATUM_USER_DEFINED = -1
DATUM_VITI_LEVU_1916 = 39228
DATUM_VOIROL_1875 = 6304
DATUM_VOIROL_UNIFIE_1960 = 6305
DATUM_WAKE_ENIWETOK_1960 = 39229
DATUM_WAKE_ISLAND_1952 = 39230
DATUM_WALBECK = 39007
DATUM_WAR_OFFICE = 6029
DATUM_WGS_1966 = 39001
DATUM_WGS_1972 = 6322
DATUM_WGS_1972_BE = 6324
DATUM_WGS_1984 = 6326
DATUM_XIAN_1980 = 39312
DATUM_YACARE = 6309
DATUM_YOFF = 6310
DATUM_ZANDERIJ = 6311
class iobjectspy.enums.CoordSysTransMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the projection conversion method type.

In the projection conversion, if the geographic coordinate system of the source projection and the target projection are different, a reference system conversion is required.

There are two kinds of reference system conversion: grid-based and formula-based. The conversion methods provided by this class are all formula-based. Depending on the number of conversion parameters, they are divided into three-parameter and seven-parameter methods; the seven-parameter method is currently the most widely used. For parameter information, see: py:class:CoordSysTransParameter. If the geographic coordinate systems of the source and target projections are the same, no reference system conversion is needed, i.e. the CoordSysTransParameter information does not need to be set. In this version, GeocentricTranslation, Molodensky, and MolodenskyAbridged are geocentric three-parameter methods; PositionVector, CoordinateFrame, and BursaWolf are seven-parameter methods.

Variables:
MTH_BURSA_WOLF = 42607
MTH_COORDINATE_FRAME = 9607
MTH_GEOCENTRIC_TRANSLATION = 9603
MTH_MOLODENSKY = 9604
MTH_MOLODENSKY_ABRIDGED = 9605
MTH_POSITION_VECTOR = 9606
MolodenskyBadekas = 49607
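The three-parameter and seven-parameter methods above reduce to simple geocentric formulas. The sketch below is a minimal plain-Python illustration of a geocentric translation and of the position-vector seven-parameter (Helmert) transform under the small-angle approximation; it is not the iobjectspy implementation, and the function and parameter names are illustrative.

```python
def geocentric_translation(xyz, dx, dy, dz):
    """Three-parameter datum shift: translate geocentric coordinates."""
    x, y, z = xyz
    return (x + dx, y + dy, z + dz)

def position_vector_7param(xyz, dx, dy, dz, rx, ry, rz, s_ppm):
    """Seven-parameter (position vector) Helmert transform.

    Rotations rx, ry, rz are in radians (small-angle approximation),
    and the scale change s_ppm is given in parts per million.
    """
    x, y, z = xyz
    m = 1.0 + s_ppm * 1e-6
    return (dx + m * (x - rz * y + ry * z),
            dy + m * (rz * x + y - rx * z),
            dz + m * (-ry * x + rx * y + z))
```

With zero rotations and zero scale change, the seven-parameter transform degenerates to the three-parameter translation; the coordinate-frame convention differs from the position-vector convention only in the sign of the rotations.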
class iobjectspy.enums.StatisticsFieldType

Bases: iobjectspy._jsuperpy.enums.JEnum

Point-thinning statistics type: the statistic is computed over the values of the original points represented by each thinned point.

Variables:
AVERAGE = 1
MAXVALUE = 3
MINVALUE = 4
SAMPLESTDDEV = 8
SAMPLEVARIANCE = 6
STDDEVIATION = 7
SUM = 2
VARIANCE = 5
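The difference between VARIANCE/STDDEVIATION (population statistics, dividing by n) and SAMPLEVARIANCE/SAMPLESTDDEV (sample statistics, dividing by n - 1) can be reproduced with the standard library; the sketch below is plain Python and independent of iobjectspy.

```python
import statistics

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

print(statistics.mean(values))       # AVERAGE
print(sum(values))                   # SUM
print(max(values), min(values))      # MAXVALUE, MINVALUE
print(statistics.pvariance(values))  # VARIANCE (population, divides by n)
print(statistics.variance(values))   # SAMPLEVARIANCE (divides by n - 1)
print(statistics.pstdev(values))     # STDDEVIATION
print(statistics.stdev(values))      # SAMPLESTDDEV
```

For this sample the population variance is 4.0 and the sample variance is 32/7, showing why the two enum values exist separately.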
class iobjectspy.enums.VectorResampleType

Bases: iobjectspy._jsuperpy.enums.JEnum

Vector dataset resampling method type constant

Variables:
RTBEND = 1
RTGENERAL = 2
class iobjectspy.enums.ArcAndVertexFilterMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the arc intersection filtering mode.

The arc intersection is used to break the line object at the intersection, and is usually the first step when establishing a topological relationship for the line data.

Variables:
  • ArcAndVertexFilterMode.NONE

    No filtering: line objects are broken at all intersections. In this mode, the filter line expression and the filter point dataset are ignored. As shown in the figure below, line objects A, B, C, and D are broken at their intersections: A and B are broken where they intersect, and C is broken at its intersections with A and D.

    ../_images/FilterMode_None.png
  • ArcAndVertexFilterMode.ARC

    Filter only by the filter line expression: line objects matched by the filter line expression are not broken. The filter point record set is ignored in this mode. As shown in the figure below, line object C satisfies the filter line expression, so C is not broken at any position.

    ../_images/FilterMode_Arc.png
  • ArcAndVertexFilterMode.VERTEX

    Filter only by the filter point record set: line objects are not broken at filter point positions (or where the distance to a filter point is within the tolerance). The filter line expression is ignored in this mode. As shown in the figure below, if a filter point lies at the intersection of line objects A and C, then C is not broken at that point but is still broken at its other intersections.

    ../_images/FilterMode_Vertex.png
  • ArcAndVertexFilterMode.ARC_AND_VERTEX

    The filter line expression and the filter point record set jointly determine which positions are not broken; the two conditions are combined with AND, i.e. only line objects matched by the filter line expression remain unbroken at filter point positions (or within the tolerance of a filter point). As shown in the figure below, line object C satisfies the filter line expression, and there are filter points at the intersection of A and B and at the intersection of C and D. Under this rule, filter-line objects are not broken at filter point positions, so C is not broken at its intersection with D.

    ../_images/FilterMode_ArcAndVertex.png
  • ArcAndVertexFilterMode.ARC_OR_VERTEX

    Line objects matched by the filter line expression, and line objects at filter point positions (or where the distance to a filter point is within the tolerance), are not broken; the two conditions are combined with OR (union). As shown in the figure below, line object C satisfies the filter line expression, and there are filter points at the intersection of A and B and at the intersection of C and D. Under this rule, the result is as shown in the figure on the right: C is not broken anywhere, and A and B are not broken at their intersection.

    ../_images/FilterMode_ArcOrVertex.png
ARC = 2
ARC_AND_VERTEX = 4
ARC_OR_VERTEX = 5
NONE = 1
VERTEX = 3
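The core geometric operation behind arc intersection, finding the point where two line segments cross so that line objects can be broken there, can be sketched in plain Python as follows (illustrative only, not the iobjectspy implementation):

```python
def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4,
    or None if they are parallel, collinear, or do not cross."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:  # parallel or collinear segments never yield one point
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # crossing diagonals
```

In an actual arc-intersection operation, each computed intersection point becomes a break point unless the filter mode above excludes it.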
class iobjectspy.enums.RasterResampleMode

Bases: iobjectspy._jsuperpy.enums.JEnum

The type constant of the raster resampling calculation method

Variables:
  • RasterResampleMode.NEAREST – Nearest neighbor method. Assigns the value of the nearest input cell to each new cell. Its advantages are that it does not change the original cell values, is simple, and is fast; however, its maximum positional displacement is half the cell size. It is suitable for discrete data representing classifications or themes, such as land use or vegetation type.
  • RasterResampleMode.BILINEAR – Bilinear interpolation method. Computes each new cell value as the weighted average of the four neighboring input cells, with weights determined by the distance from each cell center to the interpolation point. The result is smoother than nearest neighbor, but the original cell values are changed. It is suitable for continuous data representing the distribution of a phenomenon or a terrain surface, such as DEM, temperature, rainfall distribution, or slope; such data are themselves continuous surfaces interpolated from sample points.
  • RasterResampleMode.CUBIC – Cubic convolution interpolation method. More complex than bilinear interpolation: like bilinear, it changes the cell values, but it weights the 16 neighboring cells, which gives the result a slight sharpening effect. This method also changes the original raster values, may exceed the value range of the input raster, and is computationally intensive. It is suitable for resampling aerial photographs and remote sensing images.
BILINEAR = 1
CUBIC = 2
NEAREST = 0
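To make the difference between NEAREST and BILINEAR concrete, the sketch below resamples a point inside a single cell from its four corner values. It is plain Python, independent of iobjectspy, and the function names are illustrative.

```python
def nearest(v00, v10, v01, v11, fx, fy):
    """Nearest neighbor: pick the corner value closest to (fx, fy)."""
    ix = 1 if fx >= 0.5 else 0
    iy = 1 if fy >= 0.5 else 0
    return [[v00, v10], [v01, v11]][iy][ix]

def bilinear(v00, v10, v01, v11, fx, fy):
    """Bilinear: distance-weighted average of the four surrounding values.

    fx, fy in [0, 1] are the fractional offsets of the sample point
    within the cell whose corners hold v00 (top-left) .. v11 (bottom-right).
    """
    top = v00 * (1 - fx) + v10 * fx
    bottom = v01 * (1 - fx) + v11 * fx
    return top * (1 - fy) + bottom * fy
```

Nearest neighbor always returns one of the original values, while bilinear produces new intermediate values, which is exactly why the former suits categorical rasters and the latter suits continuous surfaces.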
class iobjectspy.enums.ResamplingMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the resampling type constants used when creating pyramids.

Variables:
AVERAGE = 1
NEAR = 2
class iobjectspy.enums.AggregationType

Bases: iobjectspy._jsuperpy.enums.JEnum

Defines type constants for the method used to compute the result raster cell values during an aggregation operation.

Variables:
AVERRAGE = 3
MAX = 2
MEDIAN = 4
MIN = 1
SUM = 0
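An aggregation of this kind can be sketched in plain Python: shrink a 2D grid by a factor, combining each block of cells with the chosen statistic (sum, mean, max, min, median). This is an illustration only, not the iobjectspy implementation.

```python
import statistics

def aggregate(grid, factor, func):
    """Shrink a 2D grid by `factor`, combining each factor x factor
    block of cells with `func` (sum, max, min, statistics.mean, ...)."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, factor):
        row = []
        for c in range(0, cols, factor):
            block = [grid[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(func(block))
        out.append(row)
    return out

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(aggregate(grid, 2, sum))              # [[14, 22], [46, 54]]
print(aggregate(grid, 2, statistics.mean))
```

Each AggregationType value corresponds to a different `func` applied per block.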
class iobjectspy.enums.ReclassPixelFormat

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the storage type constants of the cell values of the raster dataset

Variables:
BIT32 = 320
BIT64 = 64
DOUBLE = 6400
SINGLE = 3200
class iobjectspy.enums.ReclassSegmentType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the reclassification interval type.

Variables:
CLOSEOPEN = 1
OPENCLOSE = 0
class iobjectspy.enums.ReclassType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the grid reclassification type

Variables:
  • ReclassType.UNIQUE – Single-value reclassification: individual cell values are each re-assigned to a new value.
  • ReclassType.RANGE – Range reclassification: all values falling within an interval are re-assigned to a single new value.
RANGE = 2
UNIQUE = 1
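The two reclassification types (and the interval semantics of ReclassSegmentType) can be sketched with plain numpy, independently of the iobjectspy API:

```python
import numpy as np

def reclass_unique(grid, mapping):
    """Single-value reclassification: re-assign listed cell values one by one."""
    out = grid.copy()
    for old, new in mapping.items():
        out[grid == old] = new
    return out

def reclass_range(grid, breaks, new_values, close_open=True):
    """Range reclassification: map every value in an interval to one value.

    close_open=True uses [lower, upper) intervals (cf. CLOSEOPEN);
    close_open=False uses (lower, upper] intervals (cf. OPENCLOSE)."""
    out = np.full_like(grid, -1)   # -1 marks values outside all intervals
    for (lo, hi), new in zip(zip(breaks[:-1], breaks[1:]), new_values):
        if close_open:
            mask = (grid >= lo) & (grid < hi)
        else:
            mask = (grid > lo) & (grid <= hi)
        out[mask] = new
    return out

grid = np.array([1, 2, 5, 9])
print(reclass_unique(grid, {2: 20}))            # [ 1 20  5  9]
print(reclass_range(grid, [0, 5, 10], [0, 1]))  # [0 0 1 1]
```

With close-open intervals the value 5 falls into [5, 10), not [0, 5), which is exactly the distinction the segment type controls.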
class iobjectspy.enums.NeighbourShapeType

Bases: iobjectspy._jsuperpy.enums.JEnum

Variables:
  • NeighbourShapeType.RECTANGLE

    Rectangular neighborhood. The size of the rectangle is determined by the specified width and height; the cells within the rectangle participate in the neighborhood statistics. The default width and height of a rectangular neighborhood are both 0 (in geographic units or grid units).

    ../_images/Rectangle.png
  • NeighbourShapeType.CIRCLE

    Circular neighborhood. The size of the circular neighborhood is determined by the specified radius; every cell that is even partially contained within the circle participates in the neighborhood statistics. The default radius of a circular neighborhood is 0 (in geographic units or grid units).

    ../_images/Circle.png
  • NeighbourShapeType.ANNULUS

    Ring (annulus) neighborhood. The size of the ring neighborhood is determined by the specified outer and inner radii; the cells within the ring participate in the neighborhood processing. The default outer and inner radii are both 0 (in geographic units or grid units).

    ../_images/Annulus.png
  • NeighbourShapeType.WEDGE

    Fan-shaped (wedge) neighborhood. The size of the fan-shaped neighborhood is determined by the specified radius, start angle, and end angle; all cells within the wedge participate in the neighborhood processing. The default radius is 0 (in geographic units or grid units), and the default start and end angles are both 0 degrees.

    ../_images/Wedge.png
ANNULUS = 3
CIRCLE = 2
RECTANGLE = 1
WEDGE = 4
class iobjectspy.enums.SearchMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the sample point search method used in interpolation.

For the same interpolation method, different sample point selection strategies yield different interpolation results. SuperMap provides four search modes: no search (NONE), block search (QUADTREE), fixed-length search (KDTREE_FIXED_RADIUS), and variable-length search (KDTREE_FIXED_COUNT).

Variables:
  • SearchMode.NONE – Do not search, use all input points for interpolation analysis.
  • SearchMode.QUADTREE – Block search mode, that is, the dataset is divided into blocks according to the maximum number of points in each block, and the points in the block are used for interpolation. Note: Currently it only works for Kriging and RBF interpolation methods, but not for IDW interpolation methods.
  • SearchMode.KDTREE_FIXED_RADIUS – Fixed-length search mode: all sample points within a specified radius participate in the interpolation of a grid cell. Two parameters determine the participating samples: the search radius (search_radius) and the expected minimum number of samples (expected_count). When estimating the value at a location, every sample point falling within the circle centered at that location with the given search radius participates in the calculation; if an expected minimum count is set and fewer points are found within the radius, the search radius is automatically enlarged until the specified number of sample points is found.
  • SearchMode.KDTREE_FIXED_COUNT – Variable-length search mode: the specified number of sample points nearest to the grid cell participate in the interpolation. Two parameters determine the participating samples: the maximum expected number of points (expected_count) and the search radius (search_radius). When estimating the value at a location, the N nearest sample points are found, where N is the fixed point count, and those N points participate in the calculation; if a search radius is also set and fewer than N points fall within it, the points outside the radius are discarded and do not participate.
KDTREE_FIXED_COUNT = 3
KDTREE_FIXED_RADIUS = 2
NONE = 0
QUADTREE = 1
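The fixed-radius and fixed-count behaviors can be sketched with a brute-force stand-in for the KD-tree (plain numpy, not the iobjectspy API):

```python
import numpy as np

def fixed_radius_search(points, center, search_radius, expected_count=0):
    """KDTREE_FIXED_RADIUS: all samples within the radius take part; if fewer
    than expected_count are found, the radius is widened until enough are."""
    d = np.linalg.norm(points - center, axis=1)
    r = search_radius
    idx = np.where(d <= r)[0]
    while len(idx) < expected_count:
        r *= 2.0                        # grow the radius and retry
        idx = np.where(d <= r)[0]
    return idx

def fixed_count_search(points, center, expected_count, search_radius=None):
    """KDTREE_FIXED_COUNT: the expected_count nearest samples take part; if a
    radius is also set, samples beyond it are dropped."""
    d = np.linalg.norm(points - center, axis=1)
    idx = np.argsort(d)[:expected_count]
    if search_radius is not None:
        idx = idx[d[idx] <= search_radius]
    return idx

pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [10.0, 0.0]])
print(fixed_radius_search(pts, np.array([0.0, 0.0]), 1.5))  # indices [0 1]
print(fixed_count_search(pts, np.array([0.0, 0.0]), 3))     # indices [0 1 2]
```

The growth factor used when widening the radius is an illustrative assumption; the documentation only states that the radius is enlarged until enough points are found.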
class iobjectspy.enums.Exponent

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines order constants for the trend surface equation used on the sample point data during Universal Kriging interpolation. An inherent trend among the sample points in the dataset can be fitted with a function or polynomial.

Variables:
  • Exponent.EXP1 – The order is 1, indicating that the central trend surface of the sample data shows a first-order trend.
  • Exponent.EXP2 – The order is 2, indicating that the central trend surface of the sample data shows a second-order trend.
EXP1 = 1
EXP2 = 2
class iobjectspy.enums.VariogramMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the semivariogram model type constants for Kriging interpolation: exponential, spherical, and Gaussian. The model chosen affects the prediction of unknown points; in particular, the shape of the curve at the origin is significant. The steeper the curve at the origin, the greater the influence of nearby samples on the predicted value, and the less smooth the output surface. Each type has its own applications.

Variables:
  • VariogramMode.EXPONENTIAL

    Exponential function (Exponential Variogram Mode). Suitable for situations where spatial autocorrelation decreases exponentially with increasing distance; as the figure below shows, the autocorrelation disappears completely only at infinity. Exponential functions are commonly used.

    ../_images/VariogramMode_Exponential.png
  • VariogramMode.GAUSSIAN

    Gaussian function (Gaussian Variogram Mode).

    ../_images/variogrammode_Gaussian.png
  • VariogramMode.SPHERICAL

    Spherical function (Spherical Variogram Mode). Spatial autocorrelation decreases gradually (i.e., the semivariogram value increases gradually) until, beyond a certain distance, the autocorrelation becomes 0. Spherical functions are commonly used.

    ../_images/VariogramMode_Spherical.png
EXPONENTIAL = 0
GAUSSIAN = 1
SPHERICAL = 9
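For reference, the three model curves can be sketched with their standard textbook formulas (sill c, range a). These formulas are common geostatistics conventions, not taken from iobjectspy itself:

```python
import math

def gamma_exponential(h, sill, rng):
    """Exponential model: approaches the sill asymptotically."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

def gamma_gaussian(h, sill, rng):
    """Gaussian model: very flat (smooth) behaviour near the origin."""
    return sill * (1.0 - math.exp(-3.0 * (h / rng) ** 2))

def gamma_spherical(h, sill, rng):
    """Spherical model: reaches the sill exactly at the range distance."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

# Spherical autocorrelation vanishes exactly at the range distance;
# the exponential model only approaches that limit.
print(gamma_spherical(10.0, 1.0, 10.0))    # 1.0
print(gamma_exponential(10.0, 1.0, 10.0))  # ~0.95
```

Comparing the curves near h = 0 shows why the Gaussian model yields the smoothest surfaces: its slope at the origin is zero, so nearby samples dominate less sharply.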
class iobjectspy.enums.ComputeType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the calculation method type constant for the shortest path analysis of the distance raster

Variables:
  • ComputeType.CELL

    Cell path: a shortest path is generated for each grid cell of the target object. In the figure below, the red point is the source and the black polygon is the target; the shortest-path analysis yields the paths represented by the blue cells.

    ../_images/ComputeType_CELL.png
  • ComputeType.ZONE

    Zone path: only one shortest path is generated for the grid zone corresponding to each target object. In the figure below, the red point is the source and the black polygon is the target; the shortest-path analysis yields the path represented by the blue cells.

    ../_images/ComputeType_ZONE.png
  • ComputeType.ALL

    Single path: only one shortest path is generated for all cells of the target object, i.e. the shortest of all paths for the entire target dataset. In the figure below, the red point is the source and the black polygon is the target; the shortest-path analysis yields the path represented by the blue cells.

    ../_images/ComputeType_ALL.png
ALL = 2
CELL = 0
ZONE = 1
class iobjectspy.enums.SmoothMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the smooth method type constants. It is used to smooth the boundary line of the isoline or isosurface when generating isoline or isosurface from Grid or DEM data.

Isolines are generated by interpolating the original raster data and then connecting points of equal value, so the result is a jagged polyline. Isosurfaces are generated the same way and then closed between adjacent isolines, so the result is a polygonal region with sharp corners. Both need smoothing. SuperMap provides two smoothing methods: the B-spline method and the angle-grinding method.

Variables:
  • SmoothMethod.NONE – No smoothing.
  • SmoothMethod.BSPLINE

    B-spline method. The B-spline method replaces the original polyline with a B-spline curve that passes through some of its nodes. The B-spline curve is an extension of the Bezier curve. As shown in the figure below, the curve need not pass through every node of the original line; apart from the nodes it retains, the remaining points on the curve are fitted by the B-spline function.

    ../_images/BSpline.png

    After using the B-spline method on a non-closed line object, the relative positions of its two end points remain unchanged.

  • SmoothMethod.POLISH

    Angle-grinding method. The angle-grinding method is computationally simple and fast, but its effect is relatively limited. For each pair of adjacent segments on a polyline, new nodes are added at one third of each segment's length from the shared vertex, and the two new nodes are connected, cutting off the original corner (hence the name). The figure below illustrates one pass of angle grinding.

    ../_images/Polish.png

    Angle grinding can be applied multiple times to obtain a nearly smooth line. For a non-closed line object, the positions of the two end points remain unchanged.

BSPLINE = 0
NONE = -1
POLISH = 1
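One pass of the angle-grinding rule described above (cut each interior corner at one third of each adjacent segment, keeping the end points fixed) can be sketched as follows; this is a standalone illustration, not the iobjectspy implementation:

```python
def polish(points, iterations=1):
    """Angle grinding: replace every interior vertex with two new vertices
    placed at one third of each adjacent segment, measured from the old
    vertex. End points of a non-closed line stay fixed."""
    for _ in range(iterations):
        out = [points[0]]
        for i in range(1, len(points) - 1):
            (px, py), (vx, vy), (nx, ny) = points[i - 1], points[i], points[i + 1]
            out.append((vx + (px - vx) / 3.0, vy + (py - vy) / 3.0))  # toward previous
            out.append((vx + (nx - vx) / 3.0, vy + (ny - vy) / 3.0))  # toward next
        out.append(points[-1])
        points = out
    return points

line = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]   # one sharp corner
print(polish(line))
# end points unchanged; the corner is replaced by points at (2/3, 2/3) and (4/3, 2/3)
```

Repeating the pass (iterations > 1) cuts the new, shallower corners again, converging toward a smooth curve, which matches the note that the method can be applied multiple times.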
class iobjectspy.enums.ShadowMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the shaded image rendering mode.

Variables:
IllUMINATION = 3
IllUMINATION_AND_SHADOW = 1
SHADOW = 2
class iobjectspy.enums.SlopeType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the unit type constant of the slope.

Variables:
  • SlopeType.DEGREE – Express the slope in angles.
  • SlopeType.RADIAN – Express the slope in radians.
  • SlopeType.PERCENT – Express the slope as a percentage: the ratio of vertical rise to horizontal distance multiplied by 100, i.e. the tangent of the slope angle multiplied by 100.
DEGREE = 1
PERCENT = 3
RADIAN = 2
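The three slope units convert into one another directly; a minimal sketch (plain Python, not the iobjectspy API):

```python
import math

def slope_percent(slope_degrees):
    """PERCENT: tangent of the slope angle multiplied by 100."""
    return math.tan(math.radians(slope_degrees)) * 100.0

def slope_radian(slope_degrees):
    """RADIAN: the same angle expressed in radians."""
    return math.radians(slope_degrees)

print(slope_percent(45.0))  # ~100.0 -- a 45 degree slope rises 1 m per 1 m
print(slope_radian(45.0))   # ~0.7854
```

Note that percent slope is unbounded: it approaches infinity as the angle approaches 90 degrees, whereas degrees and radians stay finite.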
class iobjectspy.enums.NeighbourUnitType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines unit type constants for neighborhood analysis.

Variables:
  • NeighbourUnitType.CELL – grid coordinates, that is, the number of grids is used as the neighborhood unit.
  • NeighbourUnitType.MAP – Geographical coordinates, that is, use the length unit of the map as the neighborhood unit.
CELL = 1
MAP = 2
class iobjectspy.enums.InterpolationAlgorithmType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the algorithms supported by interpolation analysis.

For a region where only scattered discrete points are known, estimating the values at unknown locations to create or simulate a surface or field usually requires surface interpolation. SuperMap provides several interpolation methods for simulating or creating a surface, including Inverse Distance Weighting (IDW), Kriging, and Radial Basis Function (RBF) interpolation. The appropriate method usually depends on the distribution of the sample data and the type of surface to create.

Variables:
  • InterpolationAlgorithmType.IDW – Inverse Distance Weighted interpolation. Estimates each cell value as a distance-weighted average of the discrete points in the surrounding area and generates a raster dataset. It is a simple and effective interpolation method with relatively fast computation. The closer a point is to the estimated cell, the greater its influence on the estimate.
  • InterpolationAlgorithmType.SIMPLEKRIGING – Simple Kriging interpolation. One of the commonly used Kriging methods; it assumes that the expectation (mean) of the field values used for interpolation is a known constant.
  • InterpolationAlgorithmType.KRIGING – Ordinary Kriging interpolation. The most commonly used Kriging method; it assumes that the expectation (mean) of the field values used for interpolation is an unknown constant. It fits a mathematical function to the given sample points to estimate cell values and generates a raster dataset. Besides producing a surface, it also provides a measure of the accuracy or certainty of the prediction; its accuracy is therefore high, and it is often used in the social sciences and in geology.
  • InterpolationAlgorithmType.UNIVERSALKRIGING – Universal Kriging interpolation. Another commonly used Kriging method; it assumes that the expectation (mean) of the field values used for interpolation is unknown and that a dominant trend exists in the sample data which can be fitted by a function or polynomial. Universal Kriging is appropriate in that case.
  • InterpolationAlgorithmType.RBF

    Radial Basis Function (Radial Basis Function) interpolation method. This method assumes that the change is smooth, and it has two characteristics:

    - The surface must pass exactly through the data points;
    - The surface must have minimum curvature.

    This interpolation has advantages in creating visually demanding curves and contours.

  • InterpolationAlgorithmType.DENSITY – Point density (Density) interpolation method
DENSITY = 9
IDW = 0
KRIGING = 2
RBF = 6
SIMPLEKRIGING = 1
UNIVERSALKRIGING = 3
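The IDW principle (closer samples weigh more, with weight 1/d^p) can be sketched for a single target location with plain numpy; this is a standalone illustration, not the iobjectspy API, and the power parameter p = 2 is a conventional default assumed here:

```python
import numpy as np

def idw(sample_xy, sample_z, target_xy, power=2.0):
    """Inverse Distance Weighted estimate at one unknown location: a weighted
    average of the samples, where the weight of each sample is 1/d^power."""
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    if np.any(d == 0):                  # target coincides with a sample point
        return float(sample_z[d == 0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_z) / np.sum(w))

xy = np.array([[0.0, 0.0], [2.0, 0.0]])
z = np.array([10.0, 30.0])
print(idw(xy, z, np.array([1.0, 0.0])))  # equidistant from both samples -> 20.0
```

Because the weights are a normalized average, IDW estimates always stay within the range of the sample values, unlike cubic convolution resampling or some Kriging predictions.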
class iobjectspy.enums.GriddingLevel

Bases: iobjectspy._jsuperpy.enums.JEnum

For geometric relation queries on region objects (GeometriesRelation), setting a gridding level for the region objects speeds up judgments such as whether a region contains a point. The higher the gridding level of a single region object, the more memory is required; higher levels are generally suitable when there are few region objects but each one is relatively large.

Variables:
HIGHER = 4
LOWER = 1
MIDDLE = 2
NONE = 0
NORMAL = 3
class iobjectspy.enums.RegionToPointMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Region-to-point conversion mode constants

Variables:
  • RegionToPointMode.VERTEX – node mode, each node of the region object is converted into a point object
  • RegionToPointMode.INNER_POINT – Inner point mode, which converts the inner point of a region object into a point object
  • RegionToPointMode.SUB_INNER_POINT – Sub-object interior point mode, which converts the interior points of each sub-object of the region into a point object
  • RegionToPointMode.TOPO_INNER_POINT – Topological interior point mode, which converts the interior points of the multiple region objects obtained by topologically decomposing a complex region object into point objects.
INNER_POINT = 2
SUB_INNER_POINT = 3
TOPO_INNER_POINT = 4
VERTEX = 1
class iobjectspy.enums.LineToPointMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Line-to-point conversion mode constants

Variables:
  • LineToPointMode.VERTEX – node mode, each node of the line object is converted into a point object
  • LineToPointMode.INNER_POINT – Inner point mode, convert the inner point of the line object into a point object
  • LineToPointMode.SUB_INNER_POINT – Sub-object inner point mode, which converts the inner point of each sub-object of the line object into a point object. If the number of sub-objects of the line is 1, it will be the same as the result of INNER_POINT.
  • LineToPointMode.START_NODE – Starting point mode, which converts the first node of the line object, the starting point, to a point object
  • LineToPointMode.END_NODE – End point mode, which converts the last node of the line object, the end point, to a point object
  • LineToPointMode.START_END_NODE – start and end point mode, convert the start and end points of the line object into a point object respectively
  • LineToPointMode.SEGMENT_INNER_POINT – Line segment inner point mode, which converts the inner point of each line segment of the line object into a point object. The line segment refers to the line formed by two adjacent nodes.
  • LineToPointMode.SUB_START_NODE – Sub-object starting point mode, which converts the first point of each sub-object of the line object into a point object respectively
  • LineToPointMode.SUB_END_NODE – Sub-object end point mode, which converts the last point of each sub-object of the line object into a point object respectively
  • LineToPointMode.SUB_START_END_NODE – Sub-object start and end point mode, which converts the first point and the last point of each sub-object of the line object into a point object.
END_NODE = 5
INNER_POINT = 2
SEGMENT_INNER_POINT = 7
START_END_NODE = 6
START_NODE = 4
SUB_END_NODE = 9
SUB_INNER_POINT = 3
SUB_START_END_NODE = 10
SUB_START_NODE = 8
VERTEX = 1
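A few of the simpler modes can be sketched directly from their definitions, treating a line as a list of coordinate tuples (a standalone illustration, not the iobjectspy API):

```python
def line_to_points(line, mode):
    """Extract points from a polyline according to a conversion mode.
    Modes sketched here: VERTEX, START_NODE, END_NODE, START_END_NODE."""
    if mode == 'VERTEX':
        return list(line)                 # every node becomes a point
    if mode == 'START_NODE':
        return [line[0]]                  # first node only
    if mode == 'END_NODE':
        return [line[-1]]                 # last node only
    if mode == 'START_END_NODE':
        return [line[0], line[-1]]        # both ends
    raise ValueError('unsupported mode: %s' % mode)

line = [(0, 0), (1, 1), (2, 0)]
print(line_to_points(line, 'START_END_NODE'))  # [(0, 0), (2, 0)]
```

The interior-point modes (INNER_POINT, SUB_INNER_POINT, SEGMENT_INNER_POINT) are omitted here because the documentation does not specify exactly how the interior point of a line or segment is chosen.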
class iobjectspy.enums.EllipseSize

Bases: iobjectspy._jsuperpy.enums.JEnum

Output ellipse size constant

Variables:
  • EllipseSize.SINGLE

    One standard deviation. The semi-major and semi-minor axes of the output ellipse equal the corresponding standard deviations. When the geometric objects follow a spatial normal distribution, i.e. they are concentrated at the center and sparser toward the periphery, the generated ellipse contains about 68% of the objects.

    ../_images/EllipseSize_SINGLE.png
  • EllipseSize.TWICE

    Two standard deviations. The semi-major and semi-minor axes of the output ellipse are twice the corresponding standard deviations. When the geometric objects follow a spatial normal distribution, i.e. they are concentrated at the center and sparser toward the periphery, the generated ellipse contains approximately 95% of the objects.

    ../_images/EllipseSize_TWICE.png
  • EllipseSize.TRIPLE

    Three standard deviations. The semi-major and semi-minor axes of the output ellipse are three times the corresponding standard deviations. When the geometric objects follow a spatial normal distribution, i.e. they are concentrated at the center and sparser toward the periphery, the generated ellipse contains approximately 99% of the objects.

    ../_images/EllipseSize_TRIPLE.png
SINGLE = 1
TRIPLE = 3
TWICE = 2
class iobjectspy.enums.SpatialStatisticsType

Bases: iobjectspy._jsuperpy.enums.JEnum

The field statistical type constant after spatial measurement of the dataset

Variables:
FIRST = 5
LAST = 6
MAX = 1
MEAN = 4
MEDIAN = 7
MIN = 2
SUM = 3
class iobjectspy.enums.DistanceMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

Distance calculation method constant

Variables:
  • DistanceMethod.EUCLIDEAN

    Euclidean distance. Calculate the straight-line distance between two points.

    ../_images/DistanceMethod_EUCLIDEAN.png

  • DistanceMethod.MANHATTAN

    Manhattan distance. Calculates the sum of the absolute differences of the x and y coordinates of two points. This type is temporarily unavailable and provided for testing only; the results of using it are undefined.

    ../_images/DistanceMethod_MANHATTAN.png

EUCLIDEAN = 1
MANHATTAN = 2
class iobjectspy.enums.KernelFunction

Bases: iobjectspy._jsuperpy.enums.JEnum

Geographically weighted regression analysis kernel function type constant.

Variables:
  • KernelFunction.GAUSSIAN

    Gaussian kernel function.

    Gaussian kernel function calculation formula:

    W_ij=e^(-((d_ij/b)^2)/2).

    Where W_ij is the weight between point i and point j, d_ij is the distance between point i and point j, and b is the bandwidth range.

  • KernelFunction.BISQUARE

    Quadratic kernel function. Quadratic kernel function calculation formula:

    If d_ij ≤ b, W_ij=(1-(d_ij/b)^2)^2; otherwise, W_ij=0.

    Where W_ij is the weight between point i and point j, d_ij is the distance between point i and point j, and b is the bandwidth range.

  • KernelFunction.BOXCAR

    Box-shaped kernel function.

    Box kernel function calculation formula:

    If d_ij≤b, W_ij=1; otherwise, W_ij=0.

    Where W_ij is the weight between point i and point j, d_ij is the distance between point i and point j, and b is the bandwidth range.

  • KernelFunction.TRICUBE

    Cube kernel function.

    Cube kernel function calculation formula:

    If d_ij ≤ b, W_ij=(1-(d_ij/b)^3)^3; otherwise, W_ij=0.

    Where W_ij is the weight between point i and point j, d_ij is the distance between point i and point j, and b is the bandwidth range.

BISQUARE = 2
BOXCAR = 3
GAUSSIAN = 1
TRICUBE = 4
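The four kernel formulas above translate directly into Python (a standalone sketch, not the iobjectspy API), with d the distance between two points and b the bandwidth:

```python
import math

def w_gaussian(d, b):
    """GAUSSIAN: W = e^(-(d/b)^2 / 2); positive at any distance."""
    return math.exp(-((d / b) ** 2) / 2.0)

def w_bisquare(d, b):
    """BISQUARE: W = (1 - (d/b)^2)^2 inside the bandwidth, 0 outside."""
    return (1.0 - (d / b) ** 2) ** 2 if d <= b else 0.0

def w_boxcar(d, b):
    """BOXCAR: W = 1 inside the bandwidth, 0 outside."""
    return 1.0 if d <= b else 0.0

def w_tricube(d, b):
    """TRICUBE: W = (1 - (d/b)^3)^3 inside the bandwidth, 0 outside."""
    return (1.0 - (d / b) ** 3) ** 3 if d <= b else 0.0

print(w_boxcar(0.5, 1.0), w_boxcar(1.5, 1.0))  # 1.0 0.0 -- hard cutoff
print(w_bisquare(0.5, 1.0))                    # 0.5625 -- smooth taper to 0
```

Unlike the other three, the Gaussian kernel never reaches zero, so every sample point carries some weight regardless of distance.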
class iobjectspy.enums.KernelType

Bases: iobjectspy._jsuperpy.enums.JEnum

Geographically weighted regression analysis bandwidth type constant

Variables:
  • KernelType.FIXED – Fixed type bandwidth. For each regression analysis point, a fixed value is used as the bandwidth range.
  • KernelType.ADAPTIVE – Variable type bandwidth. For each regression analysis point, the distance between the regression point and the Kth nearest neighbor point is used as the bandwidth range. Among them, K is the number of neighbors.
ADAPTIVE = 2
FIXED = 1
class iobjectspy.enums.BandWidthType

Bases: iobjectspy._jsuperpy.enums.JEnum

Geographically weighted regression analysis bandwidth determination method is constant.

Variables:
  • BandWidthType.AICC – Use “Akaike Information Criteria (AICc)” to determine the bandwidth range.
  • BandWidthType.CV – Use “cross-validation” to determine the bandwidth range.
  • BandWidthType.BANDWIDTH – Determine the bandwidth range according to a given fixed distance or fixed adjacent number.
AICC = 1
BANDWIDTH = 3
CV = 2
class iobjectspy.enums.AggregationMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

The aggregation method constant used to create a dataset for analysis by event points

Variables:
  • AggregationMethod.NETWORKPOLYGONS – Computes an appropriate grid cell size and creates a fishnet polygon dataset; the point count within each polygon cell is then used as the analysis field for the hot spot analysis. The grid overlays the input event points and the number of points within each cell is counted. If no boundary polygon data for the event area is provided (see the bounding_polygons parameter of optimized_hot_spot_analyst()), the grid is built over the extent of the input event point dataset, cells containing no points are deleted, and only the remaining cells are analyzed; if boundary polygon data is provided, only the cells within the boundary dataset are retained and analyzed.
  • AggregationMethod.AGGREGATIONPOLYGONS – Requires a polygon dataset to aggregate the event points into counts (see the aggregate_polygons parameter of optimized_hot_spot_analyst()). The number of point events within each polygon is counted, and the hot spot analysis is then performed on the polygon dataset using the event count as the analysis field.
  • AggregationMethod.SNAPNEARBYPOINTS – Computes a snap distance for the input event point dataset and uses it to aggregate nearby event points, assigning each aggregation point a count of the event points merged into it; the hot spot analysis is then performed on the generated aggregation point dataset using that count as the analysis field.
AGGREGATIONPOLYGONS = 2
NETWORKPOLYGONS = 1
SNAPNEARBYPOINTS = 3
class iobjectspy.enums.StreamOrderType

Bases: iobjectspy._jsuperpy.enums.JEnum

Method type constants for numbering a basin's stream network (i.e., river classification).

Variables:
  • StreamOrderType.STRAHLER

    Strahler river classification, proposed by Strahler in 1957. Its rules are: a river originating directly from a source is a class 1 river; when two rivers of the same class meet, the class of the resulting river increases by 1; when two rivers of different classes meet, the class of the resulting river equals the higher of the two original classes.

    ../_images/Strahler.png
  • StreamOrderType.SHREVE

    Shreve river classification, proposed by Shreve in 1966. Its rules are: a river originating directly from a source is class 1, and the class of a river formed by the confluence of two rivers is the sum of their classes. For example, two class 1 rivers meet to form a class 2 river, and a class 2 river and a class 3 river meet to form a class 5 river.

    ../_images/Shreve.png
SHREVE = 2
STRAHLER = 1
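The two numbering rules can be sketched with a small recursion over a stream tree, where each stream is represented by the list of its tributaries (a standalone illustration, not the iobjectspy API):

```python
def strahler(tributaries):
    """Strahler order: a source stream is order 1; two equal-order tributaries
    raise the order by 1; unequal tributaries keep the higher order."""
    if not tributaries:
        return 1
    orders = sorted((strahler(t) for t in tributaries), reverse=True)
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

def shreve(tributaries):
    """Shreve order: a source stream is order 1; a confluence sums the orders
    of its tributaries."""
    if not tributaries:
        return 1
    return sum(shreve(t) for t in tributaries)

# Two order-1 sources meet, and the resulting stream then meets a third source.
river = [[[], []], []]
print(strahler(river))  # 2 -- order 2 meets order 1, keeps the higher order
print(shreve(river))    # 3 -- orders sum: 2 + 1
```

The same tree gives different numbers under the two schemes, which is exactly the distinction between STRAHLER and SHREVE.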
class iobjectspy.enums.TerrainInterpolateType

Bases: iobjectspy._jsuperpy.enums.JEnum

Terrain interpolation type constant

Variables:
  • TerrainInterpolateType.IDW – Inverse distance weight interpolation method. Reference: py:attr:.InterpolationAlgorithmType.IDW
  • TerrainInterpolateType.KRIGING – Kriging interpolation. Reference: py:attr:.InterpolationAlgorithmType.KRIGING
  • TerrainInterpolateType.TIN – Irregular TIN. First generate a TIN model from the given line dataset, and then generate a DEM model based on the given extreme point information and lake information.
IDW = 1
KRIGING = 2
TIN = 3
class iobjectspy.enums.TerrainStatisticType

Bases: iobjectspy._jsuperpy.enums.JEnum

Topographic statistics type constant

Variables:
MAJORITY = 5
MAX = 4
MEAN = 2
MEDIAN = 6
MIN = 3
UNIQUE = 1
class iobjectspy.enums.EdgeMatchMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This enumeration defines map-sheet edge matching mode constants.

Variables:
  • EdgeMatchMode.THEOTHEREDGE – Join to one side. The join point is the end point of the matched record in the edge-match target dataset; the end point of the matched record in the source dataset is moved to this join point.
  • EdgeMatchMode.THEMIDPOINT – Join at the midpoint. The join point is the midpoint between the end points of the matched records in the target and source datasets; the end points of the matched records in both datasets are moved to this join point.
  • EdgeMatchMode.THEINTERSECTION – Join at the intersection. The join point is the intersection of the line connecting the matched records in the target and source datasets with the sheet edge line; the end points of the matched records in both datasets are moved to this join point.
THEINTERSECTION = 3
THEMIDPOINT = 2
THEOTHEREDGE = 1
class iobjectspy.enums.FunctionType

Bases: iobjectspy._jsuperpy.enums.JEnum

Transformation function type constant

Variables:
  • FunctionType.NONE – Do not use transformation functions.
  • FunctionType.LOG – The transformation function is log, and the original value is required to be greater than 0.
  • FunctionType.ARCSIN – The transformation function is arcsin, and the original value is required to be in the range [-1,1].
ARCSIN = 3
LOG = 2
NONE = 1
class iobjectspy.enums.StatisticsCompareType

Bases: iobjectspy._jsuperpy.enums.JEnum

Comparison type constant

Variables:
EQUAL = 3
GREATER = 4
GREATER_OR_EQUAL = 5
LESS = 1
LESS_OR_EQUAL = 2
class iobjectspy.enums.GridStatisticsMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Raster statistics type constant

Variables:
MAJORITY = 8
MAX = 2
MEAN = 3
MEDIAN = 10
MIN = 1
MINORITY = 9
RANGE = 7
STDEV = 4
SUM = 5
VARIETY = 6
class iobjectspy.enums.ConceptualizationModel

Bases: iobjectspy._jsuperpy.enums.JEnum

Spatial relationship conceptualization model constant

Variables:
  • ConceptualizationModel.INVERSEDISTANCE – Inverse distance model. Every feature affects the target feature, but the influence decreases as the distance increases; the weight between features is the inverse of the distance (1/d).
  • ConceptualizationModel.INVERSEDISTANCESQUARED – Inverse distance squared model. Similar to the inverse distance model, but the influence decreases faster with distance; the weight between features is the inverse of the squared distance (1/d^2).
  • ConceptualizationModel.FIXEDDISTANCEBAND – Fixed distance model. The elements within the specified fixed distance range have equal weight (weight 1), and the elements outside the specified fixed distance range will not affect the calculation (weight 0).
  • ConceptualizationModel.ZONEOFINDIFFERENCE – Indifferent zone model. This model is a combination of “inverse distance model” and “fixed distance model”. The elements within the specified fixed distance range have the same weight (weight is 1); the elements outside the specified fixed distance range have less influence as the distance increases.
  • ConceptualizationModel.CONTIGUITYEDGESONLY – Polygon contiguity (edges only) model. Only polygons that share a boundary with, overlap, contain, or are contained by the target feature affect it (weight 1); otherwise, they are excluded from the calculation of the target feature (weight 0).
  • ConceptualizationModel.CONTIGUITYEDGESNODE – Polygon contiguity (edges and nodes) model. Only polygons that touch the target polygon affect it (weight 1); otherwise, they are excluded from the calculation of the target feature (weight 0).
  • ConceptualizationModel.KNEARESTNEIGHBORS – K nearest neighbor model. The K elements closest to the target element are included in the calculation of the target element (weight is 1), and the remaining elements will be excluded from the calculation of the target element (weight is 0).
  • ConceptualizationModel.SPATIALWEIGHTMATRIXFILE – Provides spatial weight matrix file.
CONTIGUITYEDGESNODE = 6
CONTIGUITYEDGESONLY = 5
FIXEDDISTANCEBAND = 3
INVERSEDISTANCE = 1
INVERSEDISTANCESQUARED = 2
KNEARESTNEIGHBORS = 7
SPATIALWEIGHTMATRIXFILE = 8
ZONEOFINDIFFERENCE = 4
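The distance-based models can be sketched as weight functions of the distance d between two features. This is a standalone illustration, not the iobjectspy API; in particular, the exact decay used beyond the threshold in the zone-of-indifference model is an assumption here (a continuous 1/d decline starting from weight 1 at the threshold):

```python
def weight(d, model, threshold=1.0):
    """Spatial weight between two features at distance d for four of the
    conceptualization models above."""
    if model == 'INVERSEDISTANCE':
        return 1.0 / d                          # influence fades with distance
    if model == 'INVERSEDISTANCESQUARED':
        return 1.0 / d ** 2                     # fades faster
    if model == 'FIXEDDISTANCEBAND':
        return 1.0 if d <= threshold else 0.0   # hard cutoff
    if model == 'ZONEOFINDIFFERENCE':
        # weight 1 inside the band, then declining with distance (assumed form)
        return 1.0 if d <= threshold else threshold / d
    raise ValueError('unsupported model: %s' % model)

print(weight(2.0, 'INVERSEDISTANCE'))          # 0.5
print(weight(2.0, 'INVERSEDISTANCESQUARED'))   # 0.25
print(weight(2.0, 'FIXEDDISTANCEBAND', 1.0))   # 0.0
```

The contiguity and K-nearest-neighbor models are omitted because their weights depend on topology and ranking rather than on distance alone.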
class iobjectspy.enums.AttributeStatisticsMode

Bases: iobjectspy._jsuperpy.enums.JEnum

A mode for performing attribute statistics when connecting points to form lines and when updating vector dataset attributes.

Variables:
  • AttributeStatisticsMode.MAX – Maximum value; valid for numeric, text, and time fields.
  • AttributeStatisticsMode.MIN – Minimum value; valid for numeric, text, and time fields.
  • AttributeStatisticsMode.SUM – Sum of a group of values; valid only for numeric fields.
  • AttributeStatisticsMode.MEAN – Mean of a group of values; valid only for numeric fields.
  • AttributeStatisticsMode.STDEV – Standard deviation of a group of values; valid only for numeric fields.
  • AttributeStatisticsMode.VAR – Variance of a group of values; valid only for numeric fields.
  • AttributeStatisticsMode.MODALVALUE – Mode, i.e. the value that occurs most frequently; valid for fields of any type.
  • AttributeStatisticsMode.RECORDCOUNT – Number of records in a group. The count applies to the group as a whole, not to a particular field.
  • AttributeStatisticsMode.MAXINTERSECTAREA – Take the value from the object with the largest intersection area. If a region object intersects several attribute-providing region objects, the attribute value of the object whose intersection with the original region object is largest is used for the update. Valid for fields of any type, and only for vector dataset attribute update (update_attributes()).
COUNT = 8
MAX = 1
MAXINTERSECTAREA = 9
MEAN = 4
MIN = 2
MODALVALUE = 7
STDEV = 5
SUM = 3
VAR = 6
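The statistics modes above can be illustrated over a plain Python list of field values. This sketch is not part of iobjectspy, and whether the library computes sample or population standard deviation and variance is not stated in this reference; the population forms are used here as an assumption.

```python
from collections import Counter
from statistics import mean, pstdev, pvariance

values = [3, 1, 4, 1, 5]  # a group of numeric field values

stats = {
    'MAX': max(values),
    'MIN': min(values),
    'SUM': sum(values),
    'MEAN': mean(values),
    'STDEV': pstdev(values),      # population form assumed
    'VAR': pvariance(values),     # population form assumed
    'MODALVALUE': Counter(values).most_common(1)[0][0],
    'RECORDCOUNT': len(values),   # counts the group, not a field
}
print(stats['MODALVALUE'], stats['RECORDCOUNT'])  # 1 5
```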
class iobjectspy.enums.VCTVersion

Bases: iobjectspy._jsuperpy.enums.JEnum

VCT version

Variables:
CNSDTF_VCT = 1
LANDUSE_VCT = 3
LANDUSE_VCT30 = 6
class iobjectspy.enums.RasterJoinType

Bases: iobjectspy._jsuperpy.enums.JEnum

Defines the statistical type constant of the mosaic result raster value.

Variables:
  • RasterJoinType.RJMFIRST – Take the value in the first raster dataset after mosaicking the overlapping area of the raster.
  • RasterJoinType.RJMLAST – Take the value in the last raster dataset after mosaicking the overlapping area of the raster.
  • RasterJoinType.RJMMAX – Take the maximum value of the corresponding position in all raster datasets after mosaicking the overlapping area of the raster.
  • RasterJoinType.RJMMIN – Take the minimum value of the corresponding position in all raster datasets after mosaicking the overlapping area of the raster.
  • RasterJoinType.RJMMean – Take the average value of the corresponding positions in all raster datasets after mosaicking the overlapping raster regions.
RJMFIRST = 0
RJMLAST = 1
RJMMAX = 2
RJMMIN = 3
RJMMean = 4
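How each statistical type resolves one overlapping cell can be shown with a plain Python sketch (illustrative only, not iobjectspy code): given the values that the input rasters contribute at the same cell, in input order, each mode picks a different result.

```python
overlap_values = [10, 40, 25]  # the same cell in three input rasters, in order

resolved = {
    'RJMFIRST': overlap_values[0],                        # first dataset wins
    'RJMLAST': overlap_values[-1],                        # last dataset wins
    'RJMMAX': max(overlap_values),                        # maximum of all
    'RJMMIN': min(overlap_values),                        # minimum of all
    'RJMMean': sum(overlap_values) / len(overlap_values), # average of all
}
print(resolved['RJMFIRST'], resolved['RJMMAX'], resolved['RJMMean'])  # 10 40 25.0
```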
class iobjectspy.enums.RasterJoinPixelFormat

Bases: iobjectspy._jsuperpy.enums.JEnum

Defines the pixel format type constant of the mosaic result.

Variables:
RJPBYTE = 8
RJPDOUBLE = 6400
RJPFBIT = 4
RJPFIRST = 10000
RJPFLOAT = 3200
RJPLAST = 20000
RJPLONG = 320
RJPLONGLONG = 64
RJPMAJORITY = 50000
RJPMAX = 30000
RJPMIN = 40000
RJPMONO = 1
RJPRGB = 24
RJPRGBAFBIT = 32
RJPTBYTE = 16
class iobjectspy.enums.PlaneType

Bases: iobjectspy._jsuperpy.enums.JEnum

Plane type constants.

Variables:
  • PlaneType.PLANEXY – The plane formed by the X and Y coordinate axes, i.e. the XY plane.
  • PlaneType.PLANEYZ – The plane formed by the Y and Z coordinate axes, i.e. the YZ plane.
  • PlaneType.PLANEXZ – The plane formed by the X and Z coordinate axes, i.e. the XZ plane.

PLANEXY = 0
PLANEXZ = 2
PLANEYZ = 1
class iobjectspy.enums.ChamferStyle

Bases: iobjectspy._jsuperpy.enums.JEnum

Chamfer style constants for lofting.

Variables:
  • ChamferStyle.SOBC_CIRCLE_ARC – Second-order Bezier curve circular arc.
  • ChamferStyle.SOBC_ELLIPSE_ARC – Second-order Bezier curve elliptic arc.

SOBC_CIRCLE_ARC = 0
SOBC_ELLIPSE_ARC = 1
class iobjectspy.enums.ViewShedType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the viewshed type constants used when performing viewshed analysis for multiple observation points.

Variables:
  • ViewShedType.VIEWSHEDINTERSECT – Common viewshed: the intersection of the visible areas of the observation points.
  • ViewShedType.VIEWSHEDUNION – Combined viewshed: the union of the visible areas of the observation points.

VIEWSHEDINTERSECT = 0
VIEWSHEDUNION = 1
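The difference between the two types reduces to set intersection versus set union over the visible areas. A minimal sketch in plain Python (not iobjectspy code), modeling each viewshed as a set of visible cell indices:

```python
# Visible cells of two observation points, as (row, col) indices.
visible_a = {(0, 0), (0, 1), (1, 1)}
visible_b = {(0, 1), (1, 1), (2, 2)}

common = visible_a & visible_b      # VIEWSHEDINTERSECT: visible from both
combined = visible_a | visible_b    # VIEWSHEDUNION: visible from either
print(len(common), len(combined))   # 2 4
```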
class iobjectspy.enums.ImageType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the image type constants for map output

Variables:
  • ImageType.BMP – BMP is a standard format used by Windows for storing device-independent and application-independent images. The bits-per-pixel value (1, 4, 8, 15, 24, 32, or 64) of a given BMP file is specified in the file header. BMP files with 24 bits per pixel are common. BMP files are usually uncompressed, so they are not well suited for transmission over the Internet.
  • ImageType.GIF – GIF is a general format used to display images on web pages. GIF files are suitable for drawing lines, pictures with solid color blocks, and pictures with clear boundaries between colors. GIF files are compressed, but no information is lost during the compression process; the decompressed image is exactly the same as the original image. A color in a GIF file can be designated as transparent, so that the image will have the background color of any web page on which it is displayed. Storing a series of GIF images in a single file can form an animated GIF. GIF files can store up to 8 bits per pixel, so they are limited to 256 colors.
  • ImageType.JPG – JPEG is a compression scheme adapted to natural landscapes (such as scanned photos). Some information will be lost in the compression process, but the loss is imperceptible to the human eye. JPEG files store 24 bits per pixel, so they can display more than 16,000,000 colors. JPEG files do not support transparency or animation. JPEG is not a file format. “JPEG File Interchange Format (JFIF)” is a file format commonly used to store and transmit images that have been compressed according to the JPEG scheme. The JFIF file displayed by the web browser has a .jpg extension.
  • ImageType.PDF – PDF (Portable Document Format) is an electronic file format developed by Adobe. It is platform-independent: a PDF file displays the same way on Windows, Unix, or Apple's macOS.
  • ImageType.PNG – PNG type. The PNG format not only retains many of the advantages of the GIF format, but also provides functions beyond GIF. Like GIF files, PNG files do not lose information when compressed. PNG files can store colors at 8, 24, or 48 bits per pixel and grayscale at 1, 2, 4, 8, or 16 bits per pixel. In contrast, GIF files can only use 1, 2, 4, or 8 bits per pixel. PNG files can also store an alpha value for each pixel, which specifies the degree to which the color of the pixel blends with the background color. The advantage of PNG over GIF is that it can display an image progressively (that is, the displayed image will become more and more complete as the image is transmitted through the network connection). PNG files can contain grayscale correction and color correction information so that the image can be accurately presented on a variety of display devices.
  • ImageType.TIFF – TIFF is a flexible and extensible format, which is supported by various platforms and image processing applications. TIFF files can store images in any bit per pixel and can use various compression algorithms. A single multi-page TIFF file can store several images. You can store information related to the image (scanner manufacturer, host, compression type, printing direction and sampling per pixel, etc.) in a file and use tags to arrange the information. The TIFF format can be extended by approving and adding new tags as needed.
BMP = 121
GIF = 124
JPG = 122
PDF = 33
PNG = 123
TIFF = 103
class iobjectspy.enums.FillGradientMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Defines the gradient modes for gradient fills. Every gradient mode is a gradient between two colors, from the gradient start color to the gradient end color.

For the different gradient mode styles, GeoStyle lets you set the rotation angle, the start color (foreground) and end color (background) of the gradient, and the position of the gradient fill center point (which has no effect for linear gradients). By default, the gradient rotation angle is 0 and the gradient fill center point is the center point of the filled area; the descriptions of the gradient modes below assume these defaults. For gradient fill rotation, see the set_fill_gradient_angle() method of the GeoStyle class; for setting the gradient fill center point, see the set_fill_gradient_offset_ratio_x() and set_fill_gradient_offset_ratio_y() methods of the GeoStyle class. Gradient styles are computed against the bounding rectangle of the filled area, i.e. its smallest enclosing rectangle, so "the extent of the filled area" below means the smallest enclosing rectangle of the filled area.

Variables:
  • FillGradientMode.NONE – No gradient. When using the normal fill mode, set the gradient mode to no gradient
  • FillGradientMode.LINEAR

    Linear gradient. The gradient from the start point to the end point of the horizontal line segment. As shown in the figure, from the start point to the end point of the horizontal line segment, its color gradually changes from the start color to the end color. The color on the straight line perpendicular to the line segment is the same, and no gradient occurs.

    ../_images/Gra_Linear.png
  • FillGradientMode.RADIAL

    Radiation gradient. A circular gradient with the center point of the filled area as the starting point of the gradient filling, and the boundary point farthest from the center point as the ending point. Note that the color does not change on the same circle, and the color changes between different circles. As shown in the figure, from the start point to the end point of the gradient fill, the color of each circle with the start point as the center gradually changes from the start color to the end color as the radius of the circle increases.

    ../_images/Gra_Radial.png
  • FillGradientMode.CONICAL

    Conical gradient. From the starting generatrix to the ending generatrix, the color changes gradually in both the counterclockwise and clockwise directions, in each case from the start color to the end color. Note that the center point of the filled area is the apex of the cone, and the color does not change along a single generatrix.

    As shown in the figure, the starting generatrix of the gradient is the horizontal ray extending to the right of the center point of the filled area. The color of the upper half-cone changes counterclockwise and the color of the lower half-cone changes clockwise. The starting and ending generatrices of the gradient coincide, and along both directions from the starting generatrix to the ending generatrix the color grades uniformly from the start color to the end color.

    ../_images/Gra_Conical.png
  • FillGradientMode.SQUARE

    Four-corner gradient. A square gradient with the center point of the filled area as the starting point of the gradient filling, and the midpoint of the shorter side of the smallest bounding rectangle of the filled area as the ending point. Note that the color on each square does not change, and the color changes between different squares. As shown in the figure, from the start point to the end point of the gradient filling, the color of the square with the start point as the center gradually changes from the start color to the end color as the side length increases.

    ../_images/Gra_Square2.png
CONICAL = 3
LINEAR = 1
NONE = 0
RADIAL = 2
SQUARE = 4
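All of the gradient modes share the same underlying color ramp: a linear blend from the start color to the end color. The mode only decides where along the ramp a pixel falls (along a line, circle, cone, or square). A minimal sketch of that blend in plain Python (illustrative only, not iobjectspy code):

```python
def blend(start, end, t):
    """Blend two RGB tuples; t=0 gives the start color, t=1 the end color."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start, end))

start_color = (255, 0, 0)   # gradient start color (foreground)
end_color = (0, 0, 255)     # gradient end color (background)

print(blend(start_color, end_color, 0.0))  # (255, 0, 0)
print(blend(start_color, end_color, 1.0))  # (0, 0, 255)
```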
class iobjectspy.enums.ColorSpaceType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the color space type constants.

Because of differences in how colors are formed, color devices that synthesize colors directly from colored light, such as displays and projectors, generate color differently from devices that use pigments, such as printers and plotters. For these different color-forming methods, SuperMap provides 7 color spaces, namely RGB, CMYK, RGBA, CMY, YIQ, YUV, and YCC, which can be applied to different systems.

Variables:
  • ColorSpaceType.RGB – This type is mainly used in display systems. RGB is an abbreviation for red, green, and blue. The RGB color mode uses the RGB model to assign an intensity value in the range 0-255 to each RGB component of every pixel in the image.
  • ColorSpaceType.CMYK – This type is mainly used in printing systems. CMYK is cyan, magenta, yellow, and black. It mixes pigments of various colors by adjusting the density of the three basic colors of cyan, magenta and yellow, and uses black to adjust brightness and purity.
  • ColorSpaceType.RGBA – This type is mainly used in display systems. RGB is the abbreviation for red, green, and blue, and A is used to control transparency.
  • ColorSpaceType.CMY – This type is mainly used in printing systems. CMY (Cyan, Magenta, Yellow) are cyan, magenta, and yellow respectively. This type mixes pigments of various colors by adjusting the density of the three basic colors of cyan, magenta and yellow.
  • ColorSpaceType.YIQ – This type is mainly used in North American Television System (NTSC).
  • ColorSpaceType.YUV – This type is mainly used in the European Television System (PAL).
  • ColorSpaceType.YCC – This type is mainly used for JPEG image format.
  • ColorSpaceType.UNKNOW – unknown
CMY = 3
CMYK = 4
RGB = 1
RGBA = 2
UNKNOW = 0
YCC = 7
YIQ = 5
YUV = 6
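The relationship between the light-based RGB space and the pigment-based CMY space is complementary: with 0-255 components, each CMY component is the corresponding RGB component subtracted from full intensity. A small illustrative sketch (plain Python, not iobjectspy code):

```python
def rgb_to_cmy(r, g, b):
    """CMY is the complement of RGB on a 0-255 scale."""
    return (255 - r, 255 - g, 255 - b)

# Pure red light corresponds to full magenta + yellow pigment (no cyan).
print(rgb_to_cmy(255, 0, 0))  # (0, 255, 255)
```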
class iobjectspy.enums.ImageInterpolationMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines constants for the image interpolation mode.

Variables:
DEFAULT = 3
HIGH = 2
HIGHQUALITYBICUBIC = 5
HIGHQUALITYBILINEAR = 4
LOW = 1
NEARESTNEIGHBOR = 0
class iobjectspy.enums.ImageDisplayMode

Bases: iobjectspy._jsuperpy.enums.JEnum

Image display mode, currently supports combination mode and stretch mode.

Variables:
  • ImageDisplayMode.COMPOSITE – Combination mode. The combination mode is for multi-band images. The images are combined into RGB display according to the set band index sequence. Currently, only RGB and RGBA color spaces are supported.
  • ImageDisplayMode.STRETCHED – Stretch mode. Stretch mode supports all images (including single band and multi-band). For multi-band images, when this display mode is set, the first band of the set band index will be displayed.
COMPOSITE = 0
STRETCHED = 1
class iobjectspy.enums.MapColorMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the map color mode type.

This color mode applies only to map display, and only affects vector features. Switching between color modes does not change the map's thematic styles; each conversion is computed from the thematic style colors of the map. SuperMap component products provide 5 color modes for setting the map style.

Variables:
  • MapColorMode.DEFAULT – Default color mode, corresponding to 32-bit enhanced true color mode. 32 bits are used to store colors, of which red, green, blue and alpha are represented by 8 bits each.
  • MapColorMode.BLACK_WHITE – Black and white mode. According to the thematic style of the map (default color mode), map elements are displayed in two colors: black and white. The elements whose thematic style color is white are still displayed in white, and the other colors are displayed in black.
  • MapColorMode.GRAY – Grayscale mode. According to the thematic style of the map (default color mode), set different weights for the red, green, and blue components and display them in grayscale.
  • MapColorMode.BLACK_WHITE_REVERSE – Black and white reverse color mode. According to the thematic style of the map (default color mode), elements whose thematic style color is black are converted to white, and the remaining colors are displayed in black
  • MapColorMode.ONLY_BLACK_WHITE_REVERSE – Reverse black and white, other colors remain unchanged. According to the thematic style of the map (default color mode), the elements whose thematic style color is black are converted to white, and the elements whose thematic style color is white are converted to black, and the other colors remain unchanged.
BLACK_WHITE = 1
BLACK_WHITE_REVERSE = 3
DEFAULT = 0
GRAY = 2
ONLY_BLACK_WHITE_REVERSE = 4
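The GRAY mode description above says the red, green, and blue components are given different weights. The exact weights SuperMap uses are not documented here; the widely used ITU-R BT.601 luma weights are shown below as an assumption, in illustrative plain Python (not iobjectspy code):

```python
def to_gray(r, g, b):
    """Weighted grayscale conversion (BT.601 luma weights assumed)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(to_gray(255, 255, 255))  # 255 (white stays white)
print(to_gray(255, 0, 0))      # 76  (pure red maps to a dark gray)
```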
class iobjectspy.enums.LayerGridAggregationType

Bases: iobjectspy._jsuperpy.enums.JEnum

The grid type of the grid aggregation graph.

Variables:
HEXAGON = 2
QUADRANGLE = 1
class iobjectspy.enums.NeighbourNumber

Bases: iobjectspy._jsuperpy.enums.JEnum

Number of neighborhood pixels for spatial connectivity

Variables:
  • NeighbourNumber.FOUR

    The 4 pixels up, down, left, and right are regarded as neighboring pixels. Connectivity is defined only between same-valued pixels that are directly connected through one of these four nearest neighbors. Orthogonal neighborhoods preserve the corners of rectangular areas.

    ../image/four.png
  • NeighbourNumber.EIGHT

    The 8 surrounding pixels are regarded as neighboring pixels. Connectivity is defined only between same-valued pixels that lie within each other's 8 nearest neighbors. Eight-pixel neighborhoods smooth the corners of rectangular areas.

    ../image/eight.png
EIGHT = 2
FOUR = 1
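The two neighborhood definitions can be written down as the cell offsets each one considers when testing whether same-valued pixels are connected. This is an illustrative plain-Python sketch, not iobjectspy code:

```python
# Offsets from a cell at (row, col) to its neighbors.
FOUR_OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]                 # up, down, left, right
EIGHT_OFFSETS = FOUR_OFFSETS + [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # plus the diagonals

def neighbours(row, col, offsets):
    """Cell indices considered adjacent under a neighborhood definition."""
    return [(row + dr, col + dc) for dr, dc in offsets]

print(len(neighbours(5, 5, FOUR_OFFSETS)), len(neighbours(5, 5, EIGHT_OFFSETS)))  # 4 8
```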
class iobjectspy.enums.MajorityDefinition

Bases: iobjectspy._jsuperpy.enums.JEnum

Specifies how many adjacent (spatially connected) cells must have the same value before a replacement is performed; that is, the replacement happens only when enough connected neighboring cells share the same value.

Variables:
  • MajorityDefinition.HALF – At least half of the connected neighboring cells must have the same value, i.e. two of four or four of eight. Produces a smoother result.
  • MajorityDefinition.MAJORITY – A majority of the connected neighboring cells must have the same value, i.e. three of four or five of eight.
HALF = 1
MAJORITY = 2
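The two thresholds can be checked with a small illustrative sketch (plain Python, not iobjectspy code): HALF requires at least half of the connected neighbors to share the value, while MAJORITY requires strictly more than half.

```python
def meets_half(same_count, total):
    """HALF: at least half of the neighbors share the value (2 of 4, 4 of 8)."""
    return same_count >= total // 2

def meets_majority(same_count, total):
    """MAJORITY: more than half share the value (3 of 4, 5 of 8)."""
    return same_count >= total // 2 + 1

print(meets_half(2, 4), meets_majority(2, 4))  # True False
```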
class iobjectspy.enums.BoundaryCleanSortType

Bases: iobjectspy._jsuperpy.enums.JEnum

Sorting method for boundary cleaning, i.e. the sort type used during smoothing. This determines the priority with which cells can expand into neighboring cells.

Variables:
  • BoundaryCleanSortType.NOSORT – Do not sort by size. Zones with larger values have higher priority and can expand into zones with smaller values.
  • BoundaryCleanSortType.DESCEND – Sort zones in descending order of size. Zones with a larger total area have higher priority and can expand into zones with a smaller total area.
  • BoundaryCleanSortType.ASCEND – Sort zones in ascending order of size. Zones with a smaller total area have higher priority and can expand into zones with a larger total area.
ASCEND = 3
DESCEND = 2
NOSORT = 1
class iobjectspy.enums.OverlayAnalystOutputType

Bases: iobjectspy._jsuperpy.enums.JEnum

The geometry type of the result returned by overlay analysis. Valid only for the region-on-region intersection operator.

Variables:
INPUT = 0
POINT = 1
class iobjectspy.enums.FieldSign

Bases: iobjectspy._jsuperpy.enums.JEnum

Field identifier constant

Variables:
EDGEID = 4
FNODE = 2
GEOMETRY = 12
ID = 11
NODEID = 1
TNODE = 3
class iobjectspy.enums.PyramidResampleType

Bases: iobjectspy._jsuperpy.enums.JEnum

The resampling method used when building the image pyramid.

Variables:
AVERAGE = 2
AVERAGE_MAGPHASE = 4
GAUSS = 3
NEAREST = 1
NONE = 0
class iobjectspy.enums.DividePolygonType

Bases: iobjectspy._jsuperpy.enums.JEnum

The division type used when splitting region objects. For details, see divide_polygon().

Variables:
AREA = 1
PART = 2
class iobjectspy.enums.DividePolygonOrientation

Bases: iobjectspy._jsuperpy.enums.JEnum

When splitting region objects, the starting division direction, i.e. the position of the first division piece among the resulting region objects. For details, see divide_polygon().

Variables:
EAST = 3
NORTH = 2
SOUTH = 4
WEST = 1
class iobjectspy.enums.RegularizeMethod

Bases: iobjectspy._jsuperpy.enums.JEnum

Defines the regularization processing method of the building. Users can choose the appropriate regularization method according to the shape of the building.

Variables:
  • RegularizeMethod.RIGHTANGLES – For buildings defined mainly by right angles.
  • RegularizeMethod.RIGHTANGLESANDDIAGONALS – For buildings composed of right angles and diagonal edges.
  • RegularizeMethod.ANYANGLE – For irregular buildings.
  • RegularizeMethod.CIRCLE – For buildings with circular features, such as granaries and water towers.

ANYANGLE = 3
CIRCLE = 4
RIGHTANGLES = 1
RIGHTANGLESANDDIAGONALS = 2
class iobjectspy.enums.OverlayOutputAttributeType

Bases: iobjectspy._jsuperpy.enums.JEnum

Multi-layer overlay analysis field attribute return type

Variables:
ALL = 0
ONLYATTRIBUTES = 2
ONLYID = 1
class iobjectspy.enums.TimeDistanceUnit

Bases: iobjectspy._jsuperpy.enums.JEnum

Time distance unit

Variables:
DAYS = 5
HOURS = 4
MINUTES = 3
MONTHS = 7
SECONDS = 2
WEEKS = 6
YEARS = 8
class iobjectspy.enums.GJBLayerType

Bases: iobjectspy._jsuperpy.enums.JEnum

Variables:
  • GJBLayerType.GJB_A – Survey control points
  • GJBLayerType.GJB_B – Industrial, agricultural, social, and cultural facilities
  • GJBLayerType.GJB_C – Residential areas and ancillary facilities
  • GJBLayerType.GJB_D – Land transportation
  • GJBLayerType.GJB_E – Pipelines
  • GJBLayerType.GJB_F – Water and land areas
  • GJBLayerType.GJB_G – Submarine landforms and bottom sediments
  • GJBLayerType.GJB_H – Reefs, shipwrecks, and obstacles
  • GJBLayerType.GJB_I – Hydrology
  • GJBLayerType.GJB_J – Landforms and soils
  • GJBLayerType.GJB_K – Boundaries and political regions
  • GJBLayerType.GJB_L – Vegetation
  • GJBLayerType.GJB_M – Geomagnetic elements
  • GJBLayerType.GJB_N – Navigation aids and channels
  • GJBLayerType.GJB_O – Maritime area boundaries
  • GJBLayerType.GJB_P – Aviation elements
  • GJBLayerType.GJB_Q – Military areas
  • GJBLayerType.GJB_R – Annotations
  • GJBLayerType.GJB_S – Metadata
GJB_A = 1
GJB_B = 2
GJB_C = 3
GJB_D = 4
GJB_E = 5
GJB_F = 6
GJB_G = 7
GJB_H = 8
GJB_I = 9
GJB_J = 10
GJB_K = 11
GJB_L = 12
GJB_M = 13
GJB_N = 14
GJB_O = 15
GJB_P = 16
GJB_Q = 17
GJB_R = 18
GJB_S = 19
class iobjectspy.enums.PromptSampleType

Bases: enum.IntEnum

An enumeration.

EQUALDISTANCE = 2
LOCALSIMILARITY = 1

iobjectspy.env module

iobjectspy.env.is_auto_close_output_datasource()

Whether to automatically close the result datasource object. When processing or analyzing data, if the result datasource was opened automatically by the program (that is, the datasource did not already exist in the current workspace), the application closes it automatically by default after a single function completes. The user can prevent this by calling set_auto_close_output_datasource(), so that the result datasource remains in the current workspace.

Return type:bool
iobjectspy.env.set_auto_close_output_datasource(auto_close)

Set whether to automatically close the result datasource object. When processing or analyzing data, if the result datasource was opened automatically by the program (not by the user calling an open-datasource interface, that is, the datasource did not already exist in the current workspace), the application closes it automatically by default after a single function completes. Setting auto_close to False through this interface prevents the result datasource from being closed automatically, so that it remains in the current workspace.

Parameters:auto_close (bool) – Whether to automatically close the datasource object opened in the program.
iobjectspy.env.is_use_analyst_memory_mode()

Whether to use memory mode for spatial analysis

Return type:bool
iobjectspy.env.set_analyst_memory_mode(is_use_memory)

Set whether to enable memory mode for spatial analysis.

Parameters:is_use_memory (bool) – Enable the memory mode to set True, otherwise set to False
iobjectspy.env.get_omp_num_threads()

Get the number of threads used in parallel computing

Return type:int
iobjectspy.env.set_omp_num_threads(num_threads)

Set the number of threads used for parallel computing

Parameters:num_threads (int) – the number of threads used in parallel computing
iobjectspy.env.set_iobjects_java_path(bin_path, is_copy_jars=True)

Set the Bin directory address of the iObjects Java component. The set Bin directory address will be saved in the env.json file.

Parameters:
  • bin_path (str) – iObjects Java component Bin directory address
  • is_copy_jars (bool) – Whether to copy the jars of the iObjects Java component to the iObjectsPy directory at the same time.
iobjectspy.env.get_iobjects_java_path()

Get the Bin directory address of the iObjects Java component set. Only the directory address that is actively set or saved in the env.json file can be obtained. The default value is None.

Return type:str
iobjectspy.env.get_show_features_count()

Get the number of records displayed when printing a vector dataset.

Return type:int
iobjectspy.env.set_show_features_count(features_count)

Set the number of records displayed when printing a vector dataset.

>>> set_show_features_count(100)
>>> dt = open_datasource('/iobjectspy/example_data.udbx')['zone']
>>> print(dt)
Parameters:features_count (int) – The number of records displayed when printing a vector dataset

iobjectspy.mapping module

class iobjectspy.mapping.Map

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The Map class is responsible for managing the map display environment.

A map is a visualization of geographic data, usually composed of one or more layers. A map must be associated with a workspace in order to display the data in that workspace. In addition, the settings for how the map is displayed affect all of the layers within it. This class provides getters and setters for the map's display modes, such as the map display extent, scale, coordinate system, and the default display modes of text and point layers, and provides methods for map operations such as opening and closing the map, zooming, displaying the full extent, and map output.

add_aggregation(dataset, min_color=None, max_color=None)

Make a grid aggregation map with default style according to the given point dataset.

Parameters:
  • dataset (DatasetVector) – Participate in the production of grid aggregation data. The data must be a point vector dataset.
  • min_color (Color or tuple[int,int,int]) – The color corresponding to the minimum value of the grid cell statistics. The grid aggregation map will determine the color scheme of the gradient through maxColor and minColor, and then sort the grid cells based on the size of the grid cell statistics.
  • max_color (Color or tuple[int,int,int]) – The color corresponding to the maximum value of the grid cell statistics. The grid aggregation map will determine the color scheme of the gradient through maxColor and minColor, and then sort the grid cells based on the size of the grid cell statistics to render the grid cells.
Returns:

Grid aggregate map layer object.

Return type:

LayerGridAggregation

add_dataset(dataset, is_add_to_head=True, layer_setting=None)

Add dataset to the map

Parameters:
Returns:

return the layer object if added successfully, otherwise it Return None

Return type:

Layer

add_heatmap(dataset, kernel_radius, max_color=None, min_color=None)

Make a heat map according to the given point dataset and parameter settings, that is, display the given point data in a heat map rendering mode. Heat map is a map representation method that describes population distribution, density, and change trends through color distribution. Therefore, it can very intuitively present some data that is not easy to understand or express, such as density, frequency, temperature, etc. The heat map layer can not only reflect the relative density of point features, but also express the point density weighted according to attributes, so as to consider the contribution of the weight of the point itself to the density.

Parameters:
  • dataset (DatasetVector) – The data involved in making the heat map. The data must be a point vector dataset.
  • kernel_radius (int) – The search radius used to calculate the density.
  • max_color (Color or tuple[int,int,int]) – Color for areas of high dot density. The heat map layer determines the gradient color scheme from the high-density color (max_color) and the low-density color (min_color).
  • min_color (Color or tuple[int,int,int]) – Color for areas of low dot density. The heat map layer determines the gradient color scheme from the high-density color (max_color) and the low-density color (min_color).
Returns:

Heat map layer object

Return type:

LayerHeatmap

add_to_tracking_layer(geos, style=None, is_antialias=False, is_symbol_scalable=False, symbol_scale=None)

Add geometric objects to the tracking layer

Parameters:
  • geos (list[Geometry] or list[Feature] or list[Point2D] or list[Rectangle]) – geometric objects to be added
  • style (GeoStyle) – the object style of the geometric object
  • is_antialias (bool) – whether antialiasing
  • is_symbol_scalable (bool) – Whether the symbol size of the tracking layer is scaled with the image
  • symbol_scale (float) – The symbol scaling reference scale of this tracking layer
Returns:

the tracking layer object of the current map object

Return type:

TrackingLayer

clear_layers()

Delete all layers in this layer collection object.

Returns:self
Return type:Map
close()

Close the current map.

find_layer(layer_name)

Return the layer object of the specified layer name.

Parameters:layer_name (str) – The specified layer name.
Returns:Return the layer object of the specified layer name.
Return type:Layer
from_xml(xml, workspace_version=None)

Create a map object from the specified XML string. Any map can be exported as an XML string, and a map's XML string can be imported back to display the map. The XML string of a map stores information about the display settings of the map and its layers, the associated data, and so on.

Parameters:
  • xml (str) – The xml string used to create the map.
  • workspace_version (WorkspaceVersion or str) – The version of the workspace corresponding to the xml content. When using this parameter, please make sure that the specified version matches the xml content. If they do not match, the style of some layers may be lost.
Returns:

If the map object is created successfully, returns True, otherwise returns False.

Return type:

bool

get_angle()

Return the rotation angle of the current map. The unit is degree, and the accuracy is 0.1 degree. The counterclockwise direction is positive. If the user enters a negative value, the map will rotate in a clockwise direction.

Returns:The rotation angle of the current map.
Return type:float
get_bounds()

Return the spatial extent of the current map. The spatial range of the map is the smallest bounding rectangle of the range of each dataset displayed, that is, the smallest rectangle that contains the range of each dataset. When the dataset displayed on the map is added or deleted, its spatial extent will change accordingly.

Returns:The spatial extent of the current map.
Return type:Rectangle
get_center()

Return the center point of the display range of the current map.

Returns:The center point of the display range of the map.
Return type:Point2D
get_clip_region()

Return the clip region of the map display. The user can set an arbitrary display region; map content outside the region is not displayed.

Returns:The clip region of the map display.
Return type:GeoRegion
get_color_mode()

Return the color mode of the current map. The color modes of the map include color mode, black-and-white mode, gray-scale mode, and black-and-white reverse color mode. For details, please refer to the MapColorMode class.

Returns:the color mode of the map
Return type:MapColorMode
get_description()

Return the description information of the current map.

Returns:The description of the current map.
Return type:str
get_dpi()

Return the DPI of the map, representing how many pixels per inch

Returns:DPI of the map
Return type:float
get_dynamic_prj_trans_method()

Return the geographic coordinate system conversion algorithm used when the map is dynamically projected. The default value is CoordSysTransMethod.MTH_GEOCENTRIC_TRANSLATION.

Returns:The projection algorithm used when the map is dynamically projected
Return type:CoordSysTransMethod
get_dynamic_prj_trans_parameter()

Return the conversion parameters used for dynamic map projection when the source projection and the target projection are based on different geographic coordinate systems.

Returns:The conversion parameters of the dynamic projection coordinate system.
Return type:CoordSysTransParameter
get_image_size()

Return the size of the output image, in pixels.

Returns:The width and height of the output image.
Return type:tuple[int,int]
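
An output image size is usually chosen to preserve the aspect ratio of the map bounds. This helper is illustrative only, not part of the iobjectspy API; it just mirrors the (width, height) tuple shape that get_image_size() returns:

```python
def image_size_for_bounds(bounds_width, bounds_height, target_width):
    """Pick an output (width, height) in pixels that preserves the
    aspect ratio of the map bounds, matching the (width, height)
    tuple shape returned by get_image_size()."""
    if bounds_width <= 0 or bounds_height <= 0:
        raise ValueError('bounds must have positive extent')
    height = round(target_width * bounds_height / bounds_width)
    return (target_width, height)

# A 2:1 map extent rendered 1024 pixels wide gets a 512-pixel height.
print(image_size_for_bounds(200.0, 100.0, 1024))  # -> (1024, 512)
```

The resulting tuple can be passed to set_image_size(width, height) before exporting.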
get_layer(index_or_name)

Return the layer object with the specified name in this layer collection.

Parameters:index_or_name (str or int) – the name or index of the layer
Returns:The layer object with the specified name in this layer collection.
Return type:Layer
get_layers()

Return all the layers contained in the current map.

Returns:All layer objects contained in the current map.
Return type:list[Layer]
get_layers_count()

Return the total number of layer objects in this layer collection.

Returns:the total number of layer objects in this layer collection
Return type:int
get_max_scale()

Return the maximum scale of the map.

Returns:The maximum scale of the map.
Return type:float
get_min_scale()

Return the minimum scale of the map

Returns:The minimum scale of the map.
Return type:float
get_name()

Return the name of the current map.

Returns:The name of the current map.
Return type:str
get_prj()

Return the projected coordinate system of the map

Returns:The projected coordinate system of the map.
Return type:PrjCoordSys
get_scale()

Return the display scale of the current map.

Returns:The display scale of the current map.
Return type:float
get_view_bounds()

Return the visible range of the current map, also known as the display range. The visible range can be set with the set_view_bounds() method, or by setting the center point of the display range (set_center()) together with the display scale (set_scale()).

Returns:The visible range of the current map.
Return type:Rectangle
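
The equivalence between a view rectangle and a center-plus-extent description can be shown with plain arithmetic. This is an illustrative pure-Python sketch, not part of the API; the (left, bottom, right, top) tuple stands in for a Rectangle:

```python
def view_bounds_from_center(cx, cy, width, height):
    """Compute the (left, bottom, right, top) view rectangle whose
    midpoint is (cx, cy) -- the same relationship that links
    set_center()/set_scale() to get_view_bounds()."""
    half_w, half_h = width / 2.0, height / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

left, bottom, right, top = view_bounds_from_center(10.0, 20.0, 4.0, 2.0)
# The midpoint of the rectangle recovers the center point.
assert ((left + right) / 2.0, (bottom + top) / 2.0) == (10.0, 20.0)
```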
index_of_layer(layer_name)

Return the index of the layer with the specified name in this layer collection.

Parameters:layer_name (str) – The name of the layer to be searched.
Returns:Return the layer index when the specified layer is found, otherwise -1.
Return type:int
is_clip_region_enabled()

Return whether the clipped area displayed on the map is valid, true means valid.

Returns:whether the map display clipping area is valid
Return type:bool
is_contain_layer(layer_name)

Determine whether this layer collection object contains the layer with the specified name.

Parameters:layer_name (str) – The name of the layer object that may be included in this layer set.
Returns:Return true if this layer collection contains a layer with the specified name, otherwise false.
Return type:bool
is_dynamic_projection()

Return whether dynamic projection display of the map is allowed. Dynamic projection display means that when the projection information of the map in the current map window differs from the projection information of the datasource, the current map's projection can be converted to the datasource's projection for display.

Returns:Whether to allow dynamic projection display of the map.
Return type:bool
is_fill_marker_angle_fixed()

Return whether to fix the fill angle of the fill symbol.

Returns:Whether to fix the fill angle of the fill symbol.
Return type:bool
is_line_antialias()

Return whether the map line type is displayed in anti-aliasing.

Returns:Whether the map line type is displayed in anti-aliasing.
Return type:bool
is_map_thread_drawing_enabled()

Return whether map elements are drawn in a separate thread. True means a separate thread is used to draw map elements, which can improve drawing performance for maps with large data volumes.

Returns:Whether map elements are drawn in a separate thread.
Return type:bool
is_marker_angle_fixed()

Return a Boolean value specifying whether the angle of point symbols is fixed, for all point layers in the map.

Returns:Used to specify whether the angle of the point symbol is fixed.
Return type:bool
is_overlap_displayed()

Return whether to display objects when overlapping.

Returns:Whether to display objects when overlapping.
Return type:bool
is_use_system_dpi()

Return whether to use the system DPI.

Returns:whether to use system DPI
Return type:bool
move_layer_to(src_index, tag_index)

Move the layer with the specified index in this layer collection to the specified target index.

Parameters:
  • src_index (int) – the original index of the layer to be moved
  • tag_index (int) – The target index to move the layer to.
Returns:

Return true if the move is successful, otherwise false.

Return type:

bool

open(name)

Open the map with the specified name. The specified name is the name of a map in the map collection object in the workspace associated with the map, and it should be distinguished from the display name of the map.

Parameters:name (str) – The name of the map.
Returns:Return true if the map is opened successfully, otherwise false.
Return type:bool
output_to_file(file_name, output_bounds=None, dpi=0, image_size=None, is_back_transparent=False, is_show_to_ipython=False)

Output the current map to a file. BMP, PNG, JPG, GIF, PDF, and TIFF files are supported. The tracking layer is not saved.

Parameters:
  • file_name (str) – The path of the result file. The file extension must be included.
  • image_size (tuple[int,int]) – The size of the output image, in pixels. If not set, the image_size of the current map is used; refer to get_image_size().
  • output_bounds (Rectangle) – Map output bounds. If not set, the view range of the current map is used by default; refer to get_view_bounds().
  • dpi (int) – DPI of the map, representing the number of pixels per inch. If not set, the DPI of the current map is used by default; refer to get_dpi().
  • is_back_transparent (bool) – Whether the background is transparent. This parameter is only valid when the output file type is GIF or PNG.
  • is_show_to_ipython (bool) – Whether to display the result in IPython. Note that this requires a Jupyter environment; only PNG, JPG and GIF can be displayed in Jupyter.
Returns:

Return True if the output is successful, otherwise False

Return type:

bool
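
A typical export combines output_to_file() with view_entire() and set_image_size(), all documented in this section. A hedged sketch; the map object, output path, and the chosen 1024x768 size are placeholders, and the helper is not part of the API:

```python
def export_map_png(map_obj, out_png, dpi=96):
    """Export the full extent of an already-opened iobjectspy Map to a
    PNG with a transparent background. map_obj and out_png are
    placeholders; only methods documented in this reference are used."""
    map_obj.view_entire()              # show the whole map first
    map_obj.set_image_size(1024, 768)  # output size in pixels
    ok = map_obj.output_to_file(out_png,
                                dpi=dpi,
                                is_back_transparent=True)  # PNG/GIF only
    if not ok:
        raise RuntimeError('map export failed: ' + out_png)
```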

output_tracking_layer_to_png(file_name, output_bounds=None, dpi=0, is_back_transparent=False, is_show_to_ipython=False)

Output the tracking layer of the current map as a png file. Before calling this interface, the user can set the image size through set_image_size.

Parameters:
  • file_name (str) – The path of the result file. The file extension must be included.
  • output_bounds (Rectangle) – Map output bounds. If not set, the view range of the current map is used by default; refer to get_view_bounds().
  • dpi (int) – DPI of the map, representing the number of pixels per inch. If not set, the DPI of the current map is used by default; refer to get_dpi().
  • is_back_transparent (bool) – Whether the background is transparent.
  • is_show_to_ipython (bool) – Whether to display the result in IPython. Note that this requires a Jupyter environment.
Returns:

Return True if the output is successful, otherwise False

Return type:

bool

refresh(refresh_all=False)

Redraw the current map.

Parameters:refresh_all (bool) – When refresh_all is true, the snapshot layer is refreshed together with the map. The snapshot layer is a special layer group whose layers serve as a snapshot of the map and use a special drawing strategy: the snapshot layer is drawn only the first time it is displayed. After that, as long as the map display range does not change, the snapshot layer reuses that drawing and is not redrawn when the map refreshes; if the display range changes, a refresh of the snapshot layer is triggered automatically. The snapshot layer is one of the means of improving map display performance. To force the snapshot layer to refresh even though the display range has not changed, set refresh_all to true so that the map and the snapshot layer are refreshed together.
Returns:self
Return type:Map
refresh_tracking_layer()

Used to refresh the tracking layer in the map window.

Returns:self
Return type:Map
remove_layer(index_or_name)

Delete a layer with the specified name from this layer collection. Return true if the deletion is successful.

Parameters:index_or_name (str or int) – The name or index of the layer to be deleted
Returns:Return true if the deletion is successful, otherwise Return false.
Return type:bool
set_angle(value)

Set the rotation angle of the current map, in degrees with a precision of 0.1 degree. The counterclockwise direction is positive; a negative value rotates the map clockwise.

Parameters:value (float) – Specify the rotation angle of the current map.
Returns:self
Return type:Map
set_center(center)

Set the center point of the display range of the current map.

Parameters:center (Point2D) – The center point of the display range of the current map.
Returns:self
Return type:Map
set_clip_region(region)

Set the cropped area of the map display. The user can arbitrarily set a map display area, and the map content outside the area will not be displayed.

Parameters:region (GeoRegion or Rectangle) – The region where the map is cropped.
Returns:self
Return type:Map
set_clip_region_enabled(value)

Set whether the map display clipping area is valid, true means valid.

Parameters:value (bool) – Whether the clip region of the map display is in effect.
Returns:self
Return type:Map
set_color_mode(value)

Set the color mode of the current map

Parameters:value (str or MapColorMode) – Specify the color mode of the current map.
Returns:self
Return type:Map
set_description(value)

Set the description information of the current map.

Parameters:value (str) – Specify the description information of the current map.
Returns:self
Return type:Map
set_dpi(dpi)

Set the DPI of the map, which represents how many pixels per inch, and the value range is (60, 180).

Parameters:dpi (float) – DPI of the picture
Returns:self
Return type:Map
set_dynamic_prj_trans_method(value)

When setting the map dynamic projection, when the source projection and the target projection are based on different geographic coordinate systems, you need to set the conversion algorithm.

Parameters:value (CoordSysTransMethod or str) – Geographical coordinate system conversion algorithm
Returns:self
Return type:Map
set_dynamic_prj_trans_parameter(parameter)

Set the conversion parameters of the dynamic projection coordinate system.

Parameters:parameter (CoordSysTransParameter) – The transformation parameter of the dynamic projection coordinate system.
Returns:self
Return type:Map
set_dynamic_projection(value)

Set whether to allow dynamic projection display of the map. Dynamic projection display means that when the projection information of the map in the current map window differs from the projection information of the datasource, the current map's projection can be converted to the datasource's projection for display.

Parameters:value (bool) – Whether to allow dynamic projection display of the map.
Returns:self
Return type:Map
set_fill_marker_angle_fixed(value)

Set whether to fix the fill angle of the fill symbol.

Parameters:value (bool) – Whether to fix the filling angle of the filling symbol.
Returns:self
Return type:Map
set_image_size(width, height)

Set the size of the picture when outputting, in pixels.

Parameters:
  • width (int) – the width of the picture when outputting
  • height (int) – the height of the picture when the picture is output
Returns:

self

Return type:

Map

set_line_antialias(value)

Set whether the map line type is displayed in anti-aliasing.

Parameters:value (bool) – Whether the map line type is displayed in anti-aliasing.
Returns:self
Return type:Map
set_map_thread_drawing_enabled(value)

Set whether to draw map elements in a separate thread. True means a separate thread is used to draw map elements, which can improve drawing performance for maps with large data volumes.

Parameters:value (bool) – Whether to draw map elements in a separate thread.
Returns:self
Return type:Map
set_mark_angle_fixed(value)

Set a Boolean value specifying whether the angle of point symbols is fixed, for all point layers in the map.

Parameters:value (bool) – Specify whether the angle of the point symbol is fixed
Returns:self
Return type:Map
set_max_scale(scale)

Set the maximum scale of the map

Parameters:scale (float) – The maximum scale of the map.
Returns:self
Return type:Map
set_min_scale(scale)

Set the minimum scale of the map.

Parameters:scale (float) – The minimum scale of the map.
Returns:self
Return type:Map
set_name(name)

Set the name of the current map.

Parameters:name (str) – The name of the current map.
Returns:self
Return type:Map
set_overlap_displayed(value)

Set whether to display objects when overlapping.

Parameters:value (bool) – Whether to display the object when overlapping
Returns:self
Return type:Map
set_prj(prj)

Set the projection coordinate system of the map

Parameters:prj (PrjCoordSys) – The projected coordinate system of the map.
Returns:self
Return type:Map
set_scale(scale)

Set the display scale of the current map.

Parameters:scale (float) – Specify the display scale of the current map.
Returns:self
Return type:Map
set_use_system_dpi(value)

Set whether to use system DPI

Parameters:value (bool) – Whether to use system DPI. True means to use the DPI of the system, False means to use the map settings.
Returns:self
Return type:Map
set_view_bounds(bounds)

Set the visible range of the current map, also known as the display range. The visible range of the current map can be set by the set_view_bounds() method, and can also be set by setting the center point of the display range (set_center()) and display scale (set_scale()).

Parameters:bounds (Rectangle) – Specify the visible range of the current map.
Returns:self
Return type:Map
show_to_ipython()

Display the current map in IPython. Note that this only works in a Jupyter Python environment, so both IPython and Jupyter are required.

Returns:Return True if successful, otherwise False
Return type:bool
to_xml()

Return the description of this map object as an XML string. Any map can be exported as an XML string, and an XML string of a map can be imported back for display. The XML string of a map stores information about the display settings of the map and its layers, the associated data, and so on. In addition, the XML string of the map can be saved as an XML file.

Returns:description of the map in XML format
Return type:str
tracking_layer

TrackingLayer – Return the tracking layer object of the current map

view_entire()

Show this map in full.

Returns:self
Return type:Map
zoom(ratio)

Enlarge or reduce the map by the specified ratio. The scale of the map after zooming = original scale * ratio, where ratio must be a positive number. When ratio is greater than 1, the map is enlarged; when ratio is less than 1, the map is reduced.

Parameters:ratio (float) – zoom map ratio, this value cannot be negative.
Returns:self
Return type:Map
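
The zoom arithmetic stated above can be written out directly. An illustrative pure-Python helper, not part of the API:

```python
def zoomed_scale(scale, ratio):
    """Scale after Map.zoom(ratio): new scale = original scale * ratio.
    ratio must be positive; > 1 enlarges the map, < 1 reduces it."""
    if ratio <= 0:
        raise ValueError('ratio must be a positive number')
    return scale * ratio

assert zoomed_scale(1 / 10000, 2) == 1 / 5000      # zoom in: larger scale
assert zoomed_scale(1 / 10000, 0.5) == 1 / 20000   # zoom out: smaller scale
```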
class iobjectspy.mapping.LayerSetting

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The base class for layer settings, used to set the display style of a layer. Use the methods provided in the LayerSettingVector, LayerSettingGrid and LayerSettingImage classes to set the layer styles of vector datasets, raster datasets and image datasets respectively. All elements in a vector layer use the same rendering style; a raster layer uses a color table to display its pixels; and the style settings of an image layer cover the brightness, contrast, and transparency of the image.

class iobjectspy.mapping.LayerSettingImage

Bases: iobjectspy._jsuperpy.mapping.LayerSetting

Image layer setting class.

get_background_color()

Get the display color of the background value

Returns:the display color of the background value
Return type:Color
get_background_value()

Get the value in the image that is regarded as the background

Returns:the value considered as background in the image
Return type:float
get_brightness()

Return the brightness of the image layer. The value range is -100 to 100, increasing the brightness is positive, and decreasing the brightness is negative. The brightness value can be saved to the workspace.

Returns:The brightness value of the image layer.
Return type:int
get_contrast()

Return the contrast of the image layer. The value range is -100 to 100, increasing the contrast is positive, and decreasing the contrast is negative.

Returns:The contrast of the image layer.
Return type:int
get_display_band_indexes()

Return the band indexes displayed by the current image layer. When the current image layer has several bands and the display bands need to be set according to the chosen color mode (such as RGB), specify the band indexes (such as 0, 2, 1) corresponding to the colors (such as red, green, and blue in RGB).

Returns:The band index of the current image layer.
Return type:list[int]
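
The positional pairing between band indexes and color channels can be made concrete with a small pure-Python illustration. This helper is not part of the API; it only spells out the mapping described above:

```python
def describe_band_mapping(indexes, channels=('red', 'green', 'blue')):
    """Pair display band indexes with the color channels they feed, in
    the positional order used by get_display_band_indexes().
    Returns e.g. ['band 0 -> red', 'band 2 -> green', 'band 1 -> blue']."""
    if len(indexes) != len(channels):
        raise ValueError('need one band index per color channel')
    return ['band %d -> %s' % (b, c) for b, c in zip(indexes, channels)]

print(describe_band_mapping([0, 2, 1]))
```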
get_display_color_space()

Return the color display mode of the image layer. It will display the image layer in this color mode according to the current color format and displayed band of the image layer.

Returns:The color display mode of the image layer.
Return type:ColorSpaceType
get_display_mode()

Return the image display mode.

Returns:The image display mode.
Return type:ImageDisplayMode
get_image_interpolation_mode()

Return the interpolation algorithm used when displaying the image.

Returns:the interpolation algorithm used when displaying the image
Return type:ImageInterpolationMode
get_opaque_rate()

Return the opacity of the image layer display. The opacity is a number between 0-100. 0 means no display; 100 means completely opaque. It is only valid for image layers, and it is also valid when the map is rotated.

Returns:The opacity of the image layer display.
Return type:int
get_special_value()

Get the special value in the image. The display color of the special value can be specified with set_special_value_color().

Returns:special value in the image
Return type:float
get_special_value_color()

Get the display color of a special value

Returns:display color of special value
Return type:Color
get_transparent_color()

Return the background transparent color

Returns:background transparent color
Return type:Color
get_transparent_color_tolerance()

Return the background transparent color tolerance, the tolerance range is [0,255].

Returns:background transparent color tolerance
Return type:int
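
How a tolerance in [0, 255] turns "close to the transparent color" into a yes/no decision can be sketched per channel. This is an illustrative pure-Python check under an assumed per-channel comparison rule; the engine's exact rule is not specified in this reference:

```python
def within_tolerance(pixel, transparent, tolerance):
    """Decide whether an (r, g, b) pixel is close enough to the
    background transparent color to be hidden, using a per-channel
    tolerance in [0, 255]. This comparison rule is an assumption for
    illustration, not the engine's documented behavior."""
    if not 0 <= tolerance <= 255:
        raise ValueError('tolerance must be in [0, 255]')
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, transparent))

assert within_tolerance((250, 250, 250), (255, 255, 255), 10)
assert not within_tolerance((200, 250, 250), (255, 255, 255), 10)
```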
is_transparent()

Return whether the background of the image layer is transparent.

Returns:A boolean value specifying whether the background of the image layer is transparent.
Return type:bool
set_background_color(value)

Set the display color of the specified background value.

Parameters:value (Color or tuple[int,int,int] or tuple[int,int,int,int]) – the display color of the specified background value
Returns:self
Return type:LayerSettingImage
set_background_value(value)

Set the value of the image to be regarded as the background

Parameters:value (float) – The value in the image that is regarded as the background
Returns:self
Return type:LayerSettingImage
set_brightness(value)

Set the brightness of the image layer. The value range is -100 to 100, increasing the brightness is positive, and decreasing the brightness is negative.

Parameters:value (int) – The brightness value of the image layer.
Returns:self
Return type:LayerSettingImage
set_contrast(value)

Set the contrast of the image layer. The value range is -100 to 100, increasing the contrast is positive, and decreasing the contrast is negative.

Parameters:value (int) – The contrast of the image layer.
Returns:self
Return type:LayerSettingImage
set_display_band_indexes(indexes)

Set the band indexes displayed by the current image layer. When the current image layer has several bands and the display bands need to be set according to the chosen color mode (such as RGB), specify the band indexes (such as 0, 2, 1) corresponding to the colors (such as red, green, and blue in RGB).

Parameters:indexes (list[int] or tuple[int]) – The band index of the current image layer display.
Returns:self
Return type:LayerSettingImage
set_display_color_space(value)

Set the color display mode of the image layer. It will display the image layer in this color mode according to the current color format and displayed band of the image layer.

Parameters:value (ColorSpaceType) – The color display mode of the image layer.
Returns:self
Return type:LayerSettingImage
set_display_mode(value)

Set the image display mode.

Parameters:value (ImageDisplayMode) – Image display mode, multi-band supports two display modes, single-band only supports stretch display mode.
Returns:self
Return type:LayerSettingImage
set_image_interpolation_mode(value)

Set the interpolation algorithm used when displaying the image

Parameters:value (ImageInterpolationMode or str) – the specified interpolation algorithm
Returns:self
Return type:LayerSettingImage
set_opaque_rate(value)

Set the opacity of the image layer display. The opacity is a number between 0-100. 0 means no display; 100 means completely opaque. Only valid for image layers, also valid when the map is rotated

Parameters:value (int) – The opacity of the image layer display
Returns:self
Return type:LayerSettingImage
set_special_value(value)

Set the special value in the image. The display color of the special value can be specified with set_special_value_color().

Parameters:value (float) – special value in the image
Returns:self
Return type:LayerSettingImage
set_special_value_color(color)

Set the display color of the special value set by set_special_value().

Parameters:color (Color or tuple[int,int,int] or tuple[int,int,int,int]) – display color of special value
Returns:self
Return type:LayerSettingImage
set_transparent(value)

Set whether to make the background of the image layer transparent

Parameters:value (bool) – A boolean value specifies whether to make the background of the image layer transparent
Returns:self
Return type:LayerSettingImage
set_transparent_color(color)

Set the background transparent color.

Parameters:color (Color or tuple[int,int,int] or tuple[int,int,int,int]) – background transparent color
Returns:self
Return type:LayerSettingImage
set_transparent_color_tolerance(value)

Set the background transparent color tolerance, the tolerance range is [0,255].

Parameters:value (int) – background transparent color tolerance
Returns:self
Return type:LayerSettingImage
class iobjectspy.mapping.LayerSettingVector(style=None)

Bases: iobjectspy._jsuperpy.mapping.LayerSetting

Vector layer setting class.

This class is mainly used to set the display style of a vector layer. A vector layer draws all elements with a single symbol or style. When you just want to visually display your spatial data and only care about where the elements are, not how they differ in quantity or nature, you can use an ordinary layer to display the element data.

get_style()

Return the style of the vector layer.

Returns:The style of the vector layer.
Return type:GeoStyle
set_style(style)

Set the style of the vector layer.

Parameters:style (GeoStyle) – The style of the vector layer.
Returns:self
Return type:LayerSettingVector
class iobjectspy.mapping.LayerSettingGrid

Bases: iobjectspy._jsuperpy.mapping.LayerSetting

Raster layer setting class.

The raster layer settings are for ordinary layers. The raster layer uses a color table to display its pixels. The color table of SuperMap displays pixels according to the 8-bit RGB color coordinate system. You can set the display color value of the pixel according to its attribute value, thereby visually and intuitively representing the phenomenon reflected by the raster data.

get_brightness()

Return the brightness of the Grid layer. The value range is -100 to 100. Increasing brightness is positive, and decreasing brightness is negative.

Returns:The brightness of the Grid layer.
Return type:int
get_color_dictionary()

Return the color comparison table of the layer.

Returns:The color comparison table of the layer.
Return type:dict[float, Color]
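
The dict[float, Color] shape can be illustrated with plain Python; here (r, g, b) tuples stand in for Color objects, the elevation breakpoints are made-up sample values, and the step-lookup rule is an assumption for illustration, not the engine's documented interpolation:

```python
# Hypothetical color comparison table keyed by raster value (elevation).
elevation_colors = {
    0.0: (0, 97, 71),        # lowlands: dark green
    500.0: (222, 184, 135),  # mid elevations: tan
    2000.0: (255, 255, 255), # peaks: white
}

def color_for(value, dictionary):
    """Pick the color of the highest key not exceeding value -- one
    plausible step lookup; the engine's own rule may differ."""
    keys = sorted(k for k in dictionary if k <= value)
    if not keys:
        raise ValueError('value below the lowest entry')
    return dictionary[keys[-1]]

assert color_for(750.0, elevation_colors) == (222, 184, 135)
```

A dict of this shape (with real Color values) is what set_color_dictionary() expects.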
get_color_table()

Return the color table.

Returns:The color table.
Return type:Colors
get_contrast()

Return the contrast of the Grid layer, the value range is -100 to 100, increasing the contrast is positive, reducing the contrast is negative.

Returns:The contrast of the Grid layer.
Return type:int
get_image_interpolation_mode()

Return the interpolation algorithm used when displaying the image.

Returns:the interpolation algorithm used when displaying the image
Return type:ImageInterpolationMode
get_opaque_rate()

Return the opacity of the Grid layer display. The opacity is a number between 0-100. 0 means no display; 100 means completely opaque. It is only valid for raster layers, and it is also valid when the map is rotated.

Returns:Grid layer display opacity.
Return type:int
get_special_value()

Return the special value of the layer. When adding a Grid layer, the return value of this method is equal to the NoValue property value of the dataset.

Returns:
Return type:float
get_special_value_color()

Return the color of the special value data of the raster dataset.

Returns:The color of the special value data of the raster dataset.
Return type:Color
is_special_value_transparent()

Return whether the area of the layer’s special value (SpecialValue) is transparent.

Returns:A boolean value: true if the area of the layer’s special value (SpecialValue) is transparent, otherwise false.
Return type:bool
set_brightness(value)

Set the brightness of the Grid layer. The value range is -100 to 100; increasing the brightness is positive, and decreasing the brightness is negative.

Parameters:value (int) – The brightness of the Grid layer.
Returns:self
Return type:LayerSettingGrid
set_color_dictionary(colors)

Set the color comparison table of the layer

Parameters:colors (dict[float, Color]) – The color comparison table of the specified layer.
Returns:self
Return type:LayerSettingGrid
set_color_table(value)

Set the color table.

Parameters:value (Colors) – color table
Returns:self
Return type:LayerSettingGrid
set_contrast(value)

Set the contrast of the Grid layer. The value range is -100 to 100; increasing the contrast is positive, and decreasing the contrast is negative.

Parameters:value (int) – The contrast of the Grid layer.
Returns:self
Return type:LayerSettingGrid
set_image_interpolation_mode(value)

Set the interpolation algorithm used when displaying the image.

Parameters:value (ImageInterpolationMode or str) – The specified interpolation algorithm.
Returns:self
Return type:LayerSettingGrid
set_opaque_rate(value)

Set the opacity of the Grid layer display. The opacity is a number between 0-100. 0 means no display; 100 means completely opaque. It is only valid for raster layers, and it is also valid when the map is rotated.

Parameters:value (int) – Grid layer display opacity.
Returns:self
Return type:LayerSettingGrid
set_special_value(value)

Set the special value of the layer.

Parameters:value (float) – the special value of the layer
Returns:self
Return type:LayerSettingGrid
set_special_value_color(value)

Set the color of the special value data of the raster dataset.

Parameters:value (Color or tuple[int,int,int] or tuple[int,int,int,int]) – The color of the special value data of the raster dataset.
Returns:self
Return type:LayerSettingGrid
set_special_value_transparent(value)

Set whether the area of the layer’s special value (SpecialValue) is transparent.

Parameters:value (bool) – Whether the area where the special value of the layer is located is transparent.
Returns:self
Return type:LayerSettingGrid
class iobjectspy.mapping.TrackingLayer(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Tracking layer class.

In SuperMap, each map window has a tracking layer; to be precise, each map is displayed with a tracking layer. The tracking layer is a blank, transparent layer that always sits on top of the other layers of the map. It is used to temporarily store graphic objects, text, and so on during a process or analysis. The tracking layer exists as long as the map is displayed; you cannot remove it or change its position.

The main functions of tracking layers in SuperMap are as follows:

-When you do not want to add a geometric object to the recordset but still need it, you can temporarily add the object to the tracking layer and clear the tracking layer when you are done with it. For example, to measure a distance you need to draw a line on the map even though that line does not exist in the map data; the tracking layer can be used for this.
-When a target needs to be tracked dynamically, placing it in a recordset means the entire layer must be refreshed constantly, which greatly affects efficiency. If the target is placed on the tracking layer, only the tracking layer needs to be refreshed to realize dynamic tracking.
-When you need to add geometric objects to a recordset in batch, you can temporarily place them on the tracking layer and then add them from the tracking layer to the recordset in batch once you have decided that you need them.

Please be careful to avoid using the tracking layer as a container for a large number of temporary geometry objects. For large amounts of temporary data, it is recommended to create a temporary datasource in the local temporary directory (e.g. c:\temp) and create the corresponding temporary datasets in that datasource to store the temporary data.

You can control the tracking layer, including whether it is displayed and whether its symbols scale with the map. Unlike an ordinary layer, the objects in the tracking layer are not saved; they are kept temporarily in memory only while the map is displayed. When the map is closed, the objects in the tracking layer and the corresponding memory are released. When the map is opened again, the tracking layer appears as a blank, transparent layer.

This class provides methods for managing the geometry objects on a tracking layer, such as adding and deleting them. You can also classify the objects on the tracking layer by setting tags: think of a tag as a description of a geometry object, and give objects with the same purpose the same tag.
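As a sketch of tag-based management, the helpers below rely only on the add(), index_of() and remove() methods documented on this class; the helpers themselves, and the assumption that index_of() returns -1 when no object carries the tag, are illustrative and not part of the library:

```python
def add_tagged(layer, geometries, tag):
    """Add several temporary geometry objects under one tag and
    return their indexes on the tracking layer."""
    indexes = []
    for geo in geometries:
        # add() returns the index of the object on the tracking layer
        indexes.append(layer.add(geo, tag))
    return indexes

def remove_tagged(layer, tag):
    """Delete every object carrying the given tag. index_of() is
    assumed to return -1 once no object with the tag remains."""
    removed = 0
    index = layer.index_of(tag)
    while index >= 0:
        layer.remove(index)
        removed += 1
        index = layer.index_of(tag)
    return removed
```

When the temporary objects are no longer needed, clear() removes everything at once regardless of tags.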

add(geo, tag)

Add a geometric object to the current tracking layer and assign it a tag.

Parameters:
  • geo (Geometry or Point2D or Rectangle or Feature) – The geometric object to add.
  • tag (str) – The tag of the geometric object.
Returns:

The index of the geometric object added to the tracking layer.

Return type:

int

clear()

Clear all geometric objects in this tracking layer.

Returns:self
Return type:TrackingLayer
finish_edit_bulk()

Complete the batch update.

Returns:self
Return type:TrackingLayer
flush_bulk_edit()

During a batch update, force a refresh and save the data edited in this batch so far.

Returns:Return true if the forced refresh succeeds; otherwise return false.
Return type:bool
get(index)

Return the geometric object with the specified index in this tracking layer.

Parameters:index (int) – The index of the geometric object to return.
Returns:The geometric object of the specified index.
Return type:Geometry
get_symbol_scale()

Return the symbol zoom reference scale of this tracking layer.

Returns:The symbol zoom reference scale of the tracking layer.
Return type:float
get_tag(index)

Return the label of the geometric object with the specified index in this tracking layer.

Parameters:index (int) – The index of the geometric object whose label is to be returned.
Returns:The label of the geometric object with the specified index in this tracking layer.
Return type:str
index_of(tag)

Return the index value of the first geometric object with the same label as the specified label.

Parameters:tag (str) – The tag that needs index check.
Returns:Return the index value of the first geometric object with the same label as the specified label.
Return type:int
is_antialias()

Return a Boolean value specifying whether the tracking layer is anti-aliased. Anti-aliasing the text and line styles removes jagged edges and makes the display more attractive.

Returns:Return true if the tracking layer is anti-aliased; otherwise return false.
Return type:bool
is_symbol_scalable()

Return whether the symbol size of the tracking layer scales with the map display. true means that when the map zooms in or out, the symbols zoom with it.

Returns:A Boolean value indicating whether the symbol size of the tracking layer is scaled with the image.
Return type:bool
is_visible()

Return whether this tracking layer is visible. true means this tracking layer is visible, false means this tracking layer is invisible. When this tracking layer is not visible, other settings will be invalid.

Returns:Indicates whether this layer is visible.
Return type:bool
remove(index)

Delete the geometric object with the specified index in the current tracking layer.

Parameters:index (int) – The index of the geometric object to be deleted.
Returns:Return true if the deletion is successful; otherwise return false.
Return type:bool
set(index, geo)

Replace the geometric object at the specified index in the tracking layer with the specified geometric object; the object previously at that index is removed.

Parameters:
  • index (int) – The index of the geometric object to be replaced.
  • geo (Geometry or Point2D or Rectangle or Feature) – The new Geometry object to replace.
Returns:

Return true if the replacement is successful; otherwise return false.

Return type:

bool

set_antialias(value)

Set a Boolean value to specify whether to anti-alias the tracking layer.

Parameters:value (bool) – Specify whether to anti-alias the tracking layer.
Returns:self
Return type:TrackingLayer
set_symbol_scalable(value)

Set whether the symbol size of the tracking layer scales with the map display. true means that when the map zooms in or out, the symbols zoom with it.

Parameters:value (bool) – A Boolean value indicating whether the symbol size of the tracking layer is scaled with the image.
Returns:self
Return type:TrackingLayer
set_symbol_scale(value)

Set the symbol zoom reference scale of this tracking layer.

Parameters:value (float) – The symbol zoom reference scale of this tracking layer.
Returns:self
Return type:TrackingLayer
set_tag(index, tag)

Set the label of the geometric object with the specified index in this tracking layer.

Parameters:
  • index (int) – The index of the geometric object whose label is to be set.
  • tag (str) – The new tag of the geometric object.
Returns:

Return true if the setting is successful; otherwise return false.

Return type:

bool

set_visible(value)

Set whether this tracking layer is visible. true means this tracking layer is visible, false means this tracking layer is invisible. When this tracking layer is not visible, other settings will be invalid.

Parameters:value (bool) – Indicates whether this layer is visible.
Returns:self
Return type:TrackingLayer
start_edit_bulk()

Start a batch update.

Returns:self
Return type:TrackingLayer
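The batch-update methods (start_edit_bulk(), flush_bulk_edit(), finish_edit_bulk()) are meant to bracket many modifications so the layer is not redrawn after each one. A sketch of the workflow; the helper function and the flush interval are illustrative, not part of the library:

```python
def bulk_add(layer, geometries, tag, flush_every=1000):
    """Add many geometry objects inside one batch edit so the tracking
    layer is not refreshed after every single add() call."""
    layer.start_edit_bulk()
    try:
        for n, geo in enumerate(geometries, start=1):
            layer.add(geo, tag)
            if n % flush_every == 0:
                # force an intermediate refresh during a long batch
                layer.flush_bulk_edit()
    finally:
        # always close the batch, even if an add() raises
        layer.finish_edit_bulk()
```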
class iobjectspy.mapping.Layer(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The layer class.

This class provides a series of methods for layer display and control to facilitate map management. When a dataset is loaded into a map window for display, a layer is formed, so a layer is a visual representation of a dataset; each layer is a reference to a dataset. Layers are divided into ordinary layers and thematic layers. In an ordinary vector layer all features are rendered with the same style, and an ordinary raster layer displays its pixels with a color table, while a thematic layer renders its features or pixels with the style of the specified thematic map type. Image data corresponds to ordinary layers only. The style of an ordinary layer is returned or set through the get_layer_setting() and set_layer_setting() methods.

Instances of this class cannot be created directly; a layer can only be created through the Map.add_dataset() method of the Map class.
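A minimal usage sketch, assuming a Map object whose add_dataset() method returns a Layer as described above; the dataset argument, the caption, and the 0-100 range assumed for set_opaque_rate are illustrative:

```python
def show_dataset(map_obj, dataset, caption):
    """Load a dataset into a map and configure the resulting layer
    with setters documented on this class."""
    layer = map_obj.add_dataset(dataset)  # loading a dataset forms a layer
    layer.set_caption(caption)            # display name used in legends
    layer.set_visible(True)
    layer.set_opaque_rate(100)            # fully opaque (assumed 0-100 scale)
    return layer
```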

bounds

Rectangle – the extent of the layer

caption

str – Return the title of the layer. The title is the display name of the layer; for example, the name shown for the layer in a legend or map layout is the layer's title. Note the difference from the name of the layer.

dataset

Dataset – Return the dataset object corresponding to this layer. A layer is a reference to a dataset, so a layer corresponds to a dataset.

from_xml(xml)

Create a layer object based on the specified XML string. Any layer can be exported as an XML string, and such a string can be imported back as a layer for display. The layer's XML string stores all the settings for the layer's display and its associated data, and can be saved as an XML file.

Parameters:xml (str) – The XML string used to create the layer
Returns:Return true if the creation is successful; otherwise return false.
Return type:bool
get_clip_region()

Return the clipping area of the layer.

Returns:Return the clipping area of the layer.
Return type:GeoRegion
get_display_filter()

Return the layer's display filter condition. By setting a display filter, some features in the layer can be displayed while others are hidden, so that you can focus on the features of interest and filter out the rest.

Note: This method only supports attribute query, not spatial query

Returns:Layer display filter conditions.
Return type:QueryParameter
get_layer_setting()

Return the style setting of an ordinary layer. Ordinary layer styles differ for vector, raster, and image data layers; the LayerSettingVector, LayerSettingGrid, and LayerSettingImage classes are used to set and modify the styles of vector, raster, and image layers respectively.

Returns:the style setting of the normal layer
Return type:LayerSetting
get_max_visible_scale()

Return the maximum visible scale of this layer. The maximum visible scale cannot be negative. When the current display scale of the map is greater than or equal to the maximum visible scale of the layer, the layer will not be displayed.

Returns:The maximum visible scale of the layer.
Return type:float
get_min_visible_scale()

Return the minimum visible scale of this layer. The minimum visible scale cannot be negative. When the current display scale of the map is smaller than the minimum visible scale of the layer, this layer will not be displayed.

Returns:The minimum visible scale of the layer.
Return type:float
get_opaque_rate()

Return the opacity of the layer.

Returns:The opacity of the layer.
Return type:int
get_theme()

Return the thematic map object of a thematic layer. Only valid for thematic layers.

Returns:Thematic map object of the thematic layer
Return type:Theme
is_antialias()

Return whether the layer has anti-aliasing enabled.

Returns:Indicates whether anti-aliasing is turned on for the layer. true to enable anti-aliasing, false to disable.
Return type:bool
is_clip_region_enabled()

Return whether the crop area is valid.

Returns:Specify whether the crop area is valid. true means valid, false means invalid.
Return type:bool
is_symbol_scalable()

Return whether the symbol size of the layer is scaled with the image. The default is false. true means that when the layer is enlarged or reduced, the symbol will also be enlarged or reduced accordingly.

Returns:Whether the symbol size of the layer is scaled with the image.
Return type:bool
is_visible()

Return whether this layer is visible. true means the layer is visible, false means it is not. When the layer is not visible, the settings of all its other properties have no effect.

Returns:Whether the layer is visible.
Return type:bool
is_visible_scale(scale)

Return whether the specified scale is a visible scale, that is, whether it lies between the minimum and maximum visible scales set for this layer.

Parameters:scale (float) – The specified display scale.
Returns:Return true, indicating that the specified scale is a visible scale; otherwise, it is false.
Return type:bool
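The visibility rule spelled out by get_min_visible_scale(), get_max_visible_scale() and is_visible_scale() can be restated as a plain predicate. This sketch treats a bound of 0 as "no limit", which is an assumption here, not documented behavior:

```python
def is_visible_scale(scale, min_visible_scale, max_visible_scale):
    """A layer is drawn only while the map's display scale stays at or
    above the minimum visible scale and below the maximum one.
    A bound of 0 is treated as 'no limit' (assumption)."""
    if min_visible_scale > 0 and scale < min_visible_scale:
        return False          # zoomed out beyond the minimum visible scale
    if max_visible_scale > 0 and scale >= max_visible_scale:
        return False          # at or beyond the maximum visible scale
    return True
```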
set_antialias(value)

Set whether to enable anti-aliasing for the layer.

Parameters:value (bool) – Indicates whether anti-aliasing is enabled for the layer. true to enable anti-aliasing, false to disable.
Returns:self
Return type:Layer
set_caption(value)

Set the title of the layer. The title of the layer is the display name of the layer. For example, the name of the layer displayed in the legend or layout drawing is the title of the layer. Note the difference with the name of the layer.

Parameters:value (str) – Specify the title of the layer.
Returns:self
Return type:Layer
set_clip_region(region)

Set the clipping area of the layer.

Parameters:region (GeoRegion or Rectangle) – The clipping region of the layer.
Returns:self
Return type:Layer
set_clip_region_enabled(value)

Set whether the crop area is valid.

Parameters:value (bool) – Whether the specified clipping area is valid, true means valid, false means invalid.
Returns:self
Return type:Layer
set_dataset(dt)

Set the dataset object corresponding to this layer. A layer is a reference to a dataset, therefore, a layer corresponds to a dataset

Parameters:dt (Dataset) – the dataset object corresponding to this layer
Returns:self
Return type:Layer
set_display_filter(parameter)

Set the layer's display filter condition. By setting a display filter, some features in the layer can be displayed while others are hidden, so that you can focus on the features of interest. For example, when a field from an external table joined through a JoinItem is used as the expression field of a thematic map, this method must be called when the thematic map is generated and displayed; otherwise the thematic map cannot be created.

Note: This method only supports attribute query, not spatial query

Parameters:parameter (QueryParameter) – Specify the layer display filter condition.
Returns:self
Return type:Layer
set_layer_setting(setting)

Set the style of ordinary layers

Parameters:setting (LayerSetting) – The style setting of common layer.
Returns:self
Return type:Layer
set_max_visible_scale(value)

Set the maximum visible scale of this layer. The maximum visible scale cannot be negative. When the current display scale of the map is greater than or equal to the maximum visible scale of the layer, the layer will not be displayed.

Parameters:value (float) – Specify the maximum visible scale of the layer.
Returns:self
Return type:Layer
set_min_visible_scale(value)

Set the minimum visible scale of this layer. The minimum visible scale cannot be negative. When the current display scale of the map is smaller than the minimum visible scale of the layer, this layer will not be displayed.

Parameters:value (float) – The minimum visible scale of the layer.
Returns:self
Return type:Layer
set_opaque_rate(value)

Set the opacity of the layer.

Parameters:value (int) – The opacity of the layer.
Returns:self
Return type:Layer
set_symbol_scalable(value)

Set whether the symbol size of the layer is scaled with the image. The default is false. true means that when the layer is enlarged or reduced, the symbol will also be enlarged or reduced accordingly.

Parameters:value (bool) – Specify whether the symbol size of the layer is scaled with the image.
Returns:self
Return type:Layer
set_visible(value)

Set whether this layer is visible. true means the layer is visible, false means the layer is not visible. When the layer is not visible, the settings of all other properties will be invalid.

Parameters:value (bool) – Specify whether the layer is visible.
Returns:self
Return type:Layer
to_xml()

Return the description of this layer object as an XML string. Any layer can be exported as an XML string, and such a string can be imported back as a layer for display. The layer's XML string stores all the settings for the layer's display and its associated data, and can be saved as an XML file.

Returns:Return the description of this layer object in XML string form.
Return type:str
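Since to_xml() and from_xml() state that a layer's XML string can be saved as a file and imported again, a round trip might look like this sketch; the file handling is illustrative:

```python
def save_layer_xml(layer, path):
    """Persist a layer's display settings as an XML file using the
    to_xml() method documented above."""
    xml = layer.to_xml()
    with open(path, 'w', encoding='utf-8') as f:
        f.write(xml)
    return xml

def restore_layer_xml(layer, path):
    """Re-apply previously saved settings through from_xml(); returns
    the bool result documented for that method."""
    with open(path, 'r', encoding='utf-8') as f:
        return layer.from_xml(f.read())
```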
class iobjectspy.mapping.LayerHeatmap(java_object)

Bases: iobjectspy._jsuperpy.mapping.Layer

Heat map layer class, which inherits from Layer class.

A heat map is a map representation that describes distribution, density, and change trends through a color gradient, so it can present data that is otherwise hard to grasp, such as density, frequency, or temperature, in a very intuitive way. A heat map layer can not only reflect the relative density of point features but also weight the point density by an attribute, so that each point's own weight contributes to the density. The heat map layer changes as the map zooms in or out; it is a dynamic raster surface. For example, given a heat map of visitor flow at tourist attractions across the country, zooming in lets the heat map reflect the distribution of visitor flow within a single province or local area.
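A configuration sketch using only the setters documented below. How the LayerHeatmap instance is obtained is outside this excerpt, so it is taken as a parameter; the concrete values and the 'visitors' field name are illustrative:

```python
def style_heatmap(heat_layer):
    """Configure a heat map layer with the setters documented on this
    class. Colors are given as (r, g, b) tuples, which
    set_max_color/set_min_color accept per their documented signatures."""
    heat_layer.set_kernel_radius(50)         # kernel radius, screen coordinates
    heat_layer.set_max_color((255, 0, 0))    # high point-density areas in red
    heat_layer.set_min_color((0, 0, 255))    # low point-density areas in blue
    heat_layer.set_intensity(0.5)            # share of MaxColor in the ramp
    heat_layer.set_fuzzy_degree(0.6)         # blur of the color gradient
    heat_layer.set_weight_field('visitors')  # hypothetical weight field name
    return heat_layer
```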

get_colorset()

Return the color set used to display the current heat map.

Returns:The color set used to display the current heat map
Return type:Colors
get_fuzzy_degree()

Return the blur degree of the color gradient in the heat map.

Returns:The blur degree of the color gradient in the heat map
Return type:float
get_intensity()

Return the intensity. The high point-density color (MaxColor) and the low point-density color (MinColor) together determine the gradient color ramp of the heat map, and the intensity determines the proportion of MaxColor in that ramp: the greater the value, the larger the share of the high-density color.

Returns:The proportion of the high point-density color (MaxColor) in the gradient color ramp determined by MaxColor and MinColor
Return type:float
get_kernel_radius()

Return the core radius used to calculate the density. The unit is: screen coordinates.

Returns:core radius used to calculate density
Return type:int
get_max_color()

Return the color of high dot density, the heat map layer will determine the color scheme of the gradient by the high dot density color (MaxColor) and the low dot density color (MinColor).

Returns:color with high dot density
Return type:Color
get_max_value()

Return a maximum value. The grid between the maximum value (MaxValue) and the minimum value (MinValue) in the current heat map layer will be rendered using the color band determined by MaxColor and MinColor, Other grids larger than MaxValue will be rendered in MaxColor; and grids smaller than MinValue will be rendered in MinColor.

Returns:maximum
Return type:float
get_min_color()

Return the color of low point density, the heat map layer will determine the color scheme of the gradient by the high point density color (MaxColor) and the low point density color (MinColor).

Returns:low-density color
Return type:Color
get_min_value()

Return a minimum value. The grid between the maximum value (MaxValue) and the minimum value (MinValue) in the current heat map layer will be rendered using the color band determined by MaxColor and MinColor, Other grids larger than MaxValue will be rendered in MaxColor; and grids smaller than MinValue will be rendered in MinColor.

Returns:minimum value
Return type:float
get_weight_field()

Return the weight field. The heat map layer can not only reflect the relative density of point features, but also express the point density weighted according to the weight field, so as to consider the contribution of the weight of the point itself to the density.

Returns:weight field
Return type:str
set_colorset(colors)

Set the color set used to display the current heat map.

Parameters:colors (Colors) – The color set used to display the current heat map
Returns:self
Return type:LayerHeatmap
set_fuzzy_degree(value)

Set the blur degree of the color gradient in the heat map.

Parameters:value (float) – The blur degree of the color gradient in the heat map.
Returns:self
Return type:LayerHeatmap
set_intensity(value)

Set the intensity, i.e. the proportion of the high point-density color (MaxColor) in the gradient color ramp determined by MaxColor and MinColor. The larger the value, the greater the share of the high-density color in the ramp.

Parameters:value (float) – The proportion of the high point-density color (MaxColor) in the gradient color ramp determined by MaxColor and MinColor
Returns:self
Return type:LayerHeatmap
set_kernel_radius(value)

Set the kernel radius used to calculate density. The unit is screen coordinates. The kernel radius plays the following role in the heat map:

-The heat map builds a buffer around each discrete point according to the kernel radius value, in screen coordinates;

-Within each point's buffer, a progressive gray ramp (the full ramp being 0~255) is applied from the inside out, from light to dark;

-Because gray values can be superimposed (the larger the value, the brighter the color, appearing whiter in the gray ramp; in practice any channel of the ARGB model can serve as the superimposed gray value), areas crossed by more buffers accumulate larger gray values and therefore appear hotter;

-The superimposed gray value is then used as an index to map colors from a 256-color ribbon (for example, rainbow colors), recoloring the image to produce the heat map.

The larger the kernel radius, the smoother and more generalized the density raster; the smaller the value, the more detailed the generated raster.
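The buffer-superposition steps above can be sketched numerically. This toy version works on a 1-D row of pixels and uses a linear light-to-dark falloff; the real renderer works on a 2-D raster and its exact falloff function is not documented here:

```python
def stacked_gray(pixels, points, kernel_radius):
    """Superimpose a gray contribution from every point whose buffer
    covers a pixel: 255 at the point itself, fading linearly to 0 at the
    buffer edge. Overlapping buffers add up, so busier spots get hotter."""
    row = [0] * pixels
    for x in range(pixels):
        total = 0
        for p in points:
            d = abs(x - p)
            if d < kernel_radius:
                total += int(255 * (1 - d / kernel_radius))
        row[x] = min(total, 255)   # the gray channel saturates at 255
    return row

def colorize(gray_row, ribbon):
    """Use the stacked gray value as an index into a color ribbon
    (e.g. 256 rainbow colors) to recolor the row into a heat map."""
    scale = len(ribbon) - 1
    return [ribbon[g * scale // 255] for g in gray_row]
```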

Parameters:value (int) – Calculate the core radius of the density
Returns:self
Return type:LayerHeatmap
set_max_color(value)

Set the color of high dot density, the heat map layer will determine the color scheme of the gradient through the high dot density color (MaxColor) and the low dot density color (MinColor).

Parameters:value (Color or tuple[int,int,int]) – color of high dot density
Returns:self
Return type:LayerHeatmap
set_max_value(value)

Set a maximum value. The cells between the maximum value (MaxValue) and the minimum value (MinValue) in the current heat map layer are rendered with the color ramp determined by MaxColor and MinColor; cells larger than MaxValue are rendered with MaxColor, and cells smaller than MinValue with MinColor. If the maximum and minimum values are not specified, the system automatically calculates them from the current heat map layer.

Parameters:value (float) – maximum
Returns:self
Return type:LayerHeatmap
set_min_color(value)

Set the color of low dot density, the heat map layer will determine the color scheme of the gradient by the high dot density color (MaxColor) and the low dot density color (MinColor).

Parameters:value (Color or tuple[int,int,int]) – low-density color
Returns:self
Return type:LayerHeatmap
set_min_value(value)

Set a minimum value. The grid between the maximum value (MaxValue) and the minimum value (MinValue) in the current heat map layer will be rendered using the color band determined by MaxColor and MinColor, Other grids larger than MaxValue will be rendered in MaxColor; and grids smaller than MinValue will be rendered in MinColor.

Parameters:value (float) – minimum value
Returns:self
Return type:LayerHeatmap
set_weight_field(value)

Set the weight field. The heat map layer can not only reflect the relative density of point features but also weight the point density by this field, so that each point's own weight contributes to the density. The superposition of the point buffers determined by the kernel radius (KernelRadius) determines the heat distribution density, while the weight determines each point's influence on that density: if the original influence coefficient of a point's buffer is 1 and the point's weight value is 10, then after the weight is introduced the buffer's influence coefficient becomes 1 * 10 = 10, and likewise for the other point buffers.

After the weight is introduced, a new superimposed gray-value index is obtained and colored with the specified color ribbon, producing the weighted heat map.

Parameters:value (str) – Weight field. The heat map layer can not only reflect the relative density of point features, but also express the point density weighted according to the weight field, so as to consider the contribution of the weight of the point itself to the density.
Returns:self
Return type:LayerHeatmap
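The coefficient arithmetic described for set_weight_field (buffer influence = base coefficient × point weight) is simple but worth stating as code; both helpers are purely illustrative:

```python
def weighted_influence(base_coefficient, weight):
    """A buffer whose original influence coefficient is 1 and whose
    point weight is 10 ends up with coefficient 1 * 10 = 10."""
    return base_coefficient * weight

def weighted_gray(contributions):
    """Superimpose (gray, weight) pairs: each point's gray contribution
    is scaled by its weight before the values are stacked, giving the
    new gray-value index used to color the weighted heat map."""
    return sum(gray * weight for gray, weight in contributions)
```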
class iobjectspy.mapping.LayerGridAggregation(java_object)

Bases: iobjectspy._jsuperpy.mapping.Layer

Grid aggregation map layer class, which inherits from the Layer class.

get_colorset()

Return the color set used to display the grid aggregation map. The grid aggregation map determines the gradient color scheme, sorts the grid cells by their statistical values, and renders the cells accordingly.

Returns:the color set used to display the grid aggregation map
Return type:Colors
get_grid_height()

Return the height of the rectangular grid. The unit is: screen coordinates.

Returns:the height of the rectangular grid
Return type:int
get_grid_type()

Return the grid type of the grid aggregation graph

Returns:grid type of grid aggregation graph
Return type:LayerGridAggregationType
get_grid_width()

Return the side length of a hexagonal grid, or the width of a rectangular grid. The unit is: screen coordinates.

Returns:the side length of a hexagonal grid, or the width of a rectangular grid
Return type:int
get_label_style()

Return the style of the statistical value label in the grid cell.

Returns:The style of the statistical value label in the grid cell.
Return type:TextStyle
get_line_style()

Return the style of the rectangular border line of the grid cell.

Returns:the style of the rectangle border line of the grid cell
Return type:GeoStyle
get_max_color()

Return the color corresponding to the maximum value of the grid cell statistics. The grid aggregation map will determine the color scheme of the gradient through MaxColor and MinColor , and then sort the grid cells based on the size of the grid cell statistics, and perform color rendering on the grid cells.

Returns:the color corresponding to the maximum value of the grid cell statistics
Return type:Color
get_min_color()

Return the color corresponding to the minimum value of the grid cell statistics. The grid aggregation graph will determine the color scheme of the gradient through MaxColor and MinColor, and then sort the grid cells based on the size of the grid cell statistics, and perform color rendering on the grid cells.

Returns:the color corresponding to the minimum value of the grid cell statistics
Return type:Color
get_original_point_style()

Return the style used to display the original point data. When the grid aggregation map is zoomed in to a large scale, the aggregated grid effect is no longer shown and the original point data is displayed instead.

Returns:The style of point data display.
Return type:GeoStyle
get_weight_field()

Return the weight field. By default the statistical value of each grid cell of the grid aggregation map is the number of point objects falling in the cell; alternatively, point weight information can be introduced so that the weighted sum of the points in the cell is used as the cell's statistical value.

Returns:weight field
Return type:str
is_show_label()

Whether to display grid cell labels

Returns:Whether to display the grid cell label, true means to display; false means not to display.
Return type:bool
set_colorset(colors)

Set the color set used to display the grid aggregation map. The grid aggregation map determines the gradient color scheme, sorts the grid cells by their statistical values, and renders the cells accordingly.

Parameters:colors (Colors) – the color set used to display the grid aggregation map
Returns:self
Return type:LayerGridAggregation
set_grid_height(value)

Set the height of the rectangular grid. The unit is: screen coordinates.

Parameters:value (int) – the height of the rectangular grid
Returns:self
Return type:LayerGridAggregation
set_grid_type(value)

Set the grid type of the grid aggregation graph, which can be a rectangular grid or a hexagonal grid.

Parameters:value (LayerGridAggregationType or str) – Grid type of grid aggregation graph
Returns:self
Return type:LayerGridAggregation
set_grid_width(value)

Set the side length of the hexagonal grid or the width of the rectangular grid. The unit is: screen coordinates.

Parameters:value (int) – the side length of a hexagonal grid, or the width of a rectangular grid
Returns:self
Return type:LayerGridAggregation
set_label_style(value)

Set the style of the statistical value label in the grid cell.

Parameters:value (TextStyle) – The style of the statistical value label in the grid cell.
Returns:self
Return type:LayerGridAggregation
set_line_style(value)

Set the style of the rectangular border line of the grid cell.

Parameters:value (GeoStyle) – the style of the rectangle border line of the grid cell
Returns:self
Return type:LayerGridAggregation
set_max_color(value)

Set the color corresponding to the maximum value of the grid unit statistical value. The grid aggregation graph will determine the color scheme of the gradient through MaxColor and MinColor, and then sort the grid unit based on the size of the grid unit statistical value to perform color rendering on the grid unit.

Parameters:value (Color or tuple[int,int,int]) – the color corresponding to the maximum value of the grid cell statistics
Returns:self
Return type:LayerGridAggregation
set_min_color(value)

Set the color corresponding to the minimum value of the grid unit statistical value. The grid aggregation map will determine the gradient color scheme through MaxColor and MinColor, and then sort the grid unit based on the size of the grid unit statistical value to perform color rendering of the grid unit.

Parameters:value (Color or tuple[int,int,int]) – the color corresponding to the minimum value of the grid cell statistics
Returns:self
Return type:LayerGridAggregation
set_original_point_style(value)

Set the style used to display the original point data. When the grid aggregation map is zoomed in to a large scale, the aggregated grid effect is no longer shown and the original point data is displayed instead.

Parameters:value (GeoStyle) – point data display style
Returns:self
Return type:LayerGridAggregation
set_show_label(value)

Set whether to display grid cell labels.

Parameters:value (bool) – Indicates whether to display the grid cell label, true means to display; false means not to display.
Returns:self
Return type:LayerGridAggregation
set_weight_field(value)

Set the weight field. By default the statistical value of each grid cell of the grid aggregation map is the number of point objects falling in the cell; alternatively, point weight information can be introduced so that the weighted sum of the points in the cell is used as the cell's statistical value.

Parameters:value (str) – Weight field used to compute the statistical value of each grid cell.
Returns:self
Return type:LayerGridAggregation
update_data()

Automatically update the current grid aggregation map when the data changes.

Returns:self
Return type:LayerGridAggregation
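A configuration sketch using only the setters documented above. The 'HEXAGON' grid type string, the sizes, and the 'passengers' weight field are illustrative values (set_grid_type accepts a LayerGridAggregationType or str per its documentation):

```python
def style_grid_aggregation(grid_layer):
    """Configure a grid aggregation layer with the setters documented
    on this class. All concrete values are illustrative."""
    grid_layer.set_grid_type('HEXAGON')        # or a LayerGridAggregationType
    grid_layer.set_grid_width(40)              # hexagon side, screen coordinates
    grid_layer.set_max_color((200, 0, 0))      # cells with the largest statistics
    grid_layer.set_min_color((255, 240, 200))  # cells with the smallest statistics
    grid_layer.set_show_label(True)            # draw the statistical value per cell
    grid_layer.set_weight_field('passengers')  # hypothetical weight field
    return grid_layer
```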
class iobjectspy.mapping.ThemeType

Bases: iobjectspy._jsuperpy.enums.JEnum

The thematic map type constant.

Both vector data and raster data can be used to make thematic maps. The difference is that thematic maps of vector data are based on attribute information in the attribute table, while those of raster data are based on pixel values. For vector data (point, line, region, and composite datasets), SuperMap provides unique value, range, dot density, statistical chart, graduated symbol, label, and custom thematic maps; for raster data (grid datasets), it provides grid range and grid unique value thematic maps.

Variables:
  • ThemeType.UNIQUE – Unique value thematic map. In a unique value map, elements with the same value of the thematic variable are grouped into one category, and each category is given its own rendering style, such as a color or symbol, so that different categories can be distinguished.
  • ThemeType.RANGE – Range thematic map. In a range map, the values of the thematic variable are divided into multiple range segments, and elements or records in the same segment are displayed with the same color or symbol style. The available segmentation methods are equal interval, square root, standard deviation, logarithmic, and equal count segmentation. The thematic variable on which a range thematic map is based must be numeric.
  • ThemeType.GRAPH

    Statistical thematic map. A statistical thematic map draws a chart for each element or record to reflect the values of the corresponding thematic variables. Statistical maps can be based on several thematic variables, that is, the values of multiple attributes can be plotted on one chart. The chart types currently provided are: area, step, line, point, bar, 3D bar, pie, 3D pie, rose, 3D rose, stacked bar and 3D stacked bar charts.

  • ThemeType.GRADUATEDSYMBOL

    Graduated symbols thematic map. A graduated symbols map uses the size of a symbol to express the value of the field or expression (the thematic variable) of each element or record. When features are drawn with graduated symbols, elements or records whose values fall in the same range segment are drawn with symbols of the same size. The thematic variable on which a graduated symbols map is based must be numeric.

  • ThemeType.DOTDENSITY

    Dot density thematic map. A dot density map uses the number or density of points to reflect the value of the thematic variable over a region: each point represents a fixed quantity, so the number of points in a region multiplied by that quantity gives the thematic value of the region. The more points there are, the denser or more concentrated the phenomenon reflected by the data is in that region. The thematic variable on which a dot density map is based must be numeric.

  • ThemeType.LABEL

    Label thematic map. A label map displays the data of the attribute table directly on the layer as text; it is essentially the labeling of the layer.

  • ThemeType.CUSTOM – Custom thematic map. With a custom thematic map, users can set a specific style for each element or record, store these settings in one or more fields, and then make the thematic map based on those fields. In SuperMap, symbols, line styles and fill styles all have corresponding ID values, and the color value, symbol size, line width, etc. can all be set with numeric data. Custom thematic maps therefore let users express elements and data very freely.
  • ThemeType.GRIDRANGE

    Grid range thematic map. In a grid range map, all cell values of the grid are divided into several range segments, and cells in the same segment are displayed with the same color. The available segmentation methods are the equidistant, square-root and logarithmic methods.

  • ThemeType.GRIDUNIQUE

    Grid unique values map. In the grid unique values map, the pixels with the same pixel value in the grid are grouped into one category, and a color is set for each category to distinguish different categories. For example, in a land use classification map, pixels with the same land use type have the same value and will be rendered with the same color to distinguish different land use types.

CUSTOM = 8
DOTDENSITY = 5
GRADUATEDSYMBOL = 4
GRAPH = 3
GRIDRANGE = 12
GRIDUNIQUE = 11
LABEL = 7
RANGE = 2
UNIQUE = 1
class iobjectspy.mapping.Theme

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Thematic map class, which is the base class of all thematic maps. All thematic map classes, such as unique values thematic maps, label thematic maps, and range thematic maps, are inherited from this class.

from_xml(xml)

Import thematic map information from an XML string. In SuperMap, the style settings of a thematic map can be exported as an XML string that records all settings related to that thematic map. For example, the XML string of a label thematic map records the thematic map type, the visible scales, the label style settings, whether labels flow with the map, whether they automatically avoid each other, all other label style settings, and the field or expression used to make the label map. Such an XML string can be used to import and set up a thematic map.

Note that the information recorded in the XML must correspond to the type of the current object. For example, if label map information is recorded in the XML, the current object must be a ThemeLabel object. If you do not know the type of thematic map recorded in the XML, use Theme.make_from_xml() to construct a new thematic map object from it.

Parameters:xml (str) – XML string or file path containing thematic map information
Returns:Return True if the import is successful, otherwise False.
Return type:bool
static make_from_xml(xml)

Import the thematic map information and construct a new thematic map object.

Parameters:xml (str) – XML string or file path containing thematic map information
Returns:thematic map object
Return type:Theme
to_xml()

Export thematic map information as XML string.

Returns:XML string of thematic map information
Return type:str
type

ThemeType – Thematic map type

class iobjectspy.mapping.ThemeUniqueItem(value, style, caption, visible=True)

Bases: object

The sub-item class of unique values map.

A unique values map classifies elements with the same thematic value into one category and sets a rendering style for each category; each category is one thematic map item. For example, suppose a unique values map is used to make an administrative division map, with the Name field, which holds the name of each province or municipality directly under the Central Government, as the thematic variable. If the field has 5 distinct values in total, the map has 5 thematic map sub-items, and within each sub-item every element has the same Name value.

Parameters:
  • value (str) – the single value of the unique values map item
  • style (GeoStyle) – The display style of each unique values map item
  • caption (str) – The name of the unique values map item
  • visible (bool) – Whether the unique values map item is visible
caption

str – the name of each unique value map item

set_caption(value)

Set the name of each unique values map item

Parameters:value (str) – The name of each unique values map item
Returns:self
Return type:ThemeUniqueItem
set_style(value)

Set the display style of each unique values map item

Parameters:value (GeoStyle) – The display style of each unique values map item
Returns:self
Return type:ThemeUniqueItem
set_value(value)

Set the unique value of the unique values map item

Parameters:value (str) – the single value of the unique values map item
Returns:self
Return type:ThemeUniqueItem
set_visible(value)

Set whether the unique values map item is visible

Parameters:value (bool) – Whether the unique values map item is visible
Returns:self
Return type:ThemeUniqueItem
style

GeoStyle – the display style of each unique value map item

value

str – the unique value of the unique value map item

visible

bool – Whether the unique values map item is visible

class iobjectspy.mapping.ThemeUnique(expression=None, default_style=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

Unique value thematic map class.

Elements with the same value of a field or expression are displayed in the same style, so different categories can be distinguished. For example, in region data representing land, the field for land use type may take values such as grassland, woodland, residential land and arable land. When a unique values map is used for rendering, each land use type is given its own color or fill style, so the distribution area and extent of each type can be seen. Unique values maps can be used for geological maps, landform maps, vegetation maps, land use maps, political and administrative division maps, natural regionalization maps, economic regionalization maps, etc. A unique values map emphasizes qualitative differences between phenomena and generally does not indicate quantitative characteristics; it is not recommended when the phenomena cross or overlap, as in, for example, ethnic distribution areas.

Note: If a connection with an external table has been established by means of Join or Link and the thematic variable uses fields of the external table, the thematic map layer must be adjusted with the Layer.set_display_filter() method when displaying the thematic map; otherwise the thematic map will fail to display.

The following code demonstrates the creation of a default unique values map through a dataset:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['zones']
>>> mmap = Map()
>>> theme = ThemeUnique.make_default(dt, 'zone', ColorGradientType.CYANGREEN)
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/unique_theme.png')

You can also create unique values map in the following ways:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['zones']
>>> mmap = Map()
>>> default_style = GeoStyle().set_fill_fore_color('yellow').set_fill_back_color('green').set_fill_gradient_mode('RADIAL')
>>> theme = ThemeUnique('zone', default_style)
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/unique_theme.png')

Or specify custom items:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['zones']
>>> mmap = Map()
>>>
>>> theme = ThemeUnique()
>>> theme.set_expression('zone')
>>> colors = [Color.gold(), Color.blueviolet(), Color.rosybrown(), Color.coral()]
>>> zone_values = dt.get_field_values(['zone'])['zone']
>>> for index, value in enumerate(zone_values):
...     theme.add(ThemeUniqueItem(value, GeoStyle().set_fill_fore_color(colors[index % 4]), str(index)))
>>>
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/unique_theme.png')
Parameters:
  • expression (str) – Unique values map field expression. The field or field expression used to make the unique values map. The field can be an attribute of the feature (such as the age or composition in a geological map), and the data type of its value can be numeric or character.
  • default_style (GeoStyle) – The default style of unique values map. Use this style to display the objects that are not listed in the sub-items of the unique values map. If not set, the default style of the layer will be used for display.
add(item)

Add sub-items of unique values map.

Parameters:item (ThemeUniqueItem) – unique values map item
Returns:self
Return type:ThemeUnique
clear()

Delete all the sub-items of the unique values map. After executing this method, all the sub-items of the unique values map are released and are no longer available.

Returns:self
Return type:ThemeUnique
expression

str – unique values map field expression. The field or field expression used to make the unique values map. The field can be an attribute of the element (such as the age or composition in a geological map), and the data type of its value can be numeric or character.

extend(items)

Add sub-items of unique values map in batches.

Parameters:items (list[ThemeUniqueItem] or tuple[ThemeUniqueItem]) – list of unique values map items
Returns:self
Return type:ThemeUnique
get_count()

Return the number of unique values map sub-items.

Returns:the number of unique values map sub-items
Return type:int
get_custom_marker_angle_expression()

Return the field expression used to control the rotation angle of the point symbol for each object in a point unique values map. The field in the field expression must be a numeric field. A field or field expression can be specified through this interface; a constant value can also be specified, in which case all thematic map items are rotated uniformly by the angle given by that value.

Returns:field expression
Return type:str
get_custom_marker_size_expression()

Return the field expression used to control the size of the point symbol for each object in a point unique values map. The field in the field expression must be a numeric field. A field or field expression can be specified through this interface; a constant value can also be specified, in which case all thematic map items are displayed uniformly at the size given by that value.

This setting is only valid for point unique values map.

Returns:field expression
Return type:str
get_default_style()

Return the default style of unique values map

Returns:The default style of unique values map.
Return type:GeoStyle
get_item(index)

Return the unique values map item of the specified serial number.

Parameters:index (int) – The serial number of the specified unique values map item.
Returns:unique values map item
Return type:ThemeUniqueItem
get_offset_x()

Return the horizontal offset of the object in the unique values map made by the point, line, and area layer relative to the original position.

Returns:The horizontal offset of the object in the unique values map created by the point, line, and area layer relative to the original position.
Return type:str
get_offset_y()

Return the vertical offset of the object in the unique values map made by the point, line, and area layer relative to the original position.

Returns:The vertical offset of the object in the unique values map created by the point, line, and area layer relative to the original position.
Return type:str
index_of(value)

Return the serial number of the sub-item with the given single value in the current unique values map sequence.

Parameters:value (str) – The given single value of the unique values map item.
Returns:the serial number of the sub-item with the given single value
Return type:int
insert(index, item)

Insert the given unique values map item into the position of the specified sequence number.

Parameters:
  • index (int) – The serial number of the specified unique values map sub-item sequence.
  • item (ThemeUniqueItem) – The unique values map item to be inserted.
Returns:

Return True if the insert is successful, otherwise False

Return type:

bool

is_default_style_visible()

Whether the default style of unique values map is visible

Returns:Whether the default style of unique values map is visible.
Return type:bool
is_offset_prj_coordinate_unit()

Get whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system

Returns:Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used.
Return type:bool
static make_default(dataset, expression, color_gradient_type=None, join_items=None)

Generate the default unique values map based on the given vector dataset and unique values map field expression.

Parameters:
  • dataset (DatasetVector or str) – vector dataset
  • expression (str) – Unique values map field expression.
  • color_gradient_type (str or ColorGradientType) – color gradient mode
  • join_items (list[JoinItem] or tuple[JoinItem]) – External table join items. If the created thematic map is to be added to a map as a layer, the thematic map layer needs the following setting: through the Layer.set_display_filter() method of the Layer object corresponding to the thematic map layer, whose parameter is a QueryParameter object, assign the external table join items (that is, the join_items parameter of this method) to the layer by calling the QueryParameter.set_join_items() method of that QueryParameter object, so that the thematic map is displayed correctly on the map.
Returns:

new unique values map

Return type:

ThemeUnique

static make_default_four_colors(dataset, color_field, colors=None)

Generate a default four-color unique values thematic map from the specified region dataset, color field name and colors. A four-color unique values thematic map is a map in which only four colors are used while region objects that share an edge never have the same color.

Note: When the region dataset is not very complex, four colors are enough to generate the four-color unique values thematic map; if the dataset is highly complex, the coloring result may contain five colors.

Parameters:
  • dataset (DatasetVector or str) – The specified region dataset. Since this method modifies the attribute information of the region dataset, the dataset must not be read-only.
  • color_field (str) – The name of the color field. The coloring field must be an integer field. It can be an existing attribute field of the region dataset or a new user-defined field. If it is an existing attribute field, its type must be integer, and the system will overwrite the values of the field, assigning 1, 2, 3 and 4. If it is a user-defined field with a valid name, the system first creates the field in the region dataset and then assigns the values 1, 2, 3 and 4. The four values of the coloring field represent four different colors, and the four-color thematic map is generated according to the values of this field.
  • colors (Colors) – The colors used to make the thematic map. The number of colors passed in is not restricted; for example, if only one color is passed in, the system automatically fills in the remaining colors needed when the thematic map is generated.
Returns:

Four-color unique value thematic map

Return type:

ThemeUnique
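
The coloring behaviour described above can be illustrated with a plain-Python sketch (a hypothetical greedy coloring over a polygon adjacency graph, not the iobjectspy implementation): neighbouring polygons never share a color value, mutually adjacent polygons force all four values, and on a sufficiently complex adjacency graph a greedy pass may need a fifth.

```python
# Pure-Python sketch (not the iobjectspy implementation): greedy
# coloring of a polygon adjacency graph. Colors are the integers
# 1, 2, 3, 4, ... matching the values written to the color field.

def greedy_color(adjacency):
    """adjacency: dict mapping polygon id -> set of neighbouring ids."""
    color = {}
    for poly in adjacency:                      # visit polygons in order
        used = {color[n] for n in adjacency[poly] if n in color}
        c = 1
        while c in used:                        # smallest color not used
            c += 1                              # by any colored neighbour
        color[poly] = c
    return color

# Four regions sharing borders pairwise need all four colors:
adjacency = {
    'A': {'B', 'C', 'D'},
    'B': {'A', 'C', 'D'},
    'C': {'A', 'B', 'D'},
    'D': {'A', 'B', 'C'},
}
colors = greedy_color(adjacency)
print(max(colors.values()))   # -> 4
```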

remove(index)

Delete a sub-item of the unique values map with a specified serial number.

Parameters:index (int) – The serial number of the specified unique values map sub-item sequence to be deleted.
Returns:Return True if the deletion is successful, otherwise False
Return type:bool
reverse_style()

Display the styles of items in the unique values map in reverse order.

Returns:self
Return type:ThemeUnique
set_custom_marker_angle_expression(value)

Set the field expression used to control the rotation angle of the point symbol for each object in a point unique values map. The field in the field expression must be a numeric field. A field or field expression can be specified through this interface; a constant value can also be specified, in which case all thematic map items are rotated uniformly by the angle given by that value.

This setting is only valid for point unique values map.

Parameters:value (str) – field expression
Returns:self
Return type:ThemeUnique
set_custom_marker_size_expression(value)

Set the field expression used to control the size of the point symbol for each object in a point unique values map. The field in the field expression must be a numeric field. A field or field expression can be specified through this interface; a constant value can also be specified, in which case all thematic map items are displayed uniformly at the size given by that value.

This setting is only valid for point unique values map.

Parameters:value (str) – field expression
Returns:self
Return type:ThemeUnique
set_default_style(style)

Set the default style of the unique values map. Use this style to display the objects that are not listed in the sub-items of the unique values map. If not set, the default style of the layer will be used for display.

Parameters:style (GeoStyle) –
Returns:self
Return type:ThemeUnique
set_default_style_visible(value)

Set whether the default style of the unique values map is visible.

Parameters:value (bool) – Whether the default style of unique values map is visible
Returns:self
Return type:ThemeUnique
set_expression(value)

Set the field expression of the unique values map. The field or field expression used to make unique values map. The field can be a certain attribute of the feature (such as the age or composition in a geological map), and the data type of its value can be numeric or character.

Parameters:value (str) – Specify the unique value map field expression
Returns:self
Return type:ThemeUnique
set_offset_prj_coordinate_unit(value)

Set whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used. For details, check the set_offset_x() and set_offset_y() interfaces.

Parameters:value (bool) – Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system
Returns:self
Return type:ThemeUnique
set_offset_x(value)

Set the horizontal offset of the object in the unique values map made by point, line, and area layer relative to the original position.

The unit of the offset is determined by set_offset_prj_coordinate_unit(): True means the geographic coordinate unit is used, otherwise the device unit is used.

Parameters:value (str) – The horizontal offset of the object in the unique values map made by point, line, and area layer relative to the original position.
Returns:self
Return type:ThemeUnique
set_offset_y(value)

Set the vertical offset of the objects in the unique values map made by point, line, and area layers relative to the original position.

The unit of the offset is determined by set_offset_prj_coordinate_unit(): True means the geographic coordinate unit is used, otherwise the device unit is used.

Parameters:value (str) – The vertical offset of the object in the unique values map made by point, line, and area layer relative to the original position.
Returns:self
Return type:ThemeUnique
class iobjectspy.mapping.ThemeRangeItem(start, end, caption, style=None, visible=True)

Bases: object

The sub-item class of the range map. In the range map, the value of the range field expression is divided into multiple range segments according to a certain range mode. Each segment has its start value, end value, name and style, etc. The range represented by each segment is [Start, End).

Parameters:
  • start (float) – The starting value of the range map item.
  • end (float) – The end value of the range map item.
  • caption (str) – The name of the sub-item of the range map.
  • style (GeoStyle) – The display style of the sub-items of the range map.
  • visible (bool) – Whether the sub item in the range map is visible.
caption

str – The name of the item in the range map.

end

float – the end value of the range thematic map item

set_caption(value)

Set the name of the item in the range map.

Parameters:value (str) – The name of the item in the range map.
Returns:self
Return type:ThemeRangeItem
set_end(value)

Set the end value of the range map item.

Parameters:value (float) – the end value of the range map item
Returns:self
Return type:ThemeRangeItem
set_start(value)

Set the starting value of the range map item.

If the sub-item is the first one in the sequence, its starting value is the minimum value of the segmentation; if the sub-item's serial number is greater than or equal to 1, its starting value must equal the end value of the previous sub-item, otherwise the system throws an exception.

Parameters:value (float) – the starting value of the range map item
Returns:self
Return type:ThemeRangeItem
set_style(value)

Set the corresponding style of each item in the ranges map.

Parameters:value (GeoStyle) – The corresponding style of each range map item in the ranges map.
Returns:self
Return type:ThemeRangeItem
set_visible(value)

Set whether the sub items in the ranges map are visible.

Parameters:value (bool) – Specify whether the sub item in the range map is visible.
Returns:self
Return type:ThemeRangeItem
start

float – the starting value of the range thematic map item

style

GeoStyle – Return the corresponding style of each range map item in the range map.

visible

bool – Return whether the child item in the range map is visible.
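
The [Start, End) convention of range items can be sketched in plain Python (`find_item` is a hypothetical helper for illustration, not part of iobjectspy): a value equal to a segment's end value falls into the next segment.

```python
# Pure-Python sketch of [start, end) range item lookup
# (illustration only, not the iobjectspy implementation).

def find_item(breakpoints, value):
    """Return the index of the [start, end) segment containing value.

    breakpoints -- sorted segment boundaries, e.g. [0, 20, 40, 60]
                   describing segments [0,20), [20,40), [40,60).
    Returns -1 when the value falls outside every segment.
    """
    for i in range(len(breakpoints) - 1):
        if breakpoints[i] <= value < breakpoints[i + 1]:
            return i
    return -1

bounds = [0, 20, 40, 60]
print(find_item(bounds, 20))    # boundary value -> next segment, index 1
print(find_item(bounds, 19.9))  # -> index 0
print(find_item(bounds, 60))    # outside every segment -> -1
```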

class iobjectspy.mapping.RangeMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the range mode of the range map.

In a range map, the values of the field or expression used as the thematic variable are divided into several range segments according to a certain segmentation method, and each element or record is assigned to one of the segments according to its field or expression value. Elements or records in the same segment are displayed in the same style. Range thematic maps are generally used to show the quantity or degree characteristics of continuously distributed phenomena, such as the distribution of precipitation or of soil erosion intensity, reflecting the differences in the concentration or development level of the phenomenon between regions.

SuperMap component products provide a variety of segmentation methods, including the equidistant, square-root, standard-deviation, logarithmic, equal-count and custom-interval methods. All of these methods segment the values by some kind of distance, so the thematic variable on which a range map is based must be numeric.

Variables:
  • RangeMode.EUQALINTERVAL

    Equidistant segmentation. Equidistant segmentation divides the range between the minimum and maximum values of the field or expression used as the thematic variable into the user-specified number of equal-length segments, so every segment has the same length. The distance interval of the equidistant segments is calculated as:

    d = (Vmax - Vmin) / count

    Among them, d is the distance interval of the segment, Vmax is the maximum value of the thematic variable, Vmin is the minimum value of the thematic variable, and count is the number of segments specified by the user. Then the calculation formula for the segment point of each segment is:

    Vi = Vmin + i * d

    Among them, Vi is the value of the segment point, and i is an integer from 0 to count, representing each segment point. When i equals 0, Vi is Vmin; when i equals count, Vi is Vmax.

    For example, if the chosen thematic variable takes values from 0 to 10 and the equidistant method is used to divide it into 4 segments, the segments are 0-2.5, 2.5-5, 5-7.5 and 7.5-10. Note that each segment is left-closed and right-open, that is, "[" and ")" are used, so a value equal to a segment point is assigned to the next segment.

    Note: According to this segmentation method, it is very likely that there is no value in a segment, that is, there are 0 records or elements that fall into the segment.

  • RangeMode.SQUAREROOT

    Square root segmentation. The square root segmentation method is essentially equidistant segmentation of the square roots of the original data: it first takes the square root of every value and segments the results equidistantly to obtain the segment points of the transformed data, then squares those segment points to obtain the segment points of the original data. As with equidistant segmentation, it is quite possible that no value falls in some segment, that is, 0 records or elements fall into it. This method suits certain data: when the difference between the minimum and the maximum is large, equidistant segmentation may need many segments to distinguish the values, while square root segmentation compresses the differences between the data and can segment more accurately with fewer segments. The distance interval for the equidistant segmentation of the square roots of the thematic variable is:

    d = (sqrt(Vmax) - sqrt(Vmin)) / count

    Among them, d is the distance interval of the segment, Vmax is the maximum value of the thematic variable, Vmin is the minimum value of the thematic variable, and count is the number of segments specified by the user. Then the calculation formula for the segment points of thematic variables is:

    Vi = (sqrt(Vmin) + i * d)^2

    Among them, Vi is the value of the segment point, and i is an integer from 0 to count, representing each segment point; when i equals 0, Vi is Vmin, and when i equals count, Vi is Vmax. Note: this method is not suitable if the data contains negative numbers.

  • RangeMode.STDDEVIATION

    Standard deviation segmentation. The standard deviation method reflects how far each element's attribute value deviates from the average. It first calculates the mean and the standard deviation of the thematic variable and segments on that basis. Each segment is one standard deviation long; the middle segment is centered on the mean, and its left and right segment points are each 0.5 standard deviations from the mean. If the mean of the thematic variable is mean and the standard deviation is std, the segmentation is as follows:

    [figure: standard deviation segmentation, segments one std wide centered on the mean]

    For example, if the thematic variable takes values between 0 and 100 with a mean of 50 and a standard deviation of 20, the segments are 0-20, 20-40, 40-60, 60-80 and 80-100, 5 segments in total. Elements that fall in different segments are given different display styles.

    Note: The number of segments of the standard deviation is determined by the calculation result and cannot be controlled by the user.

  • RangeMode.LOGARITHM

    Logarithmic segmentation. The principle of the logarithmic segmentation method is essentially the same as that of the square root method; the difference is that instead of square roots it uses the base-10 logarithms of the original data. It first takes the logarithm of every value and segments the results equidistantly to obtain the segment points of the transformed data, then raises 10 to the power of each of those segment points to obtain the segment points of the original data, giving the segmentation scheme. It is applicable when the difference between the maximum and the minimum is so great that equidistant segmentation gives unsatisfactory results. The logarithmic method compresses the data more strongly than the square root method, making the differences between the values smaller and optimizing the segmentation result. The distance interval for the equidistant segmentation of the logarithms of the thematic variable is:

    d = (log10(Vmax) - log10(Vmin)) / count

    Among them, d is the distance interval of the segment, Vmax is the maximum value of the thematic variable , Vmin is the minimum value of the thematic variable, and count is the number of segments specified by the user. Therefore, the formula for calculating the segment points of thematic variables is:

    Vi = 10^(log10(Vmin) + i * d)

    Among them, Vi is the value of the segment point, and i is an integer from 0 to count, representing each segment point. When i equals 0, Vi is Vmin; when i equals count, Vi is Vmax. Note: this method is not suitable if the data contains zero or negative numbers.

  • RangeMode.QUANTILE

    Equal count segmentation. Equal count segmentation tries to keep the number of objects in each segment as equal as possible; the number is determined by the user-specified number of segments and the actual number of elements. When the elements cannot be divided evenly, the extra objects go into the last segments of the result. For example, with 9 objects divided into 9 segments, each segment holds 1 object; divided into 8 segments, the first 7 segments hold 1 object each and the 8th holds 2; divided into 7 segments, the first 5 segments hold 1 object each and the 6th and 7th hold 2 each. This segmentation method is suitable for linearly distributed data. The number of elements in each segment is calculated as:

    n = N / count

    Among them, n is the number of elements in each segment, N is the total number of elements to be segmented, and count is the number of segments specified by the user. When the calculation result of n is not an integer, the rounding method is used.
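A plain-Python sketch of the distribution described above (the function name is hypothetical, not an iobjectspy API): each segment gets the integer part of N / count objects, and the remainder goes to the last segments, matching the 9-object examples:

```python
def quantile_counts(total, count):
    # base objects per segment, plus how many segments get one extra
    base, extra = divmod(total, count)
    # the extra objects are placed in the last `extra` segments
    return [base + 1 if i >= count - extra else base for i in range(count)]

print(quantile_counts(9, 8))  # [1, 1, 1, 1, 1, 1, 1, 2]
print(quantile_counts(9, 7))  # [1, 1, 1, 1, 1, 2, 2]
```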

  • RangeMode.CUSTOMINTERVAL

    Custom segmentation. In custom segmentation, the length of each segment is specified by the user, that is, segmentation is performed at a fixed interval distance. The number of segments is calculated by SuperMap from the specified interval distance and the maximum and minimum values of the thematic variable. The calculation formula for each segment point is:

    Vi = Vmin + i × d

    Among them, Vi is the value of each segment point, Vmin is the minimum value of the thematic variable, d is the interval distance specified by the user, count is the calculated number of segments, and i is an integer from 0 to count-1 denoting each segment. When i is equal to 0, Vi is Vmin; when i is equal to count-1, Vi is Vmax.
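A plain-Python sketch of the custom-interval segment points (hypothetical helper, not the iobjectspy API): start at the minimum and step by the user-specified distance, with the final point clamped to the maximum:

```python
def custom_interval_breaks(v_min, v_max, d):
    breaks = []
    v = v_min
    while v < v_max:          # Vi = Vmin + i * d for each full step
        breaks.append(v)
        v += d
    breaks.append(v_max)      # final point clamped to Vmax
    return breaks

print(custom_interval_breaks(0, 10, 3))  # [0, 3, 6, 9, 10]
```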

  • RangeMode.NONE – Null Range Mode
CUSTOMINTERVAL = 5
EUQALINTERVAL = 0
LOGARITHM = 3
NONE = 6
QUANTILE = 4
SQUAREROOT = 1
STDDEVIATION = 2
class iobjectspy.mapping.ThemeRange(expression=None, items=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

Range thematic map class. The attribute value of the field is segmented according to the provided segmentation method, and the display style of the corresponding object is given according to the segment range where each attribute value is located.

note: When making a range thematic map, if no style is set for the head and tail intervals and no default style is set, then whether segments are added to the head or the tail, the head and tail intervals default to the style of the first segment added by the user. For example: there are 5 segments in total, and the add() method adds three segments [0, 1), [1, 2), [2, 4) in sequence; then the head interval (-∞, 0) and the tail interval [4, +∞) use the style of [0, 1).

The following code demonstrates the creation of a default range thematic map through a dataset:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['zones']
>>> mmap = Map()
>>> theme = ThemeRange.make_default(dt,'SmID', RangeMode.EUQALINTERVAL, 6, ColorGradientType.RAINBOW)
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/range_theme.png')

You can also create a range thematic map by adding segments manually:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['zones']
>>> mmap = Map()
>>> theme = ThemeRange('SmID')
>>> theme.add(ThemeRangeItem(1, 20, GeoStyle().set_fill_fore_color('gold'), '1'), is_add_to_head=True)
>>> theme.add(ThemeRangeItem(20, 50, GeoStyle().set_fill_fore_color('rosybrown'), '2'), is_add_to_head=False)
>>> theme.add(ThemeRangeItem(50, 90, GeoStyle().set_fill_fore_color('coral'), '3'), is_add_to_head=False)
>>> theme.add(ThemeRangeItem(90, 160, GeoStyle().set_fill_fore_color('crimson'), '4'), is_add_to_head=False)
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/range_theme.png')
Parameters:
  • expression (str) – segment field expression.
  • items (list[ThemeRangeItem] or tuple[ThemeRangeItem]) – List of sub-items of range thematic map
add(item, is_normalize=True, is_add_to_head=False)

Add a range thematic map item

Parameters:
  • item (ThemeRangeItem) – sub-item of range thematic map
  • is_normalize (bool) – Indicates whether to normalize. When is_normalize is True, if the item value is illegal, it is normalized. When is_normalize is False, if the item value is illegal, an exception will be thrown.
  • is_add_to_head (bool) – Whether to add to the beginning of the segment list. If it is False, it is added to the end of the segment list.
Returns:

self

Return type:

ThemeRange

clear()

Delete all the sub-items of the range map. After executing this method, all the sub-items of the range map are released and are no longer available.

Returns:self
Return type:ThemeRange
expression

str – segment field expression

extend(items)

Add sub-items of range thematic map in batch

Parameters:items (list[ThemeRangeItem] or tuple[ThemeRangeItem]) – List of sub-items of range thematic map
Returns:self
Return type:ThemeRange
get_count()

Return the number of ranges in the ranges map

Returns:the number of ranges in the range map
Return type:int
get_custom_interval()

Get custom segment length

Returns:custom segment length
Return type:float
get_item(index)

Return the range thematic map item of the specified serial number

Parameters:index (int) – the serial number of the specified range map
Returns:range thematic map item
Return type:ThemeRangeItem
get_offset_x()

Get the horizontal offset

Returns:horizontal offset
Return type:str
get_offset_y()

Get the vertical offset

Returns:vertical offset
Return type:str
get_precision()

Get the rounding precision of the range thematic map.

Returns:rounding precision
Return type:float
index_of(value)

Return the serial number of the specified range field value in the current range sequence in the ranges map.

Parameters:value (str) – The value of the given segment field.
Returns:The sequence number of the segment field value in the segment sequence. If the value does not exist, -1 is returned.
Return type:int
is_offset_prj_coordinate_unit()

Get whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system

Returns:Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used.
Return type:bool
static make_default(dataset, expression, range_mode, range_parameter, color_gradient_type=None, range_precision=0.1, join_items=None)

According to the given vector dataset, segment field expression, segment mode, corresponding segment parameters, color gradient filling mode, external connection entries and rounding precision, generate the default segment thematic map.

Note: When making thematic maps by connecting external tables, for UDB datasources, the connection type does not support inner joins, that is, the JoinType.INNERJOIN connection type is not supported.

Parameters:
  • dataset (DatasetVector or str) – Vector dataset.
  • expression (str) – segment field expression
  • range_mode (str or RangeMode) – segment mode
  • range_parameter (float) – segment parameter. When the segmentation mode is equidistant segmentation, square root segmentation, logarithmic segmentation, or equal count segmentation, this parameter is the number of segments; when the segmentation mode is the standard deviation segmentation method, this parameter does not work; when the segmentation mode is custom distance, this parameter represents the custom distance.
  • color_gradient_type (ColorGradientType or str) – color gradient mode
  • range_precision (float) – The precision of the segment value. For example, the calculated segment value is 13.02145, and the segment accuracy is 0.001, the segment value is 13.021
  • join_items (list[JoinItem] or tuple[JoinItem]) – External table join items. If you want to add the created thematic map to a map as a layer, you also need to configure the thematic map layer: call the Layer.set_display_filter() method of the Layer object corresponding to the thematic map layer, passing a QueryParameter object whose join items, set via QueryParameter.set_join_items(), are the same external table join items as the join_items parameter of this method. Only then will the thematic map display correctly on the map.
Returns:

result range thematic map object

Return type:

ThemeRange

range_mode

RangeMode – Segmentation Mode

reverse_style()

Display the styles of the ranges in the range map in reverse order. For example, the thematic map has three segments, namely item1, item2, and item3. After calling the reverse order display, the style of item3 and item1 will be exchanged, and the display style of item2 remains unchanged.

Returns:self
Return type:ThemeRange
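The effect on the style order is a simple list reversal, sketched here in plain Python (illustration only, not the iobjectspy API):

```python
# Three segment styles: after reverse_style(), item1 and item3 exchange
# styles while the middle item keeps its own.
styles = ['style_of_item1', 'style_of_item2', 'style_of_item3']
reversed_styles = styles[::-1]
print(reversed_styles)
# ['style_of_item3', 'style_of_item2', 'style_of_item1']
```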
set_expression(value)

Set the segment field expression. By comparing the value of the segment field expression of each element with the segment values of each range (determined according to a certain segment mode), the segment an element falls into is determined, and elements falling into different segments are displayed with different styles.

Parameters:value (str) – Specify the segment field expression.
Returns:self
Return type:ThemeRange
set_offset_prj_coordinate_unit(value)

Set whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used. For details, check the set_offset_x() and set_offset_y() interfaces.

Parameters:value (bool) – Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system
Returns:self
Return type:ThemeRange
set_offset_x(value)

Set the horizontal offset.

The unit of the offset is determined by set_offset_prj_coordinate_unit(): True means the geographic coordinate unit is used, otherwise the device unit is used.

Parameters:value (str) – The offset in the horizontal direction.
Returns:self
Return type:ThemeRange
set_offset_y(value)

Set the vertical offset.

The unit of the offset is determined by set_offset_prj_coordinate_unit(): True means the geographic coordinate unit is used, otherwise the device unit is used.

Parameters:value (str) – The offset in the vertical direction.
Returns:self
Return type:ThemeRange
set_precision(value)

Set the rounding precision of the range thematic map.

For example, if the calculated segment value is 13.02145 and the segment accuracy is 0.001, the segment value will be 13.021.

Parameters:value (float) – rounding precision
Returns:self
Return type:ThemeRange
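A plain-Python sketch of rounding a segment value to a given precision, matching the 13.02145 / 0.001 example above (the exact rounding rule used by the library is an assumption here; the helper is hypothetical):

```python
def apply_precision(value, precision):
    # assumed rule: round to the nearest multiple of `precision`
    return round(value / precision) * precision

print(apply_precision(13.02145, 0.001))  # approximately 13.021
```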
class iobjectspy.mapping.MixedTextStyle(default_style=None, styles=None, separator=None, split_indexes=None)

Bases: object

The text compound style class.

This class is mainly used to style the text content of labels in a label thematic map. Through this class, users can display label text in different styles. For example, for the text “Himalayas”, the first three characters can be displayed in red and the last two characters in blue.

Setting different styles for the same text is essentially segmenting the characters of the text, and the characters in the same segment have the same display style. There are two ways to segment characters, one is to segment the text with a separator; the other is to segment the text according to the segment index value.

  • Use separators to segment the text. For example, using “&” as a separator divides the text “5&109” into two parts, “5” and “109”. When displayed, the separator “&” uses the default style, while “5” and “109” each use a style from the style collection.
  • Use segment index values to segment the text. The index value of a character in the text is an integer starting from 0. For example, in the text “珠穆朗玛峰” (Mount Everest), the index value of the first character (“珠”) is 0, the index value of the second character (“穆”) is 1, and so on. When the segment index values are set to 1, 3, 4, 9, the corresponding character segment ranges are (-∞, 1), [1, 3), [3, 4), [4, 9), [9, +∞). The character with index 0 (“珠”) is in the first segment, the characters with indexes 1 and 2 (“穆” and “朗”) are in the second segment, the character with index 3 (“玛”) is in the third segment, the character with index 4 (“峰”) is in the fourth segment, and the remaining segments contain no characters.
Parameters:
  • default_style (TextStyle) – default style
  • styles (list[TextStyle] or tuple[TextStyle]) – A collection of text styles. The styles in the text style collection are used for characters in different segments
  • separator (str) – The separator of the text, the style of the separator adopts the default style, and the separator can only be set to one character
  • split_indexes (list[int] or tuple[int]) – segment index value, segment index value is used to segment characters in the text
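The two segmentation ways can be sketched in plain Python (illustration only; these helpers are not part of the iobjectspy API):

```python
def split_by_separator(text, separator):
    # Separator segmentation: the separator itself keeps the default
    # style; the pieces around it take styles from the style collection.
    return text.split(separator)

def split_by_indexes(text, indexes):
    # Index segmentation: boundaries at the given character indexes,
    # i.e. the ranges (-inf, i1), [i1, i2), ..., [ik, +inf).
    bounds = [0] + [i for i in sorted(indexes) if i < len(text)] + [len(text)]
    return [text[a:b] for a, b in zip(bounds, bounds[1:])]

print(split_by_separator('5&109', '&'))             # ['5', '109']
print(split_by_indexes('珠穆朗玛峰', [1, 3, 4, 9]))  # ['珠', '穆朗', '玛', '峰']
```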
default_style

TextStyle – default style

get_separator()

Get text separator

Returns:The separator of the text.
Return type:str
get_split_indexes()

Return the segment index value, which is used to segment characters in the text

Returns:return the segment index value
Return type:list[int]
is_separator_enabled

bool – Is the text separator valid

set_default_style(value)

Set the default style

Parameters:value (TextStyle) – default style
Returns:self
Return type:MixedTextStyle
set_separator(value)

Set the text separator. The separator style adopts the default style, and the separator can only be set with one character.

The text separator is a symbol that separates the text. For example, using “_” as a separator, it divides the text “5_109” into two parts, “5” and “109”, assuming there are style arrays: style1, style2 and default text style ‘DefaultStyle’. When displayed, “5” is displayed using Style1, the separator “_” uses the default style (DefaultStyle), and the characters “1”, “0”, and “9” use the Style2 style.

Parameters:value (str) – Specify the separator of the text
Returns:self
Return type:MixedTextStyle
set_separator_enabled(value)

Set whether the separator of the text is valid. When the separator is valid, the text is segmented by the separator; when it is invalid, the text is segmented according to the position of the characters in the text. After segmentation, the characters in the same segment have the same display style.

Parameters:value (bool) – Whether the separator of the text is valid
Returns:self
Return type:MixedTextStyle
set_split_indexes(value)

Set the segment index value. The segment index value is used to segment the characters in the text.

The index value of a character in the text is an integer starting from 0. For example, in the text “珠穆朗玛峰” (Mount Everest), the index value of the first character (“珠”) is 0, the index value of the second character (“穆”) is 1, and so on. When the segment index values are set to 1, 3, 4, 9, the corresponding character segment ranges are (-∞, 1), [1, 3), [3, 4), [4, 9), [9, +∞). The character with index 0 (“珠”) is in the first segment, the characters with indexes 1 and 2 (“穆” and “朗”) are in the second segment, the character with index 3 (“玛”) is in the third segment, the character with index 4 (“峰”) is in the fourth segment, and the remaining segments contain no characters.

Parameters:value (list[int] or tuple[int]) – Specify the segment index value
Returns:self
Return type:MixedTextStyle
set_styles(value)

Set the text style collection. The styles in the text style collection are used for characters in different segments.

Parameters:value (list[TextStyle] or tuple[TextStyle]) – text style collection
Returns:self
Return type:MixedTextStyle
styles

list[TextStyle] – Collection of text styles

class iobjectspy.mapping.LabelMatrix(cols, rows)

Bases: object

Matrix label class.

Through this class, complex labels can be made to annotate objects. This class can contain n*n matrix label elements. A matrix label element can be a picture, a symbol, a label thematic map, etc. The currently supported matrix label element types are LabelMatrixImageCell, LabelMatrixSymbolCell and ThemeLabel; passing in any other type will throw an exception. Matrix label elements do not support expressions with special symbols, and do not support labeling along lines.

The following code demonstrates how to use the LabelMatrix class to make complex labels to label objects:

>>> label_matrix = LabelMatrix(2,2)
>>> label_matrix.set(0, 0, LabelMatrixImageCell('path', 5, 5, is_size_fixed=False))
>>> label_matrix.set(1, 0, ThemeLabel().set_label_expression('Country'))
>>> label_matrix.set(0, 1, LabelMatrixSymbolCell('Symbol', GeoStyle.point_style(0, 0, (6,6),'red')))
>>> label_matrix.set(1, 1, ThemeLabel().set_label_expression('Capital'))
>>> theme_label = ThemeLabel()
>>> theme_label.set_matrix_label(label_matrix)
Parameters:
  • cols (int) – number of columns
  • rows (int) – number of rows
cols

int – number of columns

get(col, row)

Return the object at the specified row and column position.

Parameters:
  • col (int) – the specified number of columns
  • row (int) – the specified number of rows
Returns:

the object corresponding to the specified row and column position

Return type:

LabelMatrixImageCell or LabelMatrixSymbolCell or ThemeLabel

rows

int – number of rows

set(col, row, value)

Set the corresponding object at the specified row and column position.

Parameters:
  • col (int) – the specified number of columns
  • row (int) – the specified number of rows
  • value (LabelMatrixImageCell or LabelMatrixSymbolCell or ThemeLabel) – the object to set at the specified row and column position
Returns:

self

Return type:

LabelMatrix

class iobjectspy.mapping.LabelMatrixImageCell(path_field, width=1.0, height=1.0, rotation=0.0, is_size_fixed=False)

Bases: object

The matrix label element class of the image type.

This type of object can be used as a matrix label element in the matrix label object

See LabelMatrix for details.

Parameters:
  • path_field (str) – Record the field name of the image path used by the image type matrix label element.
  • width (float) – the width of the picture, in millimeters
  • height (float) – The height of the picture in millimeters.
  • rotation (float) – The rotation angle of the picture.
  • is_size_fixed (bool) – Is the size of the picture fixed?
height

float – Return the height of the image in millimeters

is_size_fixed

bool – Is the size of the image fixed

path_field

str – Return the field name of the image path used by the image type matrix label element

rotation

float – the rotation angle of the image

set_height(value)

Set the height of the picture in millimeters

Parameters:value (float) – the height of the picture
Returns:self
Return type:LabelMatrixImageCell
set_path_field(value)

Set the field name recording the image path used by the image type matrix label element.

Parameters:value (str) – the field name of the image path
Returns:self
Return type:LabelMatrixImageCell
set_rotation(value)

Set the rotation angle of the picture.

Parameters:value (float) – The rotation angle of the picture.
Returns:self
Return type:LabelMatrixImageCell
set_size_fixed(value)

Set whether the size of the picture is fixed

Parameters:value (bool) – Whether the size of the picture is fixed
Returns:self
Return type:LabelMatrixImageCell
set_width(value)

Set the width of the picture in millimeters

Parameters:value (float) – The width of the picture, in millimeters
Returns:self
Return type:LabelMatrixImageCell
width

float – Return the width of the image in millimeters

class iobjectspy.mapping.LabelMatrixSymbolCell(symbol_id_field, style=None)

Bases: object

The symbol type of the matrix label element class.

This type of object can be used as a matrix label element in the matrix label object.

See LabelMatrix for details.

Parameters:
  • symbol_id_field (str) – Record the field name of the symbol ID used.
  • style (GeoStyle) – the style of the symbol used
set_style(value)

Set the style of symbols used

Parameters:value (GeoStyle) – the style of the symbol used
Returns:self
Return type:LabelMatrixSymbolCell
set_symbol_id_field(value)

Set the field name of the symbol ID used in the record.

Parameters:value (str) – Record the field name of the symbol ID used.
Returns:self
Return type:LabelMatrixSymbolCell
style

GeoStyle – Return the style of the symbol used

symbol_id_field

str – Return the field name of the symbol ID used by the record.

class iobjectspy.mapping.LabelBackShape

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the shape type constants of the label background in the label map.

The label background is a label display style supported by SuperMap iObjects. It uses various shapes with a certain color as the background of each label, which can highlight the label or make the label thematic map more beautiful.

Variables:
DIAMOND = 4
ELLIPSE = 3
MARKER = 6
NONE = 0
RECT = 1
ROUNDRECT = 2
TRIANGLE = 5
class iobjectspy.mapping.AvoidMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This enumeration defines the type constants of the avoidance method of the label text in the label map.

Variables:
EIGHT = 3
FOUR = 2
FREE = 4
TWO = 1
class iobjectspy.mapping.AlongLineCulture

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants used to display the label text along the line.

Variables:
  • AlongLineCulture.ENGLISH – It is displayed in English habit. The direction of the text is always perpendicular to the direction of the line.
  • AlongLineCulture.CHINESE – Displayed according to Chinese habit. When the angle between the line and the horizontal direction is [], the text direction is parallel to the line direction; otherwise it is vertical.
CHINESE = 1
ENGLISH = 0
class iobjectspy.mapping.AlongLineDirection

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the type constants of the label direction along the line. An acute angle between the line and the horizontal direction above 60 degrees indicates the up-and-down direction; below 60 degrees indicates the left-and-right direction.

Variables:
ALONG_LINE_NORMAL = 0
LEFT_BOTTOM_TO_RIGHT_TOP = 3
LEFT_TOP_TO_RIGHT_BOTTOM = 1
RIGHT_BOTTOM_TO_LEFT_TOP = 4
RIGHT_TOP_TO_LEFT_BOTTOM = 2
class iobjectspy.mapping.AlongLineDrawingMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the drawing strategy types for labels along lines.

Starting from the version of SuperMap GIS 8C (2017), the drawing strategy of labels along the line has been adjusted. In order to be compatible with the previous version, a “compatible drawing” option is provided.

In the new drawing strategy, users can choose whether to draw the label as a whole or to split the label into individual characters, according to actual application requirements. In general, a label along a line is drawn split, so that the label follows the trend of the marked line; if whole-word drawing is used, the label is drawn as a whole, which is generally used for labels with backgrounds along the line.

[Figure: Labelchaifen.png – labels along a line drawn split versus as a whole]
Variables:
COMPATIBLE = 0
EACHWORD = 2
WHOLEWORD = 1
class iobjectspy.mapping.OverLengthLabelMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the processing mode of the over-long label in the label map.

A label whose length exceeds the set maximum label length is called an over-length label. The processing mode for over-length labels can be set by the ThemeLabel.set_overlength_label() method. SuperMap component products provide three processing methods to control the display behavior of over-length labels.

Variables:
  • OverLengthLabelMode.NONE – Do not process over-length labels.
  • OverLengthLabelMode.OMIT – Omit the excess part. In this mode, the part of the super long label that exceeds the specified maximum label length (MaxLabelLength) is indicated by an ellipsis.
  • OverLengthLabelMode.NEWLINE – New line display. This mode displays the part of the super long label that exceeds the maximum length of the specified label in a new line, that is, the super long label is displayed in multiple lines.
NEWLINE = 2
NONE = 0
OMIT = 1
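The three modes can be sketched in plain Python (illustration only; the helper and the exact ellipsis formatting are assumptions, not the iobjectspy API):

```python
def apply_overlength_mode(label, max_len, mode):
    if mode == 'NONE' or len(label) <= max_len:
        return label                      # leave the label untouched
    if mode == 'OMIT':                    # excess indicated by an ellipsis
        return label[:max_len] + '...'
    if mode == 'NEWLINE':                 # wrap the excess onto new lines
        return '\n'.join(label[i:i + max_len]
                         for i in range(0, len(label), max_len))
    raise ValueError('unknown mode: %s' % mode)

print(apply_overlength_mode('Mississippi', 4, 'OMIT'))     # Miss...
print(apply_overlength_mode('Mississippi', 4, 'NEWLINE'))  # Miss / issi / ppi
```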
class iobjectspy.mapping.ThemeLabelRangeItem(start, end, caption, style, visible=True, offset_x=0.0, offset_y=0.0)

Bases: object

The sub-item of the range label thematic map.

A range label thematic map segments object labels based on the value of the specified field expression. Object labels in the same segment are displayed in the same style, and labels in different segments are displayed in different styles. Each segment corresponds to one range label thematic map sub-item.

Parameters:
  • start (float) – The starting value of the corresponding segment of the sub-item.
  • end (float) – The end value of the segment corresponding to the sub-item.
  • caption (str) – The name of the sub-item.
  • style (TextStyle) – The text style of the child item.
  • visible (bool) – Whether the sub item of the range label map is visible
  • offset_x (float) – The offset of the label in the child item in the X direction.
  • offset_y (float) – The offset of the label in the child item in the Y direction.
caption

str – The name of the sub-item.

end

float – the end value of the sub-item corresponding to the segment.

offset_x

float – The offset of the label in the child item in the X direction.

offset_y

float – The offset of the label in the child item in the Y direction.

set_caption(value)

Set the name of the child.

Parameters:value (str) – The name of the sub-item.
Returns:self
Return type:ThemeLabelRangeItem
set_end(value)

Set the end value of the corresponding sub-item.

Parameters:value (float) – The end value of the segment corresponding to the sub-item.
Returns:self
Return type:ThemeLabelRangeItem
set_offset_x(value)

Set the offset of the label in the child item in the X direction

Parameters:value (float) – The offset of the label in the child item in the X direction
Returns:self
Return type:ThemeLabelRangeItem
set_offset_y(value)

Set the offset of the label in the child item in the Y direction.

Parameters:value (float) – The offset of the label in the child item in the Y direction.
Returns:self
Return type:ThemeLabelRangeItem
set_start(value)

Set the starting value of the corresponding sub-item.

Parameters:value (float) – The starting value of the corresponding segment of the sub-item.
Returns:self
Return type:ThemeLabelRangeItem
set_style(value)

Set the text style of the child

Parameters:value (TextStyle) – the text style of the child
Returns:self
Return type:ThemeLabelRangeItem
set_visible(value)

Set whether the sub-items of the range label thematic map are visible

Parameters:value (bool) – Whether the sub item of the range label map is visible
Returns:self
Return type:ThemeLabelRangeItem
start

float – the starting value of the sub-item corresponding to the segment.

style

TextStyle – the text style of the child

visible

bool – Whether the sub-items of the segment label thematic map are visible

class iobjectspy.mapping.ThemeLabelRangeItems(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The sub-item collection of the range label map.

The segment label map refers to the segmentation of object labels based on the value of the specified field expression. Object labels in the same section are displayed in the same style, and labels in different sections are displayed in different styles. Among them, a piecewise Corresponds to a segmented label project items.

add(item, is_normalize=True, is_add_to_head=False)

Add sub-item of thematic map of range label

Parameters:
  • item (ThemeLabelRangeItem) – sub-item of thematic map of segment labels
  • is_normalize (bool) – Whether to correct illegal sub-item values. True means illegal values are corrected; False means no correction is performed and an exception is thrown if a sub-item value is illegal.
  • is_add_to_head (bool) – Whether to add to the head of the list. If it is True, it will be added to the head of the list. If it is False, it will be added to the end.
Returns:

self

Return type:

ThemeLabelRangeItems

clear()

Delete the sub-items of the range label map. After executing this method, all the label thematic map items are released and are no longer available.

Returns:self
Return type:ThemeLabelRangeItems
extend(items, is_normalize=True)

Add sub-items of the range label thematic map in batches. By default, they are added to the end of the sub-item list in order.

Parameters:
  • items (list[ThemeLabelRangeItem] or tuple[ThemeLabelRangeItem]) – the sub-item list of the segment label map
  • is_normalize (bool) – Whether to correct illegal sub-item values. True means illegal values are corrected; False means no correction is performed and an exception is thrown if a sub-item value is illegal.
Returns:

self

Return type:

ThemeLabelRangeItems

get_count()

Return the number of sub-items in the sub-item set of the range label map.

Returns:the number of sub-items in the sub-item set of the range label map
Return type:int
get_item(index)

Return the sub-item in the sub-item set of the range label thematic map with the specified serial number.

Parameters:index (int) – The number of the specified sub-item of the range label map.
Returns:The sub-item in the sub-item set of the range label map with the specified serial number.
Return type:ThemeLabelRangeItem
index_of(value)

Return the serial number of the specified range field value in the current range sequence in the label map.

Parameters:value (str) – The value of the given segment field.
Returns:The sequence number of the segment field value in the segment sequence. If the value does not exist, -1 is returned.
Return type:int
reverse_style()

Display the styles of the ranges in the range label map in reverse order.

Returns:self
Return type:ThemeLabelRangeItems
class iobjectspy.mapping.ThemeLabelUniqueItem(unique_value, caption, style, visible=True, offset_x=0.0, offset_y=0.0)

Bases: object

The sub-item of the unique value label map.

A unique value label thematic map classifies object labels based on the value of the specified field expression. Object labels with the same value form one category and are displayed in the same style, while labels of different categories are displayed in different styles. Each unique value corresponds to one unique value label thematic map sub-item.

Parameters:
  • unique_value (str) – single value.
  • caption (str) – The name of the item of the unique value label map.
  • style (TextStyle) – text style corresponding to single value
  • visible (bool) – Whether the items of the unique value label map are visible
  • offset_x (float) – The offset of the label in the child item in the X direction
  • offset_y (float) – The offset of the label in the child item in the Y direction
caption

str – The name of the unique value label thematic map item

offset_x

float – the offset of the label in the child item in the X direction

offset_y

float – the offset of the label in the child item in the Y direction

set_caption(value)

Set the name of the sub-item of the unique value label map

Parameters:value (str) – The name of the sub-item of the unique value label map.
Returns:self
Return type:ThemeLabelUniqueItem
set_offset_x(value)

Set the offset of the label in the child item in the X direction.

Parameters:value (float) – The offset of the label in the child item in the X direction
Returns:self
Return type:ThemeLabelUniqueItem
set_offset_y(value)

Set the offset of the label in the child item in the Y direction.

Parameters:value (float) – The offset of the label in the child item in the Y direction
Returns:self
Return type:ThemeLabelUniqueItem
set_style(value)

Set the text style corresponding to the single value.

Parameters:value (TextStyle) – text style corresponding to the single value
Returns:self
Return type:ThemeLabelUniqueItem
set_unique_value(value)

Set the single value corresponding to the sub item of the unique value label map.

Parameters:value (str) – The single value corresponding to the sub item of the unique value label map.
Returns:self
Return type:ThemeLabelUniqueItem
set_visible(value)

Set whether the child items of the unique value label map are visible. True means visible, False means invisible.

Parameters:value (bool) – Whether the items of the unique value label map are visible
Returns:self
Return type:ThemeLabelUniqueItem
style

TextStyle – text style corresponding to the single value

unique_value

str – Return the single value corresponding to the item of the unique value label map.

visible

bool – Return whether the unique value label thematic map item is visible.

class iobjectspy.mapping.ThemeLabelUniqueItems(java_object)

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

The sub-item collection of the unique value label map.

A unique value label thematic map classifies object labels based on the value of the specified field expression. Object labels with the same value form one category and are displayed in the same style, while labels of different categories are displayed in different styles. Each unique value corresponds to one unique value label thematic map sub-item.

add(item)

Add a sub-item to the sub-item set of the unique value label map.

Parameters:item (ThemeLabelUniqueItem) – the unique value label thematic map item to be added to the collection
Returns:self
Return type:ThemeLabelUniqueItems
clear()

Delete all sub-items in the sub-item set of the unique value label map. After executing this method, all the sub-items of the unique value label map are released and are no longer available.

Returns:self
Return type:ThemeLabelUniqueItems
extend(items)

Batch add sub-items of unique value label thematic map

Parameters:items (list[ThemeLabelUniqueItem] or tuple[ThemeLabelUniqueItem]) – unique value label thematic map sub-item collection
Returns:self
Return type:ThemeLabelUniqueItems
get_count()

Return the number of items in the unique value label map item set.

Returns:the number of items in the item set of the unique value label map
Return type:int
get_default_offset_x()

Return the offset in the X direction of the label in the default sub-item of the unique value label map

Returns:The offset of the label in the X direction in the default sub-item of the unique value label map
Return type:float
get_default_offset_y()

Return the offset of the label in the default sub-item of the unique value label map in the Y direction

Returns:The offset of the label in the default sub-item of the unique value label map in the Y direction
Return type:float
get_default_style()

Return the text style of the default item of the unique value label map

Returns:The text style of the default sub-item of the unique value label map
Return type:GeoStyle
get_item(index)

Return the item in the set of unique value label thematic map items with the specified serial number

Parameters:index (int) – specify the serial number
Returns:unique value label thematic map item
Return type:ThemeLabelUniqueItem
insert(index, item)

Insert a sub-item into the sub-item set of the unique value label map.

Parameters:
  • index (int) – The index position at which to insert the sub-item.
  • item (ThemeLabelUniqueItem) – The specified unique value label thematic map item to be added to the collection.
Returns:

Return True if the insert is successful, otherwise False.

Return type:

bool

remove(index)

Remove the unique value label thematic map item at the specified sequence number in the collection.

Parameters:index (int) – the serial number of the unique value label map item to be removed
Returns:Return True if the removal is successful, otherwise False
Return type:bool
reverse_style()

Display the unique value style in the unique value label map in reverse order.

Returns:self
Return type:ThemeLabelUniqueItems
set_default_offset_x(value)

Set the X-direction offset of the label in the default sub-item of the unique value label map

Parameters:value (float) – The offset of the label in the default sub-item of the unique value label map in the X direction
Returns:self
Return type:ThemeLabelUniqueItems
set_default_offset_y(value)

Set the Y-direction offset of the label in the default sub-item of the unique value label map

Parameters:value (float) – The offset of the label in the default sub-item of the unique value label map in the Y direction
Returns:self
Return type:ThemeLabelUniqueItems
set_default_style(style)

Set the text style of the default sub-item of the unique value label map. The default style is used for objects whose value does not correspond to any sub-item.

Parameters:style (GeoStyle) – The text style of the default sub-item of the unique value label map
Returns:self
Return type:ThemeLabelUniqueItems
class iobjectspy.mapping.ThemeLabel

Bases: iobjectspy._jsuperpy.mapping.Theme

Label thematic map class.

The label of the label thematic map can be numbers, letters, and text, such as geographic names of rivers, lakes, oceans, mountains, towns, villages, etc., elevation, contour values, river flow velocity, highway section mileage, navigation line mileage, etc.

In the label thematic map, you can set or control the display style and position of the labels. You can set a unified display style and position for all labels; you can also use the unique value label thematic map, which classifies object labels based on the value of a specified field expression, displaying labels with the same value in one style and labels in different categories in different styles; or you can use the range label thematic map, which segments labels based on the value of a specified field expression, displaying labels in the same range in one style and labels in different ranges in different styles.

There are several types of label thematic maps: unified label thematic maps, unique value label thematic maps, composite style label thematic maps, range label thematic maps, and custom label thematic maps. The ThemeLabel class can be used to configure all of the above styles, but it is recommended not to set two or more styles at the same time. If multiple styles are set, the label thematic map is displayed according to the priority shown below:

(Image: style priority table for label thematic maps)

Note: Legends, titles, scales, etc. usually appear on a map. These are all cartographic elements and do not belong to label thematic map annotations.

Note: If you have established a connection with an external table by means of Join or Link, and the thematic variables of the thematic map use fields of the external table, you need to call the Layer.set_display_filter() method when displaying the thematic map; otherwise the thematic map will fail to display.

Build a unified style label thematic map:

>>> text_style = TextStyle().set_fore_color(Color.rosybrown()).set_font_name('Microsoft Yahei')
>>> theme = ThemeLabel().set_label_expression('zone').set_uniform_style(text_style)

Build the default unique value label thematic map:

>>> theme = ThemeLabel.make_default_unique(dataset, 'zone', 'color_field', ColorGradientType.CYANGREEN)

Build the default segment label thematic map:

>>> theme = ThemeLabel.make_default_range(dataset, 'zone', 'color_field', RangeMode.EQUALINTERVAL, 6)

Build a composite style label thematic map:

>>> mixed_style = MixedTextStyle()
>>> mixed_style.set_default_style(TextStyle().set_fore_color('rosybrown'))
>>> mixed_style.set_separator_enabled(True).set_separator("_")
>>> theme = ThemeLabel().set_label_expression('zone').set_uniform_mixed_style(mixed_style)
get_along_line_culture()

Return the language and cultural habits of the labels along the line. The default value is related to the non-Unicode language of the current system. If it is a Chinese environment, it is CHINESE, otherwise it is ENGLISH.

Returns:the language and cultural habits used in the label along the line
Return type:AlongLineCulture
get_along_line_direction()

Return the labeling direction along the line. The default value is AlongLineDirection.ALONG_LINE_NORMAL.

Returns:Label the direction along the line.
Return type:AlongLineDirection
get_along_line_drawing_mode()

Return the strategy used for label drawing in along-line labeling. The default is AlongLineDrawingMode.COMPATIBLE.

Returns:The strategy used for label drawing
Return type:AlongLineDrawingMode
get_along_line_space_ratio()

Return the ratio of text spacing along the line. The value is a multiple of the word height.

Note

  • If the value is greater than 1, labels are placed on both sides of the line center at the specified interval;
  • If the value is between 0 and 1 (including 1), a single text is placed at the line center, angled along the line;
  • If the value is less than or equal to 0, the default along-line labeling mode is used.
Returns:the ratio of text spacing along the line
Return type:float
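
The three cases above can be summarized as a small helper; this is an illustrative sketch of the documented rules, not part of the iobjectspy API:

```python
def along_line_space_mode(space_ratio):
    """Classify the documented behavior of the along-line text space ratio."""
    if space_ratio > 1:
        return 'label both sides from the line center at the given interval'
    if 0 < space_ratio <= 1:
        return 'single text at the line center, angled along the line'
    return 'default along-line labeling'
```

For example, a ratio of 2.0 labels both sides of the line center, while 0.5 places a single text at the center.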
get_along_line_word_angle_range()

Return the tolerance value of the relative angle between adjacent words or letters in labels along the line, in degrees. In along-line labeling, to follow the trend of a curved line, the text of Chinese and English labels is rotated, but each word or letter remains perpendicular to the tangent direction at its labeling point. As a result, adjacent words or letters form an included angle, as shown in the figures below. The sharper the curve, the larger the angle, which makes the label unattractive as a whole. This interface therefore limits the maximum angle between adjacent words or letters with a given tolerance, so as to keep along-line labels readable.

The smaller the included-angle tolerance, the more compact the label, but places with large curvature may fail to be labeled; the larger the tolerance, the more readily labels are displayed on sharp curves, but the aesthetics of along-line labels are reduced.

(Images: along-line labels with word angle tolerances of 10, 20, and 40 degrees)

The relative angle between adjacent words or letters in an along-line label is illustrated below:

(Image: relative angle between adjacent words or letters in an along-line label)
Returns:the tolerance value of the relative angle between adjacent words or letters in along-line labels, in degrees
Return type:int
get_back_shape()

Return the shape type of the label background in the label map

Returns:The shape type of the label background in the label map
Return type:LabelBackShape
get_back_style()

Return the label background style in the label map.

Returns:label background style
Return type:GeoStyle
get_label_angle_expression()

Return the name of a numeric field whose value controls the rotation angle of the label text

Returns:the name of a numeric field whose value controls the rotation angle of the label text
Return type:str
get_label_color_expression()

Return the name of a numeric field whose value controls the text color

Returns:the name of a numeric field whose value controls the text color
Return type:str
get_label_font_type_expression()

Return a field name, the field value is the font name, such as: Microsoft Yahei, Times New Roman, controls the font style of the label text in the label map.

Returns:a field name that controls the font style of the label text in the label map.
Return type:str
get_label_repeat_interval()

Return the interval of repeated labeling along the line. The interval represents the paper distance between adjacent labels after printing, in units of 0.1 mm. For example, if the repeat interval is set to 500, the distance between adjacent labels on the printed map is 5 cm.

Returns:the interval of circular labeling when labeling along the line
Return type:float
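
Since the interval is expressed in units of 0.1 mm, converting it to a paper distance is simple arithmetic (illustrative helper, not part of the API):

```python
def repeat_interval_to_cm(interval):
    """Convert a label repeat interval (in 0.1 mm units) to centimeters on paper.

    One unit is 0.1 mm, so 100 units make 1 cm.
    """
    return interval / 100.0

# An interval of 500 corresponds to 5 cm between adjacent labels on paper.
```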
get_label_size_expression()

Return the name of a numeric field whose value controls the height of the text, in millimeters.

Returns:A field that controls the height of the text.
Return type:str
get_leader_line_style()

Return the style of the leader line between the label and the label object.

Returns:The style of the leader line between the label and its label.
Return type:GeoStyle
get_matrix_label()

Return the matrix label in the label map. In the matrix label, the labels are arranged together in a matrix.

Returns:matrix label in label map
Return type:LabelMatrix
get_max_label_length()

Return the maximum length of the label displayed in each line. The default value is 256

If the text exceeds the set maximum length, it can be handled in two ways. One is line-break mode: the word spacing is adjusted automatically so that each line holds roughly the same number of characters, and no line exceeds the maximum length. The other is ellipsis mode: characters beyond the maximum length are replaced with an ellipsis.

Returns:The maximum length of each line displayed.
Return type:int
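
The two ways of handling overlength text can be mimicked with standard Python; this is an illustrative sketch of the documented behavior, not the engine's actual rendering logic:

```python
import textwrap

def handle_overlength(label, max_len, mode):
    """Illustrate the two documented overlength strategies: line break and ellipsis."""
    if len(label) <= max_len:
        return [label]
    if mode == 'NEWLINE':
        # Line-break mode: split so that no line exceeds the maximum length.
        return textwrap.wrap(label, width=max_len)
    # Ellipsis mode: characters beyond the maximum length are elided.
    return [label[:max_len] + '...']
```

For a label of 10 characters with a maximum line length of 4, NEWLINE mode yields three lines, while ellipsis mode keeps the first 4 characters.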
get_max_text_height()

Return the maximum height of the text in the label. This method is effective when the size of the label is not fixed. When the height of the enlarged text exceeds the maximum height, it will not be enlarged. The unit of height is 0.1 mm.

Returns:The maximum height of the text in the label.
Return type:int
get_min_text_height()

Return the minimum height of the text in the label.

Returns:The minimum height of the text in the label.
Return type:int
get_numeric_precision()

Return the precision of the numbers in the label. For example, if the number corresponding to the label is 8071.64529347: with precision 0 it displays 8071; with 1, 8071.6; with 3, 8071.645

Returns:The precision of the number in the label.
Return type:int
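
Note that, per the example above, the value is truncated rather than rounded (precision 0 yields 8071, not 8072). A sketch of this behavior using the standard decimal module (illustrative, not the actual implementation):

```python
from decimal import Decimal, ROUND_DOWN

def format_label_number(value, precision):
    """Truncate a label number to the given number of decimal places (no rounding)."""
    quantum = Decimal(1).scaleb(-precision)  # e.g. precision 3 -> Decimal('0.001')
    return str(Decimal(value).quantize(quantum, rounding=ROUND_DOWN))
```

For example, format_label_number('8071.64529347', 1) gives '8071.6'.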
get_offset_x()

Return the horizontal offset of the label text in the label map relative to the point in the element

Returns:The horizontal offset of the label text in the label map relative to the point in the element.
Return type:str
get_offset_y()

Return the vertical offset of the label text in the label map relative to the point within the element

Returns:The vertical offset of the label text in the label map relative to the point within the element
Return type:str
get_overlap_avoided_mode()

Get the automatic text avoidance method

Returns:automatic text avoidance method
Return type:AvoidMode
get_overlength_mode()

Return the processing method for overlength labels. An overlength label may be left unprocessed, have its excess part omitted, or be displayed on a new line.

Returns:How overlength labels are handled
Return type:OverLengthLabelMode
get_range_expression()

Return the range field expression. The value of the expression must be numeric. Based on the return value, each range is compared in turn from beginning to end to determine which style is used to display the label text given by the label field expression.

Returns:segment field expression
Return type:str
get_range_items()

Return the sub-item collection of the range label map. Based on the range result of the field expression value, a range corresponds to a sub-item of the range label map. Add the sub-items of the range label thematic map through this object.

Returns:sub-item collection of the range label thematic map
Return type:ThemeLabelRangeItems
get_range_mode()

Return the current segmentation mode.

Returns:segmented mode
Return type:RangeMode
get_split_separator()

Get the line break character used to wrap the label text, which can be "/", ";", a space, etc.

If overlength_mode is set to OverLengthLabelMode.NEWLINE through the set_overlength_label() interface, labels wrap to new lines; if a line break character is also set through split_separator, the label text wraps at each occurrence of that character.

When the label thematic map wraps ultra-long text, you can control where the text wraps by specifying a special character. This requires preparing the data in advance: in the label field, insert the chosen character (such as "/", ";", or a space) wherever the value should wrap. The text then wraps at each occurrence of the character, and the character itself is not displayed.

Returns:newline character used to wrap label text
Return type:str
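
Breaking a label at the specified character, with the character itself not displayed, amounts to a simple split (illustrative sketch only, not the engine's rendering code):

```python
def wrap_at_separator(label, separator):
    """Break a label into display lines at each separator; the separator is removed."""
    return label.split(separator)
```

For example, a label field value of 'Beijing/Metro/Line 4' with separator '/' is displayed as three lines, and the '/' characters do not appear.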
get_text_extent_inflation()

Return the buffer range of the text in the label in the positive X and Y directions. The size of the space occupied by the text in the map can be modified by setting this value, and it must be non-negative.

Returns:The buffer range of the text in the label in the positive X and Y directions.
Return type:tuple[int,int]
get_uniform_mixed_style()

Return the unified text compound style of the label map

Returns:unified text composite style of label map
Return type:MixedTextStyle
get_uniform_style()

Return the unified text style

Returns:unified text style
Return type:TextStyle
get_unique_expression()

Return the unique value field expression. The expression can be a single field or an expression composed of multiple fields. The value of the expression controls the style of object labels: labels with the same expression value are displayed in the same style.

Returns:single value field expression
Return type:str
get_unique_items()

Return the sub-item collection of the unique value label map. Object labels with the same unique-value field expression value are classified into one category, and each single value corresponds to one sub-item of the unique value label map.

Returns:The sub-item collection of the unique value label map.
Return type:ThemeLabelUniqueItems
is_along_line()

Return whether to display text along the line. True means that the text is displayed along the line, and False means that the text is displayed normally. The label attributes along the line are only applicable to the thematic map of the line dataset. The default value is True

Returns:Whether to display text along the line.
Return type:bool
is_angle_fixed()

Whether to fix the text angle when displaying text along the line. True means to display the text at a fixed angle of the text, False means to display the text at an angle along the line. The default value is False.

Returns:When displaying text along the line, whether to fix the angle of the text.
Return type:bool
is_flow_enabled()

Return whether to display labels in a fluid manner. The default is True.

Returns:Whether to display the label in the flow
Return type:bool
is_leader_line_displayed()

Return whether to display the leader line between the label and the object it labels. The default value is False, which means no display.

Returns:Whether to display the leader line between the label and the object it marked.
Return type:bool
is_offset_prj_coordinate_unit()

Get whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system

Returns:Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used.
Return type:bool
is_on_top()

Return whether the label map layer is displayed on the top layer. The top layer here refers to the upper layer of all non-label thematic map layers.

Returns:Whether the label map layer is displayed on the top layer
Return type:bool
is_overlap_avoided()

Return whether to allow text to be displayed in a text avoidance mode. Only for the text data in the label thematic layer.

Returns:Whether to automatically avoid text overlapping.
Return type:bool
is_repeat_interval_fixed()

Return whether the cycle label interval is fixed. True means a fixed cycle labeling interval, which does not change with the zoom of the map; False means a cycle labeling interval changes with the zoom of the map.

Returns:return True if the cycle label interval is fixed; otherwise, return False
Return type:bool
is_repeated_label_avoided()

Return whether to avoid repeated labeling on the map.

For example, line data representing Beijing Metro Line 4 may consist of 4 sub-line segments. When the name field (value: "Metro Line 4") is used as the thematic variable for a label thematic map with along-line labels, each sub-segment is labeled separately unless repeated labeling is avoided. If repeated labeling on the map is avoided, the system treats the four sub-lines of the polyline as one line and labels it once, as shown in the figure below.

(Image: along-line labeling with and without repeated-label avoidance)
Returns:whether to avoid repeated labeling on the map
Return type:bool
is_small_geometry_labeled()

When the length of the label is greater than the length of the labeled object itself, return whether to display the label.

When the length of the label is greater than the length of the line or area object itself, if you choose to continue labeling, the label text will be displayed superimposed. In order to display the label clearly and completely, you can use the line break mode to display the label. But you must ensure that the length of each line is less than the length of the object itself.

Returns:Whether to display labels whose length is greater than the length of the labeled object itself
Return type:bool
is_support_text_expression()

Return whether text expressions are supported, that is, the superscript and subscript function. The default value is False: text expressions are not supported

Returns:Whether text expressions, namely superscripts and subscripts, are supported
Return type:bool
is_vertical()

Whether to use vertical labels

Returns:whether to use vertical label
Return type:bool
label_expression

str – label field expression

static make_default_range(dataset, label_expression, range_expression, range_mode, range_parameter, color_gradient_type=None, join_items=None)

Generate the default range label thematic map.

Parameters:
  • dataset (DatasetVector or str) – The vector dataset used to make the range label thematic map.
  • label_expression (str) – label field expression
  • range_expression (str) – Range field expression.
  • range_mode (RangeMode or str) – segment mode
  • range_parameter (float) – range parameter. When the range mode is the equal-interval or square-root method, this parameter is the segment value; when the mode is the standard-deviation method, this parameter has no effect; when the mode is custom interval, this parameter is the custom interval.
  • color_gradient_type (ColorGradientType or str) – Color gradient mode.
  • join_items (list[JoinItem] or tuple[JoinItem]) – external table join items
Returns:

Segment label thematic map object

Return type:

ThemeLabel

static make_default_unique(dataset, label_expression, unique_expression, color_gradient_type=None, join_items=None)

Generate the default unique value label thematic map.

Parameters:
  • dataset (DatasetVector or str) – A vector dataset used to make unique value label thematic maps.
  • label_expression (str) – label field expression
  • unique_expression (str) – Specify a field or an expression composed of multiple fields. The value of the expression is used to classify the object labels. Object labels with the same value are displayed in the same style for one category, and labels of different categories are displayed in different styles.
  • color_gradient_type (ColorGradientType or str) – color gradient mode
  • join_items (list[JoinItem] or tuple[JoinItem]) – external table join items
Returns:

unique value label map object

Return type:

ThemeLabel

set_along_line(is_along_line=True, is_angle_fixed=False, culture=None, direction='ALONG_LINE_NORMAL', drawing_mode='COMPATIBLE', space_ratio=None, word_angle_range=None, repeat_interval=0, is_repeat_interval_fixed=False, is_repeated_label_avoided=False)

Set text display along the line; only applicable to label thematic maps of line data.

Parameters:
  • is_along_line (bool) – Whether to display text along the line
  • is_angle_fixed (bool) – Whether to fix the angle of the text
  • culture (AlongLineCulture or str) – The language and cultural habits used along the line. The default is related to the non-Unicode language of the current system. If it is a Chinese environment, it is CHINESE, otherwise it is ENGLISH.
  • direction (AlongLineDirection or str) – the direction of the label along the line
  • drawing_mode (AlongLineDrawingMode or str) – The strategy used for label drawing
  • space_ratio (float) –

    The space ratio of the text along the line, which is a multiple of the character height. note:

    • If the value is greater than 1, labels are placed on both sides of the line center at the specified interval
    • If the value is between 0 and 1 (including 1), a single text is placed at the line center, angled along the line
    • If the value is less than or equal to 0, the default along-line labeling mode is used
  • word_angle_range (int) – the tolerance value of the relative angle between word and word or letter and letter, unit: degree
  • repeat_interval (float) – The interval of repeated labeling when labeling along the line. The set interval size represents the paper distance of the adjacent mark interval after printing, the unit is 0.1 mm
  • is_repeat_interval_fixed (bool) – Whether the cycle label interval is fixed
  • is_repeated_label_avoided (bool) – Whether to avoid repeated labeling on the map
Returns:

self

Return type:

ThemeLabel

set_back_shape(value)

Set the shape type of the label background in the label map, no background is displayed by default

Parameters:value (LabelBackShape or str) – The shape type of the label background in the label map
Returns:self
Return type:ThemeLabel
set_back_style(value)

Set the label background style in the label map.

Parameters:value (GeoStyle) – The label background style in the label map.
Returns:self
Return type:ThemeLabel
set_flow_enabled(value)

Set whether to display labels in a fluid manner. Flow display is only suitable for labeling of line and area features

Parameters:value (bool) – Whether to display the label in a fluid manner.
Returns:self
Return type:ThemeLabel
set_label_angle_expression(value)

Set a field, the field is a numeric field, the field value controls the rotation angle of the text. After specifying the field, the text rotation angle of the label will be read from the field value in the corresponding record.

Parameters:value (str) – A field, the field is a numeric field, the field value controls the rotation angle of the text. The value unit is degrees. The angle rotation takes the counterclockwise direction as the positive direction, and the corresponding value is the positive value; the angle value supports the negative value, which means rotating in the clockwise direction. Regarding the label rotation angle and offset, if both are set at the same time, the label is rotated first, and then offset.
Returns:self
Return type:ThemeLabel
set_label_color_expression(value)

Set a field, which is a numeric field, and control the text color. After specifying the field, the text color of the label will be read from the field value in the corresponding record.

Parameters:value (str) – A field, which is a numeric field, which controls the text color. The color value supports hexadecimal expression as 0xRRGGBB, which is arranged according to RGB.
Returns:self
Return type:ThemeLabel
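
The 0xRRGGBB layout places red in the high byte, green in the middle, and blue in the low byte; a quick decode (illustrative helper, not part of the API):

```python
def rgb_components(color_value):
    """Split a 0xRRGGBB integer into its (red, green, blue) components."""
    return ((color_value >> 16) & 0xFF, (color_value >> 8) & 0xFF, color_value & 0xFF)
```

For example, rosy brown (0xBC8F8F) decodes to red 0xBC, green 0x8F, blue 0x8F.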
set_label_expression(value)

Set label field expression

Parameters:value (str) – label field expression
Returns:self
Return type:ThemeLabel
set_label_font_type_expression(value)

Set a field, the field value is the font name, such as: Microsoft Yahei, Song Ti, to control the font style of the label text in the label map. After specifying the field, the font style of the label will be read from the field value in the corresponding record.

Parameters:value (str) – A field that controls the font style of the label text in the label map. If the font specified by the field value does not exist in the current system, or the field value is empty, the label is displayed with the font set for the current label thematic map, such as the font in the text style set by the set_uniform_style() method.
Returns:self
Return type:ThemeLabel
set_label_size_expression(value)

Set a field, the field is a numeric field, the field value controls the height of the text, and the numeric unit is millimeters. After specifying the field, the text size of the label will be read from the field value in the corresponding record.

Parameters:value (str) – A field that controls the height of the text. If the field value is empty, the label is displayed with the font size set in the current label map
Returns:self
Return type:ThemeLabel
set_leader_line(is_displayed=False, leader_line_style=None)

Set whether to display the leader line between the label and the label object, and the style of the leader line, etc.

Parameters:
  • is_displayed (bool) – Whether to display the leader line between the label and the object it marks.
  • leader_line_style (GeoStyle) – The style of the leader line between the label and its label.
Returns:

self

Return type:

ThemeLabel

set_matrix_label(value)

Set the matrix label in the label map. In the matrix label, the labels are arranged together in a matrix.

Parameters:value (LabelMatrix) – the matrix label in the label map
Returns:self
Return type:ThemeLabel
set_max_text_height(value)

Set the maximum height of the text in the label. This method is effective when the size of the label is not fixed. When the height of the enlarged text exceeds the maximum height, it will not be enlarged. The unit of height is 0.1 mm.

Parameters:value (int) – The maximum height of the text in the label.
Returns:self
Return type:ThemeLabel
set_min_text_height(value)

Set the minimum height of the text in the label.

Parameters:value (int) – The minimum height of the text in the label.
Returns:self
Return type:ThemeLabel
set_numeric_precision(value)

Set the precision of the numbers in the label. For example, if the number corresponding to the label is 8071.64529347: with precision 0 it displays 8071; with 1, 8071.6; with 3, 8071.645.

Parameters:value (int) – The precision of the number in the label.
Returns:self
Return type:ThemeLabel
set_offset_prj_coordinate_unit(value)

Set whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used. For details, check the set_offset_x() and set_offset_y() interfaces.

Parameters:value (bool) – Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system
Returns:self
Return type:ThemeLabel
set_offset_x(value)

Set the horizontal offset of the label text in the label map relative to the point in the feature. The value of the offset is a constant value or the value represented by the field expression, that is, if the field expression is SmID, where SmID=2, then the value of the offset is 2.

The unit of the offset is determined by the set_offset_prj_coordinate_unit() method: True means the geographic coordinate unit is used, otherwise the device unit is used

Parameters:value (str) – The horizontal offset of the label text in the label map relative to the point in the feature
Returns:self
Return type:ThemeLabel
set_offset_y(value)

Set the vertical offset of the label text in the label map relative to the point in the element.

The value of the offset is a constant value or the value represented by the field expression, that is, if the field expression is SmID, where SmID=2, then the value of the offset is 2.

Parameters:value (str) – The vertical offset of the label text in the label map relative to the point in the element.
Returns:self
Return type:ThemeLabel
set_on_top(value)

Set whether the label map layer is displayed on the top layer. The top layer here refers to the upper layer of all non-label thematic map layers.

In general mapping, the label thematic map layers in a map are placed in front of all non-label thematic map layers. However, when layer grouping is used, a label thematic layer in a group may be covered by ordinary layers in a group above it. To keep the layer grouping while ensuring that labels are not covered, call the set_on_top method with True: the label thematic map is then displayed on top regardless of its current position in the map. If multiple label thematic map layers are set to True through set_on_top, their display order is determined by the layer order of the map they belong to.

Parameters:value (bool) – Whether the label map layer is displayed on the top layer
Returns:self
Return type:ThemeLabel
set_overlap_avoided(value)

Set whether labels are displayed with automatic text avoidance. This only applies to text in the label thematic layer.

Note: when labels overlap heavily, automatic avoidance may still not eliminate all overlap. When two overlapping labels both enable text avoidance, the label whose ID comes first takes drawing precedence.

Parameters:value (bool) – whether to automatically avoid text overlap
Returns:self
Return type:ThemeLabel
set_overlap_avoided_mode(value)

Set automatic text avoidance method

Parameters:value (AvoidMode or str) – text automatic avoidance method
Returns:self
Return type:ThemeLabel
set_overlength_label(overlength_mode=None, max_label_length=256, split_separator=None)

Set to handle very long text.

Parameters:
  • overlength_mode (OverLengthLabelMode or str) – The handling mode for over-long labels: leave the label unprocessed, omit the excess part, or display it on a new line.
  • max_label_length (int) – The maximum label length displayed per line. Text beyond this length is either wrapped onto a new line or truncated with an ellipsis, depending on the mode.
  • split_separator (str) – The separator used to wrap the label text, such as "/", ";", or a space. When the over-long handling mode is NEWLINE, the label wraps at the specified character.
Returns:

self

Return type:

ThemeLabel
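A minimal sketch of the two over-length behaviours described above (not iobjectspy internals; the mode names OMIT and NEWLINE here are illustrative assumptions based on the description):

```python
def handle_overlength(label, mode, max_len=256, separator=None):
    """Return the lines to draw for one label.

    OMIT truncates with an ellipsis; NEWLINE wraps at the separator if one
    is given, otherwise every max_len characters.
    """
    if len(label) <= max_len:
        return [label]
    if mode == "OMIT":
        return [label[:max_len] + "..."]
    if mode == "NEWLINE":
        if separator:
            return label.split(separator)
        return [label[i:i + max_len] for i in range(0, len(label), max_len)]
    raise ValueError("unknown over-length mode: %s" % mode)
```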

set_range_label(range_expression, range_mode)

Set the range label thematic map.

Parameters:
  • range_expression (str) – Range field expression. The value in the segment expression must be numeric.
  • range_mode (RangeMode or str) – segment mode
Returns:

self

Return type:

ThemeLabel

set_small_geometry_labeled(value)

When the length of the label is greater than the length of the labeled object itself, set whether to display the label.

When the length of the label is greater than the length of the line or area object itself, if you choose to continue labeling, the label text will be displayed superimposed. In order to display the label clearly and completely, you can use the line feed mode to display the label. But you must ensure that the length of each line is less than the length of the object itself.

Parameters:value (bool) – Whether to display labels whose length is greater than the length of the labeled object itself
Returns:self
Return type:ThemeLabel
set_support_text_expression(value)

Set whether to support text expressions, i.e. superscripts and subscripts. When the selected field is of text type and the text contains superscripts or subscripts written according to the rules described below, this property must be set for the text to display correctly.

Note

  • When this property is set to True, labels containing superscripts or subscripts can only be aligned to the top-left corner; labels without superscripts or subscripts use the alignment set in the text style.
  • Text labels with a rotation angle are not supported, that is, when the rotation angle of the text label is not 0, the setting of this attribute is invalid.
  • Text labels displayed in vertical and new lines are not supported.
  • Text tags containing strikethrough, underline, and separator are not supported.
  • Text labels marked along the lines of the label thematic map of the line dataset are not supported.
  • When the map has a rotation angle, the text label set to support the text expression will not rotate with the rotation of the map.
  • The text label of the label thematic map with special symbols is not supported.
  • In a text expression containing superscripts and subscripts, #+ means superscript; #- means subscript; #= splits a string into a superscript part and a subscript part.
  • If the text label that supports text expressions starts with “#+”, “#-“, “#=”, the entire string is output as it is.

  • When #+ or #- is encountered, the string immediately following it is treated as superscript or subscript content; when #+ or #- is encountered for the third time, a new string run begins.
  • In a text expression containing superscripts and subscripts, two consecutive "#+" have the same effect as "#-", and two consecutive "#-" have the same effect as "#+".
  • Currently the types of label thematic maps that support this function are unified style label thematic maps, segment style label thematic maps and label matrix thematic maps.

Parameters:value (bool) – Whether to support text expressions
Returns:self
Return type:ThemeLabel
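The "#+"/"#-" marker rules can be illustrated with a simplified tokenizer. This is not part of iobjectspy: it handles only basic marker runs and ignores the "#=", consecutive-marker, and leading-marker rules described above:

```python
def parse_text_expression(text):
    """Split a label into (fragment, mode) pairs, mode in (None, 'sup', 'sub').

    "#+" starts a superscript run, "#-" a subscript run; each marker also
    ends the previous run.
    """
    parts, buf, mode, i = [], "", None, 0
    while i < len(text):
        if text.startswith("#+", i) or text.startswith("#-", i):
            if buf:
                parts.append((buf, mode))
            buf = ""
            mode = "sup" if text[i + 1] == "+" else "sub"
            i += 2
        else:
            buf += text[i]
            i += 1
    if buf:
        parts.append((buf, mode))
    return parts
```

For example, the label text "m#+2" splits into a plain fragment "m" followed by a superscript fragment "2".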
set_text_extent_inflation(width, height)

Set the buffer range of the text in the label in the positive X and Y directions. The size of the space occupied by the text in the map can be modified by setting this value, and it must be non-negative.

Parameters:
  • width (int) – size in X direction
  • height (int) – Y direction size
Returns:

self

Return type:

ThemeLabel

set_uniform_mixed_style(value)

Set the unified compound text style of the label map. When both the compound text style (get_uniform_mixed_style()) and the text style (get_uniform_style()) are set, the compound text style takes drawing priority over the text style.

Parameters:value (MixedTextStyle) – The text compound style of the label map
Returns:self
Return type:ThemeLabel
set_uniform_style(value)

Set uniform text style

Parameters:value (TextStyle) – unified text style
Returns:self
Return type:ThemeLabel
set_unique_label(unique_expression)

Set the unique-value label thematic map.

Parameters:unique_expression (str) – single value field expression
Returns:self
Return type:ThemeLabel
set_vertical(value)

Set whether to use vertical labels.

  • Matrix label and label along the line are not supported.
  • Text labels with a rotation angle are not supported, that is, when the rotation angle of the text label is greater than 0, the setting of this attribute is invalid.
Parameters:value (bool) – whether to use vertical label
Returns:self
Return type:ThemeLabel
class iobjectspy.mapping.ThemeGraphItem(expression, caption, style=None, range_setting=None)

Bases: object

The sub-item class of statistics thematic map.

The statistical thematic map reflects the size of the corresponding thematic values by drawing a statistical graph for each feature or record. Statistical thematic maps can be based on multiple variables and reflect multiple attributes, that is, the values of several thematic variables can be plotted on one statistical graph. The statistical graph corresponding to each thematic variable is one thematic map sub-item. This class sets the name, thematic variable, display style and segment style of the sub-items of the statistical map.

Parameters:
  • expression (str) – Thematic variables of the statistical map. Thematic variable can be a field or field expression
  • caption (str) – the name of the thematic map item
  • style (GeoStyle) – The display style of the sub-items of the statistics map
  • range_setting (ThemeRange) – the segmentation style of the sub-items of the statistical thematic map
caption

str – the name of the thematic map item

expression

str – Thematic variable of the statistical map

range_setting

ThemeRange – Return the segmentation style of the sub-item of the statistical thematic map

set_caption(value)

Set the name of the thematic map item.

Parameters:value (str) – the name of the thematic map item
Returns:self
Return type:ThemeGraphItem
set_expression(value)

Set the thematic variables of the statistical map. Thematic variable can be a field or field expression.

Parameters:value (str) – Thematic variable of the statistical map
Returns:self
Return type:ThemeGraphItem
set_range_setting(value)

Set the sub-item style of the statistics map

Parameters:value (ThemeRange) – the sub-item style of the statistics map
Returns:self
Return type:ThemeGraphItem
set_style(value)

Set the display style of the sub-items of the statistics map

Parameters:value (GeoStyle) –
Returns:self
Return type:ThemeGraphItem
style

GeoStyle – Return the sub-item style of the statistic map.

class iobjectspy.mapping.ThemeGraphType

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the statistical graph types of statistical thematic maps.

Variables:
AREA = 0
BAR = 4
BAR3D = 5
LINE = 2
PIE = 6
PIE3D = 7
POINT = 3
RING = 14
ROSE = 8
ROSE3D = 9
STACK_BAR = 12
STACK_BAR3D = 13
STEP = 1
class iobjectspy.mapping.ThemeGraphTextFormat

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of the text display format of the statistical map.

Variables:
CAPTION = 3
CAPTION_PERCENT = 4
CAPTION_VALUE = 5
PERCENT = 1
VALUE = 2
class iobjectspy.mapping.GraphAxesTextDisplayMode

Bases: iobjectspy._jsuperpy.enums.JEnum

The text display mode of the axis of the statistics map.

Variables:
ALL = 3
NONE = 0
YAXES = 2
class iobjectspy.mapping.ThemeGraph(graph_type='PIE3D', graduated_mode='CONSTANT', items=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

Statistics thematic map class. The statistical thematic map reflects the size of the corresponding thematic value by drawing a statistical map for each element or record. Statistical thematic maps can be based on multiple variables and reflect multiple attributes, that is, the values of multiple thematic variables can be plotted on a statistical map.

Parameters:
  • graph_type (ThemeGraphType or str) – The statistical graph type of the statistical map. According to the actual data and usage, different types of statistical graphs can be selected.
  • graduated_mode (GraduatedMode or str) – thematic map classification mode
  • items (list[ThemeGraphItem] or tuple[ThemeGraphItem]) – list of subitems of thematic map
add(item)

Add sub item of thematic map

Parameters:item (ThemeGraphItem) – Statistical thematic map item
Returns:self
Return type:ThemeGraph
clear()

Delete all sub-items in the statistics map.

Returns:self
Return type:ThemeGraph
exchange_item(index1, index2)

Exchange the two sub-items at the specified indexes.

Parameters:
  • index1 (int) – The sequence number of the first subitem of the specified exchange.
  • index2 (int) – The number of the second subitem of the specified exchange.
Returns:

Return True if the exchange succeeds, otherwise False

Return type:

bool

extend(items)

Add sub-items of statistical thematic map in batch

Parameters:items (list[ThemeGraphItem] or tuple[ThemeGraphItem]) – list of subitems of thematic map
Returns:self
Return type:ThemeGraph
get_axes_color()

Return the axis color.

Returns:The color of the axis.
Return type:Color
get_axes_text_display_mode()

Return the text display mode used when axis text is displayed.

Returns:The text mode displayed when the axis text is displayed
Return type:GraphAxesTextDisplayMode
get_axes_text_style()

Return the style of the axis text of the chart

Returns:the style of the axis text of the chart
Return type:TextStyle
get_bar_space_ratio()

Return the spacing between the bars in the bar chart. The return value is a coefficient in the range 0 to 10; the default is 1.

Returns:the interval of the columns in the column map
Return type:float
get_bar_width_ratio()

Return the width of each bar in the bar chart. The return value is a coefficient in the range 0 to 10; the default is 1. The drawn bar width equals the original bar width multiplied by this coefficient.

Returns:The width of each column in the column map. Is a coefficient value, the value range is 0 to 10
Return type:float
get_count()

Return the number of sub-items of the statistics map

Returns:count the number of thematic map items
Return type:int
get_custom_graph_size_expression()

Return the field expression used to control the size of the statistical symbol drawn for each object. The field in the expression must be numeric. You can specify a field, a field expression, or a constant value; when a constant is specified, all thematic map items are displayed at that size.

Returns:field expression used to control the size of the statistical map element corresponding to the object
Return type:str
get_graph_text_format()

Return the text display format of the statistical map

Returns:The text display format of the statistics map
Return type:ThemeGraphTextFormat
get_graph_text_style()

Return the text label style on the chart. The text alignment of the coordinate axis on the statistics thematic map adopts the alignment at the bottom right corner to prevent the coordinate axis from overwriting the text

Returns:The text label style on the chart.
Return type:TextStyle
get_item(index)

Return the statistical thematic map sub-item at the specified index.

Parameters:index (int) – The specified serial number.
Returns:Statistics thematic map item
Return type:ThemeGraphItem
get_leader_line_style()

Return the style of the leader line between the statistical graph and the object it represents.

Returns:The style of the leader line between the statistical graph and the object it represents.
Return type:GeoStyle
get_max_graph_size()

Return the maximum value displayed by the statistical symbols in the statistical map. The display size of the statistical symbols in the statistical chart gradually changes between the maximum and minimum values. The maximum and minimum values of the statistical graph are a value related to the number of statistical objects and the size of the layer.

Returns:The maximum value displayed by the statistical symbol in the statistics map
Return type:float
get_min_graph_size()

Return the minimum value displayed by the statistical symbol in the statistical map.

Returns:The minimum value displayed by the statistical symbol in the statistical map.
Return type:float
get_offset_x()

Return the horizontal offset of the statistical graph

Returns:horizontal offset
Return type:str
get_offset_y()

Return the vertical offset of the chart

Returns:vertical offset
Return type:str
get_rose_angle()

Return the angle of the rose chart or 3D rose chart slices in the statistical graph. The unit is degree, accurate to 0.1 degree

Returns:The angle of the rose chart or 3D rose chart slices in the statistical graph
Return type:float
get_start_angle()

Return the starting angle of the pie chart. By default, the horizontal direction is the positive direction. The unit is degree, accurate to 0.1 degree.

Returns:The starting angle of the pie chart
Return type:float

graduated_mode

GraduatedMode – Thematic map grading mode

graph_type

ThemeGraphType – Statistical graph type of statistical thematic graph

index_of(expression)

Return the index of the sub-item with the specified statistical field expression in the current statistical map.

Parameters:expression (str) – The specified statistical field expression.
Returns:The serial number of the sub-item of the statistics map in the sequence.
Return type:int
insert(index, item)

Insert the given statistical thematic map item into the position of the specified sequence number.

Parameters:
  • index (int) – The serial number of the specified sub-item of the statistical map.
  • item (ThemeGraphItem) – The item of the statistical map to be inserted.
Returns:

Return True if the insertion is successful, otherwise False

Return type:

bool

is_all_directions_overlapped_avoided()

Return whether to allow omni-directional statistical thematic map avoidance

Returns:Whether to allow omni-directional statistics thematic map avoidance
Return type:bool
is_axes_displayed()

Return whether to display the axis

Returns:Whether to display the coordinate axis
Return type:bool
is_axes_grid_displayed()

Get whether to display the grid on the graph axis

Returns:whether to display the grid on the graph axis
Return type:bool
is_axes_text_displayed()

Return whether to display the text label of the axis.

Returns:Whether to display the text label of the axis
Return type:bool
is_display_graph_text()

Return whether to display the text label on the statistical graph

Returns:Whether to display the text label on the chart
Return type:bool
is_flow_enabled()

Return whether the statistics thematic map is displayed in flow.

Returns:Whether the statistics thematic map is displayed in flow
Return type:bool
is_global_max_value_enabled()

Return whether to use the global maximum value to make the statistical map. True means the global maximum is used as the maximum of the statistical graph elements, ensuring that elements in the same thematic layer share the same scale.

Returns:Whether to use the global maximum value to make the statistical map
Return type:bool
is_graph_size_fixed()

Return whether the statistical graph is fixed in size when zooming in or out

Returns:Whether the size of the statistical graph is fixed when zooming in or out.
Return type:bool
is_leader_line_displayed()

Return whether to display the leader line between the statistical graph and the object it represents. If the statistical symbol is drawn offset from its object, the leader line can connect the graph to the object.

Returns:Whether to display the leader line between the statistical graph and the object it represents
Return type:bool
is_negative_displayed()

Return whether the data with negative attributes is displayed in the thematic map.

Returns:Whether to display data with negative attributes in the thematic map
Return type:bool
is_offset_prj_coordinate_unit()

Get whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system

Returns:Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used.
Return type:bool
is_overlap_avoided()

Return whether the statistical graph is displayed in avoidance mode. Returns True if avoidance is used, otherwise False.

Returns:Whether to use avoidance method to display
Return type:bool
remove(index)

Delete the specified serial number of the statistical thematic map sub-item from the sub-item sequence of the statistical map.

Parameters:index (int) – the number of the specified item to be deleted
Returns:True if the deletion succeeds, otherwise False.
Return type:bool
set_all_directions_overlapped_avoided(value)

Set whether to allow the thematic map avoidance in all directions. All directions refer to the 12 directions formed by the outer border and reference line of the statistical map. Four directions refer to the directions of the four corners of the rectangular frame outside the statistical map.

Generally, statistical thematic map avoidance is performed in all directions. Although this gives more reasonable avoidance, it reduces display efficiency; to improve display efficiency, set it to False.

Parameters:value (bool) – Whether to avoid the thematic map in all directions
Returns:self
Return type:ThemeGraph
set_axes(is_displayed=True, color=(128, 128, 128), is_text_displayed=False, text_display_mode='', text_style=None, is_grid_displayed=False)

Set whether to display the coordinate axes, and configure the related axis text labels and grid.

Parameters:
  • is_displayed (bool) – Whether to display the coordinate axis.
  • color (Color or str) – axis color
  • is_text_displayed (bool) – Whether to display the text label of the axis
  • text_display_mode (GraphAxesTextDisplayMode or str) – The text mode displayed when the axis text is displayed
  • text_style (TextStyle) – The style of the axis text of the chart
  • is_grid_displayed (bool) – Whether to display the grid on the graph axis
Returns:

self

Return type:

ThemeGraph

set_bar_space_ratio(value)

Set the interval of the bars in the bar thematic map. The set value is a coefficient value, the value range is 0 to 10, and the default value is 1. The bar interval of the histogram is equal to the original interval multiplied by the coefficient value.

Parameters:value (float) – the interval of the columns in the column map
Returns:self
Return type:ThemeGraph
set_bar_width_ratio(value)

Set the width of each column in the column map. The set value is a coefficient value. The value range is from 0 to 10. The default value is 1. The column width of the histogram is equal to the original column width multiplied by the coefficient value.

Parameters:value (float) – The width of each column in the column map, the value is a coefficient value, and the value range is 0-10.
Returns:self
Return type:ThemeGraph
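The coefficient rule shared by set_bar_space_ratio and set_bar_width_ratio (drawn value = original value multiplied by a coefficient in the range 0 to 10) amounts to the following sketch, which is an illustration rather than iobjectspy internals:

```python
def scaled_bar_metric(base, ratio):
    """Drawn bar width or spacing: the original value times a coefficient.

    The coefficient must lie in the range 0 to 10 (default 1).
    """
    if not 0.0 <= ratio <= 10.0:
        raise ValueError("ratio must be in the range 0 to 10")
    return base * ratio
```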
set_custom_graph_size_expression(value)

Set a field expression used to control the size of the statistical symbol drawn for each object. The field in the expression must be numeric. You can specify a field, a field expression, or a constant value; when a constant is specified, all thematic map items are displayed at that size.

Parameters:value (str) – The field expression used to control the size of the statistical map element corresponding to the object
Returns:self
Return type:ThemeGraph
set_display_graph_text(value)

Set whether to display the text labels on the statistical graph

Parameters:value (bool) – Specify whether to display the text label on the statistical graph
Returns:self
Return type:ThemeGraph
set_flow_enabled(value)

Set whether the statistics thematic map is displayed in flow.

Parameters:value (bool) – Whether the statistics thematic map is displayed in a fluid manner.
Returns:self
Return type:ThemeGraph
set_global_max_value_enabled(value)

Set whether to use the global maximum value to make the statistical map. True means the global maximum is used as the maximum of the statistical graph elements, ensuring that elements in the same thematic layer share the same scale.

Parameters:value (bool) – Whether to use the global maximum value to make the statistical map
Returns:self
Return type:ThemeGraph
set_graduated_mode(value)

Set the thematic map classification mode

Parameters:value (GraduatedMode or str) – Set the thematic map classification mode
Returns:self
Return type:ThemeGraph
set_graph_size_fixed(value)

Set whether the size of the statistical graph is fixed when zooming in or out of the map.

Parameters:value (bool) – Whether the size of the statistical graph is fixed when zooming in or out.
Returns:self
Return type:ThemeGraph
set_graph_text_format(value)

Set the text display format of the statistical map.

Parameters:value (ThemeGraphTextFormat or str) – The text display format of the statistics map
Returns:self
Return type:ThemeGraph
set_graph_text_style(value)

Set the text label style on the chart. The text alignment of the coordinate axis on the statistics thematic map adopts the alignment at the bottom right corner to prevent the coordinate axis from overwriting the text

Parameters:value (TextStyle) – text label style on the chart
Returns:self
Return type:ThemeGraph
set_graph_type(value)

Set the graph type of the graph thematic map. According to the actual data and usage, different types of statistical graphs can be selected.

Parameters:value (ThemeGraphType or str) – Statistical graph type of thematic graph
Returns:self
Return type:ThemeGraph
set_leader_line(is_displayed, style)

Set whether to display the leader line between the statistical graph and the object it represents, and its style.

Parameters:
  • is_displayed (bool) – Whether to display the leader line between the statistical graph and the object it represents
  • style (GeoStyle) – The style of the leader line between the graph and its representation.
Returns:

self

Return type:

ThemeGraph

set_max_graph_size(value)

Set the maximum value of the statistical symbols displayed in the statistical map. The display size of the statistical symbols in the statistical chart gradually changes between the maximum and minimum values. The maximum and minimum values of the statistical graph are a value related to the number of statistical objects and the size of the layer.

When is_graph_size_fixed() is True, the unit is 0.01 mm; when it is False, map units are used.

Parameters:value (float) – The maximum value displayed by the statistical symbol in the statistical map
Returns:self
Return type:ThemeGraph
set_min_graph_size(value)

Set the minimum value displayed by the statistical symbols in the statistical map. The display size of the statistical symbols in the statistical chart gradually changes between the maximum and minimum values. The maximum and minimum values of the statistical graph are a value related to the number of statistical objects and the size of the layer.

When is_graph_size_fixed() is True, the unit is 0.01 mm; when it is False, map units are used.

Parameters:value (float) – The minimum value displayed by the statistical symbols in the statistical map.
Returns:self
Return type:ThemeGraph
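One plausible reading of how the minimum and maximum graph sizes interact (an assumption for illustration, not iobjectspy internals) is a linear interpolation of the drawn symbol size between the two bounds:

```python
def symbol_size(value, max_value, min_size, max_size):
    """Interpolate a symbol size linearly between min_size and max_size.

    Values at or above max_value get max_size; a value of 0 gets min_size.
    (Assumed behaviour for illustration only.)
    """
    if max_value <= 0:
        return min_size
    t = min(abs(value) / max_value, 1.0)
    return min_size + t * (max_size - min_size)
```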
set_negative_displayed(value)

Set whether to display data with negative attributes in the thematic map.

This method has no effect on area charts, step charts, line charts, point charts, bar charts, and 3D bar charts, because negative data is always drawn for these types.

For pie charts, 3D pie charts, rose charts, 3D rose charts, and pyramid thematic maps (bar and surface), if this parameter is set to True, the absolute value of each negative value is taken and drawn as a positive value; if set to False, negative data is not drawn.

Parameters:value (bool) – Whether to display data with negative attributes in the thematic map
Returns:self
Return type:ThemeGraph
set_offset_prj_coordinate_unit(value)

Set whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used. For details, check the set_offset_x() and set_offset_y() interfaces.

Parameters:value (bool) – Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system
Returns:self
Return type:ThemeGraph
set_offset_x(value)

Set the horizontal offset of the statistical graph

Parameters:value (str) – horizontal offset
Returns:self
Return type:ThemeGraph
set_offset_y(value)

Set the vertical offset of the chart

Parameters:value (str) – vertical offset
Returns:self
Return type:ThemeGraph
set_overlap_avoided(value)

Set whether the statistical graph is displayed in avoidance mode.

  • When making a statistical thematic map for a dataset with avoidance enabled: if Map.is_overlap_displayed() is True, heavily overlapping statistical graphs may still not be completely avoided; if Map.is_overlap_displayed() is False, some statistical graphs are filtered out so that none of the displayed graphs overlap.
  • When making a statistical thematic map and a label thematic map on the same dataset: if the statistical graph does not display sub-item text, the graph and an overlapping label are both displayed normally; if the statistical graph displays sub-item text that overlaps a label, the sub-item text is filtered out and the label is shown; if they do not overlap, both display normally.
Parameters:value (bool) – Whether to use avoidance method to display
Returns:self
Return type:ThemeGraph
set_rose_angle(value)

Set the angle of the rose chart or 3D rose chart slices in the statistical chart. The unit is degree, accurate to 0.1 degree.

Parameters:value (float) – The angle of the rose chart or 3D rose chart slices in the statistical graph.
Returns:self
Return type:ThemeGraph
set_start_angle(value)

Set the starting angle of the pie chart. By default, the horizontal direction is the positive direction. The unit is degree, accurate to 0.1 degree. It is valid only when the selected statistical graph type is a pie chart (pie chart, 3D pie chart, rose chart, 3D rose chart).

Parameters:value (float) –
Returns:self
Return type:ThemeGraph
class iobjectspy.mapping.GraduatedMode

Bases: iobjectspy._jsuperpy.enums.JEnum

This class defines the constants of thematic map classification mode. Mainly used in statistics thematic maps and graduated symbols thematic maps.

Grading is mainly used to reduce the differences between data values when making thematic maps. When there is a large gap between values, the logarithmic or square root grading method can be used; this reduces the absolute differences between values, giving the thematic map a better visual effect and making comparisons between categories meaningful. There are three grading modes: constant, logarithm, and square root. For fields containing negative numbers, the logarithm and square root methods use the absolute value of the negative number in the calculation.

Variables:
  • GraduatedMode.CONSTANT – Constant graduated mode. Perform hierarchical operations according to the linear ratio of the original values in the attribute table.
  • GraduatedMode.SQUAREROOT – Square root grading mode. Perform hierarchical operations according to the linear ratio of the square root of the original value in the attribute table.
  • GraduatedMode.LOGARITHM – Logarithmic grading mode. Perform a hierarchical operation according to the linear ratio of the natural logarithm of the original value in the attribute table.
CONSTANT = 0
LOGARITHM = 2
SQUAREROOT = 1
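The three modes correspond to the value transforms applied before linear scaling. A sketch of the documented behaviour (not the iobjectspy implementation), using absolute values for negative inputs under SQUAREROOT and LOGARITHM:

```python
import math

def graduate(value, mode):
    """Transform a raw attribute value according to the grading mode."""
    if mode == "CONSTANT":
        return value
    if mode == "SQUAREROOT":
        return math.sqrt(abs(value))
    if mode == "LOGARITHM":
        return math.log(abs(value))
    raise ValueError("unknown graduated mode: %s" % mode)
```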
class iobjectspy.mapping.ThemeGraduatedSymbol(expression=None, base_value=0, positive_style=None, graduated_mode='CONSTANT')

Bases: iobjectspy._jsuperpy.mapping.Theme

The class of graduated symbols thematic map.

The graduated symbol thematic map in SuperMap iObjects uses symbols of different shapes, colors and sizes to represent the quantitative and qualitative characteristics of objects. The shape and color of a symbol usually indicate qualitative characteristics, while its size indicates quantitative characteristics.

For example, you can create graduated symbols thematic map objects in the following ways:

>>> theme = ThemeGraduatedSymbol.make_default(dataset,'SmID')

or:

>>> theme = ThemeGraduatedSymbol()
>>> theme.set_expression('SmID').set_graduated_mode(GraduatedMode.CONSTANT).set_base_value(120).set_flow_enabled(True)
Parameters:
  • expression (str) – The field or field expression used to create the graduated symbol thematic map; it should be a numeric field or numeric field expression
  • base_value (float) – The base value of the graduated symbol map, the unit is the same as the unit of the thematic variable
  • positive_style (GeoStyle) – positive grade symbol style
  • graduated_mode (GraduatedMode or str) – graduated symbol map classification mode
base_value

float – the base value of the graduated symbol thematic map, the unit is the same as the unit of the theme variable.

expression

str – The field or field expression used to create the graduated symbol thematic map.

get_leader_line_style()

Return the style of the leader line between the grade symbol and its corresponding object.

Returns:The style of the leader line between the grade symbol and its corresponding object
Return type:GeoStyle
get_negative_style()

Return the grade symbol style of a negative value.

Returns:negative grade symbol style
Return type:GeoStyle
get_offset_x()

Get the X coordinate direction (lateral) offset of the grade symbol

Returns:X coordinate direction (lateral) offset of grade symbol
Return type:str
get_offset_y()

Get the Y coordinate direction (vertical) offset of the grade symbol

Returns:Y coordinate direction (vertical) offset of grade symbol
Return type:str
get_positive_style()

Return the positive grade symbol style.

Returns:Positive grade symbol style.
Return type:GeoStyle
get_zero_style()

Return the grade symbol style with a value of 0.

Returns:0 value grade symbol style
Return type:GeoStyle
graduated_mode

GraduatedMode – The grading mode of the graduated symbol thematic map

is_flow_enabled()

Return whether the graduated symbols use flow display

Returns:Whether the graduated symbols use flow display
Return type:bool
is_leader_line_displayed()

Return whether to display the leader line between the grade symbol and its corresponding object.

Returns:Whether to display the leader line between the grade symbol and its corresponding object
Return type:bool
is_negative_displayed()

Return whether to display the negative grade symbol style; True means display

Returns:Whether to display the negative grade symbol style
Return type:bool
is_offset_prj_coordinate_unit()

Get whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system

Returns:Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used.
Return type:bool
is_zero_displayed()

Return whether to display the grade symbol style of 0 value, True means display

Returns:Whether to display the grade symbol style of 0 value
Return type:bool
static make_default(dataset, expression, graduated_mode='CONSTANT')

Generate the default graduated symbol thematic map.

Parameters:
  • dataset (DatasetVector or str) – Vector dataset.
  • expression (str) – field expression
  • graduated_mode (GraduatedMode or str) – Type of thematic map classification mode.
Returns:

thematic map of graduated symbols

Return type:

ThemeGraduatedSymbol

set_base_value(value)

Set the base value of the graduated symbol thematic map, the unit is the same as the unit of thematic variable.

The display size of each symbol equals the GeoStyle.marker_size() of get_positive_style() (or get_zero_style() or get_negative_style()) multiplied by value / base_value, where value is the thematic value after the grading calculation, that is, the original thematic value transformed according to the grading mode selected by the user (graduated_mode).

Parameters:value (float) – Base value of the graduated symbol map
Returns:self
Return type:ThemeGraduatedSymbol
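As a worked example of the sizing formula above (plain arithmetic only, not library code; the marker size and values are made up):

```python
def symbol_display_size(marker_size, graded_value, base_value):
    """Display size = marker_size * graded_value / base_value."""
    return marker_size * graded_value / base_value

# With a 4 mm base marker and base value 120, a graded thematic
# value of 360 draws a symbol three times as large.
print(symbol_display_size(4.0, 360.0, 120.0))  # 12.0
```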
set_expression(value)

Set the field or field expression used to create the graduated symbol thematic map; it should be a numeric field or expression.

Parameters:value (str) – The field or field expression used to create the graduated symbol map.
Returns:self
Return type:ThemeGraduatedSymbol
set_flow_enabled(value)

Set whether the grade symbol is displayed in a fluid manner.

Parameters:value (bool) – Whether the grade symbols are displayed in a fluid manner.
Returns:self
Return type:ThemeGraduatedSymbol
set_graduated_mode(value)

Set the grading mode of graduated symbols thematic map.

  • Grading is mainly used to reduce the differences between data values when making graduated symbol thematic maps. When there are large gaps between the data, the logarithmic or square root grading method can be used; this compresses the absolute differences between values, improves the visual effect of the graduated symbols, and keeps comparisons between categories meaningful;
  • There are three grading modes: constant, logarithm and square root. For fields containing negative values, the logarithm and square root grading methods cannot be used;
  • The grading mode determines the value used to compute the symbol size: constant uses the original field value, logarithm uses the natural logarithm of the thematic value of each record, and square root uses its square root; the transformed result then determines the size of the graduated symbol.
Parameters:value (GraduatedMode or str) – Grading mode of graduated symbol map
Returns:self
Return type:ThemeGraduatedSymbol
set_leader_line(is_displayed, style)

Set to display the leader line between the grade symbol and its corresponding object

Parameters:
  • is_displayed (bool) – Whether to display the leader line between the graduated symbol and its corresponding object
  • style (GeoStyle) – the style of the leader line
Returns:

self

Return type:

ThemeGraduatedSymbol

set_negative_displayed(is_displayed, style)

Set whether to display the grade symbol style of negative values; True means display.

Parameters:
  • is_displayed (bool) – Whether to display the grade symbol style of negative values
  • style (GeoStyle) – negative grade symbol style
Returns:

self

Return type:

ThemeGraduatedSymbol

set_offset_prj_coordinate_unit(value)

Set whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system. If it is True, it is the geographic coordinate unit, otherwise the device unit is used. For details, check the set_offset_x() and set_offset_y() interfaces.

Parameters:value (bool) – Whether the unit of the horizontal or vertical offset is the unit of the geographic coordinate system
Returns:self
Return type:ThemeGraduatedSymbol
set_offset_x(value)

Set the X coordinate direction (lateral) offset of the grade symbol

Parameters:value (str) – X coordinate direction (horizontal) offset of grade symbol
Returns:self
Return type:ThemeGraduatedSymbol
set_offset_y(value)

Set the Y coordinate direction (vertical) offset of the grade symbol

Parameters:value (str) – Y coordinate direction (vertical) offset of grade symbol
Returns:self
Return type:ThemeGraduatedSymbol
set_positive_style(value)

Set the positive grade symbol style.

Parameters:value (GeoStyle) – The style of the graduated symbol with a positive value.
Returns:self
Return type:ThemeGraduatedSymbol
set_zero_displayed(is_displayed, style)

Set whether to display the grade symbol style of 0 value.

Parameters:
  • is_displayed (bool) – Whether to display the 0-value grade symbol style
  • style (GeoStyle) – the grade symbol style for the value 0
Returns:

self

Return type:

ThemeGraduatedSymbol

class iobjectspy.mapping.ThemeDotDensity(expression=None, value=None, style=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

Point density thematic map class.

The dot density thematic map of SuperMap iObjects uses dots of a fixed size and identical shape to represent the distribution range, quantitative characteristics and distribution density of a phenomenon. The number of dots, and the value each dot represents, are determined by the content of the map.

The following code demonstrates how to make a dot density thematic map:

>>> ds = open_datasource('/home/data/data.udb')
>>> dt = ds['world']
>>> mmap = Map()
>>> theme = ThemeDotDensity('Pop_1994', 10000000.0)
>>> mmap.add_dataset(dt, True, theme)
>>> mmap.set_image_size(2000, 2000)
>>> mmap.view_entire()
>>> mmap.output_to_file('/home/data/mapping/dotdensity_theme.png')
Parameters:
  • expression (str) – The field or field expression used to create the dot density map.
  • value (float) – The value represented by each dot in the thematic map. The choice of dot value is related to the map scale and the dot size: the larger the map scale, the larger the drawing area and the more dots it can hold, so the dot value can be set smaller; the larger the dot shape, the smaller the dot value should be set. A dot value that is too large or too small is inappropriate.
  • style (GeoStyle) – the style of the points in the point density map
expression

str – The field or field expression used to create the dot density thematic map.

set_expression(expression)

Set the field or field expression used to create the dot density thematic map.

Parameters:expression (str) – field or field expression used to create the dot density map
Returns:self
Return type:ThemeDotDensity
set_style(style)

Set the point style in the point density map.

Parameters:style (GeoStyle) – the style of the points in the point density map
Returns:self
Return type:ThemeDotDensity
set_value(value)

Set the value represented by each point in the thematic map.

The choice of dot value is related to the map scale and the dot size. The larger the map scale, the larger the corresponding drawing area and the more dots it can hold, so the dot value can be set relatively small. The larger the dot shape, the smaller the dot value should be set. A dot value that is too large or too small is inappropriate.

Parameters:value (float) – The value represented by each point in the thematic map.
Returns:self
Return type:ThemeDotDensity
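The relation between a record's thematic value and the number of dots drawn for it can be sketched as follows (illustrative arithmetic only; the actual dot placement is handled by the library):

```python
def dot_count(field_value, point_value):
    """Approximate number of dots drawn for one record: each dot
    stands for point_value units of the thematic field."""
    return round(field_value / point_value)

# With the example value of 10,000,000 per dot, a population of
# 1,221,462,000 is represented by about 122 dots.
print(dot_count(1221462000, 10000000.0))  # 122
```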
style

GeoStyle – Point style in the thematic map of point density

value

float – the value represented by each point in the thematic map

class iobjectspy.mapping.ThemeGridUniqueItem(unique_value, color, caption, visible=True)

Bases: object

The sub-item class of grid unique values map.

Parameters:
  • unique_value (str) – the unique value of the sub-item of the grid unique values map
  • color (Color) – The display color of the grid unique values map item
  • caption (str) – The name of the sub-item of the grid unique values map
  • visible (bool) – Whether the grid unique values map item is visible
caption

str – the name of the sub-item of the grid unique values map

color

Color – the display color of the grid unique values map item

set_caption(value)

Set the name of each grid unique values map item

Parameters:value (str) – The name of each grid unique values map item
Returns:self
Return type:ThemeGridUniqueItem
set_color(value)

Set the display color of each grid unique values map item.

Parameters:value (Color or str) – The display color of the grid unique values map item.
Returns:self
Return type:ThemeGridUniqueItem
set_unique_value(value)

Set the unique value of grid unique values map item

Parameters:value (str) – the unique value of the sub-item of the grid unique values map
Returns:self
Return type:ThemeGridUniqueItem
set_visible(value)

Set whether the grid unique values map item is visible

Parameters:value (bool) – Whether the grid unique values map item is visible
Returns:self
Return type:ThemeGridUniqueItem
unique_value

str – the unique value of the sub-item of the grid unique values map

visible

bool – Whether the grid unique values map item is visible

class iobjectspy.mapping.ThemeGridUnique(items=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

The grid unique values map class.

The grid unique values map classifies cells with the same value into one category and assigns each category a color to distinguish it from the others. The grid unique values map is suitable for discrete raster data and for some continuous raster data; for continuous raster data in which nearly every cell value differs, a grid unique values map is not meaningful.

For example, you can create raster thematic map objects in the following ways:

>>> theme = ThemeGridUnique.make_default(dataset, 'RAINBOW')

or:

>>> theme = ThemeGridUnique()
>>> theme.add(ThemeGridUniqueItem(1, Color.rosybrown(), '1'))
>>> theme.add(ThemeGridUniqueItem(2, Color.coral(), '2'))
>>> theme.add(ThemeGridUniqueItem(3, Color.darkred(), '3'))
>>> theme.add(ThemeGridUniqueItem(4, Color.blueviolet(), '4'))
>>> theme.add(ThemeGridUniqueItem(5, Color.greenyellow(), '5'))
>>> theme.set_default_color(Color.white())
Parameters:items (list[ThemeGridUniqueItem] or tuple[ThemeGridUniqueItem]) – grid unique values map sub-item list
add(item)

Add sub item of grid unique values map

Parameters:item (ThemeGridUniqueItem) – grid unique values map item
Returns:self
Return type:ThemeGridUnique
clear()

Delete all grid unique values map sub-items.

Returns:self
Return type:ThemeGridUnique
extend(items)

Add sub-items of grid unique values map in batch

Parameters:items (list[ThemeGridUniqueItem] or tuple[ThemeGridUniqueItem]) – grid unique values map sub-item list
Returns:self
Return type:ThemeGridUnique
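Batch-building sub-items typically starts from the distinct cell values of the raster. A minimal plain-Python sketch of that preparation step (the palette strings and captions are made up, and the tuples are a stand-in for the ThemeGridUniqueItem arguments that would be passed to extend(), not the API itself):

```python
def build_unique_items(cell_values, palette):
    """Pair each distinct cell value with a color from the palette
    (cycling when there are more values than colors) and a caption."""
    uniques = sorted(set(cell_values))
    return [(v, palette[i % len(palette)], str(v))
            for i, v in enumerate(uniques)]

cells = [1, 2, 2, 3, 1, 5, 4, 4]
items = build_unique_items(cells, ['rosybrown', 'coral', 'darkred'])
print(items[0])    # (1, 'rosybrown', '1')
print(len(items))  # 5 distinct values
```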
get_count()

Return the number of grid unique values map items

Returns:the number of sub-items of the grid unique values map
Return type:int
get_default_color()

Return the default color of the grid unique values map. Cells whose values are not listed in the sub-items are displayed in this color. If not set, the default color of the layer is used.

Returns:The default color of grid unique values map.
Return type:Color
get_item(index)

Return the grid unique values map item of the specified serial number

Parameters:index (int) – The number of the specified grid unique values map item
Returns:Grid unique values map subitem with specified serial number
Return type:ThemeGridUniqueItem
get_special_value()

Return the special value of the grid unique value thematic layer. When adding a new raster layer, the return value of this method is equal to the NoValue property value of the dataset.

Returns:The special value of the grid unique value thematic layer.
Return type:float
get_special_value_color()

Return the color of the special value of the grid unique value thematic layer

Returns:The color of the special value of the grid unique value thematic layer.
Return type:Color
index_of(unique_value)

Return the serial number, in the current sequence, of the sub-item with the specified unique value in the grid unique values map.

Parameters:unique_value (int) – the unique value of the given grid unique values map item
Returns:The serial number value of the grid thematic map item in the sequence. If the value does not exist, -1 is returned.
Return type:int
insert(index, item)

Insert the given grid unique values map item to the position of the specified sequence number.

Parameters:
  • index (int) – The serial number of the specified grid unique values map sub-item sequence.
  • item (ThemeGridUniqueItem) – the inserted grid unique values map item.
Returns:

self

Return type:

ThemeGridUnique

is_special_value_transparent()

Whether the area of the special value of the grid unique value thematic layer is transparent.

Returns:Whether the area of the special value of the grid unique value thematic layer is transparent; True means transparent; False means opaque.
Return type:bool
static make_default(dataset, color_gradient_type=None)

Generate the default grid unique values map from the given raster dataset and color gradient mode. Only raster datasets with integer cell values are supported; a grid unique values map cannot be made from a raster dataset with floating-point cell values.

Parameters:
  • dataset (DatasetGrid or str) – raster dataset.
  • color_gradient_type (ColorGradientType or str) – Color gradient mode.
Returns:

object instance of the new grid unique values map class

Return type:

ThemeGridUnique

remove(index)

Delete a sub-item of the grid unique values map with a specified serial number.

Parameters:index (int) – The specified sequence number of the sub-item of the grid unique values map to be deleted.
Returns:True if the deletion succeeds; otherwise False.
Return type:bool
reverse_color()

Display the colors of the sub-items in the grid unique values map in reverse order.

Returns:self
Return type:ThemeGridUnique
set_default_color(color)

Set the default color of the grid unique values map; cells whose values are not listed in the sub-items are displayed in this color. If not set, the default color of the layer is used.

Parameters:color (Color or str) – the default color of grid unique values map
Returns:self
Return type:ThemeGridUnique
set_special_value(value, color, is_transparent=False)

Set the special value of the grid unique value thematic layer.

Parameters:
  • value (float) – The special value of the grid unique value thematic layer.
  • color (Color or str) – The color of the special value of the grid unique value thematic layer.
  • is_transparent (bool) – Whether the area where the special value of the grid unique value thematic layer is located is transparent. True means transparent; False means opaque.
Returns:

self

Return type:

ThemeGridUnique

class iobjectspy.mapping.ThemeGridRangeItem(start, end, color, caption, visible=True)

Bases: object

The sub-item class of the grid range map.

In the grid range map, the expression value of the range field is divided into multiple ranges according to a certain range mode. This class is used to set the start value, end value, name and color of each range segment. The range represented by each segment is [Start, End).

Parameters:
  • start (float) – the starting value of the sub item of the grid range map
  • end (float) – the end value of the grid range map item
  • color (Color) – The corresponding color of each range map item in the grid ranges map.
  • caption (str) – The name of the item in the grid range map
  • visible (bool) – Whether the sub items in the grid range map are visible
caption

str – The name of the item in the grid range map

color

Color – The corresponding color of each range thematic map item in the grid range map.

end

float – the end value of the grid range thematic map item

set_caption(value)

Set the name of the item in the grid range map.

Parameters:value (str) – The name of the item in the grid range map.
Returns:self
Return type:ThemeGridRangeItem
set_color(value)

Set the corresponding color of each range map item in the grid ranges map.

Parameters:value (Color or str) – The corresponding color of each range map item in the grid ranges map.
Returns:self
Return type:ThemeGridRangeItem
set_end(value)

Set the end value of the grid range map sub-item. Note: if the sub-item is the last one in the segment sequence, the end value is the maximum value of the sequence; if it is not the last one, the end value must equal the start value of the next sub-item, otherwise the system will throw an exception.

Parameters:value (float) – The end value of the grid range map item.
Returns:self
Return type:ThemeGridRangeItem
set_start(value)

Set the starting value of the grid range map sub-item. Note: if the sub-item is the first one in the segment sequence, the starting value is the minimum value of the sequence; if its sequence number is greater than or equal to 1, the starting value must equal the end value of the previous sub-item, otherwise the system will throw an exception.

Parameters:value (float) – The starting value of the grid range map item.
Returns:self
Return type:ThemeGridRangeItem
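The start/end rules above, where each sub-item covers [start, end) and adjacent sub-items must share a boundary, can be checked with a small helper. This is an illustrative sketch, not part of the API:

```python
def segments_are_contiguous(segments):
    """segments: list of (start, end) pairs in order.

    Each pair must satisfy start < end, and every start after the
    first must equal the previous segment's end, mirroring the
    [start, end) convention of ThemeGridRangeItem.
    """
    for i, (start, end) in enumerate(segments):
        if start >= end:
            return False
        if i > 0 and start != segments[i - 1][1]:
            return False
    return True

print(segments_are_contiguous([(-999, 3), (3, 6), (6, 9)]))  # True
print(segments_are_contiguous([(-999, 3), (4, 6)]))          # False (gap)
```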
set_visible(value)

Set whether the sub items in the grid range map are visible

Parameters:value (bool) – Whether the sub items in the grid range map are visible
Returns:self
Return type:ThemeGridRangeItem
start

float – the starting value of the grid range thematic map item

visible

bool – Whether the subitem in the grid range map is visible

class iobjectspy.mapping.ThemeGridRange(items=None)

Bases: iobjectspy._jsuperpy.mapping.Theme

The grid range thematic map class.

The grid range thematic map divides the values of all cells into several ranges according to a chosen range method, and cells whose values fall in the same range are displayed in the same color. Grid range thematic maps are generally used to reflect the quantity or degree of a continuously distributed phenomenon, for example displaying, in segments, the raster data interpolated from the annual precipitation observed at each meteorological station in a national precipitation distribution map. This class is similar to the range thematic map class; the difference is that the range thematic map operates on vector data, while the grid range thematic map operates on raster data.

For example, you can create raster thematic map objects in the following ways:

>>> theme = ThemeGridRange.make_default(dataset, 'EQUALINTERVAL', 6, 'RAINBOW')

or:

>>> theme = ThemeGridRange()
>>> theme.add(ThemeGridRangeItem(-999, 3,'rosybrown', '1'))
>>> theme.add(ThemeGridRangeItem(3, 6,'darkred', '2'))
>>> theme.add(ThemeGridRangeItem(6, 9,'cyan', '3'))
>>> theme.add(ThemeGridRangeItem(9, 20,'blueviolet', '4'))
>>> theme.add(ThemeGridRangeItem(20, 52,'darkkhaki', '5'))
Parameters:items (list[ThemeGridRangeItem] or tuple[ThemeGridRangeItem]) – grid range thematic map sub-item list
add(item, is_normalize=True, is_add_to_head=False)

Add sub item list of grid range map

Parameters:
  • item (ThemeGridRangeItem) – Grid range thematic map item list
  • is_normalize (bool) – Whether to normalize: when True, an illegal item value is normalized; when False, an illegal item value causes an exception to be thrown
  • is_add_to_head (bool) – Whether to add to the head of the segment list, if it is True, add to the head, otherwise add to the tail.
Returns:

self

Return type:

ThemeGridRange

clear()

Delete all range sub-items of the grid range map. After this method executes, all sub-items of the grid range thematic map are released and no longer available.

Returns:self
Return type:ThemeGridRange
extend(items, is_normalize=True)

Add the sub-item list of grid range thematic map in batches. Added to the end by default.

Parameters:
  • items (list[ThemeGridRangeItem] or tuple[ThemeGridRangeItem]) – grid range thematic map sub-item list
  • is_normalize (bool) – Whether to normalize: when True, an illegal item value is normalized; when False, an illegal item value causes an exception to be thrown
Returns:

self

Return type:

ThemeGridRange

get_count()

Return the number of ranges in the grid ranges map

Returns:the number of ranges in the grid range map
Return type:int
get_item(index)

Return the range thematic map item of the grid range map with the specified serial number

Parameters:index (int) – The specified sub-item number of the grid range map.
Returns:The range map sub-item of the grid range map with the specified serial number.
Return type:ThemeGridRangeItem
get_special_value()

Return the special value of the grid segment thematic layer.

Returns:The special value of the grid segment thematic layer.
Return type:float
get_special_value_color()

Return the color of the special value of the grid segment thematic layer

Returns:the color of the special value of the raster segment thematic layer
Return type:Color
index_of(value)

Return the serial number of the specified range field value in the current range sequence in the grid ranges map.

Parameters:value (str) – the specified range field value
Returns:The sequence number of the segment field value in the segment sequence. If the value of the given segment field does not have a corresponding sequence number, -1 is returned.
Return type:int
is_special_value_transparent()

Whether the area where the special value of the raster segment thematic layer is located is transparent.

Returns:Whether the area where the special value of the raster segment thematic layer is located is transparent; True means transparent; False means opaque.
Return type:bool
static make_default(dataset, range_mode, range_parameter, color_gradient_type=None)

According to the given raster dataset, segmentation mode and corresponding segmentation parameters, a default raster segmentation thematic map is generated.

Parameters:
  • dataset (DatasetGrid or str) – raster dataset.
  • range_mode (RangeMode or str) – segment mode. Only the equal-interval, square root, logarithmic and custom-distance segmentation methods are supported.
  • range_parameter (float) – segment parameter. When the segmentation mode is equal-interval, square root, or logarithmic, this parameter is the number of segments; when the segmentation mode is custom distance, this parameter is the custom interval.
  • color_gradient_type (ColorGradientType or str) – Color gradient mode.
Returns:

The new grid range thematic map object.

Return type:

ThemeGridRange
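For the equal-interval mode, the breakpoints implied by a segment count can be sketched as follows (a simplified illustration; the library derives these from the dataset's extreme values):

```python
def equal_interval_breaks(min_value, max_value, count):
    """Return the count+1 boundary values that split [min, max]
    into `count` equally wide segments."""
    step = (max_value - min_value) / count
    return [min_value + step * i for i in range(count + 1)]

print(equal_interval_breaks(0.0, 60.0, 6))
# [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
```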

range_mode

RangeMode – The segmentation mode of the current thematic map

reverse_color()

Display the styles of the ranges in the range map in reverse order.

Returns:self
Return type:ThemeGridRange
set_special_value(value, color, is_transparent=False)

Set the special value of the grid segment thematic layer.

Parameters:
  • value (float) – The special value of the grid segment thematic layer.
  • color (Color or str) – The color of the special value of the grid segment thematic layer.
  • is_transparent (bool) – Whether the area where the special value of the grid segmentation thematic layer is located is transparent. True means transparent; False means opaque.
Returns:

self

Return type:

ThemeGridRange

class iobjectspy.mapping.ThemeCustom

Bases: iobjectspy._jsuperpy.mapping.Theme

Custom thematic map class, which can dynamically set the display style through field expressions.

get_fill_back_color_expression()

Return the field expression representing the background color of the fill.

Returns:Represents the field expression to fill the background color.
Return type:str
get_fill_fore_color_expression()

Return the field expression representing the fill color.

Returns:field expression representing the fill color
Return type:str
get_fill_gradient_angle_expression()

Return the field expression representing the filling angle

Returns:field expression representing the filling angle
Return type:str
get_fill_gradient_mode_expression()

Return the field expression representing the fill gradient type.

Returns:Represents the field expression of the fill gradient type.
Return type:str
get_fill_gradient_offset_ratio_x_expression()

Return the field expression representing the offset of the filling center point in the X direction

Returns:A field expression representing the offset of the center point in the X direction
Return type:str
get_fill_gradient_offset_ratio_y_expression()

Return the field expression representing the offset of the center point of the filling in the Y direction

Returns:A field expression representing the offset of the center point in the Y direction
Return type:str
get_fill_opaque_rate_expression()

Return a field expression representing the opacity of the fill

Returns:Field expression representing the opacity of the fill
Return type:str
get_fill_symbol_id_expression()

Return the field expression representing the style of the fill symbol.

Returns:Represents the field expression of the fill symbol style.
Return type:str
get_line_color_expression()

Get the field expression representing the color of the line symbol or point symbol

Returns:field expression representing the color of the line symbol or point symbol
Return type:str
get_line_symbol_id_expression()

Get the field expression representing the style of the line symbol

Returns:field expression representing the style of the line symbol
Return type:str
get_line_width_expression()

Get the field expression representing the line width of the line symbol

Returns:field expression representing the line width of the line symbol
Return type:str
get_marker_angle_expression()

Return the field expression representing the rotation angle of the point symbol. The direction of rotation is counterclockwise and the unit is degrees

Returns:field expression representing the rotation angle of the point symbol
Return type:str
get_marker_size_expression()

Return the field expression representing the size of the point symbol. Unit is mm

Returns:Field expression representing the size of the point symbol.
Return type:str
get_marker_symbol_id_expression()

Return the field expression representing the point symbol style.

Returns:field expression representing the point symbol style
Return type:str
is_argb_color_mode()

Return whether colors in color expressions are interpreted using the RGB rule. The default value is False.

When the value is True, a color expression expresses the color as RRGGBB (a hexadecimal color converted to a decimal value; generally obtained by converting the hexadecimal value shown in the desktop color panel to decimal).

When the value is False, a color expression expresses the color as BBGGRR (also a hexadecimal color converted to decimal: take the target color from the desktop color panel, swap its R and B components, then convert the resulting hexadecimal value to decimal; that is the BBGGRR decimal value of the target color).

Returns:Whether colors in color expressions are interpreted using the RGB rule
Return type:bool
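The difference between the two rules amounts to swapping the R and B bytes before converting the hexadecimal color to a decimal value; a plain-Python sketch (not library code):

```python
def color_to_decimal(hex_rgb, argb_mode):
    """hex_rgb: a 'RRGGBB' hex string.

    In RGB mode the decimal value is int('RRGGBB', 16); otherwise
    the R and B components are swapped first, giving BBGGRR.
    """
    if not argb_mode:
        hex_rgb = hex_rgb[4:6] + hex_rgb[2:4] + hex_rgb[0:2]
    return int(hex_rgb, 16)

# Pure red, 0xFF0000:
print(color_to_decimal('FF0000', True))   # 16711680 (RRGGBB)
print(color_to_decimal('FF0000', False))  # 255 (0x0000FF after the swap)
```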
set_argb_color_mode(value)

Set whether colors in color expressions are interpreted using the RGB rule. The default value is False.

When the value is True, a color expression expresses the color as RRGGBB (a hexadecimal color converted to a decimal value; generally obtained by converting the hexadecimal value shown in the desktop color panel to decimal).

When the value is False, a color expression expresses the color as BBGGRR (also a hexadecimal color converted to decimal: take the target color from the desktop color panel, swap its R and B components, then convert the resulting hexadecimal value to decimal; that is the BBGGRR decimal value of the target color).

Parameters:value (bool) – Whether colors in color expressions are interpreted using the RGB rule
Returns:self
Return type:ThemeCustom
set_fill_back_color_expression(value)

Set the field expression that represents the fill background color

Parameters:value (str) – Represents the field expression to fill the background color.
Returns:self
Return type:ThemeCustom
set_fill_fore_color_expression(value)

Set the field expression that represents the fill color

Parameters:value (str) – field expression representing the fill color
Returns:self
Return type:ThemeCustom
set_fill_gradient_angle_expression(value)

Set the field expression representing the filling angle

Parameters:value (str) – field expression representing the filling angle
Returns:self
Return type:ThemeCustom
set_fill_gradient_mode_expression(value)

Set the field expression representing the fill gradient type.

Parameters:value (str) – Represents the field expression of the fill gradient type.
Returns:self
Return type:ThemeCustom
set_fill_gradient_offset_ratio_x_expression(value)

Set the field expression that represents the offset of the center point in the X direction

Parameters:value (str) – A field expression representing the offset of the center point in the X direction
Returns:self
Return type:ThemeCustom
set_fill_gradient_offset_ratio_y_expression(value)

Set the field expression that represents the offset of the center point of the filling in the Y direction

Parameters:value (str) – A field expression representing the offset of the filling center point in the Y direction
Returns:self
Return type:ThemeCustom
set_fill_opaque_rate_expression(value)

Set the field expression representing the opacity of the fill

Parameters:value (str) – field expression representing the opacity of the fill
Returns:self
Return type:ThemeCustom
set_fill_symbol_id_expression(value)

Set the field expression representing the style of the fill symbol.

Parameters:value (str) – Field expression that represents the filling symbol style.
Returns:self
Return type:ThemeCustom
set_line_color_expression(value)

Set the field expression representing the color of line symbol or dot symbol

Parameters:value (str) – field expression representing the color of line symbol or dot symbol
Returns:self
Return type:ThemeCustom
set_line_symbol_id_expression(value)

Set the field expression representing the style of the line symbol

Parameters:value (str) – field expression representing the style of line symbol
Returns:self
Return type:ThemeCustom
set_line_width_expression(value)

Set the field expression representing the line width of the line symbol.

Parameters:value (str) – field expression representing the line width of the line symbol
Returns:self
Return type:ThemeCustom
set_marker_angle_expression(value)

Set the field expression representing the rotation angle of the point symbol. The rotation direction is counterclockwise and the unit is degrees.

Parameters:value (str) – The field expression representing the rotation angle of the point symbol.
Returns:self
Return type:ThemeCustom
set_marker_size_expression(value)

Set the field expression representing the size of the point symbol. The unit is mm.

Parameters:value (str) – Field expression representing the size of the point symbol. The unit is mm.
Returns:self
Return type:ThemeCustom
set_marker_symbol_id_expression(value)

Set the field expression representing the point symbol style.

Parameters:value (str) – Field expression representing the point symbol style.
Returns:self
Return type:ThemeCustom
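Because each setter above returns self, a ThemeCustom can be configured fluently by chaining calls. A minimal sketch of that return-self pattern (the class below is a stand-in for illustration, not the real iobjectspy implementation):

```python
class FluentTheme:
    """Stand-in illustrating the return-self setter pattern used by ThemeCustom."""

    def __init__(self):
        self.expressions = {}

    def set_fill_fore_color_expression(self, value):
        self.expressions['fill_fore_color'] = value
        return self  # returning self enables chaining

    def set_line_width_expression(self, value):
        self.expressions['line_width'] = value
        return self

# Chained configuration in a single statement
theme = (FluentTheme()
         .set_fill_fore_color_expression('ColorField')
         .set_line_width_expression('WidthField'))
print(theme.expressions)
```
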

iobjectspy.threeddesigner module

iobjectspy.threeddesigner.linear_extrude(input_data, out_data=None, out_dataset_name='Extrude_Result', height=None, twist=0.0, scaleX=1.0, scaleY=1.0, progress=None)

Linear extrusion: extrude a vector region into an untextured "white model" according to a given height.

Parameters:
input_data – the given region dataset
out_data – output datasource
out_dataset_name – output dataset name
height – extrusion height
twist – rotation angle applied during the extrusion
scaleX – scale factor in the X direction
scaleY – scale factor in the Y direction
progress – progress event

iobjectspy.threeddesigner.build_house(input_data, out_data=None, out_dataset_name='House', wallHeight=0.0, wallMaterial=None, eaveHeight=0.0, eaveWidth=0.0, eaveMaterial=None, roofWidth=0.0, roofSlope=0.0, roofMaterial=None, progress=None)

Build a house model from polygons (walls, eaves, and roofs can be built).

Parameters:
input_data – the source vector dataset; 2D and 3D region datasets are supported
out_data – output datasource
out_dataset_name – output dataset name
wallHeight – height of the house walls
wallMaterial – wall material parameters
eaveHeight – eave height
eaveWidth – eave width
eaveMaterial – eave material parameters
roofWidth – roof width
roofSlope – roof slope, in degrees
roofMaterial – roof material parameters
progress – progress event
Returns: the resulting model dataset

iobjectspy.threeddesigner.compose_models(value)
class iobjectspy.threeddesigner.Material3D

Bases: object

Material parameter settings: mainly color, texture image, texture repeat mode, and repeat counts.

color
is_texture_times_repeat
set_color(color)
set_is_texture_times_repeat(b)
set_texture_file(textureFile)
set_uTiling(uTiling)
set_vTiling(vTiling)
texture_file
uTiling
vTiling
iobjectspy.threeddesigner.building_height_check(input_data=None, height=0.0)

Planning height-limit check.

Parameters:
input_data – building model recordset or dataset
height – the height limit
Returns: the IDs of buildings exceeding the height limit
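Conceptually, the check filters building records whose height exceeds the limit and reports their IDs. An illustrative sketch on plain Python data (not the library's implementation, which operates on a model recordset or dataset):

```python
def height_check(buildings, limit):
    """Return the IDs of buildings whose height exceeds the limit.

    `buildings` is assumed here to be an iterable of (id, height) pairs;
    the real API takes a building model recordset or dataset instead.
    """
    return [bid for bid, h in buildings if h > limit]

models = [(1, 24.0), (2, 55.5), (3, 80.2)]
print(height_check(models, 50.0))  # [2, 3]
```
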

iobjectspy.threeddesigner.lonLatToENU(points, pntInsert)

Convert longitude/latitude points to points in a Cartesian coordinate system whose insertion point is the given longitude/latitude position.

Parameters:
points – the points to convert (2D and 3D points are supported)
pntInsert – the insertion point
Returns: the points (3D) in the Cartesian coordinate system with the longitude/latitude insertion point
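For reference, the standard geodetic lon/lat(/height) to local ENU (East-North-Up) conversion about an insertion point can be sketched as below. This is an independent illustration using the WGS84 ellipsoid, not the library's internal code:

```python
import math

A = 6378137.0            # WGS84 semi-major axis (meters)
E2 = 6.69437999014e-3    # WGS84 first eccentricity squared

def geodetic_to_ecef(lon, lat, h=0.0):
    """Convert geodetic (lon, lat in degrees, h in meters) to ECEF XYZ."""
    lon, lat = math.radians(lon), math.radians(lat)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def lonlat_to_enu(point, origin):
    """Convert a (lon, lat[, h]) point to ENU coordinates about `origin`."""
    lon0, lat0 = math.radians(origin[0]), math.radians(origin[1])
    dx, dy, dz = (p - o for p, o in zip(geodetic_to_ecef(*point),
                                        geodetic_to_ecef(*origin)))
    e = -math.sin(lon0) * dx + math.cos(lon0) * dy
    n = (-math.sin(lat0) * math.cos(lon0) * dx
         - math.sin(lat0) * math.sin(lon0) * dy
         + math.cos(lat0) * dz)
    u = (math.cos(lat0) * math.cos(lon0) * dx
         + math.cos(lat0) * math.sin(lon0) * dy
         + math.sin(lat0) * dz)
    return e, n, u

# The insertion point itself maps to (0, 0, 0)
print(lonlat_to_enu((116.0, 40.0, 0.0), (116.0, 40.0, 0.0)))
```
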

class iobjectspy.threeddesigner.BoolOperation3D

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

check()

Check whether the model object meets the conditions for Boolean operations.

erase(erase_geomety3d)

Compute the difference of two 3D geometric objects.

Parameters: erase_geomety3d – the Geometry3D object used to erase
Returns: the difference Geometry3D object

intersect(intersect_geomety3d)

Compute the intersection of two 3D geometric objects.

Parameters: intersect_geomety3d – the Geometry3D object to intersect with
Returns: the intersection Geometry3D object

isClosed()

Check whether the Geometry3D object is closed.

Returns: True if the object is closed, False otherwise

union(union_geomety3d)

Compute the union of two 3D geometric objects.

Parameters: union_geomety3d – the Geometry3D object to union with
Returns: the union Geometry3D object

class iobjectspy.threeddesigner.ModelBuilder3D

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

create_buffer(offset, bLonLat=False, joinType=Buffer3DJoinType.ROUND)

Three-dimensional buffering: line and region geometries can be buffered (expanded) into surfaces; model geometries can be buffered into three-dimensional solid models.

Parameters:
geometry – line, region, and model objects
offset – buffer distance
bLonLat – whether the coordinates are longitude/latitude
joinType – join style: miter (sharp), round, or bevel corners

A 3D line buffered into a surface supports the miter join style; buffered into a solid it supports miter and round joins. A 3D region can only be buffered into a 3D surface, with miter and round join styles. A solid model can only be buffered into a solid, with no join style.
Returns:
linear_extrude(geometry, bLonLat, height)

Linear extrusion. This method is only supported on the Windows platform, not on Linux.

Parameters:
geometry – the region to be linearly extruded
bLonLat – whether the coordinates are longitude/latitude
height – extrusion height
twist – rotation angle
scaleX – scale factor along the X axis
scaleY – scale factor along the Y axis
material – texture settings
Returns: a GeoModel3D object

loft(line3D, bLonLat=False, nChamfer=50, chamferStyle=ChamferStyle.SOBC_CIRCLE_ARC)

Lofting: sweep a cross-section along a path line. This method is only supported on the Windows platform, not on Linux.

Parameters:
geometry – the cross-section to loft; supported 2D objects: GeoLine, GeoLineEPS, GeoCircle, GeoRegion, GeoRegionEPS, GeoEllipse, GeoRect; supported 3D objects: GeoLine3D, GeoCircle3D, GeoRegion3D
line3D – the path line object to loft along
bLonLat – whether the coordinates are longitude/latitude
nChamfer – smoothness
chamferStyle – chamfer style
material – texture settings
Returns: a GeoModel3D object

mirror(plane=None)

Get the mirror image of the geomodel3d model object about a plane.

Parameters: plane – the mirror plane
Returns: the model object mirrored about the plane

plane_projection(plane=None)

Planar projection; not available on Linux.

Parameters:
geomodel3d – the 3D geometric model object to project
plane – the projection plane
Returns: the projected surface

rotate_extrude(angle, slices, isgroup=False, hasStartFace=True, hasRingFace=True, hasEndFace=True)

Rotational extrusion. This method is only supported on the Windows platform, not on Linux.

Parameters:
geometry – the region object (must be constructed in a planar coordinate system)
angle – rotation angle
slices – number of slices
isgroup – whether to split the result into multiple objects
hasStartFace – whether a start face is needed
hasRingFace – whether a ring face is needed
hasEndFace – whether an end face is needed
Returns: a GeoModel3D object
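Conceptually, rotational extrusion sweeps a planar profile around an axis, emitting one copy of the profile per slice; the resulting rings are then stitched into faces. A minimal geometric illustration, independent of iobjectspy:

```python
import math

def rotate_profile(profile, angle_deg, slices):
    """Sweep a 2D profile around the Z axis.

    `profile` is a list of (x, z) points, with x the distance from the
    rotation axis. Returns slices + 1 rings of 3D points; consecutive
    rings would be stitched into faces by a real extrusion.
    """
    rings = []
    for i in range(slices + 1):
        theta = math.radians(angle_deg) * i / slices
        rings.append([(x * math.cos(theta), x * math.sin(theta), z)
                      for x, z in profile])
    return rings

# A vertical 2-point profile swept a full turn in 4 slices -> 5 rings
rings = rotate_profile([(1.0, 0.0), (1.0, 2.0)], 360.0, 4)
print(len(rings))
```
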

section_projection(plane=None)

Sectional projection; not available on Linux.

Parameters:
geometry – the 3D geometric model object to project
plane – the projection plane
Returns: the projected surface

straight_skeleton(dAngle, bLonLat=False)

Straight skeleton generation.

Parameters:
geometry – the region object to compute the straight skeleton of
bLonLat – whether the coordinates are longitude/latitude
dAngle – split angle threshold
Returns: the 3D model on success

class iobjectspy.threeddesigner.ClassificationOperator

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

add_labels_to_S3M_file(outputFolder, labelsArray)

Import OSGB oblique-photography data, generate S3M data using the label array, and save it to outputFolder.

Parameters:
outputFolder – path to save the result
labelsArray – label array

extract_infos()

Import OSGB oblique-photography data and obtain its vertex, normal, and label information.

Parameters: osgbFilePath – OSGB data of the original oblique-photography tile

generate_training_set()

Import single S3M data and obtain its vertex, normal, and label information.

Parameters: diecretFilePath – the single S3M data

class iobjectspy.threeddesigner.ClassificationInfos

Bases: iobjectspy._jsuperpy.data._jvm.JVMBase

Python object mapping of com.supermap.data.processing.ClassificationInfos: the export object for single OSGB oblique-photography data and S3M data.

labels

The label list.

Returns: list – the label list

normals

The normal list.

Returns: list – the normal list

vertices

The vertex list.

Returns: list – the vertex list

iobjectspy.threeddesigner.CreateIFCFile(DatasourcePath, RegionDatasetName, ModelDatasetName, IFCFilePath, ExtrudeHeightField)

Create an IFC file from a dataset.

Parameters:
DatasourcePath – the datasource
RegionDatasetName – the region dataset name
ModelDatasetName – the model dataset name
IFCFilePath – the path to save the IFC file
ExtrudeHeightField – the extrusion height field

Module contents