iobjectspy.ml package

Submodules

iobjectspy.ml.utils module

The utils module provides common helper functions for data processing

iobjectspy.ml.utils.datasetraster_to_df_or_xarray(source, no_value=None, origin_is_left_up=True)

Read data from a raster dataset (DatasetGrid) or image dataset (DatasetImage) into a ‘pandas.DataFrame’ or ‘xarray.DataArray’. The number of columns in the array is the width of the dataset, and the number of rows is the height of the dataset.

  • If it is a single-band dataset that is not in RGB or RGBA pixel format, a 2D array (DataFrame) is returned. The first dimension of the array is the row and the second dimension is the column.
  • If it is a single-band dataset in RGB or RGBA format, a 3D array (DataArray) is returned. The first dimension of the array is the band, with size 3 (RGB) or 4 (RGBA); the second dimension is the row, and the third dimension is the column.
  • If it is a multispectral dataset, a 3D array (DataArray) is returned. The first dimension is the band, with size equal to the number of bands; the second dimension is the row, and the third dimension is the column.
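
A minimal usage sketch (the ‘DEM’ and ‘seaport’ dataset names are illustrative and follow the example data used further down this page; the shapes shown correspond to that data):

>>> # single-band raster -> 2D pandas.DataFrame (rows = height, columns = width)
>>> df = datasetraster_to_df_or_xarray(data_dir + 'example_data.udb/DEM')
>>> df.shape
(4849, 5892)
>>> # RGB image -> 3D xarray.DataArray (band, row, column)
>>> arr = datasetraster_to_df_or_xarray(data_dir + 'example_data.udb/seaport')
>>> arr.shape
(3, 1537, 1728)
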
Parameters:
  • source (DatasetGrid or DatasetImage or str) – The raster dataset or image dataset to be read. If the input is a string, the dataset can be represented by datasource information plus the dataset name. For example, a datasource alias can be used as ‘alias/imagedataset’. Datasource connection information (a udb file path, a dcf file path, or an XML string describing the datasource connection information) can also be used; for example, the input can be ‘/home/data/test.udb/imagedataset’ when a udb file path is used as the connection information. When datasource information is used, if the datasource described by the connection information already exists in the workspace, it will be obtained directly; otherwise, a new datasource object will be opened.
  • no_value (float) – If the raster dataset or image dataset has NoData pixels, the user can set a new value to represent them through this parameter. If this parameter is not None, the value set by the user will be used to replace the NoData grids or pixels in the returned DataFrame. The default is None, which means the NoData grids or pixels in the returned DataFrame are left unchanged.
  • origin_is_left_up (bool) – the position of (0,0) in the returned DataFrame or DataArray. The default is True. If it is True, (0,0) corresponds to the upper left corner of the dataset, and DataFrame[i][j] is the i-th row and j-th column of the dataset. If it is False, (0,0) corresponds to the lower left corner of the dataset, and DataFrame[i][j] is row (height-i), column j of the dataset.
Returns:

If the read dataset yields a 2D array, a DataFrame is returned; if it yields a 3D array, an ‘xarray.DataArray’ is returned

Return type:

pandas.DataFrame or xarray.DataArray

iobjectspy.ml.utils.datasetraster_to_numpy_array(source, no_value=None, origin_is_left_up=True)

Read data from a raster dataset (DatasetGrid) or image dataset (DatasetImage) into a numpy array. The number of columns in the array is the width of the dataset, and the number of rows is the height of the dataset.

  • If it is a single-band dataset that is not in RGB or RGBA pixel format, a 2D array is returned. The first dimension of the array is the row and the second dimension is the column.
  • If it is a single-band dataset in RGB or RGBA format, a 3D array is returned. The first dimension of the array is the band, with size 3 (RGB) or 4 (RGBA); the second dimension is the row, and the third dimension is the column.
  • If it is a multispectral dataset, a 3D array is returned. The first dimension of the array is the band, with size equal to the number of bands; the second dimension is the row, and the third dimension is the column.

datasetraster_to_numpy_array() supports writing image datasets and raster datasets to a numpy ndarray. Image datasets can be multispectral images. When writing out an image dataset:

  • If it is an image dataset in RGB or RGBA pixel format, it will be written as a 3D array; the first dimension of the array has size 3 (RGB) or 4 (RGBA), the second dimension is the row, and the third dimension is the column.
  • If it is a single-band dataset, it will be written as a 2D array. The first dimension is the row and the second dimension is the column.
  • If the number of bands is greater than 1, a 3D array will be returned, where the first dimension is the band, the second dimension is the row, and the third dimension is the column.

When writing out a multi-band raster dataset, it will be written as a 2D array.

  • Read an RGB image dataset:

    >>> k_array = datasetraster_to_numpy_array(data_dir + 'example_data.udb/seaport')
    >>> print(k_array.ndim)
    3
    >>> print(k_array.shape)
    (3, 1537, 1728)
    >>> print(k_array.dtype)
    uint8
    >>> print(k_array)
    [[[ 21 21 21 ... 192 194 191]
      [21 21 21 ... 191 192 190]
      [21 21 21 ... 190 192 193]
      ...
      [98 94 91 ... 31 31 29]
      [101 97 94 ... 30 30 29]
      [116 114 110 ... 24 24 24]]
    
     [[ 56 56 56 ... 196 198 195]
      [56 56 56 ... 195 196 194]
      [56 56 56 ... 194 196 197]
      ...
      [119 115 111 ... 25 25 26]
      [125 121 114 ... 24 24 26]
      [116 114 110 ... 24 24 24]]
    
     [[ 75 75 75 ... 179 181 181]
      [75 75 75 ... 178 179 180]
      [75 75 75 ... 177 179 183]
      ...
      [110 106 102 ... 35 35 33]
      [112 108 105 ... 34 34 33]
      [116 114 110 ... 24 24 24]]]
    
  • Read a raster dataset with pixel format BIT16:

    >>> k_array = datasetraster_to_numpy_array(data_dir + 'example_data.udb/DEM')
    >>> print(k_array.ndim)
    2
    >>> print(k_array.shape)
    (4849, 5892)
    >>> print(k_array.dtype)
    int16
    >>> print(k_array)
    [[-32768 -32768 -32768 ... -32768 -32768 -32768]
     [-32768 -32768 -32768 ... -32768 -32768 -32768]
     [-32768 -32768 -32768 ... -32768 -32768 -32768]
     ...
     [-32768 -32768 -32768 ... -32768 -32768 -32768]
     [-32768 -32768 -32768 ... -32768 -32768 -32768]
     [-32768 -32768 -32768 ... -32768 -32768 -32768]]
    
  • In addition, note that the ‘origin_is_left_up’ parameter specifies whether the origin of the generated array (row 0, column 0) is the upper left corner or the lower left corner of the SuperMap raster dataset or image dataset. By default, a SuperMap raster dataset or image dataset treats row 0, column 0 as the upper left corner; the row number increases from top to bottom and the column number increases from left to right

    [[(0,0),       (0,1),        (0,2), ...        (0, width-2),       (0, width-1)]
     [(1,0),       (1,1),        (1,2), ...        (1, width-2),       (1, width-1)]
     [(2,0),       (2,1),        (2,2), ...        (2, width-2),       (2, width-1)]
     ... ... ...
     [(height-2,0), (height-2,1), (height-2,2), ... (height-2, width-2), (height-2, width-1)]
     [(height-1,0), (height-1,1), (height-1,2), ... (height-1, width-2), (height-1, width-1)]]
    

    However, in other software the 0th row and 0th column may be located in the lower left corner, with the row number increasing upward and the column number increasing to the right

    [[(height-1,0), (height-1,1), (height-1,2), ... (height-1, width-2), (height-1, width-1)]
     [(height-2,0), (height-2,1), (height-2,2), ... (height-2, width-2), (height-2, width-1)]
     ... ... ...
     [(2,0), (2,1), (2,2), ... (2, width-2), (2, width-1)]
     [(1,0), (1,1), (1,2), ... (1, width-2), (1, width-1)]
     [(0,0), (0,1), (0,2), ... (0, width-2), (0, width-1)]]
    

    Therefore, the user can choose whether the upper left corner or the lower left corner is used as the origin of the array, according to the specific use case; the sketch after this list shows how the two orientations relate.

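The relationship between the two orientations can be sketched with ‘numpy’ (an illustration of the behaviour described above, not additional library functionality): reading with origin_is_left_up=False should give the same values as reading with the default orientation and then reversing the row order.

>>> import numpy as np
>>> top_down = datasetraster_to_numpy_array(data_dir + 'example_data.udb/DEM')
>>> bottom_up = datasetraster_to_numpy_array(data_dir + 'example_data.udb/DEM',
...                                          origin_is_left_up=False)
>>> np.array_equal(bottom_up, np.flipud(top_down))
True
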
Parameters:
  • source (DatasetGrid or DatasetImage or str) – The raster dataset or image dataset to be read. If the input is a ‘str’, the dataset can be represented by datasource information plus the dataset name. For example, a datasource alias can be used as ‘alias/imagedataset’. Datasource connection information (a udb file path, a dcf file path, or an XML string describing the datasource connection information) can also be used; for example, the input can be ‘/home/data/test.udb/imagedataset’ when a udb file path is used as the connection information. When datasource information is used, if the datasource described by the connection information already exists in the workspace, it will be obtained directly; otherwise, a new datasource object will be opened.
  • no_value (float) – SuperMap’s raster datasets and image datasets may have NoData pixels, and the user can set a new value to represent them. If this parameter is not None, the value set by the user will be used to replace the NoData grids or pixels in the returned numpy array. The default is None, which means the NoData grids or pixels in the returned array are left unchanged.
  • origin_is_left_up (bool) – the position of (0,0) in the returned numpy array. The default is True. If it is True, (0,0) corresponds to the upper left corner of the dataset, and ndarray[i][j] is the i-th row and j-th column of the dataset. If it is False, (0,0) corresponds to the lower left corner of the dataset, and ndarray[i][j] is row (height-i), column j of the dataset.
Returns:

numpy multi-dimensional array

Return type:

numpy.ndarray

iobjectspy.ml.utils.datasetvector_to_df(source, attr_filter=None, fields=None, export_spatial=False, skip_null_value=True, null_values=None)

Write a vector dataset to a ‘pandas.DataFrame’

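A minimal sketch, assuming the ‘Town_P’ point dataset used in the examples further down this page:

>>> df = datasetvector_to_df(data_dir + 'example_data.udb/Town_P', export_spatial=True)
>>> sorted(df.columns)   # non-system fields plus SmX/SmY added by export_spatial
['NAME', 'SmX', 'SmY']
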
Parameters:
  • source (DatasetVector or str) – the vector dataset to be written to a ‘pandas.DataFrame’; point, linear, surface, and attribute table datasets are supported. A combination of datasource information and dataset name is also supported as input, such as ‘alias|point’.
  • attr_filter (str) – attribute filter condition
  • fields (list[str]) – fields that need to be written. If it is ‘None’, all non-system fields will be written.
  • skip_null_value (bool) – whether to skip records with null values. Integer fields do not support null values in a DataFrame; if an integer field contains null values, that field will be converted to floating point. Therefore, if a field in the dataset contains null values, you need to fill them with a number. For floating-point fields, the null value is ‘numpy.nan’; for text fields (TEXT, WTEXT, CHAR, JSONB), it is an empty string; and for Boolean, binary, and time fields, it is None.
  • null_values (dict) – the null value for a specified field. ‘key’ is the field name or field index, and ‘value’ is the value used to represent null. The ‘value’ type needs to match the field type. For example, use null_values={'ID': -9999} to specify -9999 as the null value of the integer field ‘ID’.
Returns:

Return a DataFrame object.

Return type:

pandas.DataFrame

iobjectspy.ml.utils.datasetvector_to_numpy_array(source, attr_filter=None, fields=None, export_spatial=False, skip_null_value=True, null_values=None)

Write the vector dataset to a numpy array

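A minimal sketch (the filter below is an illustrative SQL-style condition on the system field SmID; detailed examples of the returned structured array are shown under recordset_to_numpy_array()):

>>> narray = datasetvector_to_numpy_array(data_dir + 'example_data.udb/Town_P',
...                                       attr_filter='SmID < 11',
...                                       fields=['NAME'])
>>> narray.ndim
1
>>> narray.dtype.names
('NAME',)
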
Parameters:
  • source (DatasetVector or str) – the vector dataset to be written; point, linear, surface, and attribute table datasets are supported. A combination of datasource information and dataset name is also supported as input, such as ‘alias|point’.
  • attr_filter (str) – attribute filter condition
  • fields (list[str]) – fields that need to be written. If it is None, all non-system fields will be written.
  • export_spatial (bool) – whether to export the spatial geometric objects. For point objects, the X and Y coordinates of the points are written into the SmX and SmY fields; for linear and polygon objects, an inner point of the object is written to the SmX and SmY columns. In addition, linear objects get a SmLength field containing the length of the line, and polygons get SmPerimeter and SmArea fields containing their perimeter and area.
  • skip_null_value (bool) – whether to skip records with null values. Integer fields do not support null values in a numpy array; if an integer field contains null values, that field will be converted to floating point. Therefore, if a field in the dataset contains null values, you need to fill them with a number. For floating-point fields, the null value is ‘numpy.nan’; for text fields (TEXT, WTEXT, CHAR, JSONB), it is an empty string; and for Boolean, binary, and time fields, it is None.
  • null_values (dict) – the null value for a specified field. ‘key’ is the field name or field index, and ‘value’ is the value used to represent null. The ‘value’ type needs to match the field type. For example, use null_values={'ID': -9999} to specify -9999 as the null value of the integer field ‘ID’.
Returns:

1D array

Return type:

numpy.ndarray

iobjectspy.ml.utils.df_or_xarray_to_datasetraster(data, x_resolution, y_resolution, output, x_start=0, y_start=0, out_dataset_name=None, no_value=None, origin_is_left_up=True, as_grid=False)

Write a ‘pandas.DataFrame’, ‘xarray.DataArray’ or ‘xarray.Dataset’ to a SuperMap raster dataset or image dataset. If the input is an ‘xarray.DataArray’ or ‘xarray.Dataset’, ‘xarray’ must be installed first.

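A minimal sketch that writes a small 2D ‘pandas.DataFrame’ out as a raster dataset (the output path and dataset name are illustrative):

>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.rand(100, 100).astype('float32'))
>>> result = df_or_xarray_to_datasetraster(df, 0.1, 0.1, out_dir + 'df_out.udb',
...                                        out_dataset_name='random_grid',
...                                        no_value=-9999, as_grid=True)
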
Parameters:
  • data (DataFrame or DataArray or Dataset) – the ‘pandas.DataFrame’, ‘xarray.DataArray’ or ‘xarray.Dataset’ to be written. Supports 2D and 3D numerical arrays. A 3D array can only be written as an image dataset.
  • x_resolution (float) – the resolution of the result dataset in the x direction
  • y_resolution (float) – the resolution of the result dataset in the y direction
  • output (Datasource or DatasourceConnectionInfo or str) – the output datasource object
  • x_start (float) – the X coordinate of the lower right corner
  • y_start (float) – the Y coordinate of the lower right corner
  • out_dataset_name (str) – the output dataset name
  • no_value (float) – the specified value to represent the no_value. The default is -9999.
  • origin_is_left_up (bool) – specify whether the 0th row and 0th column of the DataFrame is the upper left corner or the lower left corner of the raster dataset or image dataset.
  • as_grid (bool) – whether to write as a DatasetGrid dataset
Returns:

the output dataset object or the dataset name.

Return type:

DatasetGrid or DatasetImage or str

iobjectspy.ml.utils.df_to_datasetvector(df, output, out_dataset_name=None, x_col=None, y_col=None)

Write pandas ‘DataFrame’ objects to a vector dataset

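A minimal sketch that builds a small DataFrame and writes it out as a point dataset (the output path and dataset name are illustrative; the point values reuse data from the numpy_array_to_datasetvector() example below):

>>> import pandas as pd
>>> df = pd.DataFrame({'NAME': ['Shichahai', 'Sanlitun'],
...                    'X': [116.380351, 116.447486],
...                    'Y': [39.93393099, 39.93767799]})
>>> result = df_to_datasetvector(df, out_dir + 'df_out.udb',
...                              out_dataset_name='df_point',
...                              x_col='X', y_col='Y')
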
Parameters:
  • df (pandas.DataFrame) – the ‘pandas DataFrame’ object to be written
  • output (Datasource or DatasourceConnectionInfo or str) – the output data source object
  • out_dataset_name (str) – the output dataset name
  • x_col (str) – the column containing the X coordinate when writing to a point dataset. If it is empty, an attribute table dataset will be written.
  • y_col (str) – the column containing the Y coordinate when writing to a point dataset. If it is empty, an attribute table dataset will be written.
Returns:

the output dataset object or dataset name.

Return type:

DatasetVector or str

iobjectspy.ml.utils.numpy_array_to_datasetraster(narray, x_resolution, y_resolution, output, x_start=0, y_start=0, out_dataset_name=None, no_value=None, origin_is_left_up=True, as_grid=False)

Write the ‘numpy’ array to the SuperMap raster dataset or image dataset.

Supports writing a 2D or 3D array to an image dataset or raster dataset. The input can only be a 2D array if the output is a raster dataset. If the input is a 3D array and the output is an image dataset, the first dimension must be the band, and the second and third dimensions must be the row and the column:

>>> import numpy
>>> d = numpy.empty((100, 100), dtype='float32')
>>> for i in range(100):
...     for j in range(100):
...         d[i][j] = i * j + i
>>> numpy_array_to_datasetraster(d, 0.1, 0.1, out_dir + 'narray_out.udb', as_grid=True)
Parameters:
  • narray (numpy.ndarray) – the numpy array to be written. Supports 2D and 3D numerical arrays. A 3D array can only be written as an image dataset.
  • x_resolution (float) – the resolution of the result dataset in the x direction
  • y_resolution (float) – the resolution of the result dataset in the y direction
  • output (Datasource or DatasourceConnectionInfo or str) – the output datasource object
  • x_start (float) – the X coordinate of the lower right corner
  • y_start (float) – the Y coordinate of the lower right corner
  • out_dataset_name (str) – the output dataset name
  • no_value (float) – the specified value to represent the no_value. The default is -9999.
  • origin_is_left_up (bool) – specify whether the 0th row and 0th column of the array corresponds to the upper left corner or the lower left corner of the raster dataset or image dataset.
  • as_grid (bool) – whether to write as a DatasetGrid dataset
Returns:

the output dataset object or the dataset name.

Return type:

DatasetGrid or DatasetImage or str

iobjectspy.ml.utils.numpy_array_to_datasetvector(narray, output, out_dataset_name=None, x_col=None, y_col=None)

Write a ‘numpy’ array to a SuperMap vector dataset.

Supports writing an ‘ndarray’ that contains ‘dtype’ information and column names into a SuperMap vector dataset. If the column names of the X and Y fields are specified, it will be written as a point dataset; otherwise, it will be written as an attribute table dataset. Note that the ‘ndarray’ must contain column names before it can be written:

>>> import numpy as np
>>> narray = np.empty(10, dtype=[('ID', 'int32'), ('X', 'float64'), ('Y', 'float64'), ('NAME', 'U32'), ('COUNT', 'int32')])
>>> narray[0] = 1, 116.380351, 39.93393099,'Shichahai', 1023
>>> narray[1] = 2, 116.365305, 39.89622499,'Guanganmen inner', 10
>>> narray[2] = 3, 116.427342, 39.89467499,'Chongwenmenwai', 238
>>> narray[3] = 4, 116.490881, 39.96567299,'jiuxianqiao', 1788
>>> narray[4] = 5, 116.447486, 39.93767799,'Sanlitun', 8902
>>> narray[5] = 6, 116.347435, 40.08078599,'Return to Dragon View', 903
>>> narray[6] = 7, 116.407196, 39.83895899,'Big Red Gate', 88
>>> narray[7] = 8, 116.396915, 39.88371499,'Skybridge', 5
>>> narray[8] = 9, 116.334522, 40.03594199,'Qinghe', 77
>>> narray[9] = 10, 116.03008, 39.87852799,'Tanzhe Temple', 1
>>> result = numpy_array_to_datasetvector(narray, out_dir +'narray_out.udb', x_col='X', y_col='Y')
Parameters:
  • narray (numpy.ndarray) – the ‘numpy’ array to be written
  • output (Datasource or DatasourceConnectionInfo or str) – the output datasource object
  • out_dataset_name (str) – the output dataset name
  • x_col (str) – the column containing the X coordinate when writing to a point dataset. If it is empty, an attribute table dataset will be written.
  • y_col (str) – the column containing the Y coordinate when writing to a point dataset. If it is empty, an attribute table dataset will be written.
Returns:

The output dataset object or dataset name.

Return type:

DatasetVector or str

iobjectspy.ml.utils.recordset_to_df(recordset, fields=None, export_spatial=False, skip_null_value=True, null_values=None)

Write a record set to a ‘pandas.DataFrame’. Supports writing out record sets of point, linear, surface, and attribute table datasets.

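A minimal sketch, assuming ‘recordset’ is a Recordset that has already been obtained from a point dataset (obtaining the record set is outside the scope of this function):

>>> df = recordset_to_df(recordset, fields=['NAME'], export_spatial=True)
>>> sorted(df.columns)   # the requested field plus SmX/SmY added by export_spatial
['NAME', 'SmX', 'SmY']
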
Parameters:
  • recordset (Recordset) – the record set to be written out.
  • fields (list[str]) – name of the fields to be written out. If it is None, all non-system fields are written out
  • export_spatial (bool) – whether to export the spatial geometric objects. For point objects, the X and Y coordinates of the points are written into the SmX and SmY fields; for linear and polygon objects, an inner point of the object is written to the SmX and SmY columns. In addition, linear objects get a SmLength field containing the length of the line, and polygons get SmPerimeter and SmArea fields containing their perimeter and area.
  • skip_null_value (bool) – whether to skip records with null values. Integer fields do not support null values in a DataFrame; if an integer field contains null values, that field will be converted to floating point. Therefore, if a field in the dataset contains null values, you need to fill them with a number. For floating-point fields, the null value is ‘numpy.nan’; for text fields (TEXT, WTEXT, CHAR, JSONB), it is an empty string; and for Boolean, binary, and time fields, it is None.
  • null_values (dict) – the null value for a specified field. ‘key’ is the field name or field index, and ‘value’ is the value used to represent null. The ‘value’ type needs to match the field type. For example, use null_values={'ID': -9999} to specify -9999 as the null value of the integer field ‘ID’.
Returns:

Return a DataFrame object.

Return type:

pandas.DataFrame

iobjectspy.ml.utils.recordset_to_numpy_array(recordset, fields=None, export_spatial=False, skip_null_value=True, null_values=None)

Write out a record set to a numpy.ndarray. Supports writing out record sets of point, linear, surface, and attribute table datasets.

recordset_to_numpy_array() and datasetvector_to_numpy_array() are used to write vector data to an ‘ndarray’. The output is a 1D structured array in which each element contains multiple fields, and a column can be accessed directly by its field name. For example, vector data can be read directly with the following code:

>>> narray = datasetvector_to_numpy_array(data_dir +'example_data.udb/Town_P', export_spatial=True)
>>> print('ndarray.ndim: '+ str(narray.ndim))
ndarray.ndim: 1
>>> print('ndarray.dtype: '+ str(narray.dtype))
ndarray.dtype: [('NAME','<U9'), ('SmX','<f8'), ('SmY','<f8')]
>>> print(narray[:10])
[('Baichigan Township', 115.917748, 39.53525099) ('Shichahai', 116.380351, 39.93393099)
 ('Yuetan', 116.344828, 39.91476099) ('Guanganmen inner', 116.365305, 39.89622499)
 ('Niujie', 116.36388, 39.88680299) ('Chongwenmenwai', 116.427342, 39.89467499)
 ('Outside Yongding Gate', 116.402249, 39.86559299) ('Cui Gezhuang', 116.515447, 39.99966499)
 ('Xiaoguan', 116.411727, 39.97737199) ('Panjiayuan', 116.467911, 39.87179299)]
>>> print(narray['SmX'][:10])
[115.917748 116.380351 116.344828 116.365305 116.36388 116.427342
 116.402249 116.515447 116.411727 116.467911]
>>> xy_array = np.c_[narray['SmX'], narray['SmY']][:10]
>>> print(xy_array.ndim)
2
>>> print(xy_array.dtype)
float64
>>> print(xy_array)
[[115.917748 39.53525099]
 [116.380351 39.93393099]
 [116.344828 39.91476099]
 [116.365305 39.89622499]
 [116.36388 39.88680299]
 [116.427342 39.89467499]
 [116.402249 39.86559299]
 [116.515447 39.99966499]
 [116.411727 39.97737199]
 [116.467911 39.87179299]]

When writing out the vector data, you can choose whether to write the spatial information. For point objects, the X and Y coordinates of the points are written to the SmX and SmY columns. For linear objects, an inner point of the line (GeoLine.get_inner_point()) is written to the SmX and SmY columns, and the length of the line is written to the SmLength field. For polygon objects, an inner point (GeoRegion.get_inner_point()) is written to the SmX and SmY columns, and the perimeter and area of the polygon are written to the SmPerimeter and SmArea fields:

>>> narray = datasetvector_to_numpy_array(data_dir +'example_data.udb/Landuse_R', export_spatial=True)
>>> print(narray.dtype)
[('LANDTYPE','<U4'), ('Area','<f4'), ('Area_1','<i2'), ('SmX','<f8'), ('SmY', '<f8'), ('SmPerimeter','<f8'), ('SmArea','<f8')]
>>> print(narray[:10])
[('Timber Forest', 132., 132, 116.47779337, 40.87251703, 0.75917921, 1.40894401e-02)
 ('Timber Forest', 97., 97, 116.6540059, 40.94696274, 0.4945153, 1.03534475e-02)
 ('Shrub', 36., 36, 116.58451795, 40.98712283, 0.25655489, 3.89923745e-03)
 ('Shrub', 36., 36, 116.89611418, 40.76792703, 0.59237713, 3.81791878e-03)
 ('Timber Forest', 1., 1, 116.37943683, 40.91435429, 0.03874328, 7.08450886e-05)
 ('Shrub', 126., 126, 116.49117083, 40.78302383, 0.53664074, 1.34577856e-02)
 ('Timber Forest', 83., 83, 116.69943237, 40.74456848, 0.39696365, 8.83225363e-03)
 ('Timber Forest', 128., 128, 116.8129727, 40.69116153, 0.56949408, 1.35877743e-02)
 ('Timber Forest', 29., 29, 116.24543769, 40.71076092, 0.30082509, 3.07221559e-03)
 ('Shrub', 467., 467, 116.43290772, 40.50875567, 1.91745792, 4.95537433e-02)]
Parameters:
  • recordset (Recordset) – the record set of the dataset to be written
  • fields (list[str]) – name of the fields to be written out. If it is None, all non-system fields will be written out
  • export_spatial (bool) – whether to export the spatial geometric objects. For point objects, the X and Y coordinates of the points are written into the SmX and SmY fields; for linear and polygon objects, an inner point of the object is written to the SmX and SmY columns. In addition, linear objects get a SmLength field containing the length of the line, and polygons get SmPerimeter and SmArea fields containing their perimeter and area.
  • skip_null_value (bool) – whether to skip records with null values. Integer fields do not support null values in a numpy array; if an integer field contains null values, that field will be converted to floating point. Therefore, if a field in the dataset contains null values, you need to fill them with a number. For floating-point fields, the null value is ‘numpy.nan’; for text fields (TEXT, WTEXT, CHAR, JSONB), it is an empty string; and for Boolean, binary, and time fields, it is None.
  • null_values (dict) – the null value for a specified field. ‘key’ is the field name or field index, and ‘value’ is the value used to represent null. The ‘value’ type needs to match the field type. For example, use null_values={'ID': -9999} to specify -9999 as the null value of the integer field ‘ID’.
Returns:

numpy array (1D array).

Return type:

numpy.ndarray

Module contents