scan_sensor

ScanSensor

Bases: BaseSensor

General 2D LiDAR range sensor and occupancy grid sensor.

Parameters:

prim_path (str): prim path of the Prim to encapsulate or create. Required.
name (str): Name for the object. Names need to be unique per scene. Required.
modalities (str or list of str): Modality(s) supported by this sensor. Default is "all", which corresponds to all modalities being used. Otherwise, valid options should be part of cls.all_modalities. For this scan sensor, this includes any of: {scan, occupancy_grid}. Note that in order for "occupancy_grid" to be used, "scan" must also be included.
enabled (bool): Whether this sensor should be enabled by default. Default: True.
noise (None or BaseSensorNoise): If specified, sensor noise model to apply to this sensor. Default: None.
load_config (None or dict): If specified, should contain keyword-mapped values that are relevant for loading this sensor's prim at runtime. Default: None.
min_range (float): Minimum range to sense, in meters. Default: 0.05.
max_range (float): Maximum range to sense, in meters. Default: 10.0.
horizontal_fov (float): Horizontal field of view of the sensor, in degrees. Default: 360.0.
vertical_fov (float): Vertical field of view of the sensor, in degrees. Default: 1.0.
yaw_offset (float): Degrees for offsetting this sensor's horizontal FOV. Useful in cases where this sensor's forward direction is different than expected. Default: 0.0.
horizontal_resolution (float): Degrees in between each horizontal scan hit. Default: 1.0.
vertical_resolution (float): Degrees in between each vertical scan hit. Default: 1.0.
rotation_rate (float): How fast the range sensor is rotating, in rotations per second. Set to 0 for all scans to be hit at once. Default: 0.0.
draw_points (bool): Whether to draw the points hit by this sensor. Default: False.
draw_lines (bool): Whether to draw the lines representing the scans from this sensor. Default: False.
occupancy_grid_resolution (int): How many discretized nodes in the occupancy grid. This specifies the height == width of the map. Default: 128.
occupancy_grid_range (float): Range of the occupancy grid, in meters. Default: 5.0.
occupancy_grid_inner_radius (float): Inner range of the occupancy grid that will be assumed to be empty, in meters. Default: 0.5.
occupancy_grid_local_link (None or XFormPrim): XForm prim that represents the "origin" of any generated occupancy grid, e.g., if this scan sensor is attached to a robot, then this should possibly be the base link for that robot. If None, this defaults to this sensor's own frame as the origin. Default: None.
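
The snippet below is a minimal construction sketch, not taken from the source: the prim path and sensor name are hypothetical, and it assumes a running OmniGibson stage (in practice, sensors are usually instantiated for you via a robot or environment config).

from omnigibson.sensors.scan_sensor import ScanSensor

# Hypothetical prim path; the parent prim (e.g. a robot link) must already exist on the stage.
lidar = ScanSensor(
    prim_path="/World/robot/base_link/lidar",
    name="base_lidar",
    modalities=["scan", "occupancy_grid"],  # "occupancy_grid" requires "scan"
    min_range=0.05,
    max_range=10.0,
    horizontal_fov=360.0,
    horizontal_resolution=1.0,
    occupancy_grid_resolution=128,
    occupancy_grid_range=5.0,
)

Note that occupancy_grid_inner_radius is given in meters and is converted internally to grid cells as int(inner_radius * resolution / range), e.g. int(0.5 * 128 / 5.0) = 12 cells with the defaults.
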
Source code in omnigibson/sensors/scan_sensor.py
class ScanSensor(BaseSensor):
    """
    General 2D LiDAR range sensor and occupancy grid sensor.

    Args:
        prim_path (str): prim path of the Prim to encapsulate or create.
        name (str): Name for the object. Names need to be unique per scene.
        modalities (str or list of str): Modality(s) supported by this sensor. Default is "all", which corresponds
            to all modalities being used. Otherwise, valid options should be part of cls.all_modalities.
            For this scan sensor, this includes any of:
                {scan, occupancy_grid}
            Note that in order for "occupancy_grid" to be used, "scan" must also be included.
        enabled (bool): Whether this sensor should be enabled by default
        noise (None or BaseSensorNoise): If specified, sensor noise model to apply to this sensor.
        load_config (None or dict): If specified, should contain keyword-mapped values that are relevant for
            loading this sensor's prim at runtime.
        min_range (float): Minimum range to sense in meters
        max_range (float): Maximum range to sense in meters
        horizontal_fov (float): Horizontal field of view of the sensor, in degrees
        vertical_fov (float): Vertical field of view of the sensor, in degrees
        yaw_offset (float): Degrees for offsetting this sensor's horizontal FOV.
            Useful in cases where this sensor's forward direction is different than expected
        horizontal_resolution (float): Degrees in between each horizontal scan hit
        vertical_resolution (float): Degrees in between each vertical scan hit
        rotation_rate (float): How fast the range sensor is rotating, in rotations per sec. Set to 0 for all scans
            to be hit at once
        draw_points (bool): Whether to draw the points hit by this sensor
        draw_lines (bool): Whether to draw the lines representing the scans from this sensor
        occupancy_grid_resolution (int): How many discretized nodes in the occupancy grid. This will specify the
            height == width of the map
        occupancy_grid_range (float): Range of the occupancy grid, in meters
        occupancy_grid_inner_radius (float): Inner range of the occupancy grid that will be assumed to be empty, in meters
        occupancy_grid_local_link (None or XFormPrim): XForm prim that represents the "origin" of any generated
            occupancy grid, e.g.: if this scan sensor is attached to a robot, then this should possibly be the base link
            for that robot. If None is specified, then this will default to this sensor's own frame as the origin.
    """
    def __init__(
        self,
        prim_path,
        name,
        modalities="all",
        enabled=True,
        noise=None,
        load_config=None,

        # Basic LIDAR kwargs
        min_range=0.05,
        max_range=10.0,
        horizontal_fov=360.0,
        vertical_fov=1.0,
        yaw_offset=0.0,
        horizontal_resolution=1.0,
        vertical_resolution=1.0,
        rotation_rate=0.0,
        draw_points=False,
        draw_lines=False,

        # Occupancy Grid kwargs
        occupancy_grid_resolution=128,
        occupancy_grid_range=5.0,
        occupancy_grid_inner_radius=0.5,
        occupancy_grid_local_link=None,
    ):
        # Store settings
        self.occupancy_grid_resolution = occupancy_grid_resolution
        self.occupancy_grid_range = occupancy_grid_range
        self.occupancy_grid_inner_radius = int(occupancy_grid_inner_radius * occupancy_grid_resolution
                                                / occupancy_grid_range)
        self.occupancy_grid_local_link = self if occupancy_grid_local_link is None else occupancy_grid_local_link

        # Create variables that will be filled in at runtime
        self._rs = None                 # Range sensor interface, analogous to others, e.g.: dynamic control interface

        # Create load config from inputs
        load_config = dict() if load_config is None else load_config
        load_config["min_range"] = min_range
        load_config["max_range"] = max_range
        load_config["horizontal_fov"] = horizontal_fov
        load_config["vertical_fov"] = vertical_fov
        load_config["yaw_offset"] = yaw_offset
        load_config["horizontal_resolution"] = horizontal_resolution
        load_config["vertical_resolution"] = vertical_resolution
        load_config["rotation_rate"] = rotation_rate
        load_config["draw_points"] = draw_points
        load_config["draw_lines"] = draw_lines

        # Sanity check modalities -- if we're using occupancy_grid without scan modality, raise an error
        if isinstance(modalities, Iterable) and not isinstance(modalities, str) and "occupancy_grid" in modalities:
            assert "scan" in modalities, f"'scan' modality must be included in order to get occupancy_grid modality!"

        # Run super method
        super().__init__(
            prim_path=prim_path,
            name=name,
            modalities=modalities,
            enabled=enabled,
            noise=noise,
            load_config=load_config,
        )

    def _load(self):
        # Define a LIDAR prim at the current stage
        result, lidar = lazy.omni.kit.commands.execute("RangeSensorCreateLidar", path=self._prim_path)

        return lidar.GetPrim()

    def _post_load(self):
        # run super first
        super()._post_load()

        # Set all the lidar kwargs
        self.min_range = self._load_config["min_range"]
        self.max_range = self._load_config["max_range"]
        self.horizontal_fov = self._load_config["horizontal_fov"]
        self.vertical_fov = self._load_config["vertical_fov"]
        self.yaw_offset = self._load_config["yaw_offset"]
        self.horizontal_resolution = self._load_config["horizontal_resolution"]
        self.vertical_resolution = self._load_config["vertical_resolution"]
        self.rotation_rate = self._load_config["rotation_rate"]
        self.draw_points = self._load_config["draw_points"]
        self.draw_lines = self._load_config["draw_lines"]

    def _initialize(self):
        # run super first
        super()._initialize()

        # Initialize lidar sensor interface
        self._rs = lazy.omni.isaac.range_sensor._range_sensor.acquire_lidar_sensor_interface()

    @property
    def _obs_space_mapping(self):
        # Set the remaining modalities' values
        # (obs modality, shape, low, high)
        obs_space_mapping = dict(
            scan=((self.n_horizontal_rays, self.n_vertical_rays), 0.0, 1.0, np.float32),
            occupancy_grid=((self.occupancy_grid_resolution, self.occupancy_grid_resolution, 1), 0.0, 1.0, np.float32),
        )

        return obs_space_mapping

    def get_local_occupancy_grid(self, scan):
        """
        Get local occupancy grid based on current 1D scan

        Args:
            scan (n-array): 1D LiDAR scan

        Returns:
            np.ndarray: (occupancy_grid_resolution, occupancy_grid_resolution, 1)-shaped numpy array of the local occupancy grid
        """
        # Run sanity checks first
        assert "occupancy_grid" in self._modalities, "Occupancy grid is not enabled for this range sensor!"
        assert self.n_vertical_rays == 1, "Occupancy grid is only valid for a 1D range sensor (n_vertical_rays = 1)!"

        # Grab vector of corresponding angles for each scan line
        angles = np.arange(
            -np.radians(self.horizontal_fov / 2),
            np.radians(self.horizontal_fov / 2),
            np.radians(self.horizontal_resolution),
        )

        # Convert into 3D unit vectors for each angle
        unit_vector_laser = np.array([[np.cos(ang), np.sin(ang), 0.0] for ang in angles])

        # Scale unit vectors by corresponding laser scan distances
        assert ((scan >= 0.0) & (scan <= 1.0)).all(), "scan out of valid range [0, 1]"
        scan_laser = unit_vector_laser * (scan * (self.max_range - self.min_range) + self.min_range)

        # Convert scans from laser frame to world frame
        pos, ori = self.get_position_orientation()
        scan_world = quat2mat(ori).dot(scan_laser.T).T + pos

        # Convert scans from world frame to local base frame
        base_pos, base_ori = self.occupancy_grid_local_link.get_position_orientation()
        scan_local = quat2mat(base_ori).T.dot((scan_world - base_pos).T).T
        scan_local = scan_local[:, :2]
        scan_local = np.concatenate([np.array([[0, 0]]), scan_local, np.array([[0, 0]])], axis=0)

        # flip y axis
        scan_local[:, 1] *= -1

        # Initialize occupancy grid -- default is unknown values
        occupancy_grid = np.zeros((self.occupancy_grid_resolution, self.occupancy_grid_resolution)).astype(np.uint8)
        occupancy_grid.fill(int(OccupancyGridState.UNKNOWN * 2.0))

        # Convert local scans into the corresponding OG square it should belong to (note now all values are > 0, since
        # OG ranges from [0, resolution] x [0, resolution])
        scan_local_in_map = scan_local / self.occupancy_grid_range * self.occupancy_grid_resolution + \
                            (self.occupancy_grid_resolution / 2)
        scan_local_in_map = scan_local_in_map.reshape((1, -1, 1, 2)).astype(np.int32)

        # For each scan hit, mark a small disk of obstacle cells around it
        for i in range(scan_local_in_map.shape[1]):
            cv2.circle(
                img=occupancy_grid,
                center=(scan_local_in_map[0, i, 0, 0], scan_local_in_map[0, i, 0, 1]),
                radius=2,
                color=int(OccupancyGridState.OBSTACLES * 2.0),
                thickness=-1,
            )
        cv2.fillPoly(
            img=occupancy_grid, pts=scan_local_in_map, color=int(OccupancyGridState.FREESPACE * 2.0), lineType=1
        )
        cv2.circle(
            img=occupancy_grid,
            center=(self.occupancy_grid_resolution // 2, self.occupancy_grid_resolution // 2),
            radius=self.occupancy_grid_inner_radius,
            color=int(OccupancyGridState.FREESPACE * 2.0),
            thickness=-1,
        )

        return occupancy_grid[:, :, None].astype(np.float32) / 2.0

    def _get_obs(self):
        # Run super first to grab any upstream obs
        obs, info = super()._get_obs()

        # Add scan info (normalized to [0.0, 1.0])
        if "scan" in self._modalities:
            raw_scan = self._rs.get_linear_depth_data(self._prim_path)
            # Sometimes get_linear_depth_data will return values that are slightly out of range, needs clipping
            raw_scan = np.clip(raw_scan, self.min_range, self.max_range)
            obs["scan"] = (raw_scan - self.min_range) / (self.max_range - self.min_range)

            # Optionally add occupancy grid info
            if "occupancy_grid" in self._modalities:
                obs["occupancy_grid"] = self.get_local_occupancy_grid(scan=obs["scan"])

        return obs, info

    @property
    def n_horizontal_rays(self):
        """
        Returns:
            int: Number of horizontal rays for this range sensor
        """
        return int(self.horizontal_fov // self.horizontal_resolution)

    @property
    def n_vertical_rays(self):
        """
        Returns:
            int: Number of vertical rays for this range sensor
        """
        return int(self.vertical_fov // self.vertical_resolution)

    @property
    def min_range(self):
        """
        Gets this range sensor's min_range (minimum distance in meters which will register a hit)

        Returns:
            float: minimum range for this range sensor, in meters
        """
        return self.get_attribute("minRange")

    @min_range.setter
    def min_range(self, val):
        """
        Sets this range sensor's min_range (minimum distance in meters which will register a hit)

        Args:
            val (float): minimum range for this range sensor, in meters
        """
        self.set_attribute("minRange", val)

    @property
    def max_range(self):
        """
        Gets this range sensor's max_range (maximum distance in meters which will register a hit)

        Returns:
            float: maximum range for this range sensor, in meters
        """
        return self.get_attribute("maxRange")

    @max_range.setter
    def max_range(self, val):
        """
        Sets this range sensor's max_range (maximum distance in meters which will register a hit)

        Args:
            val (float): maximum range for this range sensor, in meters
        """
        self.set_attribute("maxRange", val)

    @property
    def draw_lines(self):
        """
        Gets whether range lines are drawn for this sensor

        Returns:
            bool: Whether range lines are drawn for this sensor
        """
        return self.get_attribute("drawLines")

    @draw_lines.setter
    def draw_lines(self, draw):
        """
        Sets whether range lines are drawn for this sensor

        Args:
            draw (float): Whether range lines are drawn for this sensor
        """
        self.set_attribute("drawLines", draw)

    @property
    def draw_points(self):
        """
        Gets whether range points are drawn for this sensor

        Returns:
            bool: Whether range points are drawn for this sensor
        """
        return self.get_attribute("drawPoints")

    @draw_points.setter
    def draw_points(self, draw):
        """
        Sets whether range points are drawn for this sensor

        Args:
            draw (float): Whether range points are drawn for this sensor
        """
        self.set_attribute("drawPoints", draw)

    @property
    def horizontal_fov(self):
        """
        Gets this range sensor's horizontal_fov

        Returns:
            float: horizontal field of view for this range sensor
        """
        return self.get_attribute("horizontalFov")

    @horizontal_fov.setter
    def horizontal_fov(self, fov):
        """
        Sets this range sensor's horizontal_fov

        Args:
            fov (float): horizontal field of view to set
        """
        self.set_attribute("horizontalFov", fov)

    @property
    def horizontal_resolution(self):
        """
        Gets this range sensor's horizontal_resolution (degrees in between each horizontal hit)

        Returns:
            float: horizontal resolution for this range sensor, in degrees
        """
        return self.get_attribute("horizontalResolution")

    @horizontal_resolution.setter
    def horizontal_resolution(self, resolution):
        """
        Sets this range sensor's horizontal_resolution (degrees in between each horizontal hit)

        Args:
            resolution (float): horizontal resolution to set, in degrees
        """
        self.set_attribute("horizontalResolution", resolution)

    @property
    def vertical_fov(self):
        """
        Gets this range sensor's vertical_fov

        Returns:
            float: vertical field of view for this range sensor
        """
        return self.get_attribute("verticalFov")

    @vertical_fov.setter
    def vertical_fov(self, fov):
        """
        Sets this range sensor's vertical_fov

        Args:
            fov (float): vertical field of view to set
        """
        self.set_attribute("verticalFov", fov)

    @property
    def vertical_resolution(self):
        """
        Gets this range sensor's vertical_resolution (degrees in between each vertical hit)

        Returns:
            float: vertical resolution for this range sensor, in degrees
        """
        return self.get_attribute("verticalResolution")

    @vertical_resolution.setter
    def vertical_resolution(self, resolution):
        """
        Sets this range sensor's vertical_resolution (degrees in between each vertical hit)

        Args:
            resolution (float): vertical resolution to set, in degrees
        """
        self.set_attribute("verticalResolution", resolution)

    @property
    def yaw_offset(self):
        """
        Gets this range sensor's yaw_offset (used in cases where this sensor's forward direction is different than expected)

        Returns:
            float: yaw offset for this range sensor in degrees
        """
        return self.get_attribute("yawOffset")

    @yaw_offset.setter
    def yaw_offset(self, offset):
        """
        Sets this range sensor's yaw_offset (used in cases where this sensor's forward direction is different than expected)

        Args:
            offset (float): yaw offset to set in degrees.
        """
        self.set_attribute("yawOffset", offset)

    @property
    def rotation_rate(self):
        """
        Gets this range sensor's rotation_rate, in degrees per second. Note that a 0 value corresponds to no rotation,
        and all range hits are assumed to be received at the exact same time.

        Returns:
            float: rotation rate for this range sensor in degrees per second
        """
        return self.get_attribute("rotationRate")

    @rotation_rate.setter
    def rotation_rate(self, rate):
        """
        Sets this range sensor's rotation_rate, in degrees per second. Note that a 0 value corresponds to no rotation,
        and all range hits are assumed to be received at the exact same time.

        Args:
            rate (float): rotation rate for this range sensor in degrees per second
        """
        self.set_attribute("rotationRate", rate)

    @classproperty
    def all_modalities(cls):
        return {"scan", "occupancy_grid"}

    @classproperty
    def no_noise_modalities(cls):
        # Occupancy grid should have no noise
        return {"occupancy_grid"}

    @property
    def enabled(self):
        # Just use super
        return super().enabled

    @enabled.setter
    def enabled(self, enabled):
        # We must use super and additionally directly en/disable the sensor in the simulation
        # Note: weird syntax below required to "extend" super class's implementation, see:
        # https://stackoverflow.com/a/37663266
        super(ScanSensor, self.__class__).enabled.fset(self, enabled)
        self.set_attribute("enabled", enabled)
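
A sketch of pulling observations once the sensor has been loaded and initialized by the simulator; it assumes the hypothetical lidar instance from the construction example above and that the public get_obs() wrapper inherited from BaseSensor is used.

obs, info = lidar.get_obs()

scan = obs["scan"]            # shape (n_horizontal_rays, n_vertical_rays), normalized to [0, 1]
grid = obs["occupancy_grid"]  # shape (resolution, resolution, 1), values follow OccupancyGridState

# Undo the normalization applied in _get_obs() to recover ranges in meters
ranges_m = scan * (lidar.max_range - lidar.min_range) + lidar.min_range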

draw_lines property writable

Gets whether range lines are drawn for this sensor

Returns:

bool: Whether range lines are drawn for this sensor

draw_points property writable

Gets whether range points are drawn for this sensor

Returns:

bool: Whether range points are drawn for this sensor
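
Both drawing flags are writable properties; a quick sketch, assuming the hypothetical lidar instance from above:

lidar.draw_lines = True    # visualize the scan rays in the viewport
lidar.draw_points = True   # visualize the individual hit points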

horizontal_fov property writable

Gets this range sensor's horizontal_fov

Returns:

float: horizontal field of view for this range sensor

horizontal_resolution property writable

Gets this range sensor's horizontal_resolution (degrees in between each horizontal hit)

Returns:

float: horizontal resolution for this range sensor, in degrees

max_range property writable

Gets this range sensor's max_range (maximum distance in meters which will register a hit)

Returns:

float: maximum range for this range sensor, in meters

min_range property writable

Gets this range sensor's min_range (minimum distance in meters which will register a hit)

Returns:

float: minimum range for this range sensor, in meters

n_horizontal_rays property

Returns:

int: Number of horizontal rays for this range sensor

n_vertical_rays property

Returns:

int: Number of vertical rays for this range sensor
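
As a worked example under the default settings (using the hypothetical lidar instance from above), the ray counts follow directly from FOV divided by angular resolution:

# horizontal_fov=360.0, horizontal_resolution=1.0  ->  int(360.0 // 1.0) == 360
# vertical_fov=1.0,     vertical_resolution=1.0    ->  int(1.0 // 1.0)   == 1
assert lidar.n_horizontal_rays == 360
assert lidar.n_vertical_rays == 1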

rotation_rate property writable

Gets this range sensor's rotation_rate, in degrees per second. Note that a 0 value corresponds to no rotation, and all range hits are assumed to be received at the exact same time.

Returns:

float: rotation rate for this range sensor in degrees per second

vertical_fov property writable

Gets this range sensor's vertical_fov

Returns:

float: vertical field of view for this range sensor

vertical_resolution property writable

Gets this range sensor's vertical_resolution (degrees in between each vertical hit)

Returns:

float: vertical resolution for this range sensor, in degrees

yaw_offset property writable

Gets this range sensor's yaw_offset (used in cases where this sensor's forward direction is different than expected)

Returns:

float: yaw offset for this range sensor in degrees

get_local_occupancy_grid(scan)

Get local occupancy grid based on current 1D scan

Parameters:

scan (n-array): 1D LiDAR scan, normalized to [0, 1]. Required.

Returns:

np.ndarray: (occupancy_grid_resolution, occupancy_grid_resolution, 1)-shaped numpy array of the local occupancy grid
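
A usage sketch, assuming the hypothetical lidar instance from above with both "scan" and "occupancy_grid" modalities enabled and n_vertical_rays == 1; the input scan must already be normalized to [0, 1], as produced by _get_obs():

obs, _ = lidar.get_obs()
grid = lidar.get_local_occupancy_grid(scan=obs["scan"])
print(grid.shape)  # (occupancy_grid_resolution, occupancy_grid_resolution, 1)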

Source code in omnigibson/sensors/scan_sensor.py
def get_local_occupancy_grid(self, scan):
    """
    Get local occupancy grid based on current 1D scan

    Args:
        scan (n-array): 1D LiDAR scan

    Returns:
        np.ndarray: (occupancy_grid_resolution, occupancy_grid_resolution, 1)-shaped numpy array of the local occupancy grid
    """
    # Run sanity checks first
    assert "occupancy_grid" in self._modalities, "Occupancy grid is not enabled for this range sensor!"
    assert self.n_vertical_rays == 1, "Occupancy grid is only valid for a 1D range sensor (n_vertical_rays = 1)!"

    # Grab vector of corresponding angles for each scan line
    angles = np.arange(
        -np.radians(self.horizontal_fov / 2),
        np.radians(self.horizontal_fov / 2),
        np.radians(self.horizontal_resolution),
    )

    # Convert into 3D unit vectors for each angle
    unit_vector_laser = np.array([[np.cos(ang), np.sin(ang), 0.0] for ang in angles])

    # Scale unit vectors by corresponding laser scan distances
    assert ((scan >= 0.0) & (scan <= 1.0)).all(), "scan out of valid range [0, 1]"
    scan_laser = unit_vector_laser * (scan * (self.max_range - self.min_range) + self.min_range)

    # Convert scans from laser frame to world frame
    pos, ori = self.get_position_orientation()
    scan_world = quat2mat(ori).dot(scan_laser.T).T + pos

    # Convert scans from world frame to local base frame
    base_pos, base_ori = self.occupancy_grid_local_link.get_position_orientation()
    scan_local = quat2mat(base_ori).T.dot((scan_world - base_pos).T).T
    scan_local = scan_local[:, :2]
    scan_local = np.concatenate([np.array([[0, 0]]), scan_local, np.array([[0, 0]])], axis=0)

    # flip y axis
    scan_local[:, 1] *= -1

    # Initialize occupancy grid -- default is unknown values
    occupancy_grid = np.zeros((self.occupancy_grid_resolution, self.occupancy_grid_resolution)).astype(np.uint8)
    occupancy_grid.fill(int(OccupancyGridState.UNKNOWN * 2.0))

    # Convert local scans into the corresponding OG square it should belong to (note now all values are > 0, since
    # OG ranges from [0, resolution] x [0, resolution])
    scan_local_in_map = scan_local / self.occupancy_grid_range * self.occupancy_grid_resolution + \
                        (self.occupancy_grid_resolution / 2)
    scan_local_in_map = scan_local_in_map.reshape((1, -1, 1, 2)).astype(np.int32)

    # For each scan hit, mark a small disk of obstacle cells around it
    for i in range(scan_local_in_map.shape[1]):
        cv2.circle(
            img=occupancy_grid,
            center=(scan_local_in_map[0, i, 0, 0], scan_local_in_map[0, i, 0, 1]),
            radius=2,
            color=int(OccupancyGridState.OBSTACLES * 2.0),
            thickness=-1,
        )
    cv2.fillPoly(
        img=occupancy_grid, pts=scan_local_in_map, color=int(OccupancyGridState.FREESPACE * 2.0), lineType=1
    )
    cv2.circle(
        img=occupancy_grid,
        center=(self.occupancy_grid_resolution // 2, self.occupancy_grid_resolution // 2),
        radius=self.occupancy_grid_inner_radius,
        color=int(OccupancyGridState.FREESPACE * 2.0),
        thickness=-1,
    )

    return occupancy_grid[:, :, None].astype(np.float32) / 2.0