array_tensor_utils
any_concat(xs, *, dim=0)
Works for both torch Tensors and numpy arrays.
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
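The implementation is not reproduced here; a minimal sketch of the type dispatch the signature and docstring imply (the body below is illustrative, not OmniGibson's actual code) might look like:

```python
import numpy as np

def any_concat(xs, *, dim=0):
    # Sketch: dispatch on the type of the first element.
    first = xs[0]
    if isinstance(first, np.ndarray):
        return np.concatenate(xs, axis=dim)
    try:
        import torch
        if isinstance(first, torch.Tensor):
            return torch.cat(xs, dim=dim)
    except ImportError:
        pass
    raise TypeError(f"unsupported type: {type(first)}")
```

For example, concatenating a `(2, 3)` and a `(1, 3)` array along `dim=0` yields a `(3, 3)` array.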
any_ones_like(x)
Returns a one-filled object of the same (d)type and shape as the input.
The difference between this and np.ones_like() is that this works well
with np.number, int, float, and jax.numpy.DeviceArray objects without
converting them to np.ndarrays.
Args:
x: The object to replace with 1s.
Returns:
A one-filled object of the same (d)type and shape as the input.
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
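A plausible sketch of the behavior described above (scalar types are preserved rather than promoted to `np.ndarray`; this is not the library's actual source):

```python
import numpy as np

def any_ones_like(x):
    # Sketch: keep Python and numpy scalar types as-is instead of
    # converting them to np.ndarray, as np.ones_like would.
    if isinstance(x, np.number):
        return type(x)(1)
    if isinstance(x, (int, float)):
        return type(x)(1)
    return np.ones_like(x)
```

So `any_ones_like(3)` stays a plain `int`, while an array input comes back as an array of ones with the same shape and dtype.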
any_slice(x, slice)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `slice` | | you can use `np.s_[...]` to return the slice object | required |
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
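The docstring suggests the slice is applied to every array leaf of a nested structure; a hedged sketch under that assumption (not the actual implementation):

```python
import numpy as np

def any_slice(x, slice_):
    # Sketch: apply the slice recursively through dicts/lists/tuples,
    # indexing only at the array leaves. np.s_[...] builds the slice.
    if isinstance(x, dict):
        return {k: any_slice(v, slice_) for k, v in x.items()}
    if isinstance(x, (list, tuple)):
        return type(x)(any_slice(v, slice_) for v in x)
    return x[slice_]
```

Usage: `any_slice({"a": np.arange(10)}, np.s_[2:5])` would return `{"a": array([2, 3, 4])}`.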
any_stack(xs, *, dim=0)
Works for both torch Tensors and numpy arrays.
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
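As with `any_concat`, a minimal sketch of the implied dispatch (illustrative only); the difference is that stacking creates a new axis of length `len(xs)` instead of joining along an existing one:

```python
import numpy as np

def any_stack(xs, *, dim=0):
    # Sketch: stack along a new axis, dispatching on element type.
    first = xs[0]
    if isinstance(first, np.ndarray):
        return np.stack(xs, axis=dim)
    try:
        import torch
        if isinstance(first, torch.Tensor):
            return torch.stack(xs, dim=dim)
    except ImportError:
        pass
    raise TypeError(f"unsupported type: {type(first)}")
```

Stacking two `(2, 3)` arrays with `dim=0` yields a `(2, 2, 3)` array.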
get_batch_size(x, strict=False)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | | can be any arbitrary nested structure of np array and th tensor | required |
| `strict` | `bool` | True to check all batch sizes are the same | `False` |
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
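A sketch of the behavior the parameters describe, assuming the batch size is the leading dimension of each array leaf (helper names below are hypothetical, not OmniGibson's code):

```python
import numpy as np

def get_batch_size(x, strict=False):
    # Sketch: collect the leading dimension of every array leaf in the
    # nested structure; with strict=True, require them all to match.
    def leaves(obj):
        if isinstance(obj, dict):
            for v in obj.values():
                yield from leaves(v)
        elif isinstance(obj, (list, tuple)):
            for v in obj:
                yield from leaves(v)
        else:
            yield obj

    sizes = [leaf.shape[0] for leaf in leaves(x)]
    if strict:
        assert len(set(sizes)) == 1, f"inconsistent batch sizes: {sizes}"
    return sizes[0]
```

With `strict=False`, only the first leaf encountered determines the result, which is cheaper for deeply nested observations.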
make_recursive_func(fn, *, with_path=False)
Decorator that turns a function that works on a single array/tensor to working on arbitrary nested structures.
Source code in OmniGibson/omnigibson/learning/utils/array_tensor_utils.py
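A hedged sketch of such a decorator over dict/list/tuple structures; the `with_path` handling here (passing the key path as the first argument) is an assumption about the flag's meaning, not the library's actual contract:

```python
import functools

def make_recursive_func(fn, *, with_path=False):
    # Sketch: lift fn from a single leaf to arbitrary nested
    # dict/list/tuple structures, rebuilding the same containers.
    @functools.wraps(fn)
    def wrapper(x, *args, **kwargs):
        def recurse(obj, path):
            if isinstance(obj, dict):
                return {k: recurse(v, path + (k,)) for k, v in obj.items()}
            if isinstance(obj, (list, tuple)):
                return type(obj)(
                    recurse(v, path + (i,)) for i, v in enumerate(obj)
                )
            # Assumed semantics: with_path passes the key path to fn.
            if with_path:
                return fn(path, obj, *args, **kwargs)
            return fn(obj, *args, **kwargs)
        return recurse(x, ())
    return wrapper
```

Usage: `make_recursive_func(lambda v: v * 2)` maps the doubling over every leaf, so `{"a": [1, 2], "b": 3}` becomes `{"a": [2, 4], "b": 6}`.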
sequential_sum_balanced_partitioning(nums, M, i)
Split a list of numbers into M partitions, where the i-th partition is returned.
The i-th partition is balanced such that the sum of the numbers in each partition
is as equal as possible.
NOTE: if the sum is not divisible by M, the first sum % M partitions each receive one extra unit of the sum.
Args:
nums: list of numbers to be partitioned
M: number of partitions
i: index of the partition to be returned (0-indexed)
Returns:
start_idx: starting index of the i-th partition
start_offset: offset of the first element in the i-th partition
end_idx: ending index of the i-th partition (inclusive)
end_offset: offset within the last element of the i-th partition (exclusive)
Example:

```python
>>> sequential_sum_balanced_partitioning([1, 2, 3, 4, 5, 6], M=3, i=1)
(3, 1, 4, 4)
```
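The docstring pins down the semantics precisely enough to sketch an implementation that reproduces the example: the total sum 21 is split into three ranges of 7 units each, and partition `i=1` covers global positions [7, 14), which starts 1 unit into element 3 and ends 4 units into element 4. A sketch under those semantics (the `locate` helper is illustrative, not the library's code):

```python
def sequential_sum_balanced_partitioning(nums, M, i):
    # Sketch: split the total sum into M near-equal contiguous ranges,
    # then express the i-th range in (element index, offset) coordinates.
    total = sum(nums)
    base, rem = divmod(total, M)
    # The first `rem` partitions each get one extra unit of the sum.
    start = i * base + min(i, rem)
    end = start + base + (1 if i < rem else 0)

    def locate(pos):
        # Map a global position to (element index, offset within element).
        acc = 0
        for idx, n in enumerate(nums):
            if pos < acc + n:
                return idx, pos - acc
            acc += n
        return len(nums) - 1, nums[-1]

    start_idx, start_offset = locate(start)
    end_idx, end_offset = locate(end - 1)
    return start_idx, start_offset, end_idx, end_offset + 1
```

On the example above, `sequential_sum_balanced_partitioning([1, 2, 3, 4, 5, 6], 3, 1)` returns `(3, 1, 4, 4)`, matching the docstring.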