core.base_abstractions.preprocessor#

[view_source]

Preprocessor#

class Preprocessor(abc.ABC)

[view_source]

Represents a preprocessor that transforms data from a sensor or another preprocessor into input for agents or other preprocessors. Users of this class must implement the process method and set the attributes below:

Attributes:#

  • input_uuids : List of input universally unique ids.
  • uuid : Universally unique id.
  • observation_space : gym.Space object corresponding to the processed observation space.
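The interface above can be illustrated with a minimal, self-contained sketch. The `Preprocessor` stub below is a stand-in reduced to the documented interface (not the real allenact class), and `GrayscalePreprocessor` is a hypothetical subclass; in actual allenact code you would also set `observation_space` to a `gym.Space`, omitted here to keep the example dependency-free.

```python
import abc
from typing import Any, Dict, List


class Preprocessor(abc.ABC):
    """Stand-in for the allenact base class, reduced to the documented interface."""

    input_uuids: List[str]  # uuids of the sensors/preprocessors this consumes
    uuid: str               # uuid under which this preprocessor's output is exposed

    @abc.abstractmethod
    def process(self, obs: Dict[str, Any], *args: Any, **kwargs: Any) -> Any:
        raise NotImplementedError


class GrayscalePreprocessor(Preprocessor):
    """Hypothetical preprocessor: averages RGB channel values into one value."""

    def __init__(self) -> None:
        self.input_uuids = ["rgb"]
        self.uuid = "rgb_grayscale"
        # In real allenact code, also set self.observation_space (a gym.Space).

    def process(self, obs: Dict[str, Any], *args: Any, **kwargs: Any) -> Any:
        # Read this preprocessor's input from the shared observation dict.
        r, g, b = obs["rgb"]
        return (r + g + b) / 3.0
```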

Preprocessor.process#

 | @abc.abstractmethod
 | process(obs: Dict[str, Any], *args: Any, **kwargs: Any) -> Any

[view_source]

Returns processed observations from sensors or other preprocessors.

Parameters

  • obs : Dict with available observations and processed observations.

Returns

Processed observation.

SensorPreprocessorGraph#

class SensorPreprocessorGraph()

[view_source]

Represents a graph of preprocessors, with each preprocessor being identified through a universally unique id.

Allows for the construction of observations that are a function of sensor readings. For instance, perhaps rather than giving your agent a raw RGB image, you'd rather first pass that image through a pre-trained convolutional network and only give your agent the resulting features (see e.g. the ResNetPreprocessor class).

Attributes

  • preprocessors: List of preprocessors, each with its required input uuids; the output uuid of each preprocessor must be unique.
  • observation_spaces: The observation spaces of the values returned when calling get_observations. By default (see the additional_output_uuids parameter to change this default) the observations returned by the SensorPreprocessorGraph include only the sink nodes of the graph (i.e. those that are not used by any other preprocessor). Thus if one of the input preprocessors takes as input the 'YOUR_SENSOR_UUID' sensor, then 'YOUR_SENSOR_UUID' will not be returned when calling get_observations.
  • device: The torch.device upon which the preprocessors are run.
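The "sink node" rule above can be sketched with a small helper. This is not allenact code, just an illustration of which uuids survive by default: any uuid consumed by some preprocessor is dropped, and only unconsumed outputs remain. The `rgb`/`rgb_resnet` uuids are hypothetical names.

```python
from typing import List, Sequence, Tuple

# Each preprocessor is summarized as (input_uuids, output_uuid).
Node = Tuple[List[str], str]


def sink_uuids(preprocessors: Sequence[Node]) -> List[str]:
    """Return the output uuids that no preprocessor consumes (the sink nodes)."""
    consumed = {u for input_uuids, _ in preprocessors for u in input_uuids}
    return [out for _, out in preprocessors if out not in consumed]


# A two-stage chain: "rgb" -> "rgb_resnet" -> "rgb_resnet_pooled".
# Only the final output is a sink; "rgb" and "rgb_resnet" are consumed.
graph = [(["rgb"], "rgb_resnet"), (["rgb_resnet"], "rgb_resnet_pooled")]
```

Note that the raw `rgb` sensor reading is consumed by the first preprocessor and so, by default, would not appear in the output of get_observations.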

SensorPreprocessorGraph.__init__#

 | __init__(source_observation_spaces: SpaceDict, preprocessors: Sequence[Union[Preprocessor, Builder[Preprocessor]]], additional_output_uuids: Sequence[str] = tuple()) -> None

[view_source]

Initializer.

Parameters

  • source_observation_spaces : The observation spaces of all sensors before preprocessing. This generally should be the output of SensorSuite.observation_spaces.
  • preprocessors : The preprocessors that will be included in the graph.
  • additional_output_uuids: As described in the documentation for this class, the observations returned when calling get_observations only include, by default, those observations that are not processed by any preprocessor. If you'd like to include observations that would otherwise not be included, the uuids of these sensors should be included as a sequence of strings here.

SensorPreprocessorGraph.get#

 | get(uuid: str) -> Preprocessor

[view_source]

Return preprocessor with the given uuid.

Parameters

  • uuid : The unique id of the preprocessor.

Returns

The preprocessor with unique id uuid.

SensorPreprocessorGraph.get_observations#

 | get_observations(obs: Dict[str, Any], *args: Any, **kwargs: Any) -> Dict[str, Any]

[view_source]

Get processed observations.

Returns

The observations processed by all preprocessors in the graph, packaged inside a Dict.
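The overall flow of get_observations can be sketched as follows. This is a simplified stand-in, not the allenact implementation: each step reads its inputs from a growing observation dict and writes its output under its uuid, and the final dict keeps only the uuids that no step consumed (the default sink-node behavior described above). Steps are assumed to already be in dependency order.

```python
from typing import Any, Callable, Dict, List, Sequence, Tuple

# Each step is (input_uuids, output_uuid, processing function).
Step = Tuple[List[str], str, Callable[[Dict[str, Any]], Any]]


def get_observations(obs: Dict[str, Any], steps: Sequence[Step]) -> Dict[str, Any]:
    """Run each step over the observation dict, then keep only sink uuids."""
    out = dict(obs)
    for input_uuids, uuid, fn in steps:
        # Hand each step only the observations it declared as inputs.
        out[uuid] = fn({u: out[u] for u in input_uuids})
    consumed = {u for input_uuids, _, _ in steps for u in input_uuids}
    return {k: v for k, v in out.items() if k not in consumed}
```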

ResNetPreprocessor#

class ResNetPreprocessor(Preprocessor)

[view_source]

Preprocesses an RGB or depth image using a ResNet model.