allenact.base_abstractions.task
Defines the primary data structures by which agents interact with their environment.
Task
class Task(Generic[EnvType])
An abstract class defining a goal-directed 'task.' Agents interact with their environment through a task by taking a step, after which they receive new observations, rewards, and (potentially) other useful information.
A Task is a helpful generalization of OpenAI gym's Env class and allows for multiple tasks (e.g. point and object navigation) to be defined on a single environment (e.g. AI2-THOR).
Attributes
- env : The environment.
- sensor_suite : Collection of sensors formed from the sensors argument in the initializer.
- task_info : Dictionary of (k, v) pairs defining task goals and other task information.
- max_steps : The maximum number of steps an agent can take in the task before it is considered failed.
- observation_space : The observation space returned on each step from the sensors.
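The sketch below shows how a concrete Task might fill in the abstract pieces documented in the following sections (action_space, render, _step, reached_terminal_state, and close). It is a minimal sketch, not AllenAct's own code: the MyNavigationTask name, the environment methods it calls (apply_action, agent_at_goal, stop, current_frame), and the reward scheme are illustrative assumptions, and RLStepResult is assumed to expose observation/reward/done/info fields.

```python
from typing import Any

import gym
import numpy as np

from allenact.base_abstractions.misc import RLStepResult
from allenact.base_abstractions.task import Task


class MyNavigationTask(Task):
    @property
    def action_space(self) -> gym.Space:
        # Three discrete actions: move ahead, rotate left, rotate right.
        return gym.spaces.Discrete(3)

    def render(self, mode: str = "rgb", *args, **kwargs) -> np.ndarray:
        # Return whatever frame the underlying environment exposes; the
        # `current_frame` attribute is a hypothetical environment API.
        return self.env.current_frame

    def _step(self, action: Any) -> RLStepResult:
        # Apply the action and package the new observations and reward.
        self.env.apply_action(action)  # hypothetical environment API
        success = self.env.agent_at_goal()  # hypothetical environment API
        return RLStepResult(
            observation=self.get_observations(),
            reward=1.0 if success else -0.01,  # illustrative reward scheme
            done=self.is_done(),
            info={},
        )

    def reached_terminal_state(self) -> bool:
        return self.env.agent_at_goal()  # hypothetical environment API

    def close(self) -> None:
        self.env.stop()  # hypothetical environment API
```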
Task.action_space
| @property
| @abc.abstractmethod
| action_space() -> gym.Space
Task's action space.
Returns
The action space for the task.
Task.render
| @abc.abstractmethod
| render(mode: str = "rgb", *args, **kwargs) -> np.ndarray
Render the current task state.
Rendered task state can come in any supported modes.
Parameters
- mode : The mode in which to render. For example, you might have a 'rgb' mode that renders the agent's egocentric viewpoint or a 'dev' mode returning additional information.
- args : Extra args.
- kwargs : Extra kwargs.
Returns
A numpy array corresponding to the requested render.
Task.step
| step(action: Any) -> RLStepResult
Take an action in the environment (one per agent).
Takes the action in the environment and returns observations (and rewards and any additional information) corresponding to the agent's new state. Note that this function should not be overridden without care (instead, implement the _step function).
Parameters
- action : The action to take; it should be of the same form as specified by self.action_space.
Returns
An RLStepResult object encoding the new observations, reward, and (possibly) additional information.
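As a hedged usage sketch (run_episode is not part of AllenAct, and RLStepResult is assumed to expose a reward field), a caller only ever invokes step; the per-task logic lives in _step:

```python
from allenact.base_abstractions.misc import RLStepResult
from allenact.base_abstractions.task import Task


def run_episode(task: Task) -> float:
    """Roll a task out with random actions and return the summed reward."""
    total_reward = 0.0
    while not task.is_done():
        # `step` handles bookkeeping (e.g. step counting) around `_step`.
        result: RLStepResult = task.step(action=task.action_space.sample())
        total_reward += result.reward
    return total_reward
```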
Task.reached_max_steps
| reached_max_steps() -> bool
Has the agent reached the maximum number of steps.
Task.reached_terminal_state
| @abc.abstractmethod
| reached_terminal_state() -> bool
Has the agent reached a terminal state (excluding reaching the maximum number of steps).
Task.is_done
| is_done() -> bool
Did the agent reach a terminal state or perform the maximum number of steps?
Task.num_steps_taken
| num_steps_taken() -> int
Number of steps taken by the agent in the task so far.
Task.action_names
| @deprecated
| action_names() -> Tuple[str, ...]
Action names of the Task instance.
This function has been deprecated and will be removed.
This function is a hold-over from when the Task abstraction only considered gym.spaces.Discrete action spaces (in which case it makes sense to name these actions). This implementation of action_names requires that a class_action_names method has been defined. This method should be overridden if class_action_names requires keyword arguments to determine the number of actions.
Task.close
| @abc.abstractmethod
| close() -> None
Closes the environment and any other files opened by the Task (if applicable).
Task.metrics
| metrics() -> Dict[str, Any]
Computes metrics related to the task after the task's completion.
By default this function is automatically called during training and the reported metrics are logged to tensorboard.
Returns
A dictionary where every key is a string (the metric's name) and the value is the value of the metric.
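A common pattern, sketched here under the assumption that the base implementation's dictionary can simply be extended, is to call super().metrics() and add task-specific entries. The success metric below and the MyNavigationTask class (continuing the earlier sketch) are illustrative assumptions:

```python
from typing import Any, Dict

from allenact.base_abstractions.task import Task


class MyNavigationTask(Task):
    # ...abstract methods as in the earlier sketch...

    def metrics(self) -> Dict[str, Any]:
        result = super().metrics()  # start from whatever the base class reports
        # Using reached_terminal_state() as the success criterion is an
        # illustrative assumption, not a requirement of the interface.
        result["success"] = float(self.reached_terminal_state())
        return result
```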
Task.query_expert
| query_expert(**kwargs) -> Tuple[Any, bool]
(Deprecated) Query the expert policy for this task.
The new correct way to include this functionality is through the definition of a class derived from allenact.base_abstractions.sensor.AbstractExpertActionSensor or allenact.base_abstractions.sensor.AbstractExpertPolicySensor, where a query_expert method must be defined.
Returns
A tuple (x, y) where x is the expert action (or policy) and y is False if the expert could not determine the optimal action (otherwise True). Here y is used for masking. Even when y is False, x should still lie in the space of possible values (e.g. if x is the expert policy then x should be the correct length, sum to 1, and have non-negative entries).
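A hedged sketch of the recommended replacement, a sensor derived from AbstractExpertActionSensor, follows. The ShortestPathExpertSensor name, the planner call, and the exact query_expert signature are assumptions made for illustration rather than AllenAct's confirmed API:

```python
from typing import Any, Optional, Tuple

from allenact.base_abstractions.sensor import AbstractExpertActionSensor


class ShortestPathExpertSensor(AbstractExpertActionSensor):
    # NOTE: the signature expected by AbstractExpertActionSensor should be
    # checked against the sensor's definition; the one below is an assumption.
    def query_expert(
        self, task, expert_sensor_group_name: Optional[str] = None
    ) -> Tuple[Any, bool]:
        # Ask a (hypothetical) planner attached to the environment for the next
        # action along a shortest path; `shortest_path_next_action` is assumed.
        action = task.env.shortest_path_next_action()
        if action is None:
            # Expert failed: return any valid action together with False so the
            # value gets masked out downstream.
            return 0, False
        return action, True
```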
Task.cumulative_reward
| @property
| cumulative_reward() -> float
Mean per-agent total cumulative reward in the task so far.
Returns
Mean per-agent cumulative reward as a float.
TaskSampler
class TaskSampler(abc.ABC)
Abstract class defining how new tasks are sampled.
TaskSampler.length
| @property
| @abc.abstractmethod
| length() -> Union[int, float]
Length.
Returns
Number of total tasks remaining that can be sampled. Can be float('inf').
TaskSampler.last_sampled_task
| @property
| @abc.abstractmethod
| last_sampled_task() -> Optional[Task]
Get the most recently sampled Task.
Returns
The most recently sampled Task.
TaskSampler.next_task
| @abc.abstractmethod
| next_task(force_advance_scene: bool = False) -> Optional[Task]
Get the next task in the sampler's stream.
Parameters
- force_advance_scene : Used to (if applicable) force the task sampler to use a new scene for the next task. This is useful if, during training, you would like to train with one scene for some number of steps and then explicitly control when you begin training with the next scene.
Returns
The next Task in the sampler's stream if a next task exists. Otherwise None.
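A hedged usage sketch (drain_sampler is not part of AllenAct) of how a consumer typically alternates length checks with next_task calls:

```python
from typing import Optional

from allenact.base_abstractions.task import Task, TaskSampler


def drain_sampler(sampler: TaskSampler) -> int:
    """Consume every task a finite sampler can produce and return the count."""
    num_tasks = 0
    while sampler.length > 0:  # length may be float('inf'); a finite sampler is assumed
        task: Optional[Task] = sampler.next_task()
        if task is None:  # the stream can also end early
            break
        num_tasks += 1
        # ...run the task to completion here...
    sampler.close()
    return num_tasks
```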
TaskSampler.close
| @abc.abstractmethod
| close() -> None
Closes any open environments or streams.
Should be run when done sampling.
TaskSampler.all_observation_spaces_equal
| @property
| @abc.abstractmethod
| all_observation_spaces_equal() -> bool
Checks if all observation spaces of tasks that can be sampled are equal.
This will almost always simply return True. A case in which it should return False includes, for example, a setting where you design a TaskSampler that can generate different types of tasks, e.g. point navigation tasks and object navigation tasks. In this case, these different tasks may output different types of observations.
Returns
True if all Tasks that can be sampled by this sampler have the same observation space. Otherwise False.
TaskSampler.reset
| @abc.abstractmethod
| reset() -> None
Resets task sampler to its original state (except for any seed).
TaskSampler.set_seed
| @abc.abstractmethod
| set_seed(seed: int) -> None
Sets new RNG seed.
Parameters
- seed : New seed.
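To close, here is a hedged end-to-end sketch of a concrete TaskSampler implementing every abstract member above. FixedListTaskSampler, its constructor arguments, and its sampling-with-replacement policy are assumptions made for illustration:

```python
import random
from typing import Callable, List, Optional, Union

from allenact.base_abstractions.task import Task, TaskSampler


class FixedListTaskSampler(TaskSampler):
    """Samples tasks, with replacement, from a fixed list of task specifications."""

    def __init__(
        self, specs: List[dict], make_task: Callable[[dict], Task], seed: int = 0
    ) -> None:
        self.specs = specs
        self.make_task = make_task
        self._seed = seed
        self._random = random.Random(seed)
        self._last_task: Optional[Task] = None

    @property
    def length(self) -> Union[int, float]:
        return float("inf")  # sampling with replacement never runs dry

    @property
    def last_sampled_task(self) -> Optional[Task]:
        return self._last_task

    @property
    def all_observation_spaces_equal(self) -> bool:
        return True  # every sampled task uses the same sensor suite

    def next_task(self, force_advance_scene: bool = False) -> Optional[Task]:
        spec = self._random.choice(self.specs)
        self._last_task = self.make_task(spec)
        return self._last_task

    def close(self) -> None:
        self._last_task = None  # nothing persistent to close in this sketch

    def reset(self) -> None:
        self._last_task = None
        self._random = random.Random(self._seed)

    def set_seed(self, seed: int) -> None:
        self._seed = seed
        self._random = random.Random(seed)
```

Typically a consumer repeatedly calls next_task and calls close when finished; set_seed and reset make it possible to replay the same stream of tasks deterministically, which is useful for validation and testing.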