allenact.embodiedai.models.visual_nav_models
VisualNavActorCritic
class VisualNavActorCritic(ActorCriticModel[CategoricalDistr])

Base class for visual navigation / manipulation (or, broadly, embodied AI) models. The `forward_encoder` method requires implementation by subclasses.
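To illustrate the contract, here is a minimal, self-contained sketch of the pattern: a subclass supplies `forward_encoder`, which maps observations to a flat embedding consumed by the recurrent policy. The base class and names below (`VisualNavBase`, `TinyNavModel`) are schematic stand-ins, not AllenAct APIs; real code would subclass `VisualNavActorCritic` and work with PyTorch tensors.

```python
class VisualNavBase:
    """Schematic stand-in for VisualNavActorCritic's contract."""

    def forward_encoder(self, observations):
        # Subclasses must implement this.
        raise NotImplementedError

    @property
    def num_recurrent_layers(self):
        # Number of recurrent hidden layers (illustrative value).
        return 1

    @property
    def recurrent_hidden_state_size(self):
        # Recurrent hidden state size of a single model (illustrative value).
        return 4


class TinyNavModel(VisualNavBase):
    def forward_encoder(self, observations):
        # E.g. concatenate per-sensor features (sorted by sensor name)
        # into one flat embedding.
        return [v for name in sorted(observations) for v in observations[name]]


model = TinyNavModel()
emb = model.forward_encoder({"rgb": [0.1, 0.2], "goal": [1.0]})
print(emb)  # goal features first (sorted keys), then rgb: [1.0, 0.1, 0.2]
```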
VisualNavActorCritic.num_recurrent_layers
| @property
| num_recurrent_layers()
Number of recurrent hidden layers.
VisualNavActorCritic.recurrent_hidden_state_size
| @property
| recurrent_hidden_state_size()
The recurrent hidden state size of a single model.
VisualNavActorCritic.forward
| forward(observations: ObservationType, memory: Memory, prev_actions: torch.Tensor, masks: torch.FloatTensor) -> Tuple[ActorCriticOutput[DistributionType], Optional[Memory]]
Processes batched input observations (along with prior hidden states, previous actions, and masks denoting which recurrent hidden states should be masked) and returns an ActorCriticOutput object containing the model's policy (a distribution over actions) and its evaluation of the current state (value).
Parameters

- observations : Batched input observations.
- memory : Memory containing the hidden states from initial timepoints.
- prev_actions : Tensor of previous actions taken.
- masks : Masks applied to hidden states. See RNNStateEncoder.

Returns

Tuple of the ActorCriticOutput and recurrent hidden state.
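The masks argument deserves a concrete illustration. In recurrent actor-critic rollouts, a mask is 0.0 at the first step of a new episode and 1.0 otherwise, so multiplying the previous hidden state by the mask resets it across episode boundaries before the recurrent update. A minimal sketch with plain Python lists (the helper name `step_rnn` and the additive "update" are hypothetical simplifications of a real RNN cell):

```python
def step_rnn(hidden, embedding, mask):
    # Reset the hidden state where mask == 0.0 (episode boundary) ...
    hidden = [h * mask for h in hidden]
    # ... then apply a trivial stand-in "RNN" update.
    return [h + e for h, e in zip(hidden, embedding)]


h = [1.0, 2.0]
h = step_rnn(h, [0.5, 0.5], mask=1.0)  # same episode: state carried over
print(h)  # [1.5, 2.5]
h = step_rnn(h, [0.5, 0.5], mask=0.0)  # new episode: state reset first
print(h)  # [0.5, 0.5]
```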