projects.pointnav_baselines.models.point_nav_models
PointNavActorCritic
class PointNavActorCritic(VisualNavActorCritic)
Uses raw images as observations to the agent.
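A minimal construction sketch follows. The sensor uuids ("rgb", "target_coordinates_ind") and the constructor arguments shown are assumptions based on the typical AllenAct visual navigation models, not a definitive signature; verify them against the class before use.

```python
import gym
import numpy as np
from gym.spaces import Dict as SpaceDict

from projects.pointnav_baselines.models.point_nav_models import PointNavActorCritic

# Sketch only: the sensor uuids and the keyword arguments below are
# assumptions for illustration; check the class signature before relying
# on them.
observation_space = SpaceDict(
    {
        "rgb": gym.spaces.Box(low=0.0, high=1.0, shape=(224, 224, 3), dtype=np.float32),
        "target_coordinates_ind": gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32
        ),
    }
)

model = PointNavActorCritic(
    action_space=gym.spaces.Discrete(4),
    observation_space=observation_space,
    goal_sensor_uuid="target_coordinates_ind",
    hidden_size=512,
)
```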
ResnetTensorPointNavActorCritic
class ResnetTensorPointNavActorCritic(VisualNavActorCritic)
Uses a ResNet preprocessor (resnet_preprocessor) to generate observations for the agent.
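A construction sketch under stated assumptions: the preprocessed ResNet tensor uuid ("rgb_resnet"), its shape, and the keyword arguments are illustrative guesses and must match the preprocessor configured in the experiment config.

```python
import gym
import numpy as np
from gym.spaces import Dict as SpaceDict

from projects.pointnav_baselines.models.point_nav_models import ResnetTensorPointNavActorCritic

# Sketch only: the "rgb_resnet" uuid, its (2048, 7, 7) shape, and the
# keyword arguments below are assumptions for illustration; the uuid must
# match the ResNet preprocessor used in the experiment config.
observation_space = SpaceDict(
    {
        "rgb_resnet": gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(2048, 7, 7), dtype=np.float32
        ),
        "target_coordinates_ind": gym.spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32
        ),
    }
)

model = ResnetTensorPointNavActorCritic(
    action_space=gym.spaces.Discrete(4),
    observation_space=observation_space,
    goal_sensor_uuid="target_coordinates_ind",
    rgb_resnet_preprocessor_uuid="rgb_resnet",
    hidden_size=512,
)
```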
ResnetTensorPointNavActorCritic.is_blind
| @property
| is_blind() -> bool
True if the model is blind (i.e. neither 'depth' nor 'rgb' is an input observation type).
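A minimal sketch of the documented semantics, assuming the observation space is a gym SpaceDict keyed by sensor uuid; the helper name appears_blind is hypothetical, and the actual property is accessed as model.is_blind.

```python
from gym.spaces import Dict as SpaceDict


def appears_blind(observation_space: SpaceDict) -> bool:
    # Illustrative restatement of the documented semantics: the model is
    # blind when neither a "depth" nor an "rgb" observation is among its
    # input observation types. The uuid matching used internally may differ.
    uuids = set(observation_space.spaces.keys())
    return "rgb" not in uuids and "depth" not in uuids
```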
ResnetTensorGoalEncoder
class ResnetTensorGoalEncoder(nn.Module)
ResnetTensorGoalEncoder.get_object_type_encoding
| get_object_type_encoding(observations: Dict[str, torch.FloatTensor]) -> torch.FloatTensor
Get the object type encoding from input batched observations.
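A hedged call sketch follows. The encoder instance, the goal sensor uuid "target_coordinates_ind", and the batch shape are all assumptions for illustration; the docstring wording is shared with the ObjectNav encoders, while for PointNav the goal sensor typically provides target coordinates.

```python
import torch

# Sketch only: `encoder` is assumed to be an already constructed
# ResnetTensorGoalEncoder, and "target_coordinates_ind" is a hypothetical
# goal sensor uuid; both are assumptions for illustration.
batch_size = 4
observations = {
    "target_coordinates_ind": torch.zeros(batch_size, 2, dtype=torch.float32),
}
# goal_encoding = encoder.get_object_type_encoding(observations)
# `goal_encoding` is a torch.FloatTensor with one goal encoding per batch element.
```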
ResnetDualTensorGoalEncoder
class ResnetDualTensorGoalEncoder(nn.Module)
ResnetDualTensorGoalEncoder.get_object_type_encoding
| get_object_type_encoding(observations: Dict[str, torch.FloatTensor]) -> torch.FloatTensor
Get the object type encoding from input batched observations.
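The dual-tensor variant consumes both an rgb and a depth ResNet tensor. The sketch below only illustrates the batched observation dict such an encoder would typically see; the uuids, tensor shapes, and the already constructed dual_encoder are assumptions for illustration.

```python
import torch

# Sketch only: the uuids and tensor shapes below are assumptions for
# illustration, and `dual_encoder` is assumed to be an already constructed
# ResnetDualTensorGoalEncoder.
batch_size = 4
observations = {
    "rgb_resnet": torch.zeros(batch_size, 2048, 7, 7),
    "depth_resnet": torch.zeros(batch_size, 2048, 7, 7),
    "target_coordinates_ind": torch.zeros(batch_size, 2),
}
# goal_encoding = dual_encoder.get_object_type_encoding(observations)
# `goal_encoding` is a torch.FloatTensor with one goal encoding per batch element.
```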