# Tutorial: Swapping in a new environment
**Note:** The paths provided in this tutorial assume you have installed the full library.
## Introduction
This tutorial is a continuation of the RoboTHOR PointNav tutorial and explains how to modify the experiment config created in that tutorial to work with the iTHOR and Habitat environments.
Cross-platform support is one of the key design goals of AllenAct. This is achieved by fully decoupling the environment code from the engine, model, and algorithm code, so that swapping in a new environment is as plug-and-play as possible. Crucially, we will be able to run a model in different environments without touching the model code at all, which will allow us to train neural networks in one environment and test them in another.
## RoboTHOR to iTHOR
Since the RoboTHOR and iTHOR environments stem from the same family and are developed by the same organization, switching between the two is straightforward. We only have to change the dataset path parameters to point to an iTHOR dataset rather than a RoboTHOR one:
```python
# Dataset Parameters
TRAIN_DATASET_DIR = "datasets/ithor-pointnav/train"
VAL_DATASET_DIR = "datasets/ithor-pointnav/val"
```
We also have to download the iTHOR PointNav dataset, following these instructions.
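As a quick, optional check (our own addition, not part of the original tutorial), we can verify that the dataset directories exist before launching training:

```python
import os

# Optional sanity check: confirm the iTHOR PointNav dataset was downloaded
# to the expected locations before launching training.
for path in (TRAIN_DATASET_DIR, VAL_DATASET_DIR):
    assert os.path.isdir(path), f"Missing dataset directory: {path}"
```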
We might also want to modify the tag method to accurately reflect our config, but this will not change the behavior at all; it is merely a bookkeeping convenience.
```python
@classmethod
def tag(cls):
    return "PointNaviThorRGBPPO"
```
## RoboTHOR to Habitat
To train experiments using the Habitat framework, we first need to install it by following these instructions.
Since the RoboTHOR and Habitat simulators are quite different and expose different configuration parameters, this transformation takes a bit more effort, but we still only need to modify the environment config and the `TaskSampler`. We have to change the former because the Habitat simulator accepts a different configuration format, and the latter because the Habitat dataset is formatted differently and thus needs to be parsed differently.
As part of our environment modification, we need to switch from RoboTHOR sensors to Habitat sensors. The sensor implementations we provide offer a uniform interface across all environments, so we simply have to swap out our sensor classes:
```python
SENSORS = [
    DepthSensorHabitat(
        height=SCREEN_SIZE,
        width=SCREEN_SIZE,
        use_normalization=True,
    ),
    TargetCoordinatesSensorHabitat(coordinate_dims=2),
]
```
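This swap works because every sensor, regardless of environment, publishes its output under a fixed uuid and implements the same `get_observation` interface, so the model consumes observations identically everywhere. Below is a minimal, illustrative sketch of such a sensor; the class name and the `current_depth_frame` accessor are hypothetical, not part of the library:

```python
import gym
import numpy as np

from allenact.base_abstractions.sensor import Sensor


class MyDepthSensor(Sensor):  # hypothetical sensor, for illustration only
    def __init__(self, height: int, width: int, uuid: str = "depth"):
        observation_space = gym.spaces.Box(
            low=0.0, high=5.0, shape=(height, width, 1), dtype=np.float32
        )
        super().__init__(uuid=uuid, observation_space=observation_space)

    def get_observation(self, env, task, *args, **kwargs) -> np.ndarray:
        # Only this method is environment-specific; the model never sees
        # anything but the (uuid -> array) observation mapping.
        return env.current_depth_frame()  # hypothetical env accessor
```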
Next we need to define the simulator config:
```python
CONFIG = get_habitat_config("configs/gibson.yaml")
CONFIG.defrost()
CONFIG.NUM_PROCESSES = NUM_PROCESSES
CONFIG.SIMULATOR_GPU_IDS = TRAIN_GPUS
CONFIG.DATASET.SCENES_DIR = HABITAT_SCENE_DATASETS_DIR
CONFIG.DATASET.POINTNAVV1.CONTENT_SCENES = ["*"]
CONFIG.DATASET.DATA_PATH = TRAIN_SCENES
CONFIG.SIMULATOR.AGENT_0.SENSORS = ["RGB_SENSOR"]
CONFIG.SIMULATOR.RGB_SENSOR.WIDTH = CAMERA_WIDTH
CONFIG.SIMULATOR.RGB_SENSOR.HEIGHT = CAMERA_HEIGHT
CONFIG.SIMULATOR.TURN_ANGLE = 30
CONFIG.SIMULATOR.FORWARD_STEP_SIZE = 0.25
CONFIG.ENVIRONMENT.MAX_EPISODE_STEPS = MAX_STEPS
CONFIG.TASK.TYPE = "Nav-v0"
CONFIG.TASK.SUCCESS_DISTANCE = 0.2
CONFIG.TASK.SENSORS = ["POINTGOAL_WITH_GPS_COMPASS_SENSOR"]
CONFIG.TASK.POINTGOAL_WITH_GPS_COMPASS_SENSOR.GOAL_FORMAT = "POLAR"
CONFIG.TASK.POINTGOAL_WITH_GPS_COMPASS_SENSOR.DIMENSIONALITY = 2
CONFIG.TASK.GOAL_SENSOR_UUID = "pointgoal_with_gps_compass"
CONFIG.TASK.MEASUREMENTS = ["DISTANCE_TO_GOAL", "SUCCESS", "SPL"]
CONFIG.TASK.SPL.TYPE = "SPL"
CONFIG.TASK.SPL.SUCCESS_DISTANCE = 0.2
CONFIG.TASK.SUCCESS.SUCCESS_DISTANCE = 0.2
CONFIG.MODE = "train"
```
This `CONFIG` object holds values very similar to those `ENV_ARGS` held in the RoboTHOR example. We decided to leave this way of passing in configurations exposed to the user, to offer maximum customization of the underlying environment.
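Note that the task sampler code below references a `TRAIN_CONFIGS_PER_PROCESS` attribute built elsewhere in the config. As a hedged sketch (our own illustration; `ALL_TRAIN_SCENES` is an assumed list of scene names), such per-process configs might be built by cloning the shared `CONFIG` and giving each process its own subset of scenes:

```python
# Illustrative only: build one config per training process by cloning the
# shared CONFIG and assigning each process a disjoint slice of the scenes.
TRAIN_CONFIGS_PER_PROCESS = []
for process_ind in range(NUM_PROCESSES):
    config = CONFIG.clone()
    config.defrost()
    # Round-robin split of scene names across processes (assumed scheme).
    config.DATASET.POINTNAVV1.CONTENT_SCENES = ALL_TRAIN_SCENES[
        process_ind::NUM_PROCESSES
    ]
    config.freeze()
    TRAIN_CONFIGS_PER_PROCESS.append(config)
```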
Finally, we need to replace the task sampler and its argument-generating functions:
```python
# Define Task Sampler
# Imports needed by these methods (module paths per the allenact and
# allenact_plugins packages):
from typing import Any, Dict, List, Optional

import gym

from allenact.base_abstractions.task import TaskSampler
from allenact_plugins.habitat_plugin.habitat_task_samplers import PointNavTaskSampler
from allenact_plugins.habitat_plugin.habitat_tasks import PointNavTask


# The methods below live inside the experiment config class.
@classmethod
def make_sampler_fn(cls, **kwargs) -> TaskSampler:
    return PointNavTaskSampler(**kwargs)

def train_task_sampler_args(
    self,
    process_ind: int,
    total_processes: int,
    devices: Optional[List[int]] = None,
    seeds: Optional[List[int]] = None,
    deterministic_cudnn: bool = False,
) -> Dict[str, Any]:
    config = self.TRAIN_CONFIGS_PER_PROCESS[process_ind]
    return {
        "env_config": config,
        "max_steps": self.MAX_STEPS,
        "sensors": self.SENSORS,
        "action_space": gym.spaces.Discrete(len(PointNavTask.action_names())),
        "distance_to_goal": self.DISTANCE_TO_GOAL,
    }

def valid_task_sampler_args(
    self,
    process_ind: int,
    total_processes: int,
    devices: Optional[List[int]] = None,
    seeds: Optional[List[int]] = None,
    deterministic_cudnn: bool = False,
) -> Dict[str, Any]:
    # Validation uses its own scene file and runs the simulator in
    # "validate" mode.
    config = self.CONFIG.clone()
    config.defrost()
    config.DATASET.DATA_PATH = self.VALID_SCENES_PATH
    config.MODE = "validate"
    config.freeze()
    return {
        "env_config": config,
        "max_steps": self.MAX_STEPS,
        "sensors": self.SENSORS,
        "action_space": gym.spaces.Discrete(len(PointNavTask.action_names())),
        "distance_to_goal": self.DISTANCE_TO_GOAL,
    }

def test_task_sampler_args(
    self,
    process_ind: int,
    total_processes: int,
    devices: Optional[List[int]] = None,
    seeds: Optional[List[int]] = None,
    deterministic_cudnn: bool = False,
) -> Dict[str, Any]:
    config = self.TEST_CONFIGS[process_ind]
    return {
        "env_config": config,
        "max_steps": self.MAX_STEPS,
        "sensors": self.SENSORS,
        "action_space": gym.spaces.Discrete(len(PointNavTask.action_names())),
        "distance_to_goal": self.DISTANCE_TO_GOAL,
    }
```
As we can see, this code looks very similar as well; we simply need to pass slightly different parameters.
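As a quick, hypothetical sanity check (our own sketch; `PointNavHabitatExperimentConfig` stands in for whatever the final config class is named), we could build a sampler for process 0 and pull a single task:

```python
# Illustrative smoke test: instantiate the experiment config, construct a
# training task sampler for process 0 of 1, and fetch one task's observations.
exp_config = PointNavHabitatExperimentConfig()  # assumed class name
sampler = exp_config.make_sampler_fn(
    **exp_config.train_task_sampler_args(process_ind=0, total_processes=1)
)
task = sampler.next_task()
observations = task.get_observations()  # dict keyed by sensor uuid
```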
## Conclusion
In this tutorial, we learned how to modify our experiment configurations to work with different environments. By providing a high level of modularity and out-of-the-box support for both Habitat and THOR, two of the most popular embodied-AI frameworks, AllenAct hopes to give researchers the ability to validate their results across many platforms and to help guide them towards genuine progress. The source code for this tutorial can be found in `/projects/framework_transfer_tutorial`.