Downward Lab tutorial
This tutorial shows you how to install Downward Lab and how to create a simple experiment for Fast Downward that compares two heuristics, the causal graph (CG) heuristic and the FF heuristic. There are many ways to set up your experiments. This tutorial presents one opinionated approach that has proven to work well in practice.
Note
During ICAPS 2020, we gave an online Downward Lab presentation (version 6.2). The second half of the presentation covers this tutorial and you can find the recording here.
Installation
Lab requires Python 3.6+ and Linux. To run Fast Downward experiments, you’ll need a Fast Downward repository, planning benchmarks and a plan validator.
# Install required packages.
sudo apt install bison cmake flex g++ git make python3 python3-venv
# Create directory for holding binaries and scripts.
mkdir --parents ~/bin
# Make directory for all projects related to Fast Downward.
mkdir downward-projects
cd downward-projects
# Install the plan validator VAL.
git clone https://github.com/KCL-Planning/VAL.git
cd VAL
# Newer VAL versions need time stamps, so we use an old version
# (https://github.com/KCL-Planning/VAL/issues/46).
git checkout a556539
make clean # Remove old binaries.
sed -i 's/-Werror //g' Makefile # Ignore warnings.
make
cp validate ~/bin/  # Add binary to a directory on your $PATH.
# Return to projects directory.
cd ../
# Download planning tasks.
git clone https://github.com/aibasel/downward-benchmarks.git benchmarks
# Clone Fast Downward and let it solve an example task.
git clone https://github.com/aibasel/downward.git
cd downward
./build.py
./fast-downward.py ../benchmarks/grid/prob01.pddl --search "astar(lmcut())"
If Fast Downward doesn’t compile, see
http://www.fast-downward.org/ObtainingAndRunningFastDownward and
http://www.fast-downward.org/LPBuildInstructions. We now create a new
directory for our CG-vs-FF project. By putting it into the Fast Downward
repo under experiments/, it’s easy to share both the code and experiment
scripts with your collaborators.
# Create new branch.
git checkout -b cg-vs-ff main
# Create a new directory for your experiments in Fast Downward repo.
cd experiments
mkdir cg-vs-ff
cd cg-vs-ff
Now it’s time to install Lab. We install it in a Python virtual environment specific to the cg-vs-ff project. This has the advantage that there are no modifications to the system-wide configuration, and that you can have multiple projects with different Lab versions (e.g., for different papers).
# Create and activate a Python 3 virtual environment for Lab.
python3 -m venv --prompt cg-vs-ff .venv
source .venv/bin/activate
# Install Lab in the virtual environment.
pip install -U pip wheel # It's good to have new versions of these.
pip install lab # or preferably a specific version with lab==x.y
# Store installed packages and exact versions for reproducibility.
# Ignore pkg-resources package (https://github.com/pypa/pip/issues/4022).
pip freeze | grep -v "pkg-resources" > requirements.txt
git add requirements.txt
git commit -m "Store requirements for experiments."
To use the same versions of your requirements on a different computer, use pip install -r requirements.txt instead of the pip install commands above.
Add to your ~/.bashrc file:
# Make executables in ~/bin directory available globally.
export PATH="${PATH}:${HOME}/bin"
# Some example experiments need these two environment variables.
export DOWNWARD_BENCHMARKS=/path/to/downward-projects/benchmarks # Adapt path
export DOWNWARD_REPO=/path/to/downward-projects/downward # Adapt path
Add to your ~/.bash_aliases file:
# Activate virtualenv and unset PYTHONPATH to obtain isolated virtual environments.
alias venv="unset PYTHONPATH; source .venv/bin/activate"
Finally, reload .bashrc (which usually also reloads ~/.bash_aliases):
source ~/.bashrc
You can now activate the virtual environment by running venv in any directory that contains a .venv subdirectory.
Run tutorial experiment
The files below are an experiment script for the example experiment, a
project.py module that bundles common functionality for all experiments
related to the project, a parser script, and a script for collecting
results and making reports. You can use the files as a basis for your
own experiments. They are available in the Lab repo. Copy the files into
experiments/cg-vs-ff.
Make sure the experiment script and the parser are executable. Then you can see the available steps with
./2020-09-11-A-cg-vs-ff.py
Run all steps with
./2020-09-11-A-cg-vs-ff.py --all
Run individual steps with
./2020-09-11-A-cg-vs-ff.py build
./2020-09-11-A-cg-vs-ff.py 2
./2020-09-11-A-cg-vs-ff.py 3 6 7
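Lab exposes each call to exp.add_step(...) as a named step that the command line selects either by name or by 1-based number, and --all runs every step in order. The selection idea can be sketched in plain Python (a toy model only, not Lab's real implementation; Step and run_selected are invented names):

```python
# Toy model of Lab's step selection (invented names, not Lab's code).
# A step is a named callable; the command line picks steps by name or
# by 1-based position, and "--all" runs every step in order.

class Step:
    def __init__(self, name, function, *args):
        self.name = name
        self.function = function
        self.args = args

    def run(self):
        self.function(*self.args)


def run_selected(steps, selectors):
    """Run the steps matching *selectors* (names or 1-based numbers)."""
    for selector in selectors:
        if selector == "--all":
            for step in steps:
                step.run()
        elif selector.isdigit():
            steps[int(selector) - 1].run()
        else:
            next(step for step in steps if step.name == selector).run()


log = []
steps = [
    Step("build", log.append, "built"),
    Step("start", log.append, "started"),
    Step("fetch", log.append, "fetched"),
]
# Mixing names and numbers, like "./2020-09-11-A-cg-vs-ff.py build 2":
run_selected(steps, ["build", "2"])
print(log)  # ['built', 'started']
```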
#! /usr/bin/env python

import os
import shutil

import project


REPO = project.get_repo_base()
BENCHMARKS_DIR = os.environ["DOWNWARD_BENCHMARKS"]
SCP_LOGIN = "myname@myserver.com"
REMOTE_REPOS_DIR = "/infai/seipp/projects"
if project.REMOTE:
    SUITE = project.SUITE_SATISFICING
    ENV = project.BaselSlurmEnvironment(email="my.name@myhost.ch")
else:
    SUITE = ["depot:p01.pddl", "grid:prob01.pddl", "gripper:prob01.pddl"]
    ENV = project.LocalEnvironment(processes=2)
CONFIGS = [
    (f"{index:02d}-{h_nick}", ["--search", f"eager_greedy([{h}])"])
    for index, (h_nick, h) in enumerate(
        [
            ("cg", "cg(transform=adapt_costs(one))"),
            ("ff", "ff(transform=adapt_costs(one))"),
        ],
        start=1,
    )
]
BUILD_OPTIONS = []
DRIVER_OPTIONS = ["--overall-time-limit", "5m"]
REVS = [
    ("release-20.06.0", "20.06"),
]
ATTRIBUTES = [
    "error",
    "run_dir",
    "search_start_time",
    "search_start_memory",
    "total_time",
    "h_values",
    "coverage",
    "expansions",
    "memory",
    project.EVALUATIONS_PER_TIME,
]

exp = project.FastDownwardExperiment(environment=ENV)
for config_nick, config in CONFIGS:
    for rev, rev_nick in REVS:
        algo_name = f"{rev_nick}:{config_nick}" if rev_nick else config_nick
        exp.add_algorithm(
            algo_name,
            REPO,
            rev,
            config,
            build_options=BUILD_OPTIONS,
            driver_options=DRIVER_OPTIONS,
        )
exp.add_suite(BENCHMARKS_DIR, SUITE)

exp.add_parser(exp.EXITCODE_PARSER)
exp.add_parser(exp.TRANSLATOR_PARSER)
exp.add_parser(exp.SINGLE_SEARCH_PARSER)
exp.add_parser(project.DIR / "parser.py")
exp.add_parser(exp.PLANNER_PARSER)

exp.add_step("build", exp.build)
exp.add_step("start", exp.start_runs)
exp.add_fetcher(name="fetch")

if not project.REMOTE:
    exp.add_step("remove-eval-dir", shutil.rmtree, exp.eval_dir, ignore_errors=True)
    project.add_scp_step(exp, SCP_LOGIN, REMOTE_REPOS_DIR)

project.add_absolute_report(
    exp, attributes=ATTRIBUTES, filter=[project.add_evaluations_per_time]
)

attributes = ["expansions"]
pairs = [
    ("20.06:01-cg", "20.06:02-ff"),
]
suffix = "-rel" if project.RELATIVE else ""
for algo1, algo2 in pairs:
    for attr in attributes:
        exp.add_report(
            project.ScatterPlotReport(
                relative=project.RELATIVE,
                get_category=None if project.TEX else lambda run1, run2: run1["domain"],
                attributes=[attr],
                filter_algorithm=[algo1, algo2],
                filter=[project.add_evaluations_per_time],
                format="tex" if project.TEX else "png",
            ),
            name=f"{exp.name}-{algo1}-vs-{algo2}-{attr}{suffix}",
        )

exp.run_steps()
from pathlib import Path
import platform
import subprocess
import sys

from downward.experiment import FastDownwardExperiment
from downward.reports.absolute import AbsoluteReport
from downward.reports.scatter import ScatterPlotReport
from downward.reports.taskwise import TaskwiseReport
from lab import tools
from lab.environments import (
    BaselSlurmEnvironment,
    LocalEnvironment,
    TetralithEnvironment,
)
from lab.experiment import ARGPARSER
from lab.reports import Attribute, geometric_mean


# Silence import-unused messages. Experiment scripts may use these imports.
assert (
    BaselSlurmEnvironment
    and FastDownwardExperiment
    and LocalEnvironment
    and ScatterPlotReport
    and TaskwiseReport
    and TetralithEnvironment
)


DIR = Path(__file__).resolve().parent
NODE = platform.node()
REMOTE = NODE.endswith((".scicore.unibas.ch", ".cluster.bc2.ch"))


def parse_args():
    ARGPARSER.add_argument("--tex", action="store_true", help="produce LaTeX output")
    ARGPARSER.add_argument(
        "--relative", action="store_true", help="make relative scatter plots"
    )
    return ARGPARSER.parse_args()


ARGS = parse_args()
TEX = ARGS.tex
RELATIVE = ARGS.relative

EVALUATIONS_PER_TIME = Attribute(
    "evaluations_per_time", min_wins=False, function=geometric_mean, digits=1
)

# Generated by "./suites.py satisficing" in aibasel/downward-benchmarks repo.
# fmt: off
SUITE_SATISFICING = [
    "agricola-sat18-strips", "airport", "assembly", "barman-sat11-strips",
    "barman-sat14-strips", "blocks", "caldera-sat18-adl",
    "caldera-split-sat18-adl", "cavediving-14-adl", "childsnack-sat14-strips",
    "citycar-sat14-adl", "data-network-sat18-strips", "depot", "driverlog",
    "elevators-sat08-strips", "elevators-sat11-strips", "flashfill-sat18-adl",
    "floortile-sat11-strips", "floortile-sat14-strips", "freecell",
    "ged-sat14-strips", "grid", "gripper", "hiking-sat14-strips",
    "logistics00", "logistics98", "maintenance-sat14-adl", "miconic",
    "miconic-fulladl", "miconic-simpleadl", "movie", "mprime", "mystery",
    "nomystery-sat11-strips", "nurikabe-sat18-adl", "openstacks",
    "openstacks-sat08-adl", "openstacks-sat08-strips",
    "openstacks-sat11-strips", "openstacks-sat14-strips", "openstacks-strips",
    "optical-telegraphs", "organic-synthesis-sat18-strips",
    "organic-synthesis-split-sat18-strips", "parcprinter-08-strips",
    "parcprinter-sat11-strips", "parking-sat11-strips", "parking-sat14-strips",
    "pathways", "pegsol-08-strips", "pegsol-sat11-strips", "philosophers",
    "pipesworld-notankage", "pipesworld-tankage", "psr-large", "psr-middle",
    "psr-small", "rovers", "satellite", "scanalyzer-08-strips",
    "scanalyzer-sat11-strips", "schedule", "settlers-sat18-adl",
    "snake-sat18-strips", "sokoban-sat08-strips", "sokoban-sat11-strips",
    "spider-sat18-strips", "storage", "termes-sat18-strips",
    "tetris-sat14-strips", "thoughtful-sat14-strips", "tidybot-sat11-strips",
    "tpp", "transport-sat08-strips", "transport-sat11-strips",
    "transport-sat14-strips", "trucks", "trucks-strips",
    "visitall-sat11-strips", "visitall-sat14-strips",
    "woodworking-sat08-strips", "woodworking-sat11-strips", "zenotravel",
]
# fmt: on


def get_repo_base() -> Path:
    """Get base directory of the repository, as an absolute path.

    Search upwards in the directory tree from the main script until a
    directory with a subdirectory named ".git" is found.

    Abort if the repo base cannot be found."""
    path = Path(tools.get_script_path())
    while path.parent != path:
        if (path / ".git").is_dir():
            return path
        path = path.parent
    sys.exit("repo base could not be found")


def remove_file(path: Path):
    try:
        path.unlink()
    except FileNotFoundError:
        pass


def add_evaluations_per_time(run):
    evaluations = run.get("evaluations")
    time = run.get("search_time")
    if evaluations is not None and evaluations >= 100 and time:
        run["evaluations_per_time"] = evaluations / time
    return run


def _get_exp_dir_relative_to_repo():
    repo_name = get_repo_base().name
    script = Path(tools.get_script_path())
    script_dir = script.parent
    rel_script_dir = script_dir.relative_to(get_repo_base())
    expname = script.stem
    return repo_name / rel_script_dir / "data" / expname


def add_scp_step(exp, login, repos_dir):
    remote_exp = Path(repos_dir) / _get_exp_dir_relative_to_repo()
    exp.add_step(
        "scp-eval-dir",
        subprocess.call,
        [
            "scp",
            "-r",  # Copy recursively.
            "-C",  # Compress files.
            f"{login}:{remote_exp}-eval",
            f"{exp.path}-eval",
        ],
    )


def fetch_algorithm(exp, expname, algo, *, new_algo=None):
    """Fetch (and possibly rename) a single algorithm from *expname*."""
    new_algo = new_algo or algo

    def rename_and_filter(run):
        if run["algorithm"] == algo:
            run["algorithm"] = new_algo
            run["id"][0] = new_algo
            return run
        return False

    exp.add_fetcher(
        f"data/{expname}-eval",
        filter=rename_and_filter,
        name=f"fetch-{new_algo}-from-{expname}",
        merge=True,
    )


def add_absolute_report(exp, *, name=None, outfile=None, **kwargs):
    report = AbsoluteReport(**kwargs)
    if name and not outfile:
        outfile = f"{name}.{report.output_format}"
    elif outfile and not name:
        name = Path(outfile).name
    elif not name and not outfile:
        name = f"{exp.name}-abs"
        outfile = f"{name}.{report.output_format}"
    if not Path(outfile).is_absolute():
        outfile = Path(exp.eval_dir) / outfile
    exp.add_report(report, name=name, outfile=outfile)
    if not REMOTE:
        exp.add_step(f"open-{name}", subprocess.call, ["xdg-open", outfile])
    exp.add_step(f"publish-{name}", subprocess.call, ["publish", outfile])
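The add_evaluations_per_time filter in project.py receives one run's property dictionary at report time; any key it adds becomes a report attribute. Reproduced here for a quick standalone check (the run dictionaries are invented):

```python
# Standalone check of the add_evaluations_per_time filter from
# project.py. A report filter gets one run's property dictionary;
# keys it adds become attributes that reports can show.

def add_evaluations_per_time(run):
    evaluations = run.get("evaluations")
    time = run.get("search_time")
    # Only derive the attribute for runs with enough evaluations
    # and a nonzero search time.
    if evaluations is not None and evaluations >= 100 and time:
        run["evaluations_per_time"] = evaluations / time
    return run


run = add_evaluations_per_time({"evaluations": 5000, "search_time": 2.5})
print(run["evaluations_per_time"])  # 2000.0

# Runs with too few evaluations are left unchanged.
run = add_evaluations_per_time({"evaluations": 50, "search_time": 0.1})
print("evaluations_per_time" in run)  # False
```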
#! /usr/bin/env python

import logging
import re

from lab.parser import Parser


class CommonParser(Parser):
    def add_repeated_pattern(
        self, name, regex, file="run.log", required=False, type=int
    ):
        def find_all_occurences(content, props):
            matches = re.findall(regex, content)
            if required and not matches:
                logging.error(f"Pattern {regex} not found in file {file}")
            props[name] = [type(m) for m in matches]

        self.add_function(find_all_occurences, file=file)

    def add_bottom_up_pattern(
        self, name, regex, file="run.log", required=False, type=int
    ):
        def search_from_bottom(content, props):
            reversed_content = "\n".join(reversed(content.splitlines()))
            match = re.search(regex, reversed_content)
            if required and not match:
                logging.error(f"Pattern {regex} not found in file {file}")
            if match:
                props[name] = type(match.group(1))

        self.add_function(search_from_bottom, file=file)


def main():
    parser = CommonParser()
    parser.add_bottom_up_pattern(
        "search_start_time",
        r"\[t=(.+)s, \d+ KB\] g=0, 1 evaluated, 0 expanded",
        type=float,
    )
    parser.add_bottom_up_pattern(
        "search_start_memory",
        r"\[t=.+s, (\d+) KB\] g=0, 1 evaluated, 0 expanded",
        type=int,
    )
    parser.add_pattern(
        "initial_h_value",
        r"f = (\d+) \[1 evaluated, 0 expanded, t=.+s, \d+ KB\]",
        type=int,
    )
    parser.add_repeated_pattern(
        "h_values",
        r"New best heuristic value for .+: (\d+)\n",
        type=int,
    )
    parser.parse()


if __name__ == "__main__":
    main()
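The add_bottom_up_pattern method above reverses the log's lines before searching, so the match comes from the last occurrence of the pattern rather than the first. A standalone illustration of that trick (the log excerpt is invented, shaped to match the search_start_time regex used above):

```python
import re

# Standalone illustration of the bottom-up search used by the parser:
# reversing the log's lines makes re.search hit the *last* occurrence
# of the pattern first. The log excerpt below is invented.
log = """\
[t=0.01s, 2040 KB] g=0, 1 evaluated, 0 expanded
[t=0.20s, 2590 KB] New best heuristic value for ff: 12
[t=0.50s, 3050 KB] g=0, 1 evaluated, 0 expanded
"""
regex = r"\[t=(.+)s, \d+ KB\] g=0, 1 evaluated, 0 expanded"

# A top-down search finds the first occurrence ...
print(float(re.search(regex, log).group(1)))  # 0.01
# ... while the bottom-up search finds the last one.
reversed_content = "\n".join(reversed(log.splitlines()))
print(float(re.search(regex, reversed_content).group(1)))  # 0.5
```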
#! /usr/bin/env python

from pathlib import Path

from lab.experiment import Experiment

import project


ATTRIBUTES = [
    "error",
    "run_dir",
    "planner_time",
    "initial_h_value",
    "coverage",
    "cost",
    "evaluations",
    "memory",
    project.EVALUATIONS_PER_TIME,
]

exp = Experiment()
exp.add_step(
    "remove-combined-properties", project.remove_file, Path(exp.eval_dir) / "properties"
)

project.fetch_algorithm(exp, "2020-09-11-A-cg-vs-ff", "20.06:01-cg", new_algo="cg")
project.fetch_algorithm(exp, "2020-09-11-A-cg-vs-ff", "20.06:02-ff", new_algo="ff")

filters = [project.add_evaluations_per_time]

project.add_absolute_report(
    exp, attributes=ATTRIBUTES, filter=filters, name=f"{exp.name}"
)

exp.run_steps()
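The fetch_algorithm helper in project.py relies on a filter that renames runs of the wanted algorithm and returns False for all other runs, which drops them from the fetched data. A standalone sketch of that idea (make_rename_filter and the run dictionaries are invented for illustration):

```python
# Standalone sketch of the rename-and-filter idea behind
# project.fetch_algorithm: runs of the wanted algorithm are renamed,
# every other run is dropped (a filter returning False discards a run).
# The run dictionaries below are invented.

def make_rename_filter(algo, new_algo):
    def rename_and_filter(run):
        if run["algorithm"] == algo:
            run["algorithm"] = new_algo
            run["id"][0] = new_algo
            return run
        return False
    return rename_and_filter


runs = [
    {"algorithm": "20.06:01-cg", "id": ["20.06:01-cg", "grid", "prob01"]},
    {"algorithm": "20.06:02-ff", "id": ["20.06:02-ff", "grid", "prob01"]},
]
rename = make_rename_filter("20.06:01-cg", "cg")
kept = [result for result in map(rename, runs) if result]
print([run["algorithm"] for run in kept])  # ['cg']
```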
The Downward Lab API shows you how to adjust this example to your needs. You may also find the other example experiments useful.