Downward Lab tutorial
This tutorial shows you how to install Downward Lab and how to create a simple experiment for Fast Downward that compares two heuristics: the causal graph (CG) heuristic and the FF heuristic. There are many ways to set up your experiments; this tutorial takes an opinionated approach that has proven to work well in practice.
Note
During ICAPS 2020, we gave an online Downward Lab presentation (version 6.2). The second half of the presentation covers this tutorial and you can find the recording here.
Installation
Lab requires Python 3 and Linux. To run Fast Downward experiments, you’ll need a Fast Downward repository, planning benchmarks and a plan validator.
# Install required packages.
sudo apt install bison cmake flex g++ git make python3 python3-venv
# Create directory for holding binaries and scripts.
mkdir --parents ~/bin
# Make directory for all projects related to Fast Downward.
mkdir downward-projects
cd downward-projects
# Install the plan validator VAL.
git clone https://github.com/KCL-Planning/VAL.git
cd VAL
# Newer VAL versions need time stamps, so we use an old version
# (https://github.com/KCL-Planning/VAL/issues/46).
git checkout a556539
make clean # Remove old binaries.
sed -i 's/-Werror //g' Makefile # Ignore warnings.
make
cp validate ~/bin/ # Add the binary to a directory on your $PATH.
# Return to projects directory.
cd ../
# Download planning tasks.
git clone https://github.com/aibasel/downward-benchmarks.git benchmarks
# Clone Fast Downward and let it solve an example task.
git clone https://github.com/aibasel/downward.git
cd downward
./build.py
./fast-downward.py ../benchmarks/grid/prob01.pddl --search "astar(lmcut())"
If Fast Downward doesn’t compile, see
http://www.fast-downward.org/ObtainingAndRunningFastDownward and
http://www.fast-downward.org/LPBuildInstructions.
We now create a new directory for our CG-vs-FF project. By putting it into the Fast Downward repo under experiments/, it’s easy to share both the code and the experiment scripts with your collaborators.
# Create new branch.
git checkout -b cg-vs-ff main
# Create a new directory for your experiments in Fast Downward repo.
cd experiments
mkdir cg-vs-ff
cd cg-vs-ff
Now it’s time to install Lab. We install it in a Python virtual environment specific to the cg-vs-ff project. This has the advantage that there are no modifications to the system-wide configuration, and that you can have multiple projects with different Lab versions (e.g., for different papers).
# Create and activate a Python 3 virtual environment for Lab.
python3 -m venv --prompt cg-vs-ff .venv
source .venv/bin/activate
# Install Lab in the virtual environment.
pip install -U pip wheel # It's good to have new versions of these.
pip install lab # or preferably a specific version with lab==x.y
# Store installed packages and exact versions for reproducibility.
# Ignore pkg-resources package (https://github.com/pypa/pip/issues/4022).
pip freeze | grep -v "pkg-resources" > requirements.txt
git add requirements.txt
git commit -m "Store requirements for experiments."
To use the same versions of your requirements on a different computer, run pip install -r requirements.txt instead of the pip install commands above.
Add the following to your ~/.bashrc file:
# Make executables in ~/bin directory available globally.
export PATH="${PATH}:${HOME}/bin"
# Some example experiments need these two environment variables.
export DOWNWARD_BENCHMARKS=/path/to/downward-projects/benchmarks # Adapt path
export DOWNWARD_REPO=/path/to/downward-projects/downward # Adapt path
Add the following to your ~/.bash_aliases file:
# Activate virtualenv and unset PYTHONPATH to obtain isolated virtual environments.
alias venv="unset PYTHONPATH; source .venv/bin/activate"
Finally, reload .bashrc (which usually also reloads ~/.bash_aliases):
source ~/.bashrc
You can now activate the virtual environment by running venv in any directory that contains a .venv subdirectory.
Run tutorial experiment
The files below are two experiment scripts, a project.py module that bundles common functionality for all experiments related to the project, a parser module, and a script for collecting results and making reports. You can use these files as a basis for your own experiments. They are available in the Lab repo. Copy the files into experiments/my-exp-dir.
Make sure the experiment script is executable. Then you can see the available steps with
./2020-09-11-A-cg-vs-ff.py
Run all steps with
./2020-09-11-A-cg-vs-ff.py --all
Run individual steps with
./2020-09-11-A-cg-vs-ff.py build
./2020-09-11-A-cg-vs-ff.py 2
./2020-09-11-A-cg-vs-ff.py 3 6 7
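As the commands above show, steps can be selected either by name or by their 1-based position. The following is a minimal, hypothetical model of that selection logic, not Lab's actual implementation, just to illustrate how a step list maps names and numbers to callables:

```python
# Hypothetical simplification: an experiment keeps an ordered list of
# (name, function) steps; a selector may be a 1-based index or a name.
steps = [
    ("build", lambda: "built"),
    ("start", lambda: "started"),
    ("parse", lambda: "parsed"),
]

def run_step(selector):
    if isinstance(selector, int):
        # Numeric selectors are 1-based, like "./exp.py 2".
        name, func = steps[selector - 1]
    else:
        # Otherwise look the step up by name, like "./exp.py build".
        name, func = next((n, f) for n, f in steps if n == selector)
    return func()

print(run_step(2))        # -> started
print(run_step("parse"))  # -> parsed
```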
The 2020-09-11-A-cg-vs-ff.py script uses the downward.experiment.FastDownwardExperiment class, which reduces the amount of code you need to write but assumes a rigid experiment structure: it only allows you to run each added algorithm on each added task, and individual runs cannot be customized. If you need more flexibility, you can use the lab.experiment.Experiment class instead and fill it with FastDownwardAlgorithm, FastDownwardRun, CachedFastDownwardRevision, and Task objects. The 2020-09-11-B-bounded-cost.py script shows an example. See the Downward Lab API for a reference on all Downward Lab classes.
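Both scripts pass filter functions to their reports. A Lab filter is a plain function that receives a run's property dictionary and returns it (possibly modified), or a false value to drop the run. As a minimal standalone sketch, mirroring add_evaluations_per_time from project.py:

```python
# Minimal sketch of a Lab report filter: a function that receives a
# run's property dictionary and returns it (possibly modified), or
# False to drop the run. Mirrors add_evaluations_per_time in project.py.
def add_evaluations_per_time(run):
    evaluations = run.get("evaluations")
    time = run.get("search_time")
    # Only compute the rate for runs with enough evaluations and nonzero time.
    if evaluations is not None and evaluations >= 100 and time:
        run["evaluations_per_time"] = evaluations / time
    return run


run = {"evaluations": 1000, "search_time": 2.0}
run = add_evaluations_per_time(run)
print(run["evaluations_per_time"])  # -> 500.0
```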
#! /usr/bin/env python

import os

import custom_parser
import project

REPO = project.get_repo_base()
BENCHMARKS_DIR = os.environ["DOWNWARD_BENCHMARKS"]
SCP_LOGIN = "myname@myserver.com"
REMOTE_REPOS_DIR = "/infai/username/projects"
# If REVISION_CACHE is None, the default "./data/revision-cache/" is used.
REVISION_CACHE = os.environ.get("DOWNWARD_REVISION_CACHE")
if project.REMOTE:
    SUITE = project.SUITE_SATISFICING
    ENV = project.BaselSlurmEnvironment(email="my.name@myhost.ch")
else:
    SUITE = ["depot:p01.pddl", "grid:prob01.pddl", "gripper:prob01.pddl"]
    ENV = project.LocalEnvironment(processes=2)
CONFIGS = [
    (f"{index:02d}-{h_nick}", ["--search", f"eager_greedy([{h}])"])
    for index, (h_nick, h) in enumerate(
        [
            ("cg", "cg(transform=adapt_costs(one))"),
            ("ff", "ff(transform=adapt_costs(one))"),
        ],
        start=1,
    )
]
BUILD_OPTIONS = []
DRIVER_OPTIONS = ["--overall-time-limit", "5m"]
REV_NICKS = [
    ("main", ""),
]
ATTRIBUTES = [
    "error",
    "run_dir",
    "search_start_time",
    "search_start_memory",
    "total_time",
    "h_values",
    "coverage",
    "expansions",
    "memory",
    project.EVALUATIONS_PER_TIME,
]

exp = project.FastDownwardExperiment(environment=ENV, revision_cache=REVISION_CACHE)
for config_nick, config in CONFIGS:
    for rev, rev_nick in REV_NICKS:
        algo_name = f"{rev_nick}:{config_nick}" if rev_nick else config_nick
        exp.add_algorithm(
            algo_name,
            REPO,
            rev,
            config,
            build_options=BUILD_OPTIONS,
            driver_options=DRIVER_OPTIONS,
        )
exp.add_suite(BENCHMARKS_DIR, SUITE)

exp.add_parser(exp.EXITCODE_PARSER)
exp.add_parser(exp.TRANSLATOR_PARSER)
exp.add_parser(exp.SINGLE_SEARCH_PARSER)
exp.add_parser(custom_parser.get_parser())
exp.add_parser(exp.PLANNER_PARSER)

exp.add_step("build", exp.build)
exp.add_step("start", exp.start_runs)
exp.add_step("parse", exp.parse)
exp.add_fetcher(name="fetch")

project.add_absolute_report(
    exp, attributes=ATTRIBUTES, filter=[project.add_evaluations_per_time]
)

if not project.REMOTE:
    project.add_scp_step(exp, SCP_LOGIN, REMOTE_REPOS_DIR)

attributes = ["expansions"]
pairs = [
    ("01-cg", "02-ff"),
]
suffix = "-rel" if project.RELATIVE else ""
for algo1, algo2 in pairs:
    for attr in attributes:
        exp.add_report(
            project.ScatterPlotReport(
                relative=project.RELATIVE,
                get_category=None if project.TEX else lambda run1, run2: run1["domain"],
                attributes=[attr],
                filter_algorithm=[algo1, algo2],
                filter=[project.add_evaluations_per_time],
                format="tex" if project.TEX else "png",
            ),
            name=f"{exp.name}-{algo1}-vs-{algo2}-{attr}{suffix}",
        )

project.add_compress_exp_dir_step(exp)

exp.run_steps()
#! /usr/bin/env python

import os

import custom_parser
import project

from downward import suites
from downward.cached_revision import CachedFastDownwardRevision
from downward.experiment import FastDownwardAlgorithm, FastDownwardRun
from lab.experiment import Experiment

REPO = project.get_repo_base()
BENCHMARKS_DIR = os.environ["DOWNWARD_BENCHMARKS"]
SCP_LOGIN = "myname@myserver.com"
REMOTE_REPOS_DIR = "/infai/username/projects"
SUITE = ["depot:p01.pddl", "grid:prob01.pddl", "gripper:prob01.pddl"]
REVISION_CACHE = (
    os.environ.get("DOWNWARD_REVISION_CACHE") or project.DIR / "data" / "revision-cache"
)
if project.REMOTE:
    # ENV = project.BaselSlurmEnvironment(email="my.name@myhost.ch")
    ENV = project.TetralithEnvironment(
        memory_per_cpu="9G",  # leave some space for the scripts
        email="first.last@liu.se",
        extra_options="#SBATCH --account=naiss2024-5-421",
    )
    SUITE = project.SUITE_OPTIMAL_STRIPS
else:
    ENV = project.LocalEnvironment(processes=2)
CONFIGS = [
    ("ff", ["--search", "lazy_greedy([ff()], bound=100)"]),
]
BUILD_OPTIONS = []
DRIVER_OPTIONS = [
    "--validate",
    "--overall-time-limit",
    "5m",
    "--overall-memory-limit",
    "8G",
]
# Pairs of revision identifier and optional revision nick.
REV_NICKS = [
    ("main", ""),
]
ATTRIBUTES = [
    "error",
    "run_dir",
    "search_start_time",
    "search_start_memory",
    "total_time",
    "h_values",
    "coverage",
    "expansions",
    "memory",
    project.EVALUATIONS_PER_TIME,
]

exp = Experiment(environment=ENV)
for rev, rev_nick in REV_NICKS:
    cached_rev = CachedFastDownwardRevision(REVISION_CACHE, REPO, rev, BUILD_OPTIONS)
    cached_rev.cache()
    exp.add_resource("", cached_rev.path, cached_rev.get_relative_exp_path())
    for config_nick, config in CONFIGS:
        algo_name = f"{rev_nick}-{config_nick}" if rev_nick else config_nick
        for task in suites.build_suite(BENCHMARKS_DIR, SUITE):
            algo = FastDownwardAlgorithm(
                algo_name,
                cached_rev,
                DRIVER_OPTIONS,
                config,
            )
            run = FastDownwardRun(exp, algo, task)
            exp.add_run(run)

exp.add_parser(project.FastDownwardExperiment.EXITCODE_PARSER)
exp.add_parser(project.FastDownwardExperiment.TRANSLATOR_PARSER)
exp.add_parser(project.FastDownwardExperiment.SINGLE_SEARCH_PARSER)
exp.add_parser(custom_parser.get_parser())
exp.add_parser(project.FastDownwardExperiment.PLANNER_PARSER)

exp.add_step("build", exp.build)
exp.add_step("start", exp.start_runs)
exp.add_step("parse", exp.parse)
exp.add_fetcher(name="fetch")

project.add_absolute_report(
    exp,
    attributes=ATTRIBUTES,
    filter=[project.add_evaluations_per_time, project.group_domains],
)

if not project.REMOTE:
    project.add_scp_step(exp, SCP_LOGIN, REMOTE_REPOS_DIR)

project.add_compress_exp_dir_step(exp)

exp.run_steps()
import contextlib
import shutil
import subprocess
import sys
import tarfile
from collections import defaultdict
from pathlib import Path

from downward.experiment import FastDownwardExperiment
from downward.reports.absolute import AbsoluteReport
from downward.reports.scatter import ScatterPlotReport
from downward.reports.taskwise import TaskwiseReport
from lab import tools
from lab.environments import (
    BaselSlurmEnvironment,
    LocalEnvironment,
    TetralithEnvironment,
)
from lab.experiment import ARGPARSER
from lab.reports import Attribute, geometric_mean

# Silence import-unused messages. Experiment scripts may use these imports.
assert (
    BaselSlurmEnvironment
    and FastDownwardExperiment
    and LocalEnvironment
    and ScatterPlotReport
    and TaskwiseReport
    and TetralithEnvironment
)
DIR = Path(__file__).resolve().parent
SCRIPT = Path(sys.argv[0]).resolve()
# Cover both the Basel and Linköping clusters for simplicity.
REMOTE = BaselSlurmEnvironment.is_present() or TetralithEnvironment.is_present()
def parse_args():
    ARGPARSER.add_argument("--tex", action="store_true", help="produce LaTeX output")
    ARGPARSER.add_argument(
        "--relative", action="store_true", help="make relative scatter plots"
    )
    args, _ = ARGPARSER.parse_known_args()
    return args


ARGS = parse_args()
TEX = ARGS.tex
RELATIVE = ARGS.relative

EVALUATIONS_PER_TIME = Attribute(
    "evaluations_per_time", min_wins=False, function=geometric_mean, digits=1
)

UNSOLVABLE_TASKS = {
    "mystery:prob%02d.pddl" % index
    for index in [4, 5, 7, 8, 12, 16, 18, 21, 22, 23, 24]
}
# Generated by "./suites.py satisficing" in aibasel/downward-benchmarks repo.
# fmt: off
SUITE_SATISFICING = [
    "agricola-sat18-strips", "airport", "assembly", "barman-sat11-strips",
    "barman-sat14-strips", "blocks", "caldera-sat18-adl",
    "caldera-split-sat18-adl", "cavediving-14-adl", "childsnack-sat14-strips",
    "citycar-sat14-adl", "data-network-sat18-strips", "depot", "driverlog",
    "elevators-sat08-strips", "elevators-sat11-strips", "flashfill-sat18-adl",
    "floortile-sat11-strips", "floortile-sat14-strips", "freecell",
    "ged-sat14-strips", "grid", "gripper", "hiking-sat14-strips",
    "logistics00", "logistics98", "maintenance-sat14-adl", "miconic",
    "miconic-fulladl", "miconic-simpleadl", "movie", "mprime", "mystery",
    "nomystery-sat11-strips", "nurikabe-sat18-adl", "openstacks",
    "openstacks-sat08-adl", "openstacks-sat08-strips",
    "openstacks-sat11-strips", "openstacks-sat14-strips", "openstacks-strips",
    "optical-telegraphs", "organic-synthesis-sat18-strips",
    "organic-synthesis-split-sat18-strips", "parcprinter-08-strips",
    "parcprinter-sat11-strips", "parking-sat11-strips", "parking-sat14-strips",
    "pathways", "pegsol-08-strips", "pegsol-sat11-strips", "philosophers",
    "pipesworld-notankage", "pipesworld-tankage", "psr-large", "psr-middle",
    "psr-small", "rovers", "satellite", "scanalyzer-08-strips",
    "scanalyzer-sat11-strips", "schedule", "settlers-sat18-adl",
    "snake-sat18-strips", "sokoban-sat08-strips", "sokoban-sat11-strips",
    "spider-sat18-strips", "storage", "termes-sat18-strips",
    "tetris-sat14-strips", "thoughtful-sat14-strips", "tidybot-sat11-strips",
    "tpp", "transport-sat08-strips", "transport-sat11-strips",
    "transport-sat14-strips", "trucks", "trucks-strips",
    "visitall-sat11-strips", "visitall-sat14-strips",
    "woodworking-sat08-strips", "woodworking-sat11-strips", "zenotravel",
]
SUITE_OPTIMAL_STRIPS = [
    "agricola-opt18-strips", "airport", "barman-opt11-strips",
    "barman-opt14-strips", "blocks", "childsnack-opt14-strips",
    "data-network-opt18-strips", "depot", "driverlog", "elevators-opt08-strips",
    "elevators-opt11-strips", "floortile-opt11-strips", "floortile-opt14-strips",
    "freecell", "ged-opt14-strips", "grid", "gripper", "hiking-opt14-strips",
    "logistics00", "logistics98", "miconic", "movie", "mprime", "mystery",
    "nomystery-opt11-strips", "openstacks-opt08-strips", "openstacks-opt11-strips",
    "openstacks-opt14-strips", "openstacks-strips", "organic-synthesis-opt18-strips",
    "organic-synthesis-split-opt18-strips", "parcprinter-08-strips",
    "parcprinter-opt11-strips", "parking-opt11-strips", "parking-opt14-strips",
    "pathways", "pegsol-08-strips", "pegsol-opt11-strips",
    "petri-net-alignment-opt18-strips", "pipesworld-notankage", "pipesworld-tankage",
    "psr-small", "rovers", "satellite", "scanalyzer-08-strips",
    "scanalyzer-opt11-strips", "snake-opt18-strips", "sokoban-opt08-strips",
    "sokoban-opt11-strips", "spider-opt18-strips", "storage", "termes-opt18-strips",
    "tetris-opt14-strips", "tidybot-opt11-strips", "tidybot-opt14-strips", "tpp",
    "transport-opt08-strips", "transport-opt11-strips", "transport-opt14-strips",
    "trucks-strips", "visitall-opt11-strips", "visitall-opt14-strips",
    "woodworking-opt08-strips", "woodworking-opt11-strips", "zenotravel",
]
DOMAIN_GROUPS = {
    "airport": ["airport"],
    "assembly": ["assembly"],
    "barman": [
        "barman", "barman-opt11-strips", "barman-opt14-strips",
        "barman-sat11-strips", "barman-sat14-strips"],
    "blocksworld": ["blocks", "blocksworld"],
    "cavediving": ["cavediving-14-adl"],
    "childsnack": ["childsnack-opt14-strips", "childsnack-sat14-strips"],
    "citycar": ["citycar-opt14-adl", "citycar-sat14-adl"],
    "depots": ["depot", "depots"],
    "driverlog": ["driverlog"],
    "elevators": [
        "elevators-opt08-strips", "elevators-opt11-strips",
        "elevators-sat08-strips", "elevators-sat11-strips"],
    "floortile": [
        "floortile-opt11-strips", "floortile-opt14-strips",
        "floortile-sat11-strips", "floortile-sat14-strips"],
    "folding": ["folding-opt23-adl", "folding-sat23-adl"],
    "freecell": ["freecell"],
    "ged": ["ged-opt14-strips", "ged-sat14-strips"],
    "grid": ["grid"],
    "gripper": ["gripper"],
    "hiking": ["hiking-opt14-strips", "hiking-sat14-strips"],
    "labyrinth": ["labyrinth-opt23-adl", "labyrinth-sat23-adl"],
    "logistics": ["logistics98", "logistics00"],
    "maintenance": ["maintenance-opt14-adl", "maintenance-sat14-adl"],
    "miconic": ["miconic", "miconic-strips"],
    "miconic-fulladl": ["miconic-fulladl"],
    "miconic-simpleadl": ["miconic-simpleadl"],
    "movie": ["movie"],
    "mprime": ["mprime"],
    "mystery": ["mystery"],
    "nomystery": ["nomystery-opt11-strips", "nomystery-sat11-strips"],
    "openstacks": [
        "openstacks", "openstacks-strips", "openstacks-opt08-strips",
        "openstacks-opt11-strips", "openstacks-opt14-strips",
        "openstacks-sat08-adl", "openstacks-sat08-strips",
        "openstacks-sat11-strips", "openstacks-sat14-strips",
        "openstacks-opt08-adl", "openstacks-sat08-adl"],
    "optical-telegraphs": ["optical-telegraphs"],
    "parcprinter": [
        "parcprinter-08-strips", "parcprinter-opt11-strips",
        "parcprinter-sat11-strips"],
    "parking": [
        "parking-opt11-strips", "parking-opt14-strips",
        "parking-sat11-strips", "parking-sat14-strips"],
    "pathways": ["pathways"],
    "pathways-noneg": ["pathways-noneg"],
    "pegsol": ["pegsol-08-strips", "pegsol-opt11-strips", "pegsol-sat11-strips"],
    "philosophers": ["philosophers"],
    "pipes-nt": ["pipesworld-notankage"],
    "pipes-t": ["pipesworld-tankage"],
    "psr": ["psr-middle", "psr-large", "psr-small"],
    "quantum-layout": ["quantum-layout-opt23-strips", "quantum-layout-sat23-strips"],
    "recharging-robots": ["recharging-robots-opt23-adl", "recharging-robots-sat23-adl"],
    "ricochet-robots": ["ricochet-robots-opt23-adl", "ricochet-robots-sat23-adl"],
    "rovers": ["rover", "rovers"],
    "rubiks-cube": ["rubiks-cube-opt23-adl", "rubiks-cube-sat23-adl"],
    "satellite": ["satellite"],
    "scanalyzer": [
        "scanalyzer-08-strips", "scanalyzer-opt11-strips", "scanalyzer-sat11-strips"],
    "schedule": ["schedule"],
    "slitherlink": ["slitherlink-opt23-adl", "slitherlink-sat23-adl"],
    "sokoban": [
        "sokoban-opt08-strips", "sokoban-opt11-strips",
        "sokoban-sat08-strips", "sokoban-sat11-strips"],
    "storage": ["storage"],
    "tetris": ["tetris-opt14-strips", "tetris-sat14-strips"],
    "thoughtful": ["thoughtful-sat14-strips"],
    "tidybot": [
        "tidybot-opt11-strips", "tidybot-opt14-strips",
        "tidybot-sat11-strips", "tidybot-sat14-strips"],
    "tpp": ["tpp"],
    "transport": [
        "transport-opt08-strips", "transport-opt11-strips", "transport-opt14-strips",
        "transport-sat08-strips", "transport-sat11-strips", "transport-sat14-strips"],
    "trucks": ["trucks", "trucks-strips"],
    "visitall": [
        "visitall-opt11-strips", "visitall-opt14-strips",
        "visitall-sat11-strips", "visitall-sat14-strips"],
    "woodworking": [
        "woodworking-opt08-strips", "woodworking-opt11-strips",
        "woodworking-sat08-strips", "woodworking-sat11-strips"],
    "zenotravel": ["zenotravel"],
    # IPC 2018:
    "agricola": ["agricola", "agricola-opt18-strips", "agricola-sat18-strips"],
    "caldera": ["caldera-opt18-adl", "caldera-sat18-adl"],
    "caldera-split": ["caldera-split-opt18-adl", "caldera-split-sat18-adl"],
    "data-network": [
        "data-network", "data-network-opt18-strips", "data-network-sat18-strips"],
    "flashfill": ["flashfill-sat18-adl"],
    "nurikabe": ["nurikabe-opt18-adl", "nurikabe-sat18-adl"],
    "organic-split": [
        "organic-synthesis-split", "organic-synthesis-split-opt18-strips",
        "organic-synthesis-split-sat18-strips"],
    "organic": [
        "organic-synthesis", "organic-synthesis-opt18-strips",
        "organic-synthesis-sat18-strips"],
    "petri-net": [
        "petri-net-alignment", "petri-net-alignment-opt18-strips",
        "petri-net-alignment-sat18-strips"],
    "settlers": ["settlers-opt18-adl", "settlers-sat18-adl"],
    "snake": ["snake", "snake-opt18-strips", "snake-sat18-strips"],
    "spider": ["spider", "spider-opt18-strips", "spider-sat18-strips"],
    "termes": ["termes", "termes-opt18-strips", "termes-sat18-strips"],
}
# fmt: on
DOMAIN_RENAMINGS = {}
for group_name, domains in DOMAIN_GROUPS.items():
    for domain in domains:
        DOMAIN_RENAMINGS[domain] = group_name
for group_name in DOMAIN_GROUPS:
    DOMAIN_RENAMINGS[group_name] = group_name


def group_domains(run):
    old_domain = run["domain"]
    run["domain"] = DOMAIN_RENAMINGS[old_domain]
    run["problem"] = old_domain + "-" + run["problem"]
    run["id"][2] = run["problem"]
    return run
def get_repo_base() -> Path:
    """Get base directory of the repository, as an absolute path.

    Search upwards in the directory tree from the main script until a
    directory with a subdirectory named ".git" is found.

    Abort if the repo base cannot be found."""
    path = Path(SCRIPT)
    while path.parent != path:
        if (path / ".git").is_dir():
            return path
        path = path.parent
    sys.exit("repo base could not be found")


def remove_file(path: Path):
    with contextlib.suppress(FileNotFoundError):
        path.unlink()


def add_evaluations_per_time(run):
    evaluations = run.get("evaluations")
    time = run.get("search_time")
    if evaluations is not None and evaluations >= 100 and time:
        run["evaluations_per_time"] = evaluations / time
    return run
def _get_exp_dir_relative_to_repo():
    repo_name = get_repo_base().name
    script = Path(SCRIPT)
    script_dir = script.parent
    rel_script_dir = script_dir.relative_to(get_repo_base())
    expname = script.stem
    return repo_name / rel_script_dir / "data" / expname


def add_scp_step(exp, login, repos_dir, name="scp-eval-dir"):
    remote_exp = Path(repos_dir) / _get_exp_dir_relative_to_repo()
    exp.add_step(
        name,
        subprocess.call,
        [
            "rsync",
            "-Pavz",
            f"{login}:{remote_exp}-eval/",
            f"{exp.path}-eval/",
        ],
    )
def add_compress_exp_dir_step(exp):
    def compress_exp_dir():
        tar_file_path = Path(exp.path).parent / f"{exp.name}.tar.xz"
        exp_dir_path = Path(exp.path)
        with tarfile.open(tar_file_path, mode="w:xz", dereference=True) as tar:
            for file in exp_dir_path.rglob("*"):
                relpath = file.relative_to(exp_dir_path.parent)
                print(f"Adding {relpath}")
                tar.add(file, arcname=relpath)
        shutil.rmtree(exp_dir_path)

    exp.add_step("compress-exp-dir", compress_exp_dir)
def fetch_algorithm(exp, expname, algo, *, new_algo=None):
    """Fetch (and possibly rename) a single algorithm from *expname*."""
    new_algo = new_algo or algo

    def rename_and_filter(run):
        if run["algorithm"] == algo:
            run["algorithm"] = new_algo
            run["id"][0] = new_algo
            return run
        return False

    exp.add_fetcher(
        f"data/{expname}-eval",
        filter=rename_and_filter,
        name=f"fetch-{new_algo}-from-{expname}",
        merge=True,
    )


def fetch_algorithms(exp, expname, *, algos=None, name=None, filters=None):
    """Fetch multiple or all algorithms."""
    assert not expname.rstrip("/").endswith("-eval")
    algos = set(algos or [])
    filters = filters or []
    if algos:

        def algo_filter(run):
            return run["algorithm"] in algos

        filters.append(algo_filter)

    exp.add_fetcher(
        f"data/{expname}-eval",
        filter=filters,
        name=name or f"fetch-from-{expname}",
        merge=True,
    )
def add_absolute_report(exp, *, name=None, outfile=None, **kwargs):
    report = AbsoluteReport(**kwargs)
    if name and not outfile:
        outfile = f"{name}.{report.output_format}"
    elif outfile and not name:
        name = Path(outfile).name
    elif not name and not outfile:
        name = f"{exp.name}-abs"
        outfile = f"{name}.{report.output_format}"
    if not Path(outfile).is_absolute():
        outfile = Path(exp.eval_dir) / outfile
    exp.add_report(report, name=name, outfile=outfile)
    if not REMOTE:
        exp.add_step(f"open-{name}", subprocess.call, ["xdg-open", outfile])


def add_scatter_plot_reports(exp, algorithm_pairs, attributes, *, filter=None):
    suffix = "-relative" if RELATIVE else ""
    for algo1, algo2 in algorithm_pairs:
        for attribute in attributes:
            exp.add_report(
                ScatterPlotReport(
                    relative=RELATIVE,
                    get_category=None if TEX else lambda run1, run2: run1["domain"],
                    attributes=[attribute],
                    filter_algorithm=[algo1, algo2],
                    filter=[add_evaluations_per_time, group_domains]
                    + tools.make_list(filter),
                    format="tex" if TEX else "png",
                ),
                name=f"{exp.name}-{algo1}-{algo2}-{attribute}{suffix}",
            )
def check_initial_h_value(run):
    h = run.get("initial_h_value")
    task = f"{run['domain']}:{run['problem']}"
    if h == 9223372036854775807 and task not in UNSOLVABLE_TASKS:
        tools.add_unexplained_error(run, f"infinite initial h value: {h}")
    return True


def check_search_started(run):
    if "search_start_time" not in run:
        error = run.get("error")
        if error not in ["search-unsolvable-incomplete", "translate-out-of-memory"]:
            tools.add_unexplained_error(run, f"search not started due to {error}")
    return True


class OptimalityCheckFilter:
    """Check that all algorithms have the same cost for commonly solved tasks.

    >>> from downward.reports.absolute import AbsoluteReport
    >>> filter = OptimalityCheckFilter()
    >>> report = AbsoluteReport(filter=[filter.check_costs])

    """

    def __init__(self):
        self.tasks_to_costs = defaultdict(list)
        self.warned_tasks = set()

    def _get_task(self, run):
        return (run["domain"], run["problem"])

    def check_costs(self, run):
        cost = run.get("cost")
        if cost is not None:
            assert run["coverage"]
            task = self._get_task(run)
            self.tasks_to_costs[task].append(cost)
            if (
                task not in self.warned_tasks
                and len(set(self.tasks_to_costs[task])) > 1
            ):
                tools.add_unexplained_error(
                    run,
                    f"found multiple costs for task {task}: "
                    f"{self.tasks_to_costs[task]}",
                )
                self.warned_tasks.add(task)
        return True
import logging
import re

from lab.parser import Parser


class CommonParser(Parser):
    def add_repeated_pattern(
        self, name, regex, file="run.log", required=False, type=int
    ):
        def find_all_occurrences(content, props):
            matches = re.findall(regex, content)
            if required and not matches:
                logging.error(f"Pattern {regex} not found in file {file}")
            props[name] = [type(m) for m in matches]

        self.add_function(find_all_occurrences, file=file)

    def add_bottom_up_pattern(
        self, name, regex, file="run.log", required=False, type=int
    ):
        def search_from_bottom(content, props):
            reversed_content = "\n".join(reversed(content.splitlines()))
            match = re.search(regex, reversed_content)
            if required and not match:
                logging.error(f"Pattern {regex} not found in file {file}")
            if match:
                props[name] = type(match.group(1))

        self.add_function(search_from_bottom, file=file)


def get_parser():
    parser = CommonParser()
    parser.add_bottom_up_pattern(
        "search_start_time",
        r"\[t=(.+)s, \d+ KB\] g=0, 1 evaluated, 0 expanded",
        type=float,
    )
    parser.add_bottom_up_pattern(
        "search_start_memory",
        r"\[t=.+s, (\d+) KB\] g=0, 1 evaluated, 0 expanded",
        type=int,
    )
    parser.add_pattern(
        "initial_h_value",
        r"f = (\d+) \[1 evaluated, 0 expanded, t=.+s, \d+ KB\]",
        type=int,
    )
    parser.add_repeated_pattern(
        "h_values",
        r"New best heuristic value for .+: (\d+)\n",
        type=int,
    )
    return parser
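The add_bottom_up_pattern method above searches the log from the end, so the last matching line wins. Its core logic can be exercised standalone; the log excerpt below is made up for illustration:

```python
import re

# Standalone version of the bottom-up search used by CommonParser:
# reversing the lines makes re.search find the LAST matching line first.
def search_from_bottom(content, regex, type=int):
    reversed_content = "\n".join(reversed(content.splitlines()))
    match = re.search(regex, reversed_content)
    return type(match.group(1)) if match else None


# Hypothetical log excerpt, for illustration only.
log = "expanded 10 states\nexpanded 25 states\n"
print(search_from_bottom(log, r"expanded (\d+) states"))  # -> 25
```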
#! /usr/bin/env python

from pathlib import Path

import project
from lab.experiment import Experiment

ATTRIBUTES = [
    "error",
    "run_dir",
    "planner_time",
    "initial_h_value",
    "coverage",
    "cost",
    "evaluations",
    "memory",
    project.EVALUATIONS_PER_TIME,
]

exp = Experiment()
exp.add_step(
    "remove-combined-properties", project.remove_file, Path(exp.eval_dir) / "properties"
)

project.fetch_algorithm(exp, "2020-09-11-A-cg-vs-ff", "01-cg", new_algo="cg")
project.fetch_algorithms(exp, "2020-09-11-B-bounded-cost")

filters = [project.add_evaluations_per_time]

project.add_absolute_report(
    exp, attributes=ATTRIBUTES, filter=filters, name=f"{exp.name}"
)

exp.run_steps()
The Downward Lab API shows you how to adjust this example to your needs. You may also find the other example experiments useful.
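As an illustration of how the group_domains filter in project.py merges domain variants, here is a trimmed-down sketch with a two-entry renaming table (the full table is built from DOMAIN_GROUPS):

```python
# Trimmed-down sketch of project.group_domains: map each benchmark domain
# to its group name and prefix the problem name so tasks from merged
# domains remain distinguishable. Only two renamings are shown here.
DOMAIN_RENAMINGS = {
    "barman-opt11-strips": "barman",
    "barman-opt14-strips": "barman",
}


def group_domains(run):
    old_domain = run["domain"]
    run["domain"] = DOMAIN_RENAMINGS[old_domain]
    run["problem"] = old_domain + "-" + run["problem"]
    run["id"][2] = run["problem"]  # Keep the run ID consistent.
    return run


run = {
    "domain": "barman-opt11-strips",
    "problem": "p01.pddl",
    "id": ["ff", "barman-opt11-strips", "p01.pddl"],
}
run = group_domains(run)
print(run["domain"])   # -> barman
print(run["problem"])  # -> barman-opt11-strips-p01.pddl
```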