downward.experiment — Fast Downward experiment

Note

The FastDownwardExperiment class makes it easy to write “standard” experiments with little boilerplate code, but it assumes a rigid experiment structure: it only allows you to run each added algorithm on each added task, and individual runs cannot easily be customized. See the 2020-09-11-A-cg-vs-ff.py experiment for an example. If you need more flexibility, you can use the lab.experiment.Experiment class instead and fill it with FastDownwardAlgorithm, FastDownwardRun, CachedFastDownwardRevision, and Task objects, as in the 2020-09-11-B-bounded-cost.py script. All of these classes are documented below.
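
The sketch below outlines how these classes fit together in the spirit of that script; paths are placeholders, and only a single algorithm and task are added:

>>> from downward import suites
>>> from downward.cached_revision import CachedFastDownwardRevision
>>> from downward.experiment import FastDownwardAlgorithm, FastDownwardRun
>>> from lab.experiment import Experiment
>>> exp = Experiment()
>>> # Placeholder paths; adjust them to your setup.
>>> repo = "/path/to/downward"
>>> benchmarks_dir = "/path/to/downward-benchmarks"
>>> cached_rev = CachedFastDownwardRevision(
...     "/path/to/revision-cache", repo, "main", []
... )
>>> cached_rev.cache()
>>> # Copy the cached revision into the experiment directory.
>>> exp.add_resource("", cached_rev.path, cached_rev.get_relative_exp_path())
>>> algo = FastDownwardAlgorithm(
...     "blind",
...     cached_rev,
...     ["--overall-time-limit", "5m"],
...     ["--search", "astar(blind())"],
... )
>>> for task in suites.build_suite(benchmarks_dir, ["gripper:prob01.pddl"]):
...     exp.add_run(FastDownwardRun(exp, algo, task))

Steps, parsers, and reports are then added in the same way as for FastDownwardExperiment (see below).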

class downward.experiment.FastDownwardExperiment(path=None, environment=None, revision_cache=None)[source]

Conduct a Fast Downward experiment.

The most important methods for customizing an experiment are add_algorithm(), add_suite(), add_parser(), add_step() and add_report().

Note

To build the experiment, execute its runs and fetch the results, add the following steps:

>>> exp = FastDownwardExperiment()
>>> exp.add_step("build", exp.build)
>>> exp.add_step("start", exp.start_runs)
>>> exp.add_step("parse", exp.parse)
>>> exp.add_fetcher(name="fetch")

See lab.experiment.Experiment for an explanation of the path and environment parameters.

revision_cache is the directory for caching Fast Downward revisions. It defaults to <scriptdir>/data/revision-cache. This directory can become very large since each revision uses about 30 MB.
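
For example, to store the cache at a custom location (the path below is just a placeholder):

>>> exp = FastDownwardExperiment(revision_cache="/path/to/revision-cache")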

>>> from lab.environments import BaselSlurmEnvironment
>>> env = BaselSlurmEnvironment(email="my.name@unibas.ch")
>>> exp = FastDownwardExperiment(environment=env)

You can add parsers with add_parser(). See Parser for how to write custom parsers and Bundled parsers for the list of built-in parsers. Which parsers you should use depends on the algorithms you’re running. For single-search experiments, we recommend adding the following parsers in this order:

>>> exp.add_parser(exp.EXITCODE_PARSER)
>>> exp.add_parser(exp.TRANSLATOR_PARSER)
>>> exp.add_parser(exp.SINGLE_SEARCH_PARSER)
>>> exp.add_parser(exp.PLANNER_PARSER)
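
Most experiments also add at least one report step with add_report(); for example, an absolute report over common attributes (AbsoluteReport lives in downward.reports.absolute):

>>> from downward.reports.absolute import AbsoluteReport
>>> exp.add_report(AbsoluteReport(attributes=["coverage", "total_time"]))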
add_algorithm(name, repo, rev, component_options, build_options=None, driver_options=None)[source]

Add a Fast Downward algorithm to the experiment, i.e., a planner configuration in a given repository at a given revision.

name is a string describing the algorithm (e.g. "issue123-lmcut").

repo must be a path to a Fast Downward repository.

rev must be a valid revision in the given repository (e.g., "e9c2370e6", "my-branch", "issue123").

component_options must be a list of strings. By default these options are passed to the search component. Use "--translate-options", "--preprocess-options" or "--search-options" within the component options to override the default for the following options, until overridden again.

If given, build_options must be a list of strings. They will be passed to the build.py script. Options can be build names (e.g., "releasenolp"), build.py options (e.g., "--debug") or options for Make. If build_options is omitted, the "release" version is built.

If given, driver_options must be a list of strings. They will be passed to the fast-downward.py script. See fast-downward.py --help for available options. The list is always prepended with ["--validate", "--overall-time-limit", "30m", "--overall-memory-limit", "3584M"]. Specifying custom limits overrides the default limits.

Example experiment setup:

>>> import os
>>> exp = FastDownwardExperiment()
>>> repo = os.environ["DOWNWARD_REPO"]
>>> rev = "main"

Run iPDB using the latest revision on the main branch:

>>> exp.add_algorithm("ipdb", repo, rev, ["--search", "astar(ipdb())"])

Run blind search in debug mode:

>>> exp.add_algorithm(
...     "blind",
...     repo,
...     rev,
...     ["--search", "astar(blind())"],
...     build_options=["--debug"],
...     driver_options=["--debug"],
... )

Run LAMA-2011 with custom planner time limit:

>>> exp.add_algorithm(
...     "lama",
...     repo,
...     rev,
...     [],
...     driver_options=[
...         "--alias",
...         "seq-saq-lama-2011",
...         "--overall-time-limit",
...         "5m",
...     ],
... )
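
Options can also be routed to the translator by switching components inside component_options, as described above. The "--relaxed" flag below is used only to illustrate the switching syntax; consult the translator's --help output for the options your revision actually supports:

>>> exp.add_algorithm(
...     "lmcut-relaxed",
...     repo,
...     rev,
...     [
...         "--translate-options",
...         "--relaxed",
...         "--search-options",
...         "--search",
...         "astar(lmcut())",
...     ],
... )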
add_suite(benchmarks_dir, suite)[source]

Add PDDL or SAS+ benchmarks to the experiment.

benchmarks_dir must be a path to a benchmark directory. It must contain domain directories, which in turn hold PDDL or SAS+ files (ending with “.pddl” or “.sas”).

suite must be a list of domain or domain:task names.

>>> benchmarks_dir = os.environ["DOWNWARD_BENCHMARKS"]
>>> exp = FastDownwardExperiment()
>>> exp.add_suite(benchmarks_dir, ["depot", "gripper"])
>>> exp.add_suite(benchmarks_dir, ["gripper:prob01.pddl"])
>>> exp.add_suite(benchmarks_dir, ["rubiks-cube:p01.sas"])

One source for benchmarks is https://github.com/aibasel/downward-benchmarks. After cloning the repo, you can generate suites with the suites.py script. We recommend using the suite optimal_strips for optimal STRIPS planners and satisficing for satisficing planners:

# Create standard optimal planning suite.
$ path/to/downward-benchmarks/suites.py optimal_strips
['airport', ..., 'zenotravel']

Then you can copy the generated list into your experiment script:

>>> exp.add_suite(benchmarks_dir, ["airport", "zenotravel"])

Bundled parsers

The following constants are paths to default parsers that can be passed to exp.add_parser(). The “Used attributes” and “Parsed attributes” lists describe the dependencies between the parsers.

FastDownwardExperiment.EXITCODE_PARSER

Parsed attributes: “error”, “planner_exit_code”, “unsolvable”.

FastDownwardExperiment.TRANSLATOR_PARSER

Parsed attributes: “translator_peak_memory”, “translator_time_done”, etc.

FastDownwardExperiment.SINGLE_SEARCH_PARSER

Parsed attributes: “coverage”, “memory”, “total_time”, etc.

FastDownwardExperiment.ANYTIME_SEARCH_PARSER

Parsed attributes: “cost”, “cost:all”, “coverage”.

FastDownwardExperiment.PLANNER_PARSER

Used attributes: “memory”, “total_time”, “translator_peak_memory”, “translator_time_done”.

Parsed attributes: “node”, “planner_memory”, “planner_time”, “planner_wall_clock_time”, “score_planner_memory”, “score_planner_time”.
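
For example, if your algorithms perform anytime search (such as LAMA), you would typically add the anytime-search parser rather than the single-search parser to obtain the costs of all found plans:

>>> exp.add_parser(exp.ANYTIME_SEARCH_PARSER)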

class downward.experiment.FastDownwardAlgorithm(name: str, cached_revision: CachedFastDownwardRevision, driver_options, component_options)[source]

A Fast Downward algorithm is the combination of revision, driver options and component options.

cached_revision

An instance of CachedFastDownwardRevision.

component_options

Component options, e.g., ["--search", "astar(lmcut())"].

driver_options

Driver options, e.g., ["--build", "debug"].

name

Algorithm name, e.g., "rev123:astar-lmcut".

class downward.experiment.FastDownwardRun(exp: Experiment, algo: FastDownwardAlgorithm, task: Task)[source]

An experiment run that uses algo to solve task.

See Run for inherited methods.

class downward.cached_revision.CachedFastDownwardRevision(revision_cache, repo, rev, build_options, subdir='')[source]

This class represents Fast Downward checkouts.

It provides methods for caching compiled revisions, so they can be reused quickly in different experiments.

  • revision_cache: Path to revision cache.

  • repo: Path to Fast Downward repository.

  • rev: Fast Downward revision.

  • build_options: List of build.py options.

  • subdir: Relative path from repo to the Fast Downward subdirectory.

cache()

Check out the solver revision to self.path and compile the solver.

get_relative_exp_path(relpath='')

Return a path relative to the experiment directory.

Use this function to find out where files from the cache will be put in the experiment directory.
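
A brief sketch of typical usage, with placeholder paths (the exact cache layout is an implementation detail):

>>> cached_rev = CachedFastDownwardRevision(
...     "/path/to/revision-cache", "/path/to/downward", "main", []
... )
>>> cached_rev.cache()
>>> # Path of the driver script inside the experiment directory.
>>> driver = cached_rev.get_relative_exp_path("fast-downward.py")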

downward.suites — Select benchmarks

class downward.suites.Task(domain: str, problem: str, problem_file, domain_file=None, properties=None)[source]

domain and problem are the display names of the domain and problem; domain_file and problem_file are paths to the respective files on disk. If domain_file is not given, problem_file is assumed to be a SAS task.

properties may be a dictionary of entries that should be added to the properties file of each run that uses this problem.

>>> task = Task(
...     "gripper",
...     "p01.pddl",
...     problem_file="/path/to/prob01.pddl",
...     domain_file="/path/to/domain.pddl",
...     properties={"relaxed": False},
... )
downward.suites.build_suite(benchmarks_dir, descriptions)[source]

Compute a list of Task objects.

The path benchmarks_dir must contain a subdir for each domain.

descriptions must be a list of domain or problem descriptions:

build_suite(benchmarks_dir, ["gripper", "grid:prob01.pddl"])
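
For example, assuming the constructor arguments are stored as attributes of the same name, the resulting tasks can be inspected or turned into runs:

>>> for task in build_suite(benchmarks_dir, ["gripper", "grid:prob01.pddl"]):
...     print(task.domain, task.problem)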