lab.experiment — Create experiments

Experiment

class lab.experiment.Experiment(path=None, environment=None)[source]

Base class for Lab experiments.

See Concepts for a description of how Lab experiments are structured.

The experiment will be built at path. It defaults to <scriptdir>/data/<scriptname>/. E.g., for the script experiments/myexp.py, the default path will be experiments/data/myexp/.

environment must be an Environment instance. You can use LocalEnvironment to run your experiment on a single computer (default). If you have access to the computer grid in Basel you can use the predefined grid environment BaselSlurmEnvironment. Alternatively, you can derive your own class from Environment.

add_command(name, command, time_limit=None, memory_limit=None, soft_stdout_limit=1024, hard_stdout_limit=10240, soft_stderr_limit=64, hard_stderr_limit=10240, **kwargs)

Call an executable.

If invoked on a run, this method adds the command to the specific run. If invoked on the experiment, the command is appended to the list of commands of all runs.

name is a string describing the command. It must start with a letter and consist exclusively of letters, numbers, underscores and hyphens.

command has to be a list of strings where the first item is the executable.

After time_limit seconds the signal SIGXCPU is sent to the command. The process can catch this signal and exit gracefully. If it doesn’t catch the SIGXCPU signal, the command is aborted with SIGKILL after five additional seconds. The time spent by a command is the sum of time spent across all threads of the process.

The command is aborted with SIGKILL when it uses more than memory_limit MiB.

You can limit the log size (in KiB) with a soft and hard limit for both stdout and stderr. When the soft limit is hit, an unexplained error is registered for this run, but the command is allowed to continue running. When the hard limit is hit, the command is killed with SIGTERM. This signal can be caught and handled by the process.

By default, there are limits for the log and error output, but time and memory are not restricted.

All kwargs (except stdin) are passed to subprocess.Popen. Instead of file handles you can also pass filenames for the stdout and stderr keyword arguments. Specifying the stdin kwarg is not supported.

>>> exp = Experiment()
>>> run = exp.add_run()
>>> # Add commands to a *specific* run.
>>> run.add_command("solver", ["mysolver", "input-file"], time_limit=60)
>>> # Add a command to *all* runs.
>>> exp.add_command("cleanup", ["rm", "my-temp-file"])

Make sure to call all Python programs from the currently active Python interpreter, i.e., sys.executable. Otherwise, the system Python version might be used instead of the Python version from the virtual environment.

>>> run.add_command("myplanner", [sys.executable, "planner.py", "input-file"])
add_fetcher(src=None, dest=None, merge=None, name=None, filter=None, **kwargs)[source]

Add a step that fetches results from an experiment or evaluation directory into a new or existing evaluation directory.

You can use this method to combine results from multiple experiments.

src can be an experiment or evaluation directory or a properties file. It defaults to exp.path.

dest must be a new or existing evaluation directory. It defaults to exp.eval_dir. If dest already contains data and merge is None, the user is prompted whether to override the existing data or to merge the old and new data. Set merge to True to merge the old and new data without prompting, or to False to replace the old data without prompting.

If no name is given, the step is named “fetch-” followed by the basename of src.

You can fetch only a subset of runs (e.g., runs for specific domains or algorithms) by passing filters with the filter argument.

Example setup:

>>> exp = Experiment("/tmp/exp")

Fetch all results and write a single combined properties file to the default evaluation directory (this step is added by default):

>>> exp.add_fetcher(name="fetch")

Merge the results from “other-exp” into this experiment’s results:

>>> exp.add_fetcher(src="/path/to/other-exp-eval")

Fetch only the runs for certain algorithms:

>>> exp.add_fetcher(filter_algorithm=["algo_1", "algo_5"])
add_new_file(name, dest, content, permissions=420)

Write content to /path/to/exp-or-run/dest and make the new file available to the commands as name.

name is an alias for the resource in commands. It must start with a letter and consist exclusively of letters, numbers and underscores. permissions is the numeric file mode of the new file; the default of 420 corresponds to octal 0o644, i.e., rw-r--r--.

>>> exp = Experiment()
>>> run = exp.add_run()
>>> run.add_new_file("learn", "learn.txt", "a = 5; b = 2; c = 5")
>>> run.add_command("print-trainingset", ["cat", "{learn}"])
add_parser(parser)[source]

Add a lab.parser.Parser to each run of the experiment.

Each parser is executed in each run directory and manipulates the run’s “properties” file. For information about how to write parsers see Parser.
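
A minimal sketch, assuming the run logs contain a line such as “Expanded 42 states” (the attribute name and regex are illustrative):

>>> from lab.parser import Parser
>>> exp = Experiment()
>>> parser = Parser()
>>> parser.add_pattern("expansions", r"Expanded (\d+) states", type=int)
>>> exp.add_parser(parser)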

add_report(report, name='', eval_dir='', outfile='')[source]

Add report to the list of experiment steps.

This method is a shortcut for add_step(name, report, eval_dir, outfile) and uses sensible defaults for omitted arguments.

If no name is given, use outfile or the report’s class name.

By default, use the experiment’s standard eval_dir.

If outfile is omitted, compose a filename from name and the report’s format. If outfile is a relative path, put it under eval_dir.

>>> from downward.reports.absolute import AbsoluteReport
>>> exp = Experiment("/tmp/exp")
>>> exp.add_report(AbsoluteReport(attributes=["coverage"]))
add_resource(name, source, dest='', symlink=False)

Include the file or directory source in the experiment or run.

name is an alias for the resource in commands. It must start with a letter and consist exclusively of letters, numbers and underscores. If you don’t need an alias for the resource, set name="".

source is copied to /path/to/exp-or-run/dest. If dest is omitted, the last part of the path to source will be taken as the destination filename. If you only want an alias for your resource, but don’t want to copy or link it, set dest to None.

Example:

>>> exp = Experiment()
>>> exp.add_resource("planner", "path/to/my-planner")

includes my-planner in the experiment directory. You can use {planner} to reference my-planner in a run’s commands:

>>> run = exp.add_run()
>>> run.add_resource("domain", "path-to/gripper/domain.pddl")
>>> run.add_resource("task", "path-to/gripper/prob01.pddl")
>>> run.add_command("plan", ["{planner}", "{domain}", "{task}"])
add_run(run=None)[source]

Schedule run to be part of the experiment.

If run is None, create a new run, add it to the experiment and return it.
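
For example, to create a run and give it the required id property (all names are illustrative):

>>> exp = Experiment()
>>> run = exp.add_run()
>>> run.add_command("solve", ["mysolver", "task.txt"])
>>> run.set_property("id", ["mysolver", "task.txt"])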

add_step(name, function, *args, **kwargs)[source]

Add a step to the list of experiment steps.

Use this method to add experiment steps like writing the experiment file to disk, removing directories and publishing results. To add fetch and report steps, use the convenience methods add_fetcher() and add_report().

name is a descriptive name for the step. When selecting steps on the command line, you may either use step names or their indices.

function must be a callable Python object, e.g., a function or a class implementing __call__.

args and kwargs will be passed to function when the step is executed.

>>> import shutil
>>> import subprocess
>>> from lab.experiment import Experiment
>>> exp = Experiment("/tmp/myexp")
>>> exp.add_step("build", exp.build)
>>> exp.add_step("start", exp.start_runs)
>>> exp.add_step("rm-eval-dir", shutil.rmtree, exp.eval_dir)
>>> exp.add_step("greet", subprocess.call, ["echo", "Hello"])
build(write_to_disk=True)[source]

Finalize the internal data structures, then write all files needed for the experiment to disk.

If write_to_disk is False, only compute the internal data structures. This is only needed on grids for FastDownwardExperiment.build(), which turns the added algorithms and benchmarks into Runs.

property eval_dir

Return the name of the default evaluation directory.

This is the directory where the fetched and parsed results will land by default.

property name

Return the directory name of the experiment’s path.

parse()[source]

Run all parsers that have been added to the experiment with add_parser().

After parsing, you’ll want to run a “fetch” step to collect the parsed data from the experiment into the evaluation directory.
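
For example, register parsing and fetching as consecutive steps:

>>> exp = Experiment()
>>> exp.add_step("parse", exp.parse)
>>> exp.add_fetcher(name="fetch")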

run_steps()[source]

Parse the command line and run the selected steps.
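
A typical experiment script defines its steps and ends with this call. A condensed sketch of the usual sequence, using only the methods documented above:

>>> exp = Experiment("/tmp/exp")
>>> exp.add_step("build", exp.build)
>>> exp.add_step("start", exp.start_runs)
>>> exp.add_step("parse", exp.parse)
>>> exp.add_fetcher(name="fetch")
>>> # exp.run_steps()  # Uncomment in a real script; parses the command line.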

set_property(name, value)

Add a key-value property.

These can be used later, for example, in reports.

>>> exp = Experiment()
>>> exp.set_property("suite", ["gripper", "grid"])
>>> run = exp.add_run()
>>> run.set_property("domain", "gripper")
>>> run.set_property("problem", "prob01.pddl")

Each run must have the property id, a list of strings that uniquely identifies the run. Its elements determine where the results for this run are stored in the combined properties file.

>>> run.set_property("id", ["algo1", "task1"])
>>> run.set_property("id", ["algo2", "domain1", "problem1"])
start_runs()[source]

Execute all runs that were added to the experiment.

Depending on the selected environment this method will start the runs locally or on a computer grid.

Custom command line arguments

lab.experiment.ARGPARSER

ArgumentParser instance that can be used to add custom command line arguments. You can import it, add your arguments and call its parse_args() method to retrieve the argument values. To avoid confusion with step names you shouldn’t use positional arguments.

Note

Custom command line arguments are only passed to locally executed steps.

from lab.experiment import ARGPARSER

ARGPARSER.add_argument(
    "--test",
    choices=["yes", "no"],
    required=True,
    dest="test_run",
    help="run experiment on small suite locally")

args = ARGPARSER.parse_args()
if args.test_run == "yes":
    print("perform test run")
else:
    print("run real experiment")

Run

class lab.experiment.Run(experiment)[source]

An experiment consists of multiple runs. There should be one run for each (algorithm, benchmark) pair.

A run consists of one or more commands.

experiment must be an Experiment instance.

add_command(name, command, time_limit=None, memory_limit=None, soft_stdout_limit=1024, hard_stdout_limit=10240, soft_stderr_limit=64, hard_stderr_limit=10240, **kwargs)

Call an executable.

If invoked on a run, this method adds the command to the specific run. If invoked on the experiment, the command is appended to the list of commands of all runs.

name is a string describing the command. It must start with a letter and consist exclusively of letters, numbers, underscores and hyphens.

command has to be a list of strings where the first item is the executable.

After time_limit seconds the signal SIGXCPU is sent to the command. The process can catch this signal and exit gracefully. If it doesn’t catch the SIGXCPU signal, the command is aborted with SIGKILL after five additional seconds. The time spent by a command is the sum of time spent across all threads of the process.

The command is aborted with SIGKILL when it uses more than memory_limit MiB.

You can limit the log size (in KiB) with a soft and hard limit for both stdout and stderr. When the soft limit is hit, an unexplained error is registered for this run, but the command is allowed to continue running. When the hard limit is hit, the command is killed with SIGTERM. This signal can be caught and handled by the process.

By default, there are limits for the log and error output, but time and memory are not restricted.

All kwargs (except stdin) are passed to subprocess.Popen. Instead of file handles you can also pass filenames for the stdout and stderr keyword arguments. Specifying the stdin kwarg is not supported.

>>> exp = Experiment()
>>> run = exp.add_run()
>>> # Add commands to a *specific* run.
>>> run.add_command("solver", ["mysolver", "input-file"], time_limit=60)
>>> # Add a command to *all* runs.
>>> exp.add_command("cleanup", ["rm", "my-temp-file"])

Make sure to call all Python programs from the currently active Python interpreter, i.e., sys.executable. Otherwise, the system Python version might be used instead of the Python version from the virtual environment.

>>> run.add_command("myplanner", [sys.executable, "planner.py", "input-file"])
add_new_file(name, dest, content, permissions=420)

Write content to /path/to/exp-or-run/dest and make the new file available to the commands as name.

name is an alias for the resource in commands. It must start with a letter and consist exclusively of letters, numbers and underscores. permissions is the numeric file mode of the new file; the default of 420 corresponds to octal 0o644, i.e., rw-r--r--.

>>> exp = Experiment()
>>> run = exp.add_run()
>>> run.add_new_file("learn", "learn.txt", "a = 5; b = 2; c = 5")
>>> run.add_command("print-trainingset", ["cat", "{learn}"])
add_resource(name, source, dest='', symlink=False)

Include the file or directory source in the experiment or run.

name is an alias for the resource in commands. It must start with a letter and consist exclusively of letters, numbers and underscores. If you don’t need an alias for the resource, set name="".

source is copied to /path/to/exp-or-run/dest. If dest is omitted, the last part of the path to source will be taken as the destination filename. If you only want an alias for your resource, but don’t want to copy or link it, set dest to None.

Example:

>>> exp = Experiment()
>>> exp.add_resource("planner", "path/to/my-planner")

includes my-planner in the experiment directory. You can use {planner} to reference my-planner in a run’s commands:

>>> run = exp.add_run()
>>> run.add_resource("domain", "path-to/gripper/domain.pddl")
>>> run.add_resource("task", "path-to/gripper/prob01.pddl")
>>> run.add_command("plan", ["{planner}", "{domain}", "{task}"])
set_property(name, value)

Add a key-value property.

These can be used later, for example, in reports.

>>> exp = Experiment()
>>> exp.set_property("suite", ["gripper", "grid"])
>>> run = exp.add_run()
>>> run.set_property("domain", "gripper")
>>> run.set_property("problem", "prob01.pddl")

Each run must have the property id, a list of strings that uniquely identifies the run. Its elements determine where the results for this run are stored in the combined properties file.

>>> run.set_property("id", ["algo1", "task1"])
>>> run.set_property("id", ["algo2", "domain1", "problem1"])

CachedRevision

class lab.cached_revision.CachedRevision(revision_cache, repo, rev, build_cmd, exclude=None, subdir='')[source]

Cache compiled revisions of a solver for quick reuse.

  • revision_cache: path to revision cache directory.

  • repo: path to solver repository.

  • rev: solver revision.

  • build_cmd: list with build script and any build options (e.g., ["./build.py", "release"], ["make"]). Will be executed under subdir.

  • exclude: list of relative paths under subdir that are not needed for building and running the solver. Instead of this parameter, you can also use a .gitattributes file for Git repositories.

  • subdir: relative path from repo to solver subdir.

The following example caches a Fast Downward revision. When you use the FastDownwardExperiment class, you don’t need to cache revisions yourself since the class will do it transparently for you.

>>> import os
>>> repo = os.environ["DOWNWARD_REPO"]
>>> revision_cache = os.environ.get("DOWNWARD_REVISION_CACHE")
>>> if revision_cache:
...     rev = "main"
...     cr = CachedRevision(
...         revision_cache, repo, rev, ["./build.py"], exclude=["experiments"]
...     )
...     # cr.cache()  # Uncomment to actually cache the code.
...

You can now copy the cached repo to your experiment:

>>> if revision_cache:
...     from lab.experiment import Experiment
...     exp = Experiment()
...     dest_path = os.path.join(exp.path, f"code-{cr.name}")
...     exp.add_resource(f"solver_{cr.name}", cr.path, dest_path)
cache()[source]

Check out the solver revision to self.path and compile the solver.

get_relative_exp_path(relpath='')[source]

Return a path relative to the experiment directory.

Use this function to find out where files from the cache will be put in the experiment directory.
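
For example, continuing the CachedRevision example above (the build path is illustrative and depends on your solver):

>>> if revision_cache:
...     solver_path = cr.get_relative_exp_path("builds/release/bin/downward")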

Parser

class lab.parser.Parser[source]

Parse logs or files in a given directory and write results into the properties file.

add_function(function, file='run.log')[source]

Call function(open(file).read(), properties) during parsing.

Functions are applied after all patterns have been evaluated and in the order in which they are added to the parser.

The function is passed the file contents and the properties dictionary. It must manipulate the passed properties dictionary. The return value is ignored.

Example:

>>> import re
>>> from lab.parser import Parser
>>> def parse_states_over_time(content, props):
...     matches = re.findall(r"(.+)s: (\d+) states\n", content)
...     props["states_over_time"] = [(float(t), int(s)) for t, s in matches]
...
>>> parser = Parser()
>>> parser.add_function(parse_states_over_time)

You can use props.add_unexplained_error("message") when your parsing function detects that something went wrong during the run.

add_pattern(attribute, regex, file='run.log', type=int, flags='', required=False)[source]

Look for regex in file, cast the part matched by the first group of parentheses to type and store it in the properties dictionary under attribute. During parsing, roughly the following code is executed:

contents = open(file).read()
match = re.compile(regex).search(contents)
properties[attribute] = type(match.group(1))

flags must be a string of Python regular expression flags (see https://docs.python.org/3/library/re.html). E.g., flags="M" lets “^” and “$” match at the beginning and end of each line, respectively.

If required is True and the pattern is not found in file, an error message is printed to stderr.

>>> parser = Parser()
>>> parser.add_pattern("facts", r"Facts: (\d+)", type=int)
parse(run_dir, props)[source]

Search all patterns and apply all functions.

Add the found values to props.
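
This method is usually invoked for you during the experiment’s parse step, but you can also call it directly on a single run directory (a sketch; the path is illustrative):

>>> props = {}
>>> # parser.parse("/tmp/exp/runs-00001-00100/00001", props)  # Uncomment with a real run directory.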

Environment

class lab.environments.Environment(randomize_task_order=True)[source]

Abstract base class for all environments.

If randomize_task_order is True (default), tasks for runs are started in a random order. This is useful to avoid systematic noise due to, e.g., one of the algorithms being run on a machine with heavy load. Note that due to the randomization, run directories may be pristine while the experiment is running even though the logs say the runs are finished.

Lab ships with the built-in environments listed below. Additionally, support for HTCondor clusters is provided by a third-party repository.

class lab.environments.LocalEnvironment(processes=None, **kwargs)[source]

Environment for running experiments locally on a single machine.

If given, processes must be between 1 and #CPUs. If omitted, it will be set to #CPUs.

See Environment for inherited parameters.
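
For example, to execute at most two runs in parallel on the local machine:

>>> from lab.environments import LocalEnvironment
>>> exp = Experiment("/tmp/exp", environment=LocalEnvironment(processes=2))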

class lab.environments.SlurmEnvironment(email=None, extra_options=None, partition=None, qos=None, time_limit_per_task=None, memory_per_cpu=None, cpus_per_task=1, export=None, setup=None, **kwargs)[source]

Abstract base class for Slurm environments.

If the main experiment step is part of the selected steps, the selected steps are submitted to Slurm. Otherwise, the selected steps are run locally.

Note

If the steps are run by Slurm, this class writes job files to the directory <exppath>-grid-steps and makes them depend on one another. Please inspect the *.log and *.err files in this directory if something goes wrong. Since the job files call the experiment script during execution, it mustn’t be changed during the experiment.

If email is provided and the steps run on the grid, a message will be sent when the last experiment step finishes.

Use extra_options to pass additional options. The extra_options string may contain newlines. The first example below uses only a given set of nodes (additional nodes will be used if the given ones don’t satisfy the resource constraints). The second example shows how to specify a project account (needed on NSC if you’re part of multiple projects).

extra_options="#SBATCH --nodelist=ase[1-5,7,10]"
extra_options="#SBATCH --account=snic2021-5-330"

partition must be a valid Slurm partition name. In Basel you can choose from

  • “infai_1”: 24 nodes with 16 cores, 64GB memory, 500GB Sata (default)

  • “infai_2”: 24 nodes with 20 cores, 128GB memory, 240GB SSD

  • “infai_3”: 12 nodes with 128 cores, 512GB memory, 240GB SSD

qos must be a valid Slurm QOS name. In Basel this must be “normal”.

time_limit_per_task sets the wall-clock time limit for each Slurm task. The BaselSlurmEnvironment subclass uses a default of “0”, i.e., no limit. (Note that there may still be an external limit set in slurm.conf.) The TetralithEnvironment class uses a default of “24:00:00”, i.e., 24 hours. This is because in certain situations, the scheduler prefers to schedule tasks shorter than 24 hours.

memory_per_cpu must be a string specifying the memory allocated for each core. The string must end with one of the letters K, M or G. The default is “3872M”. The value for memory_per_cpu should not surpass the amount of memory that is available per core, which is “3872M” for infai_1, “6354M” for infai_2, and “4028M” for infai_3. Processes that surpass the memory_per_cpu limit are terminated with SIGKILL. To impose a soft limit that can be caught from within your programs, you can use the memory_limit kwarg of add_command(). Fast Downward users should set memory limits via the driver_options.

Slurm limits the memory with cgroups. Unfortunately, this often fails on our nodes, so we set our own soft memory limit for all Slurm jobs. We derive the soft memory limit by multiplying the value denoted by the memory_per_cpu parameter with 0.98 (the Slurm config file contains “AllowedRAMSpace=99” and we add some slack). We use a soft instead of a hard limit so that child processes can raise the limit.

cpus_per_task sets the number of cores to be allocated per Slurm task (default: 1).

Examples that reserve the maximum amount of memory available per core:

>>> env1 = BaselSlurmEnvironment(partition="infai_1", memory_per_cpu="3872M")
>>> env2 = BaselSlurmEnvironment(partition="infai_2", memory_per_cpu="6354M")
>>> env3 = BaselSlurmEnvironment(partition="infai_3", memory_per_cpu="4028M")

Example that reserves 12 GiB of memory on infai_1:

>>> # 12 * 1024 / 3872 = 3.17 -> round to next int -> 4 cores per task
>>> # 12G / 4 = 3G per core
>>> env = BaselSlurmEnvironment(
...     partition="infai_1",
...     memory_per_cpu="3G",
...     cpus_per_task=4,
... )

Example that reserves 12 GiB of memory on infai_2:

>>> # 12 * 1024 / 6354 = 1.93 -> round to next int -> 2 cores per task
>>> # 12G / 2 = 6G per core
>>> env = BaselSlurmEnvironment(
...     partition="infai_2",
...     memory_per_cpu="6G",
...     cpus_per_task=2,
... )

Example that reserves 12 GiB of memory on infai_3:

>>> # 12 * 1024 / 4028 = 3.05 -> round to next int -> 4 cores per task
>>> # 12G / 4 = 3G per core
>>> env = BaselSlurmEnvironment(
...     partition="infai_3",
...     memory_per_cpu="3G",
...     cpus_per_task=4,
... )

Use export to specify a list of environment variables that should be exported from the login node to the compute nodes (default: [“PATH”]).

You can alter the environment in which the experiment runs with the setup argument. If given, it must be a string of Bash commands. Example:

# Load Singularity module.
setup="module load Singularity/2.6.1 2> /dev/null"

Slurm limits the number of tasks in a job array. You must set the appropriate value for your cluster in the MAX_TASKS class variable; see the sketch below. If an experiment has more runs than this limit, Lab groups ceil(#runs / MAX_TASKS) runs into each array task.
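
A sketch of a custom Slurm environment that overrides this class variable (the class name and value are illustrative; real subclasses typically also define defaults such as the partition and qos):

from lab.environments import SlurmEnvironment

class MyClusterEnvironment(SlurmEnvironment):
    # Hypothetical job-array limit; consult your cluster's configuration.
    MAX_TASKS = 1000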

See Environment for inherited parameters.

class lab.environments.BaselSlurmEnvironment(email=None, extra_options=None, partition=None, qos=None, time_limit_per_task=None, memory_per_cpu=None, cpus_per_task=1, export=None, setup=None, **kwargs)[source]

Environment for Basel’s AI group.

class lab.environments.TetralithEnvironment(email=None, extra_options=None, partition=None, qos=None, time_limit_per_task=None, memory_per_cpu=None, cpus_per_task=1, export=None, setup=None, **kwargs)[source]

Environment for the NSC Tetralith cluster in Linköping.

Various

lab.__version__

Lab version number. A “+” is appended to all non-tagged revisions.
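
For example:

>>> import lab
>>> version = lab.__version__  # e.g., "8.0" for a tagged revision, "8.0+" otherwise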