Planning Domain Definition Language


The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence planning languages. It was first developed by Drew McDermott and his colleagues in 1998, mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. "The adoption of a common formalism for describing planning domains fosters far greater reuse of research and allows more direct comparison of systems and approaches, and therefore supports faster progress in the field. A common formalism is a compromise between expressive power and the progress of basic research. The role of a common formalism as a communication medium for exchange demands that it is provided with a clear semantics."

De facto official versions of PDDL

PDDL1.2

This was the official language of the 1st and 2nd IPC in 1998 and 2000, respectively.
It separated the model of the planning problem into two major parts: (1) the domain description and (2) the related problem description. This division allows an intuitive separation of those elements which (1) are present in every specific problem of the problem domain from those elements which (2) determine the specific planning problem. Thus several problem descriptions may be connected to the same domain description (just as several instances of a class may exist in object-oriented programming or in OWL). A domain description together with a connected problem description forms the PDDL model of a planning problem, and this is eventually the input of a planner, which aims to solve the given planning problem via some appropriate planning algorithm. The output of the planner is not specified by PDDL, but it is usually a totally or partially ordered plan. The contents of a PDDL1.2 domain and problem description are, in general, the following:
The domain description consisted of a domain-name definition, definition of requirements, definition of the object-type hierarchy, definition of constant objects, definition of predicates, and the definition of possible actions. Actions had parameters, preconditions and effects. The effects of actions could also be conditional.
The problem description consisted of a problem-name definition, the definition of the related domain name, the definition of all the possible objects, initial conditions, and the definition of goal states. Thus eventually PDDL1.2 captured the "physics" of a deterministic, single-agent, discrete, fully accessible planning environment.
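As an illustration, a minimal, schematic sketch of this two-part structure is shown below (the names example-domain, drive, etc. are placeholders invented for this sketch; a complete, concrete example is given at the end of the article):

(define (domain example-domain)
  (:requirements :strips :typing)
  (:types vehicle location)
  (:constants depot - location)
  (:predicates (at ?v - vehicle ?l - location))
  ; a single action schema with parameters, precondition and effect
  (:action drive
    :parameters (?v - vehicle ?from ?to - location)
    :precondition (at ?v ?from)
    :effect (and (not (at ?v ?from)) (at ?v ?to))))

(define (problem example-problem)
  (:domain example-domain)
  (:objects truck - vehicle market - location)
  (:init (at truck depot))
  (:goal (at truck market)))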

PDDL2.1

This was the official language of the 3rd IPC in 2002.
It introduced numeric fluents, plan-metrics, and durative/continuous actions. Eventually PDDL2.1 allowed the representation and solution of many more real-world problems than the original version of the language.
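For illustration, a durative action manipulating a numeric fluent might be sketched as follows (fuel-level, refuel and the other names are invented for this sketch, not taken from the standard):

(:functions (fuel-level ?t - truck) (capacity ?t - truck))

; refueling takes 10 time units and fills the tank at the end of the action
(:durative-action refuel
  :parameters (?t - truck)
  :duration (= ?duration 10)
  :condition (at start (< (fuel-level ?t) (capacity ?t)))
  :effect (at end (assign (fuel-level ?t) (capacity ?t))))

A plan-metric in the problem description, e.g. (:metric minimize (total-time)), then tells the planner which plans are preferable.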

PDDL2.2

This was the official language of the deterministic track of the 4th IPC in 2004.
It introduced derived predicates and timed initial literals. Eventually PDDL2.2 extended the language with a few important elements, but was not as radical an evolution as PDDL2.1 had been compared to PDDL1.2.
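A brief sketch of both features (the predicate names on, above and shop-open are invented for this sketch): a derived predicate is defined in the domain description, for example

; "above" is the transitive closure of "on"
(:derived (above ?x ?y)
  (or (on ?x ?y)
      (exists (?z) (and (on ?x ?z) (above ?z ?y)))))

while timed initial literals are listed in the problem description to state facts that become true or false at given times independently of plan execution, for example

(:init (at 9 (shop-open))
       (at 20 (not (shop-open))))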

PDDL3.0

This was the official language of the deterministic track of the 5th IPC in 2006.
It introduced state-trajectory constraints and preferences to enable preference-based planning. Eventually PDDL3.0 updated the expressiveness of the language to be able to cope with recent, important developments in planning.
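The following is a rough, illustrative sketch (pkg1, delivered and the other names are assumptions made up for this example). A constraints block restricts the state trajectory of every valid plan, while a preference is a soft goal whose violation can be penalized in the plan metric:

(:constraints
  (and (always (handled-gently pkg1))
       (sometime-after (picked-up pkg1) (delivered pkg1))))

(:goal (and (delivered pkg1)
            (preference p1 (delivered pkg2))))

(:metric minimize (+ (total-time) (* 10 (is-violated p1))))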

PDDL3.1

This was the official language of the deterministic track of the 6th and 7th IPC in 2008 and 2011, respectively.
It introduced object-fluents. Thus PDDL3.1 adapted the language even more to modern expectations with a syntactically seemingly small, but semantically quite significant change in expressiveness.
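A minimal sketch of an object fluent (location-of and the other names are invented for this example): the function maps a truck to an object of type place rather than to a number, and actions can compare and assign its value:

(:functions (location-of ?t - truck) - place)

(:action drive
  :parameters (?t - truck ?from ?to - place)
  :precondition (= (location-of ?t) ?from)
  :effect (assign (location-of ?t) ?to))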

Current situation

The latest version of the language is PDDL3.1. The BNF syntax definition of PDDL3.1 can be found among the resources of the International Planning Competition websites.

Successors/variants/extensions of PDDL

PDDL+

This extension of PDDL2.1 from around 2002–2006 provides a more flexible model of continuous change through the use of autonomous processes and events.
The key feature this extension provides is the ability to model the interaction between the agent's behaviour and changes that are initiated by the agent's environment. Processes run over time and have a continuous effect on numeric values. They are initiated and terminated either by the direct action of the agent or by events triggered in the environment. This 3-part structure is referred to as the start-process-stop model. Distinctions are made between logical and numeric states: transitions between logical states are assumed to be instantaneous, whereas occupation of a given logical state can endure over time. Thus in PDDL+ continuous update expressions are restricted to occur only in process effects. Actions and events, which are instantaneous, are restricted to the expression of discrete change. This yields the previously mentioned 3-part modelling of periods of continuous change: (1) an action or event starts a period of continuous change on a numeric variable expressed by means of a process; (2) the process realizes the continuous change of the numeric variable; (3) an action or event finally stops the execution of the process and terminates its effect on the numeric variable. Comment: the goals of the plan might be achieved before an active process is stopped.
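A minimal, illustrative sketch of the start-process-stop model (heater-on, temperature and the other names are assumptions made for this example; #t denotes the continuously flowing time of PDDL+):

(:process heating
  :parameters (?r - room)
  :precondition (heater-on ?r)
  :effect (increase (temperature ?r) (* #t (heat-rate ?r))))

(:event thermostat-trip
  :parameters (?r - room)
  :precondition (and (heater-on ?r) (>= (temperature ?r) 30))
  :effect (not (heater-on ?r)))

Here an instantaneous action that achieves (heater-on ?r) starts the heating process, the process continuously increases the temperature, and the triggered event stops the process by deleting its precondition.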

NDDL

NDDL is NASA's response to PDDL from around 2002.
Its representation differs from PDDL in several respects: 1) it uses a variable/value representation (timelines and activities) rather than propositional or first-order logic, and 2) there is no concept of states or actions, only of intervals (activities) and constraints between those activities. In this respect, models in NDDL look more like schemas for SAT encodings of planning problems than like PDDL models. Because of these differences, planning and execution of plans may be more robust when using NDDL, but the correspondence to standard planning-problem representations, such as PDDL, may be much less intuitive.

MAPL

MAPL is an extension of PDDL2.1 from around 2003.
It is a quite serious modification of the original language. It introduces non-propositional state-variables and a temporal model given with modal operators. Nonetheless, in PDDL3.0 a more thorough temporal model was given, which is also compatible with the original PDDL syntax. MAPL also introduces actions whose duration is determined at runtime, and explicit plan synchronization, which is realized through speech-act-based communication among agents. This assumption may be artificial, since agents executing concurrent plans should not necessarily have to communicate in order to function in a multi-agent environment. Finally, MAPL introduces events for the sake of handling concurrency of actions. Thus events become part of plans explicitly and are assigned to agents by a control function, which is also part of the plan.

OPT

OPT was a profound extension of PDDL2.1 by Drew McDermott from around 2003–2005.
It was an attempt to create a general-purpose notation for creating ontologies, defined as formalized conceptual frameworks for planning domains about which planning applications are to reason. Its syntax was based on PDDL, but it had a much more elaborate type system: not only did domain objects have types, but the functions/fluents defined over these objects also had types in the form of arbitrary mappings, which could be generic, so their parameters (the domain and range of the generic mapping) could be defined with variables that could themselves have even higher-level types. Users could also make use of higher-order constructs, such as explicit λ-expressions, allowing for efficient type inference. OPT was basically intended to be upwardly compatible with PDDL2.1. The notation for processes and durative actions was borrowed mainly from PDDL+ and PDDL2.1, but beyond that OPT offered many other significant extensions.

PPDDL

PPDDL1.0 was the official language of the probabilistic track of the 4th and 5th IPC in 2004 and 2006, respectively.
It extended PDDL2.1 with probabilistic effects, reward fluents, goal rewards, and goal-achieved fluents. Eventually these changes allowed PPDDL1.0 to realize Markov Decision Process planning, where there may be uncertainty in the state-transitions, but the environment is fully observable for the planner/agent.
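A brief sketch of a probabilistic effect combined with the reward fluent (drive-fast, crashed and the other names are invented for this example):

(:action drive-fast
  :parameters (?t - truck ?from ?to - place)
  :precondition (at ?t ?from)
  ; with probability 0.9 the truck arrives and reward is gained,
  ; with probability 0.1 it crashes
  :effect (and (not (at ?t ?from))
               (probabilistic 0.9 (and (at ?t ?to) (increase (reward) 10))
                              0.1 (crashed ?t))))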

APPL

APPL is a newer variant of NDDL from 2006, which is more abstract than most existing planning languages such as PDDL or NDDL.
The goal of this language was to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications, such as power management or automated rendezvous in future manned spacecraft. APPL used the same concepts as NDDL, extended with actions and some other constructs, but its expressive power is still much less than that of PDDL.

RDDL

RDDL (Relational Dynamic Influence Diagram Language) was the official language of the uncertainty track of the 7th IPC in 2011.
Conceptually it is based on PPDDL1.0 and PDDL3.0, but practically it is a completely different language both syntactically and semantically. The introduction of partial observability is one of the most important changes in RDDL compared to PPDDL1.0. It allows efficient description of Markov Decision Processes and Partially Observable Markov Decision Processes by representing everything with variables. This way RDDL departs from PDDL significantly. Grounded RDDL corresponds to Dynamic Bayesian Networks similarly to PPDDL1.0, but RDDL is more expressive than PPDDL1.0.

MA-PDDL

MA-PDDL is a minimalistic, modular extension of PDDL3.1 introduced in 2012 that allows planning by and for multiple agents. The addition is compatible with all the features of PDDL3.1 and addresses most of the issues of MAPL. It adds the possibility to distinguish between the possibly different actions of different agents. Similarly different agents may have different goals and/or metrics. The preconditions of actions now may directly refer to concurrent actions and thus actions with interacting effects can be represented in a general, flexible way. Moreover, as kind of syntactic sugar, a simple mechanism for the inheritance and polymorphism of actions, goals and metrics was also introduced in MA-PDDL. Since PDDL3.1 assumes that the environment is deterministic and fully observable, the same holds for MA-PDDL, i.e. every agent can access the value of every state fluent at every time-instant and observe every previously executed action of each agent, and also the concurrent actions of agents unambiguously determine the next state of the environment. This was improved later by the addition of partial-observability and probabilistic effects.
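As a rough, hypothetical sketch of the style of this extension (the :agent field and all names here are illustrative assumptions, not quotations of the official grammar), an action schema may be associated with the agent executing it:

(:action push-button
  :agent ?a - operator
  :parameters (?b - button)
  :precondition (at ?a ?b)
  :effect (pressed ?b))

Different agents (e.g. different subtypes of operator) can thus be given different action schemas, and likewise different goals and metrics.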

Example

This is the domain definition of a STRIPS instance for the automated planning of a robot with two gripper arms.


(define (domain gripper-strips)
  (:predicates (room ?r) (ball ?b) (gripper ?g) (at-robby ?r)
               (at ?b ?r) (free ?g) (carry ?o ?g))

  ; Move the robot from room ?x to room ?y.
  (:action move
    :parameters (?x ?y)
    :precondition (and (room ?x) (room ?y) (at-robby ?x))
    :effect (and (at-robby ?y)
                 (not (at-robby ?x))))

  ; Pick up a ball with a free gripper in the robot's current room.
  (:action pick-up
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (at ?obj ?room) (at-robby ?room) (free ?gripper))
    :effect (and (carry ?obj ?gripper)
                 (not (at ?obj ?room))
                 (not (free ?gripper))))

  ; Drop a carried ball in the robot's current room.
  (:action drop
    :parameters (?obj ?room ?gripper)
    :precondition (and (ball ?obj) (room ?room) (gripper ?gripper)
                       (carry ?obj ?gripper) (at-robby ?room))
    :effect (and (at ?obj ?room)
                 (free ?gripper)
                 (not (carry ?obj ?gripper)))))

And this is the problem definition that instantiates the previous domain definition with a concrete environment of two rooms and two balls.

(define (problem gripper-two-balls)
  (:domain gripper-strips)
  (:objects rooma roomb ball1 ball2 left right)
  ; Initially the robot and both balls are in room A, and both grippers are free.
  (:init (room rooma)
         (room roomb)
         (ball ball1)
         (ball ball2)
         (gripper left)
         (gripper right)
         (at-robby rooma)
         (free left)
         (free right)
         (at ball1 rooma)
         (at ball2 rooma))
  ; The goal is to have both balls in room B.
  (:goal (and (at ball1 roomb)
              (at ball2 roomb))))