
The Motion Grammar


The Motion Grammar is a powerful new representation for task decomposition, perception, planning, and hybrid control that provides a computationally tractable way to control robots in uncertain environments with guarantees on correctness and completeness. The grammar represents a policy for the task, which is parsed in real time based on perceptual input. Branches of the syntax tree form the levels of a hierarchical decomposition, and individual robot sensor readings are given by tokens. We implement this approach in the interactive games of Yamakuzushi and chess on a physical robot, resulting in a system that repeatably responds to a strategic and physically unpredictable human opponent in sustained game-play.
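
To make the idea concrete, here is a minimal sketch (hypothetical grammar and token names, not the project's actual software) of a context-free policy whose terminal symbols are sensor-derived tokens. A recursive-descent parser consumes the token stream online, and each production expansion corresponds to one level of the hierarchical task decomposition.

    # Toy Motion Grammar sketch in Python (illustrative only).
    # Terminals are sensor-derived tokens; nonterminals decompose the task.

    TOKENS = ["human_clear", "piece_located", "grasped", "placed", "human_clear"]

    class Parser:
        def __init__(self, tokens):
            self.tokens = iter(tokens)
            self.lookahead = next(self.tokens, None)

        def expect(self, terminal):
            # Consume one sensor token; a real controller would invoke a
            # recovery production on mismatch instead of raising.
            if self.lookahead != terminal:
                raise SyntaxError("expected %s, got %s" % (terminal, self.lookahead))
            self.lookahead = next(self.tokens, None)

        # <move> ::= <wait> <pick> <place> <wait>
        def move(self):
            self.wait(); self.pick(); self.place(); self.wait()

        # <wait> ::= human_clear
        def wait(self):
            self.expect("human_clear")

        # <pick> ::= piece_located grasped
        def pick(self):
            self.expect("piece_located")
            self.expect("grasped")

        # <place> ::= placed
        def place(self):
            self.expect("placed")

    Parser(TOKENS).move()   # succeeds: the observed run is in the task language

In the Motion Grammar itself, productions additionally specify the control actions to execute, so parsing the sensor stream and driving the robot proceed together.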

This project is supported by the National Science Foundation.

Software

The Motion Grammar Kit implements many algorithms for formal language analysis, verification, and code generation.
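
For instance, a basic verification question for such a toolkit is whether every discrete run of a system model satisfies a regular specification. The sketch below (a generic Python illustration, not the Motion Grammar Kit's actual API) checks language inclusion between two DFAs by searching their product for a reachable state that the system accepts but the specification rejects.

    # Language-inclusion check for DFAs (illustrative sketch).
    # A DFA is (states, alphabet, delta, start, accept) with
    # delta a dict mapping (state, symbol) -> state, total over the alphabet.

    from collections import deque

    def includes(system, spec):
        """Return True iff L(system) is a subset of L(spec)."""
        _, alphabet, s_delta, s_start, s_accept = system
        _, _, p_delta, p_start, p_accept = spec
        start = (s_start, p_start)
        seen, queue = {start}, deque([start])
        while queue:
            qs, qp = queue.popleft()
            # A reachable state accepted by the system but not by the
            # specification witnesses a violating run.
            if qs in s_accept and qp not in p_accept:
                return False
            for a in alphabet:
                nxt = (s_delta[(qs, a)], p_delta[(qp, a)])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return True

    # Tiny example: the system's runs strictly alternate clear/grasp;
    # the specification forbids two consecutive grasp tokens.
    alphabet = {"clear", "grasp"}
    system = ({0, 1, 2}, alphabet,
              {(0, "clear"): 1, (0, "grasp"): 2,
               (1, "clear"): 2, (1, "grasp"): 0,
               (2, "clear"): 2, (2, "grasp"): 2},
              0, {0})
    spec = ({0, 1, 2}, alphabet,
            {(0, "clear"): 0, (0, "grasp"): 1,
             (1, "clear"): 0, (1, "grasp"): 2,
             (2, "clear"): 2, (2, "grasp"): 2},
            0, {0, 1})
    print(includes(system, spec))   # True: no alternating run grasps twice in a row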

Publications

Journal

  • Neil T. Dantam and Mike Stilman. The Motion Grammar: Analysis of a Linguistic Method for Robot Control. IEEE/RAS Transactions on Robotics, vol. 29, no. 3, pp. 704–718, 2013.

    We present the Motion Grammar: an approach to represent and verify robot control policies based on Context-Free Grammars. The production rules of the grammar represent a top-down task decomposition of robot behavior. The terminal symbols of this language represent sensor readings that are parsed in real-time. Efficient algorithms for context-free parsing guarantee that online parsing is computationally tractable. We analyze verification properties and language constraints of this linguistic modeling approach, show a linguistic basis that unifies several existing methods, and demonstrate effectiveness through experiments on a 14-DOF manipulator interacting with 32 objects (chess pieces) and an unpredictable human adversary. We provide many of the algorithms discussed as Open Source, permissively licensed software.

    @article{dantam2013motion,
      title = {The Motion Grammar: Analysis of a Linguistic Method for Robot Control},
      number = {3},
      volume = {29},
      pages = {704--718},
      journal = {IEEE/RAS Transactions on Robotics},
      author = {Neil T. Dantam and Mike Stilman},
      year = {2013}
    }
    

Conference

  • Neil T. Dantam, Ayonga Hereid, Aaron Ames, and Mike Stilman. Correct Software Synthesis for Stable Speed-Controlled Robotic Walking. Robotics: Science and Systems, 2013.

    We present a software synthesis method for speed-controlled robot walking based on supervisory control of a context-free Motion Grammar. First, we use Human-Inspired control to identify parameters for stable fixed speed walking and for transitions between fixed speeds. Next, we build a Motion Grammar representing the discrete-time control for this set of speeds. Then, we synthesize C code from this grammar and generate supervisors online to achieve desired walking speeds, ensuring correctness of discrete behavior. Finally, we demonstrate this approach on the Aldebaran NAO, showing stable walking transitions with dynamically selected speeds.

    @inproceedings{dantam2013rss,
      title = {Correct Software Synthesis for Stable Speed-Controlled Robotic Walking},
      month = {June},
      booktitle = {Robotics: Science and Systems},
      author = {Neil T. Dantam and Ayonga Hereid and Aaron Ames and Mike Stilman},
      year = {2013}
    }
    
  • Neil T. Dantam, Irfan Essa, and Mike Stilman. Linguistic Transfer of Human Assembly Tasks to Robots. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 237–242, 2012.

    We demonstrate the automatic transfer of an assembly task from human to robot. This work extends efforts showing the utility of linguistic models in verifiable robot control policies by now performing real visual analysis of human demonstrations to automatically extract a policy for the task. This method tokenizes each human demonstration into a sequence of object connection symbols, then transforms the set of sequences from all demonstrations into an automaton, which represents the task-language for assembling a desired object. Finally, we combine this assembly automaton with a kinematic model of a robot arm to reproduce the demonstrated task. (A minimal sketch of such a sequence-to-automaton construction appears after this list.)

    @inproceedings{dantam2012mgassem,
      title = {Linguistic Transfer of Human Assembly Tasks to Robots},
      pages = {237--242},
      month = {October},
      booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems},
      author = {Neil T. Dantam and Irfan Essa and Mike Stilman},
      year = {2012}
    }
    
  • Neil T. Dantam and Mike Stilman. The Motion Grammar Calculus for Context-Free Hybrid Systems. American Control Conference, pp. 5294–5301, 2012. Best Presentation in Session.

    This paper provides a method for deriving provably correct controllers for Hybrid Dynamical Systems with Context-Free discrete dynamics, nonlinear continuous dynamics, and nonlinear state partitioning. The proposed method models the system using a Context-Free Motion Grammar and specifies correct performance using a Regular language representation such as Linear Temporal Logic. The initial model is progressively rewritten via a calculus of symbolic transformation rules until it satisfies the desired specification.

    @inproceedings{dantam2012mgcalc,
      title = {The Motion Grammar Calculus for Context-Free Hybrid Systems},
      pages = {5294--5301},
      month = {June},
      booktitle = {American Control Conference},
      author = {Neil T. Dantam and Mike Stilman},
      year = {2012}
    }
    
  • Neil T. Dantam, Carlos Nieto-Granda, Henrik Christensen, and Mike Stilman. Linguistic Composition of Semantic Maps and Hybrid Controllers. International Symposium on Experimental Robotics, pp. 17–21, 2012.

    This work combines semantic maps with hybrid control models, generating a direct link between action and environment models to produce a control policy for mobile manipulation in unstructured environments. First, we generate a semantic map for our environment and design a base model of robot action. Then, we combine this map and action model using the Motion Grammar Calculus to produce a combined robot-environment model. Using this combined model, we apply supervisory control to produce a policy for the manipulation task. We demonstrate this approach on a Segway RMP-200 mobile platform.

    @inproceedings{dantam2012composition,
      title = {Linguistic Composition of Semantic Maps and Hybrid Controllers},
      pages = {17--21},
      month = {June},
      booktitle = {International Symposium on Experimental Robotics},
      author = {Neil T. Dantam and Carlos Nieto-Granda and Henrik Christensen and Mike Stilman},
      year = {2012}
    }
    
  • Neil T. Dantam, Pushkar Kolhe, and Mike Stilman. The Motion Grammar for Physical Human-Robot Games. IEEE International Conference on Robotics and Automation, pp. 5463–5469, 2011. SAIC/Georgia Tech Achievement Award.

    We introduce the Motion Grammar, a powerful new representation for robot decision making, and validate its properties through the successful implementation of a physical human-robot game. The Motion Grammar is a formal tool for task decomposition and hybrid control in the presence of significant online uncertainty. In this paper, we describe the Motion Grammar, introduce some of the formal guarantees it can provide, and represent the entire game of human-robot chess through a single formal language. This language includes game-play, safe handling of human motion, uncertainty in piece positions, misplaced and collapsed pieces. We demonstrate the simple and effective language formulation through experiments on a 14-DOF manipulator interacting with 32 objects (chess pieces) and an unpredictable human adversary.

    @inproceedings{dantam2011chess,
      title = {The Motion Grammar for Physical Human-Robot Games},
      pages = {5463--5469},
      month = {May},
      booktitle = {IEEE International Conference on Robotics and Automation},
      author = {Neil T. Dantam and Pushkar Kolhe and Mike Stilman},
      year = {2011}
    }
    
  • Neil T. Dantam and Mike Stilman. The Motion Grammar: Linguistic Perception, Planning, and Control. Robotics: Science and Systems, pp. 49–56, 2011.

    We present and analyze the Motion Grammar: a novel unified representation for task decomposition, perception, planning, and control that provides both fast online control of robots in uncertain environments and the ability to guarantee completeness and correctness. The grammar represents a policy for the task which is parsed in real-time based on perceptual input. Branches of the syntax tree form the levels of a hierarchical decomposition, and the individual robot sensor readings are given by tokens. We implement this approach in the interactive game of Yamakuzushi on a physical robot resulting in a system that repeatably competes with a human opponent in sustained gameplay for the roughly six minute duration of each match.

    @inproceedings{dantam2011yama,
      title = {The Motion Grammar: Linguistic Perception, Planning, and Control},
      pages = {49--56},
      month = {June},
      booktitle = {Robotics: Science and Systems},
      author = {Neil T. Dantam and Mike Stilman},
      year = {2011}
    }
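
    The automaton construction sketched below relates to "Linguistic Transfer of Human Assembly Tasks to Robots" above: given demonstrations tokenized into symbol sequences, a prefix-tree acceptor recognizes exactly the demonstrated sequences. This is a minimal Python illustration with hypothetical connection symbols, not the paper's full construction or its later minimization steps.

      # Build a prefix-tree acceptor from tokenized demonstrations (sketch).

      def prefix_tree_acceptor(demonstrations):
          """Return (delta, start, accept) accepting exactly the demonstrations."""
          delta, accept = {}, set()        # delta: (state, symbol) -> state
          start, next_state = 0, 1
          for demo in demonstrations:
              state = start
              for symbol in demo:
                  if (state, symbol) not in delta:
                      delta[(state, symbol)] = next_state
                      next_state += 1
                  state = delta[(state, symbol)]
              accept.add(state)            # end of one demonstrated sequence
          return delta, start, accept

      def accepts(fa, sequence):
          delta, start, accept = fa
          state = start
          for symbol in sequence:
              if (state, symbol) not in delta:
                  return False
              state = delta[(state, symbol)]
          return state in accept

      # Hypothetical connection-symbol sequences from two demonstrations.
      demos = [["connect(a,b)", "connect(b,c)", "connect(c,d)"],
               ["connect(b,c)", "connect(a,b)", "connect(c,d)"]]
      fa = prefix_tree_acceptor(demos)
      print(accepts(fa, demos[0]))                           # True
      print(accepts(fa, ["connect(c,d)", "connect(a,b)"]))   # False: never demonstrated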
    

Workshop

  • Arash Rouhani, Neil T. Dantam, and Mike Stilman. Software-Synthesis via LL(*) for Context-Free Robot Programs. 4th Workshop on Formal Methods for Robotics and Automation, RSS, 2013.

    Producing reliable software for robotic systems requires formal techniques to ensure correctness. Some popular approaches model the discrete dynamics and computation of the robot using finite state automata or linear temporal logic. We can represent more complicated systems and tasks, and still retain key guarantees on verifiability and runtime performance, by modeling the system instead with a context-free grammar. The challenge with a context-free model is the need for a more advanced software synthesis algorithm. We address this challenge by adapting the LL(*) parser generation algorithm, originally developed for program translation, to the domain of online robot control. We demonstrate this LL(*) parser generation implementation in the Motion Grammar Kit, permitting synthesis of robot control software for complex, hierarchical, and recursive tasks. (A much-simplified parsing sketch appears after this list.)

    @inproceedings{rouhani2013software,
      title = {Software-Synthesis via LL(*) for Context-Free Robot Programs},
      month = {June},
      booktitle = {4th Workshop on Formal Methods for Robotics and Automation, RSS},
      author = {Arash Rouhani and Neil T. Dantam and Mike Stilman},
      year = {2013}
    }
    
  • Neil T. Dantam, Magnus Egerstedt, and Mike Stilman. Make Your Robot Talk Correctly: Deriving Models of Hybrid System. RSS Workshop on Grounding Human-Robot Dialog for Spatial Tasks, 2011.

    Using both formal language and differential equations to model a robotic system, we introduce a calculus of transformation rules for the symbolic derivation of hybrid controllers. With a Context-Free Motion Grammar, we show how to test reachability between different regions of state-space and give several symbolic transformations to modify the set of event strings the system may generate. This approach lets one modify the language of the hybrid system, providing a way to change system behavior so that it satisfies linguistic constraints on correct operation.

    @inproceedings{dantam2011talk,
      title = {Make Your Robot Talk Correctly: Deriving Models of Hybrid System},
      month = {June},
      booktitle = {RSS Workshop on Grounding Human-Robot Dialog for Spatial Tasks},
      author = {Neil T. Dantam and Magnus Egerstedt and Mike Stilman},
      year = {2011}
    }
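
    Related to the LL(*) synthesis paper above, the sketch below shows the runtime shape of a synthesized parser in a much simplified form: an LL(1) table-driven parser over a toy task grammar with hypothetical token names. The actual LL(*) algorithm builds lookahead DFAs rather than a single-token table, and the papers above synthesize C code rather than Python; this is only an illustration.

      # Simplified stand-in for a synthesized parser: LL(1) table-driven
      # parsing of a toy task grammar (illustrative only).

      END = "$"   # end-of-input marker

      # Toy grammar:
      #   TASK -> STEP TASK | <empty>
      #   STEP -> approach grasp | retreat
      TABLE = {
          ("TASK", "approach"): ["STEP", "TASK"],
          ("TASK", "retreat"):  ["STEP", "TASK"],
          ("TASK", END):        [],
          ("STEP", "approach"): ["approach", "grasp"],
          ("STEP", "retreat"):  ["retreat"],
      }
      NONTERMINALS = {"TASK", "STEP"}

      def parse(tokens):
          stream = iter(list(tokens) + [END])
          lookahead = next(stream)
          stack = [END, "TASK"]
          while stack:
              top = stack.pop()
              if top in NONTERMINALS:
                  production = TABLE.get((top, lookahead))
                  if production is None:
                      return False              # no applicable rule: reject
                  stack.extend(reversed(production))
              else:
                  if top != lookahead:
                      return False              # terminal mismatch
                  lookahead = next(stream, None)
          return lookahead is None              # all input consumed

      print(parse(["approach", "grasp", "retreat"]))   # True
      print(parse(["approach", "retreat"]))            # False: grasp expected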
    

Technical Reports

  • Neil T. Dantam, Irfan Essa, and Mike Stilman. Algorithms for Linguistic Robot Policy Inference from Demonstration of Assembly Tasks. Technical Report GT-GOLEM-2012-002, Georgia Institute of Technology, Atlanta, GA, 2012.

    We describe several algorithms used for the inference of linguistic robot policies from human demonstration. First, we track and match objects using the Hungarian Algorithm. Then, we convert Regular Expressions to Nondeterministic Finite Automata (NFA) using the McNaughton-Yamada-Thompson Algorithm. Next, we use Subset Construction to convert to a Deterministic Finite Automaton. Finally, we minimize finite automata using either Hopcroft's Algorithm or Brzozowski's Algorithm. (Two of these steps are sketched after this list.)

    @techreport{dantam2012algorithms,
      title = {Algorithms for Linguistic Robot Policy Inference from Demonstration of Assembly Tasks},
      number = {GT-GOLEM-2012-002},
      institution = {Georgia Institute of Technology, Atlanta, GA},
      author = {Neil T. Dantam and Irfan Essa and Mike Stilman},
      year = {2012}
    }
    
  • Neil T. Dantam and Mike Stilman. The Motion Grammar: Linguistic Perception, Planning, and Control. Technical Report GT-GOLEM-2010-001, Georgia Institute of Technology, Atlanta, GA, 2010.

    We present the Motion Grammar: a novel unified representation for task decomposition, perception, planning, and hybrid control that provides a computationally tractable way to control robots in uncertain environments with guarantees on completeness and correctness. The grammar represents a policy for the task which is parsed in real-time based on perceptual input. Branches of the syntax tree form the levels of a hierarchical decomposition, and the individual robot sensor readings are given by tokens. We implement this approach in the interactive game of Yamakuzushi on a physical robot resulting in a system that repeatably competes with a human opponent in sustained game-play for matches up to six minutes.

    @techreport{dantam2010mgtech,
      title = {The Motion Grammar: Linguistic Perception, Planning, and Control},
      number = {GT-GOLEM-2010-001},
      institution = {Georgia Institute of Technology, Atlanta, GA},
      author = {Neil T. Dantam and Mike Stilman},
      year = {2010}
    }
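
    Two steps from the pipeline in "Algorithms for Linguistic Robot Policy Inference from Demonstration of Assembly Tasks" above, subset construction and Brzozowski's minimization, can be sketched compactly. This is a generic Python illustration, not the report's implementation or the Motion Grammar Kit's API.

      # Subset construction (NFA -> DFA) and Brzozowski minimization
      # (reverse, determinize, reverse, determinize). Illustrative sketch.

      def determinize(nfa):
          """Subset construction. nfa = (alphabet, delta, starts, accepts),
          with delta mapping (state, symbol) -> set of states."""
          alphabet, delta, starts, accepts = nfa
          start = frozenset(starts)
          names = {start: 0}                    # subset -> DFA state number
          dfa_delta, dfa_accepts = {}, set()
          worklist = [start]
          while worklist:
              subset = worklist.pop()
              q = names[subset]
              if subset & accepts:
                  dfa_accepts.add(q)
              for a in alphabet:
                  nxt = frozenset(s for p in subset for s in delta.get((p, a), ()))
                  if nxt not in names:
                      names[nxt] = len(names)
                      worklist.append(nxt)
                  dfa_delta[(q, a)] = {names[nxt]}   # set-valued, so reverse() applies
          return alphabet, dfa_delta, {0}, dfa_accepts

      def reverse(fa):
          """Reverse all transitions and swap start/accept sets."""
          alphabet, delta, starts, accepts = fa
          rdelta = {}
          for (p, a), targets in delta.items():
              for q in targets:
                  rdelta.setdefault((q, a), set()).add(p)
          return alphabet, rdelta, set(accepts), set(starts)

      def brzozowski_minimize(fa):
          return determinize(reverse(determinize(reverse(fa))))

      # Example: NFA accepting strings over {a, b} that end in "ab".
      alphabet = {"a", "b"}
      nfa = (alphabet, {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}, {0}, {2})
      minimal = brzozowski_minimize(determinize(nfa))
      print(len({q for (q, _) in minimal[1]}), "states in the minimized DFA")   # 3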
    

Project Members
