Retalis language for Information Engineering in Autonomous Robot Software

Retalis (ETALIS for Robotics) is a logic-based language for the processing and management of sensory data in robot software.

Retalis integrates ETALIS and SLR.
  • ETALIS is used for temporal and logical reasoning, and for transforming flows of data.
  • SLR is used to implement a knowledge base that maintains a history of events. SLR supports state-based representation of knowledge built from discrete sensory data, management of sensory data in active memories, and synchronization of queries over asynchronous sensory data. This division of labour is sketched below.
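
The following is a minimal Python sketch, for illustration only, of the division of labour described above: Retalis rules are actually written in ETALIS's logic-based syntax, not Python, and all names here are hypothetical. The idea is that low-level sensory events are transformed into high-level events (the ETALIS role) while a bounded history of events is kept for later queries (the SLR role).

    # Illustration only: Retalis rules are written in ETALIS's logic syntax,
    # not Python. This mimics the division of labour described above.
    from collections import deque

    class EventFlow:
        def __init__(self, history_size=1000):
            # SLR-like active memory: a bounded history of raw sensory events.
            self.history = deque(maxlen=history_size)

        def process(self, event):
            """ETALIS-like step: derive a high-level event from raw input."""
            first_time = not self._seen_before(event["id"])
            self.history.append(event)
            if event["type"] == "object_seen" and first_time:
                return {"type": "object_appeared", "id": event["id"]}
            return None

        def _seen_before(self, obj_id):
            return any(e["type"] == "object_seen" and e["id"] == obj_id
                       for e in self.history)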


  • Use case: see the Retalis wiki
    Results: see the publication section
    Status: released (see the Retalis wiki)

    RobAPL Language for BDI-Based Autonomous Robot Control

    RobAPL extends the 2APL agent programming language with PLEXIL-like plan representation and execution capabilities for autonomous robot programming.
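
    PLEXIL represents plans as trees of nodes whose execution is guarded by explicit conditions. The following is a hedged Python sketch of what a PLEXIL-like plan node could look like; RobAPL's actual representation may differ, and all names here are hypothetical.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class PlanNode:
        # A PLEXIL-style node: execution is guarded by explicit conditions.
        name: str
        start_condition: Callable[[], bool]      # when the node may begin
        invariant_condition: Callable[[], bool]  # must hold while executing
        action: Callable[[], None]               # primitive command (or no-op)
        children: List["PlanNode"] = field(default_factory=list)

        def execute(self):
            if not self.start_condition():
                return "WAITING"
            self.action()
            for child in self.children:
                if not self.invariant_condition():
                    return "FAILED"  # invariant violated: abort the subtree
                child.execute()
            return "FINISHED"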

    Results: see RobAPL paper
    Status: in development
    Release: by July 2015

    NAO Perception and Action Capabilities


    Retalis-ROS Integration: NAO recognizes the appearance, disappearance and re-appearance of objects, as well as updates of their positions beyond a certain threshold, processing tens of events per second. Only high-level events are sent to 2APL (the control component, implemented in the 2APL agent programming language), which are then communicated to the human user through voice. NAO can also be asked to follow an object with its head. This is done by 2APL subscribing the gaze-control component to ETALIS for position-update events of the object of interest. In this way, 2APL (the control component) itself stays out of the loop and does not need to process such events.
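
    The following is a hypothetical Python (rospy) sketch of the gaze-control side of this loop: the node subscribes to position-update events for the object of interest (published by the Retalis/ETALIS component in our setup) and steers NAO's head, so the 2APL control layer never has to process the individual position events. The topic name and the use of PointStamped are assumptions for illustration.

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import PointStamped

    def on_position_update(msg):
        # Turn the position event into a head (pan/tilt) command; the actual
        # mapping to NAO joint angles is omitted here.
        rospy.loginfo("tracking object at (%.2f, %.2f, %.2f)",
                      msg.point.x, msg.point.y, msg.point.z)

    if __name__ == "__main__":
        rospy.init_node("gaze_control")
        # Hypothetical topic on which ETALIS publishes position-update events.
        rospy.Subscriber("/retalis/object_position_updates", PointStamped,
                         on_position_update)
        rospy.spin()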



    ROS-2APL Integration: The NAO robot is controlled by voice. It can remember people's faces and greet them by name. Functional modules such as voice recognition, face recognition and NAO's movements are developed in the ROS (Robot Operating System) framework. The application logic is implemented in the 2APL agent programming language.
    The face recognition package, developed by Pouyan Ziafati (University of Luxembourg), is accessible here.
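
    The following is a hypothetical Python (rospy) sketch of how such a functional ROS module can report results to the agent layer: the face-recognition node forwards only a symbolic percept with the recognized name, leaving the decision of what to say to 2APL. Topic names are assumptions for illustration.

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String

    class FaceReporter:
        def __init__(self):
            # Hypothetical topic names for illustration.
            self.percept_pub = rospy.Publisher("/agent/percepts", String,
                                               queue_size=10)
            rospy.Subscriber("/face_recognition/result", String, self.on_face)

        def on_face(self, msg):
            # Forward a symbolic percept such as "face(alice)" to the agent.
            self.percept_pub.publish(String(data="face(%s)" % msg.data))

    if __name__ == "__main__":
        rospy.init_node("face_reporter")
        FaceReporter()
        rospy.spin()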



    Localization by Head Marker and Odometry: NAO uses odometry for smoother localization compared to using the marker alone. Because the odometry error caused by NAO's imprecise walking accumulates over time, the position information obtained by tracking the marker on top of the head is used to correct it. The implementation uses the following ROS packages: footstep_planner, ar_pose, map_server, tf, the NAO stack and usb_cam.
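
    The following is a hypothetical Python sketch of the correction step: whenever the head marker is visible, its pose in the map frame is looked up via tf and compared against the odometry estimate to measure the accumulated drift. Frame names are assumptions for illustration.

    #!/usr/bin/env python
    import rospy
    import tf

    if __name__ == "__main__":
        rospy.init_node("marker_drift_check")
        listener = tf.TransformListener()
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            try:
                # Marker-based estimate of the robot's position in the map.
                (marker_t, _) = listener.lookupTransform("/map", "/head_marker",
                                                         rospy.Time(0))
                # Position accumulated by odometry since the last correction.
                (odom_t, _) = listener.lookupTransform("/map", "/odom",
                                                       rospy.Time(0))
                drift = [m - o for m, o in zip(marker_t, odom_t)]
                rospy.loginfo("odometry drift: %s", drift)
                # A full implementation would re-publish a corrected
                # map->odom transform here to cancel the drift.
            except (tf.LookupException, tf.ConnectivityException,
                    tf.ExtrapolationException):
                pass
            rate.sleep()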





    Localization by Head Marker: In addition to NAO's imprecise walking, the error in marker position estimation causes NAO to observe that it has deviated from its planned path and to replan many times, resulting in discontinuous movement. The implementation uses the following ROS packages: footstep_planner, ar_pose, map_server, tf, the NAO stack and usb_cam.


    R-ENVI toolset

    In cooperation with the Delft University of Technology

    Description: The Nao Environment was originally written by the Delft University of Technology and conforms to the EIS (Environment Interface Standard). This environment, written in Java, combined with GOAL (an agent programming language also developed at Delft) makes it possible to program Nao robots at a high level of abstraction, focusing on the robot's knowledge rather than on the implementation details of actions.

    The environment developed at Delft also provides the functions needed to connect to the robots and to invoke functions remotely. As a first step, we extended the Nao Environment with a mechanism for adding new actions contained in jar files.

    The original version of the environment does not return any sensor information; to receive information from a particular set of sensors, one had to modify the Java code accordingly. Our version of the environment makes it easy to convert selected sensor information into percepts usable in GOAL. This is done using configuration files that can be created with the newly developed Nao Environment Configurator.

    The Nao Environment Configurator is a tool built to make configuring the Nao Environment easy: it lets users select the sensors of interest and specify, in a well-defined way, how sensor information is converted into percepts that GOAL can interpret.
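
    Although the environment itself is written in Java, the following Python sketch illustrates the kind of sensor-to-percept mapping that such a configuration file defines; the sensor keys and percept templates are hypothetical.

    # Illustration only; the actual environment is written in Java, and the
    # sensor keys and percept templates below are hypothetical.
    PERCEPT_CONFIG = {
        # sensor key -> percept template
        "Battery/Charge/Sensor/Value": "battery({value})",
        "Head/Touch/Front/Sensor/Value": "head_touched({value})",
    }

    def to_percepts(sensor_readings):
        """Convert raw sensor readings into GOAL-style percept strings."""
        return [template.format(value=sensor_readings[key])
                for key, template in PERCEPT_CONFIG.items()
                if key in sensor_readings]

    # Example: to_percepts({"Battery/Charge/Sensor/Value": 0.87})
    # returns ["battery(0.87)"]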

    Together, these tools should make it quick and easy to program the robots in GOAL.

    Status: Test phase
    Download: The Nao Environment and the configurator can be downloaded here. This package includes both the environment and the configurator, as well as documentation and screenshots.
    Sourceforge: https://sourceforge.net/projects/robotenv/


    Generation of opinions for a GOAL-controlled NAO

    Description: We broadly define an opinion as a truth-value assignment to a given concept, obtained by processing a set of sensory inputs. Artificial agents use a set of beliefs about their environment to reason about and govern their actions. When the agents' environment is the real world, it is not possible to list all the beliefs the agents may need beforehand. We need to enable the agent to form an opinion about a concept when needed and to use it in reasoning. For example, if the agent needs to move through an opaque surface, it needs to form an opinion on whether the surface is impenetrable (a wall), penetrable (a curtain), or can be asked to move (an animate object or another agent).

    The outcome of this project is to enable the Nao robot to combine several sensory inputs and use them to form an opinion. When the project is completed, the robot will be able to give its opinion on given simplified questions. For example, for the question "obstacle on route" it will first check its already formed beliefs, then whether it has a rule to establish this opinion, and if neither works, it will return "I do not know".
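
    The following is a minimal Python sketch of this query behaviour, with hypothetical belief and rule structures: first check the already formed beliefs, then try to derive an opinion from a rule over the sensory inputs, and otherwise give up.

    def form_opinion(concept, beliefs, rules, sensors):
        if concept in beliefs:            # an already formed belief
            return beliefs[concept]
        if concept in rules:              # a rule over sensory inputs
            opinion = rules[concept](sensors)
            beliefs[concept] = opinion    # remember the newly formed opinion
            return opinion
        return "I do not know"

    # Example: is there an obstacle on the route?
    rules = {"obstacle_on_route": lambda s: s["sonar_distance"] < 0.3}
    print(form_opinion("obstacle_on_route", {}, rules, {"sonar_distance": 0.2}))
    # prints: True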

    Status: DONE

    Judgement Aggregation on the Nao Robot

    Description: Judgment aggregation is a formal theory of how a group of agents can aggregate individual judgments on logically connected propositions into a collective judgment on the same propositions. It applies aggregation functions to sets of individual truth-value assignments to produce a single collective assignment for the group.

    The Nao robot is an autonomous, programmable, medium-sized humanoid robot developed by the French company Aldebaran Robotics. In a multi-agent system, judgment aggregation can be used to generate group beliefs about the environment from individual sensory inputs and judgments. The thesis focuses on evaluating the use of judgment aggregation techniques on the Nao robot in a multi-agent scenario to form collective beliefs.
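
    As a simple illustration, the following Python sketch implements proposition-wise majority voting, one basic judgment aggregation rule: each robot submits a True/False judgment per proposition and the group adopts the majority value. (A full treatment must also respect the logical connections between propositions, which plain majority voting can violate.)

    def majority_aggregation(individual_judgments):
        propositions = individual_judgments[0].keys()
        collective = {}
        for p in propositions:
            votes = [judgment[p] for judgment in individual_judgments]
            collective[p] = votes.count(True) > len(votes) / 2.0
        return collective

    judgments = [
        {"obstacle": True,  "door_open": False},  # robot 1
        {"obstacle": True,  "door_open": True},   # robot 2
        {"obstacle": False, "door_open": True},   # robot 3
    ]
    print(majority_aggregation(judgments))
    # prints: {'obstacle': True, 'door_open': True}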

    Status: DONE

    Controlling Robots Using the Emotiv Epoc

    Description: The aim of this project is to control our robots using only our thoughts. For this we use the Emotiv Epoc device.

    This project is part of a Master's thesis whose main objective is to create a reliable intention-detection classifier.
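
    The following is a hypothetical Python sketch of such a classifier, assuming feature vectors (for example band-power features per channel) have already been extracted from the Emotiv Epoc signal; the choice of scikit-learn and of an SVM is an assumption for illustration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # X: one row of (hypothetical) EEG features per trial; y: intended command.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 14))    # e.g. one feature per Epoc channel
    y = rng.integers(0, 3, size=200)  # e.g. labels for forward / left / right

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy: %.2f" % clf.score(X_test, y_test))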

    Status: DONE