US 12,011,828 B2
Method for controlling a plurality of robot effectors
Jérôme Monceaux, Paris (FR); Thibault Hervier, Vincennes (FR); and Aymeric Masurelle, Paris (FR)
Assigned to SPOON, Paris (FR)
Appl. No. 17/052,703
Filed by SPOON
PCT Filed Apr. 26, 2019, PCT No. PCT/FR2019/050983
§ 371(c)(1), (2) Date Mar. 10, 2021,
PCT Pub. No. WO2019/211552, PCT Pub. Date Nov. 7, 2019.
Claims priority of application No. 1853868 (FR), filed on May 4, 2018.
Prior Publication US 2022/0009082 A1, Jan. 13, 2022
Int. Cl. B25J 9/00 (2006.01); B25J 9/16 (2006.01); B25J 15/00 (2006.01)
CPC B25J 9/0084 (2013.01) [B25J 9/1602 (2013.01); B25J 15/0052 (2013.01)] 7 Claims
OG exemplary drawing
 
1. A method for controlling a plurality of effectors of a robot by a plurality of primitives made up of parameterizable coded functions:
the plurality of primitives being activated conditionally by an action selection system, the action selection system:
comprising a declarative memory wherein is stored a dynamic library of rules, each associating a context with an action; and
based on a list of coded objects stored in a memory that determines the robot's representation of the world,
the coded objects stored in the memory being computed by perception functions;
the perception functions being computed from signals provided by one or more sensors of the robot;
wherein the method is based on associating, at every step, coded objects with a sequence of characters corresponding to their semantic description, comprising:
a semantic description of the coded objects stored in the memory, made up of a string of characters representing a perception function of the robot and another string of characters representing a perceived object;
a semantic description of the activated primitives made up of a string of characters representing a possible action of the robot and another, optional string of characters representing optional parameters of the possible action of the robot;
a semantic description of the rules, made up of the combination of a string of characters representing the associated context and another string of characters representing an associated action; and
recording new rules associated with one or more new actions and one or more new contexts, via a learning module comprising a speech recognition module, a gesture recognition module, a mimicry recognition module, and a semantic analysis module which analyzes sentences spoken by an operator to extract the one or more new actions and the one or more new contexts defining the new rules.
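
The action selection architecture recited in the claim (a dynamic library of rules, each associating a context with an action, conditionally activating parameterizable primitives according to coded objects held in memory) can be pictured with the minimal sketch below. It is illustrative only and not part of the claim; every class, function and string name (CodedObject, ActionSelectionSystem, "vision.face_detector", "wave_arm", and so on) is hypothetical, and Python is used merely as a convenient notation.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class CodedObject:
        # Semantic description of a coded object: a string naming the perception
        # function plus a string naming the perceived object.
        perception_function: str      # e.g. "vision.face_detector"
        perceived_object: str         # e.g. "human_face"

        def semantic(self) -> str:
            return self.perception_function + ":" + self.perceived_object

    @dataclass
    class Rule:
        # One entry of the declarative memory: a context associated with an action.
        context: str                              # semantic description of the context
        action: str                               # name of the primitive to activate
        parameters: Dict[str, str] = field(default_factory=dict)

    class ActionSelectionSystem:
        # Holds the dynamic library of rules and the primitives it can activate.
        def __init__(self, primitives: Dict[str, Callable[..., None]]):
            self.rules: List[Rule] = []
            self.primitives = primitives          # parameterizable coded functions

        def add_rule(self, rule: Rule) -> None:
            self.rules.append(rule)

        def step(self, world: List[CodedObject]) -> None:
            # Activate every primitive whose rule context matches a coded object
            # currently present in the robot's representation of the world.
            perceived = {obj.semantic() for obj in world}
            for rule in self.rules:
                if rule.context in perceived:
                    self.primitives[rule.action](**rule.parameters)

    # Usage: one primitive, one rule, one perceived object triggering the rule.
    primitives = {"wave_arm": lambda speed="slow": print("waving arm, speed=" + speed)}
    system = ActionSelectionSystem(primitives)
    system.add_rule(Rule(context="vision.face_detector:human_face",
                         action="wave_arm",
                         parameters={"speed": "fast"}))
    system.step([CodedObject("vision.face_detector", "human_face")])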
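The three kinds of semantic descriptions listed in the claim (coded objects, activated primitives, rules) are each a combination of two character strings. The short sketch below illustrates such string composition; the separators (":", "->", parentheses) are assumptions made for illustration and are not taken from the patent.

    from typing import Optional

    def describe_object(perception_function: str, perceived_object: str) -> str:
        # Coded object: perception function string + perceived object string.
        return perception_function + ":" + perceived_object

    def describe_primitive(action: str, parameters: Optional[str] = None) -> str:
        # Activated primitive: possible action string + optional parameter string.
        return action if parameters is None else action + "(" + parameters + ")"

    def describe_rule(context: str, action: str) -> str:
        # Rule: associated context combined with the associated action.
        return context + " -> " + action

    print(describe_object("vision.face_detector", "human_face"))
    print(describe_primitive("wave_arm", "speed=fast"))
    print(describe_rule("vision.face_detector:human_face", "wave_arm(speed=fast)"))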
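The learning module records new rules extracted from sentences spoken by an operator. The toy semantic analysis below assumes a "when <context>, <action>" sentence pattern purely for illustration; the patent specifies neither the sentence grammar nor the speech, gesture and mimicry recognition front ends that would feed such an analysis.

    import re
    from typing import Optional, Tuple

    def extract_rule(sentence: str) -> Optional[Tuple[str, str]]:
        # Return the (new context, new action) pair found in an operator's
        # sentence, or None if the sentence does not define a rule.
        match = re.match(r"when\s+(?P<context>.+?)\s*,\s*(?P<action>.+)",
                         sentence.strip(), re.IGNORECASE)
        if match is None:
            return None
        return match.group("context"), match.group("action")

    # A transcribed operator sentence yielding a new rule for the dynamic library.
    print(extract_rule("When you see a human face, wave your arm"))
    # -> ('you see a human face', 'wave your arm')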