US 12,257,727 B2
Event-driven visual-tactile sensing and learning for robots
Chee Keong Tee, Singapore (SG); Hian Hian See, Singapore (SG); Brian Lim, Singapore (SG); Soon Hong Harold Soh, Singapore (SG); Tasbolat Taunyazov, Singapore (SG); Weicong Sng, Singapore (SG); Sheng Yuan Jethro Kuan, Singapore (SG); and Abdul Fatir Ansari, Singapore (IN)
Assigned to NATIONAL UNIVERSITY OF SINGAPORE, Singapore (SG)
Appl. No. 18/010,656
Filed by NATIONAL UNIVERSITY OF SINGAPORE, Singapore (SG)
PCT Filed Jun. 15, 2021, PCT No. PCT/SG2021/050350
§ 371(c)(1), (2) Date Dec. 15, 2022,
PCT Pub. No. WO2021/256999, PCT Pub. Date Dec. 23, 2021.
Claims priority of application No. 10202005663U (SG), filed on Jun. 15, 2020.
Prior Publication US 2023/0330859 A1, Oct. 19, 2023
Int. Cl. B25J 9/00 (2006.01); B25J 9/16 (2006.01); B25J 13/08 (2006.01); G06N 3/049 (2023.01)
CPC B25J 9/1697 (2013.01) [B25J 9/161 (2013.01); B25J 13/084 (2013.01); G06N 3/049 (2013.01)] 14 Claims
OG exemplary drawing
 
1. A classifying sensing system comprising:
a first spiking neural network (SNN) encoder configured for encoding an event-based output of a vision sensor into individual vision modality spiking representations with a first output size;
a second SNN encoder configured for encoding an event-based output of a tactile sensor into individual tactile modality spiking representations with a second output size;
a combination layer configured for merging the vision modality spiking representations and the tactile modality spiking representations; and
a task SNN configured to receive the merged vision modality spiking representations and tactile modality spiking representations and output vision-tactile modality spiking representations with a third output size for classification.
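The architecture recited in claim 1 maps onto a small spiking network in a straightforward way: two modality-specific SNN encoders, a merge step, and a downstream task SNN. The following PyTorch sketch is illustrative only, not the patented implementation: it assumes leaky integrate-and-fire (LIF) dynamics for each SNN, channel-wise concatenation as the combination layer, and a spike-count readout for classification. All class names, layer sizes, and hyperparameters (beta, threshold, the usage shapes) are hypothetical; the claim itself does not fix the neuron model or the merge operation.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """One spiking layer: linear synapse followed by leaky integrate-and-fire dynamics."""
    def __init__(self, in_features, out_features, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta = beta            # membrane potential decay per time step (assumed value)
        self.threshold = threshold  # firing threshold (assumed value)

    def forward(self, spikes):      # spikes: (T, B, in_features) binary event tensor
        T = spikes.shape[0]
        mem = torch.zeros(spikes.shape[1], self.fc.out_features, device=spikes.device)
        out = []
        for t in range(T):
            mem = self.beta * mem + self.fc(spikes[t])  # integrate weighted input spikes
            spk = (mem >= self.threshold).float()       # fire where threshold is crossed
            mem = mem - spk * self.threshold            # soft reset of fired neurons
            out.append(spk)
        return torch.stack(out)     # (T, B, out_features) spiking representation

class VisualTactileSNN(nn.Module):
    """Sketch of the claimed system: two modality encoders, a combination layer, a task SNN."""
    def __init__(self, vision_dim, tactile_dim, d_vis, d_tac, num_classes):
        super().__init__()
        self.vision_encoder = LIFLayer(vision_dim, d_vis)     # first SNN encoder (first output size)
        self.tactile_encoder = LIFLayer(tactile_dim, d_tac)   # second SNN encoder (second output size)
        self.task_snn = LIFLayer(d_vis + d_tac, num_classes)  # task SNN (third output size)

    def forward(self, vision_events, tactile_events):
        v = self.vision_encoder(vision_events)    # (T, B, d_vis)
        u = self.tactile_encoder(tactile_events)  # (T, B, d_tac)
        merged = torch.cat([v, u], dim=-1)        # combination layer: concatenate spike trains
        spikes = self.task_snn(merged)            # (T, B, num_classes)
        return spikes.sum(dim=0)                  # spike counts per class, used for classification

# Hypothetical usage: 100 time bins, batch of 4, 1024 vision channels, 78 tactile taxels.
model = VisualTactileSNN(vision_dim=1024, tactile_dim=78, d_vis=50, d_tac=50, num_classes=10)
vision_events = (torch.rand(100, 4, 1024) < 0.05).float()
tactile_events = (torch.rand(100, 4, 78) < 0.05).float()
prediction = model(vision_events, tactile_events).argmax(dim=-1)  # (4,) predicted classes
```

Note that the hard threshold in LIFLayer is non-differentiable, so training such a network end to end would require a surrogate-gradient method; this sketch covers the forward (inference) pass only.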