US 12,270,657 B2
Scene intelligence for collaborative semantic mapping with mobile robots
Ruchika Singh, Chandler, AZ (US); Mandar Chincholkar, Portland, OR (US); Hassnaa Moustafa, San Jose, CA (US); Francesc Guim Bernat, Barcelona (ES); and Rita Chattopadhyay, Chandler, AZ (US)
Assigned to Intel Corporation, Santa Clara, CA (US)
Filed by Intel Corporation, Santa Clara, CA (US)
Filed on Mar. 25, 2022, as Appl. No. 17/704,934.
Prior Publication US 2022/0214170 A1, Jul. 7, 2022
Int. Cl. G01C 21/16 (2006.01); G01C 21/00 (2006.01); G06N 20/20 (2019.01)
CPC G01C 21/1656 (2020.08) [G01C 21/3811 (2020.08); G01C 21/3896 (2020.08); G06N 20/20 (2019.01)] 24 Claims
OG exemplary drawing
 
1. At least one non-transitory machine readable medium, including instructions for operating an autonomous mobile robot (AMR), which when executed by processing circuitry of the AMR, cause the AMR to:
receive an environmental map at the AMR;
cause the AMR to navigate through an environment corresponding to the environmental map;
capture, at a location during navigation of the AMR through the environment, audio or video data using a sensor of the AMR;
perform a classification of the audio or video data using a trained classifier;
identify a coordinate of the environmental map corresponding to the location in the environment where the audio or video data was captured by the sensor during navigation of the AMR;
update the environmental map to include the classification as metadata corresponding to the coordinate;
communicate the updated environmental map to an edge device; and
cause the AMR to access the environmental map from the edge device, the environmental map generated using federated learning, the federated learning based on data from a plurality of AMRs.
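The claim elements above describe a pipeline: classify sensor data captured during navigation, attach the classification as metadata at the corresponding map coordinate, and merge learning contributions from multiple AMRs at an edge device. A minimal sketch of that flow follows; all names (`SemanticMap`, `classify_frame`, `federated_average`) and the toy threshold classifier are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticMap:
    # Classification metadata keyed by (x, y) map coordinate,
    # mirroring the claim's "metadata corresponding to the coordinate".
    annotations: dict = field(default_factory=dict)

    def tag(self, coord, label):
        # Update the environmental map with the classification as
        # metadata at the coordinate where the data was captured.
        self.annotations.setdefault(coord, []).append(label)

def classify_frame(frame):
    # Stand-in for the trained audio/video classifier; a real AMR
    # would run an inference model here, not a threshold.
    return "spill" if max(frame) > 0.9 else "clear"

def federated_average(weight_sets):
    # Merge model weights contributed by a plurality of AMRs,
    # in the style of simple federated averaging (FedAvg).
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# One AMR annotates the map during navigation...
m = SemanticMap()
m.tag((3, 7), classify_frame([0.2, 0.95, 0.1]))
m.tag((5, 1), classify_frame([0.1, 0.3]))

# ...and the edge device merges classifier weights from several AMRs
# before redistributing the updated map/model to the fleet.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]])
print(m.annotations)  # {(3, 7): ['spill'], (5, 1): ['clear']}
print(merged)         # [2.0, 3.0]
```

In this sketch the edge device is the aggregation point, consistent with the claim's communication of the updated map to an edge device and subsequent access of a federated-learning-generated map from it.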