US 12,435,990 B2
Personal protective equipment for navigation and map generation within a visually obscured environment
John M. Kruse, Minneapolis, MN (US); Elisa J. Collins, Oakdale, MN (US); Nicholas T. Gabriel, Grand Rapids, MN (US); Richard J. Sabacinski, Charlotte, NC (US); Darin K. Thompson, Huntersville, NC (US); Adam C. Nyland, St. Paul, MN (US); and Samuel J. Fahey, Woodbury, MN (US)
Assigned to 3M Innovative Properties Company, St. Paul, MN (US)
Appl. No. 18/030,912
Filed by 3M INNOVATIVE PROPERTIES COMPANY, St. Paul, MN (US)
PCT Filed Sep. 1, 2021, PCT No. PCT/IB2021/057991
§ 371 (c)(1), (2) Date Apr. 7, 2023
PCT Pub. No. WO2022/084761, PCT Pub. Date Apr. 28, 2022.
Claims priority of provisional application 63/198,441, filed on Oct. 19, 2020.
Prior Publication US 2023/0384114 A1, Nov. 30, 2023
Int. Cl. G01C 21/00 (2006.01); A62B 18/08 (2006.01); G01C 21/16 (2006.01); G01C 21/20 (2006.01); G10L 15/22 (2006.01); G06V 40/20 (2022.01)
CPC G01C 21/3811 (2020.08) [A62B 18/08 (2013.01); G01C 21/165 (2013.01); G01C 21/1652 (2020.08); G01C 21/206 (2013.01); G01C 21/3848 (2020.08); G01C 21/3856 (2020.08); G01C 21/3859 (2020.08); G10L 15/22 (2013.01); G06V 40/20 (2022.01)] 13 Claims
OG exemplary drawing
 
1. A system comprising:
a first personal protective equipment (PPE) configured to be worn by an agent, wherein the first PPE includes a sensor assembly comprising a radar device configured to generate radar data, a microphone configured to capture speech input from the agent, and an inertial measurement device configured to generate inertial data; and
at least one computing device comprising a memory and one or more processors coupled to the memory, wherein the at least one computing device is configured to:
process sensor data from the sensor assembly, wherein the sensor data includes at least the radar data and the inertial data;
generate pose data of the agent based on the processed sensor data, wherein the pose data includes a location and an orientation of the agent as a function of time;
process a first speech input to identify an item of interest in a hazardous environment in which the first PPE is deployed;
form mapping information for the hazardous environment with the item of interest being marked, based on the processed first speech input;
combine information obtained from the first speech input with the pose data of the agent within a space to discern information about surroundings of the agent, thereby representing a portion of the hazardous environment;
receive, from a second PPE of the system, a second speech input identifying the item of interest;
determine a location difference between the item of interest as identified in the first speech input and as identified in the second speech input; and
update the mapping information based on the determined location difference.
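For illustration only, the sketch below traces the data flow recited in claim 1: propagating agent pose from radar- and inertial-derived motion estimates, attaching a speech-tagged item of interest to the map at the agent's pose, and reconciling a second agent's report of the same item by its location difference. It is a minimal Python sketch, not the patented implementation; every name (Pose, MapStore, dead_reckon, reconcile_item) is hypothetical, and the midpoint update is only a stand-in for whatever reconciliation the claimed system actually performs.

```python
"""Hypothetical sketch of the claim-1 pipeline; all names are illustrative."""

from dataclasses import dataclass, field
import math


@dataclass
class Pose:
    """Agent location and orientation as a function of time."""
    t: float        # timestamp (s)
    x: float        # east position (m)
    y: float        # north position (m)
    heading: float  # orientation (rad)


def dead_reckon(prev: Pose, dt: float, speed: float, yaw_rate: float) -> Pose:
    """Propagate pose from fused sensor estimates (e.g., speed inferred
    from radar returns, yaw rate from the inertial measurement device)."""
    heading = prev.heading + yaw_rate * dt
    return Pose(
        t=prev.t + dt,
        x=prev.x + speed * dt * math.cos(heading),
        y=prev.y + speed * dt * math.sin(heading),
        heading=heading,
    )


@dataclass
class MapStore:
    """Mapping information: items of interest keyed by a spoken label."""
    items: dict = field(default_factory=dict)  # label -> (x, y)

    def mark_from_speech(self, label: str, pose: Pose) -> None:
        """Combine a recognized speech label with the agent's current
        pose so the item is marked at the agent's location."""
        self.items[label] = (pose.x, pose.y)

    def reconcile_item(self, label: str, other_xy: tuple) -> float:
        """Compare a second agent's report of the same item, return the
        location difference, and update the map (here, with the midpoint)."""
        x1, y1 = self.items[label]
        x2, y2 = other_xy
        diff = math.hypot(x2 - x1, y2 - y1)
        self.items[label] = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        return diff


if __name__ == "__main__":
    # First agent advances through the obscured environment.
    pose = Pose(t=0.0, x=0.0, y=0.0, heading=0.0)
    for _ in range(10):
        pose = dead_reckon(pose, dt=0.5, speed=1.2, yaw_rate=0.05)

    # First speech input marks an item of interest at the agent's pose.
    hazard_map = MapStore()
    hazard_map.mark_from_speech("stairwell", pose)

    # Second PPE reports the same item elsewhere; the map is updated
    # based on the determined location difference.
    difference = hazard_map.reconcile_item("stairwell", (5.5, 2.0))
    print(f"location difference: {difference:.2f} m")
    print(f"updated map: {hazard_map.items}")
```

Running the sketch prints the location difference between the two reports and the reconciled item position, mirroring the claim's final "update the mapping information" step under the assumptions stated above.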