US 12,093,880 B2
Edge computing device and system for vehicle, container, railcar, trailer, and driver verification
Ashutosh Prasad, Dallas, TX (US); and Vivek Prasad, Patna (IN)
Assigned to KoiReader Technologies, Inc., Dallas, TX (US)
Appl. No. 17/910,420
Filed by KoiReader Technologies, Inc., Dallas, TX (US)
PCT Filed Mar. 10, 2021, PCT No. PCT/US2021/021699
§ 371(c)(1), (2) Date Sep. 9, 2022,
PCT Pub. No. WO2021/183641, PCT Pub. Date Sep. 16, 2021.
Claims priority of provisional application 62/988,082, filed on Mar. 11, 2020.
Prior Publication US 2023/0114688 A1, Apr. 13, 2023
Int. Cl. G06Q 10/0833 (2023.01); G06V 20/00 (2022.01); G06V 20/59 (2022.01); G06V 40/16 (2022.01)
CPC G06Q 10/0833 (2013.01) [G06V 20/00 (2022.01); G06V 20/59 (2022.01); G06V 20/593 (2022.01); G06V 40/16 (2022.01)] 20 Claims
OG exemplary drawing
 
16. An EDGE computing system comprising:
one or more image devices;
one or more depth sensors;
one or more processors; and
one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving, from the one or more image devices, first image data associated with an exterior of a vehicle;
receiving, from the one or more image devices, second image data associated with an operator of the vehicle;
receiving, from the one or more image devices, third image data associated with an exterior of a container associated with the vehicle;
receiving, from the one or more depth sensors, first depth data associated with the exterior of the vehicle;
receiving, from the one or more depth sensors, second depth data associated with the operator of the vehicle;
receiving, from the one or more depth sensors, third depth data associated with the exterior of the container associated with the vehicle;
determining, based at least in part on inputting the first image data and the first depth data into a first machine learned neural network, an identity of the vehicle, the first machine learned neural network trained based at least in part on image data and depth data of vehicles;
determining, based at least in part on inputting the second image data and the second depth data into a second machine learned neural network, an identity of the operator, the second machine learned neural network trained based at least in part on image data and depth data of operators;
determining, based at least in part on inputting the third image data and the third depth data into a third machine learned neural network, an identity of the container, the third machine learned neural network trained based at least in part on image data and depth data of containers;
determining an identity of an entity associated with contents of the container based at least in part on the identity of the vehicle, the identity of the operator, and the identity of the container;
identifying a document to be completed based at least in part on the identity of the entity;
completing the document based at least in part on content extracted from the first image data, the second image data, and the third image data; and
transmitting, via one or more networks, the document to a remote system associated with the entity.
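The exemplary claim above describes a three-network verification pipeline: image and depth data identify the vehicle, the operator, and the container; the three identities resolve to a responsible entity; and a document for that entity is completed and transmitted. A minimal Python sketch of that flow follows. Every name here (the stand-in recognizers, the lookup tables, the document fields) is invented for illustration and is not the patented implementation; a real system would run trained neural networks on the EDGE device in place of the stub functions.

```python
from typing import Callable

# Stand-ins for the three machine learned neural networks recited in the
# claim. In a deployed system each would be a trained model consuming the
# paired image and depth data (e.g. plate recognition, face recognition
# with depth-based liveness, ISO 6346 container-code OCR).
def identify_vehicle(image, depth):
    return "vehicle-42"

def identify_operator(image, depth):
    return "driver-7"

def identify_container(image, depth):
    return "MSKU1234565"

# Illustrative lookups: the triple of identities resolves to the entity
# associated with the container contents, and the entity determines which
# document must be completed. All values are hypothetical.
ENTITY_TABLE = {("vehicle-42", "driver-7", "MSKU1234565"): "Acme Logistics"}
DOCUMENT_TABLE = {"Acme Logistics": "bill_of_lading"}

def verify_and_document(first_img, second_img, third_img,
                        first_depth, second_depth, third_depth,
                        transmit: Callable[[str, dict], None]) -> dict:
    """Run the claimed pipeline: identify, resolve entity, complete
    and transmit the document."""
    vehicle = identify_vehicle(first_img, first_depth)
    operator = identify_operator(second_img, second_depth)
    container = identify_container(third_img, third_depth)

    entity = ENTITY_TABLE[(vehicle, operator, container)]
    doc_type = DOCUMENT_TABLE[entity]

    # "Completing the document" is modeled here as filling fields from
    # the content extracted out of the three image streams.
    document = {"type": doc_type, "vehicle": vehicle,
                "operator": operator, "container": container}

    # Transmission to the remote system associated with the entity is
    # abstracted behind the injected `transmit` callable.
    transmit(entity, document)
    return document
```

Injecting `transmit` as a callable keeps the sketch testable without a network stack; a deployment would substitute an HTTPS or message-queue client there.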