CPC G08B 13/08 (2013.01) [G06V 40/00 (2022.01); G07C 9/10 (2020.01); G07C 9/253 (2020.01); G07C 9/27 (2020.01)]

8 Claims
1. A controlled access gate comprising:
at least one frame (11), which defines at least one entry area (I1), at least one exit area (I2) and at least one transit area (P), wherein a user (U) is able to pass through said entry and exit areas (I1, I2) and said transit area (P),
an actuation unit (13), which allows the passage of one or more users (U),
an electronic control unit (14),
a plurality of sensors (15) or video camera sensors (16) or a plurality of mobile electronic devices containing said sensors (15, 16),
a badge reader (17), a biometric reader (17) or other equivalent device connected to said electronic control unit (14) and configured to read information on a badge or other identification element of a user (U), the reader also being configured to send the read information to the control unit (14) for processing, in order to verify the identity of the user (U) in real time, so that an authorized user (AU) is able to pass through said gate (a minimal identity-check sketch follows the claim),
a database (18) containing data of non-authorized and/or authorized (AU) users (U),
wherein said sensors or video camera sensors (15, 16) are configured to detect, at least partially, data relating to the distance (DT) between each sensor (15, 16) and each user (U) travelling along said entry and exit areas (I1, I2) or said transit area (P), said sensors (15, 16) also being configured to detect parameters relating to the instantaneous position, instantaneous speed and trajectory of each user (U), as well as images of at least one part of each user (U),
each sensor (15, 16) being fixed or movable with respect to said frame (11) and having a detection cone (CD) and a discrete unit detection resolution (UDD) for sensing at least one user (U) standing in or passing through said detection cone (CD),
each sensor (15, 16) also being configured to communicate, by means of a wireless or a wired communication, with said electronic control unit (14), so as to be identified with data relating to its relative position and orientation (x, y, z) with respect to a reference system (X, Y, Z) which is coupled with said frame (11) of the gate (see the pose-registration sketch after the claim), said data relating to the relative position and orientation of each sensor (15, 16) being sent to the electronic control unit (14) upon installation of said sensors (15, 16) and subsequently, automatically and in real time,
wherein said sensors (15, 16) are installed in a volume corresponding to said entry area (I1) and/or to said exit area (I2) and/or to said transit area (P) of said gate without constraints on positioning or orientation, so that at least two of said detection cones (CD) of respective sensors (15, 16) overlap,
said data relating to the distance (DT), said parameters relating to the instantaneous position, instantaneous speed and trajectory of each user (U) and said images of at least one part of each user (U) being sent to said electronic control unit (14), which receives and processes said data, said parameters and said images through interpolation processes and machine-learning and/or deep-learning algorithms in order to independently learn the features of said gate and to estimate the shapes and directions of each user (U) within said volume corresponding to said entry and exit areas (I1, I2) and to said transit area (P) of the gate, in order to reconstruct a scene in the space corresponding to said volume and to obtain a 2D and/or 3D detection of said users (U) with their positions, speeds and trajectories during the passage of said users (U) through said volume, starting from said parameters of instantaneous position, instantaneous speed and trajectory detected by said sensors (15, 16), and to estimate subsequent trajectories of said users (U) based on said parameters (see the trajectory-extrapolation sketch after the claim),
characterized in that each sensor (15, 16) incorporates different detection technologies and is configured to self-calibrate, via said electronic control unit (14), once each sensor (15, 16) is installed, said sensors (15, 16) being configured to automatically send their data of relative position and orientation with respect to said reference system (X, Y, Z) when said sensors (15, 16) are installed and subsequently, automatically and in real time, and
wherein a camera sensor (ST) of each sensor (15, 16) detects a partial and discrete image formed by discrete units (UDDi) or pixels of said at least one user (U), said image being coupled, inside said detection cone (CD), with the distances (DT) measured over time between each discrete unit (UDDi) and/or pixel of the image of said user (U) and said camera sensor (ST) and with the positions of each user (U) detected in said image, so as to reconstruct, through said interpolation processes and machine-learning and/or deep-learning algorithms, a 2D and/or 3D estimated image of each user (U) travelling along said entry and exit areas (I1, I2) and said transit area (P) (see the back-projection sketch after the claim).
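The claim's identity-verification path (reader (17) to control unit (14) to database (18) to actuation unit (13)) can be illustrated with a minimal sketch. Nothing below is taken from the patent itself: the class names, the in-memory sets standing in for the database (18), and the boolean "open the gate" return value are all illustrative assumptions.

```python
# Minimal sketch, assuming the database (18) is reduced to two in-memory sets.
from dataclasses import dataclass, field

@dataclass
class ControlUnit:
    authorized: set = field(default_factory=set)    # database (18): authorized users (AU)
    blacklisted: set = field(default_factory=set)   # database (18): non-authorized users

    def on_badge_read(self, user_id: str) -> bool:
        """Called by the badge/biometric reader (17); True means the
        actuation unit (13) is driven to allow the passage."""
        if user_id in self.blacklisted:
            return False
        return user_id in self.authorized

unit = ControlUnit(authorized={"U-1001"}, blacklisted={"U-0666"})
assert unit.on_badge_read("U-1001") is True    # authorized user (AU) may pass
assert unit.on_badge_read("U-0666") is False   # non-authorized user is refused
```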
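The pose registration and detection-cone test recited for each sensor (15, 16) can be sketched as follows: the sensor reports its position and orientation (x, y, z) with respect to the gate's reference system (X, Y, Z), measurements taken in the sensor frame are mapped into that reference system, and a point is tested against the detection cone (CD). The Z-Y-X Euler convention, the cone-along-+Z assumption and all names are assumptions, not the patent's disclosed method.

```python
# Pose-registration sketch: sensor frame -> gate reference system (X, Y, Z).
import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Z-Y-X Euler angles (radians) -> 3x3 rotation matrix."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

class Sensor:
    """One sensor (15, 16) with a pose in the gate frame and a detection cone (CD)."""
    def __init__(self, position, yaw, pitch, roll, half_angle, max_range):
        self.t = np.asarray(position, dtype=float)   # (x, y, z) in the gate frame
        self.R = rotation_matrix(yaw, pitch, roll)   # sensor -> gate rotation
        self.half_angle = half_angle                 # cone half-aperture [rad]
        self.max_range = max_range                   # cone depth [m]

    def to_gate_frame(self, p_sensor):
        """Map a point measured in the sensor frame into the gate frame."""
        return self.R @ np.asarray(p_sensor, dtype=float) + self.t

    def sees(self, p_gate) -> bool:
        """True if a gate-frame point lies inside this sensor's detection cone."""
        v = np.asarray(p_gate, dtype=float) - self.t
        d = np.linalg.norm(v)
        if d == 0 or d > self.max_range:
            return False
        axis = self.R @ np.array([0.0, 0.0, 1.0])    # cone axis = sensor +Z (assumed)
        return np.arccos(np.clip(axis @ v / d, -1.0, 1.0)) <= self.half_angle

def covered_by_two(sensors, p_gate) -> bool:
    """Spot-check of the claim's overlap requirement at one sample point."""
    return sum(s.sees(p_gate) for s in sensors) >= 2
```

The claim's requirement that at least two detection cones (CD) overlap can then be approximated by sampling points of the entry, exit and transit volume (I1, I2, P) and requiring covered_by_two to hold at each sample.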
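The claim leaves the trajectory-estimation machinery open ("interpolation processes and machine-learning and/or deep-learning algorithms"). As a minimal, hedged stand-in, the sketch below fits a constant-velocity model to the timestamped instantaneous positions of a user (U) and extrapolates the subsequent trajectory; a Kalman filter or a learned sequence model would be drop-in replacements for the least-squares fit.

```python
# Trajectory-extrapolation sketch under a constant-velocity assumption.
import numpy as np

def fit_constant_velocity(times, positions):
    """Least-squares fit p(t) = p0 + v * t over observed (t, position) pairs."""
    t = np.asarray(times, dtype=float)
    P = np.asarray(positions, dtype=float)          # shape (n, 2) or (n, 3)
    A = np.column_stack([np.ones_like(t), t])       # design matrix [1, t]
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)    # rows: p0, v
    return coef[0], coef[1]

def predict(p0, v, t_future):
    """Extrapolated position of the user (U) at a later time t_future."""
    return p0 + v * t_future

# Example: a user crossing the transit area (P) at roughly 1 m/s along X.
t_obs = [0.0, 0.1, 0.2, 0.3]
p_obs = [[0.00, 0.50], [0.11, 0.50], [0.19, 0.51], [0.30, 0.50]]
p0, v = fit_constant_velocity(t_obs, p_obs)
print(predict(p0, v, 0.5))   # approximately [0.5, 0.5]
```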
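Finally, coupling each discrete unit (UDDi) or pixel with its measured distance (DT) to the camera sensor (ST) amounts to a back-projection. The pinhole model below is a standard sketch, not the patent's disclosed method; the intrinsics fx, fy, cx and cy are assumed to be available, for instance from the self-calibration step.

```python
# Back-projection sketch: pixel (u, v) + measured distance DT -> 3D point.
import numpy as np

def backproject(u, v, distance, fx, fy, cx, cy):
    """Pixel plus Euclidean distance to the camera -> 3D point (sensor frame)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)                 # unit ray through the pixel
    return ray * distance                      # scale by the measured DT

def point_cloud(depth_image, fx, fy, cx, cy):
    """Back-project every valid discrete unit (UDDi) into a partial 3D cloud."""
    points = []
    h, w = depth_image.shape
    for v in range(h):
        for u in range(w):
            d = depth_image[v, u]
            if d > 0:                          # 0 = no return for this unit
                points.append(backproject(u, v, d, fx, fy, cx, cy))
    return np.array(points)
```

Each sensor-frame cloud can then be mapped into the gate frame with a pose transform like Sensor.to_gate_frame in the earlier sketch, so that clouds from overlapping detection cones (CD) merge into the single reconstructed scene the claim describes.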