US 11,703,820 B2
Monitoring management and control system based on panoramic big data
Ying Ma, Fujian (CN); Shunzhi Zhu, Fujian (CN); Yu Sun, Fujian (CN); Junwen Lu, Fujian (CN); and Keshou Wu, Fujian (CN)
Assigned to Xiamen University of Technology, Fujian (CN)
Filed by c/o Xiamen University of Technology, Fujian (CN)
Filed on Mar. 24, 2022, as Appl. No. 17/703,074.
Application 17/703,074 is a continuation of application No. PCT/CN2020/097865, filed on Jun. 24, 2020.
Claims priority of application No. 202010514142.5 (CN), filed on Jun. 8, 2020.
Prior Publication US 2022/0214657 A1, Jul. 7, 2022
Int. Cl. G05B 19/042 (2006.01)
CPC G05B 19/0428 (2013.01) [G05B 2219/24024 (2013.01)] 5 Claims
OG exemplary drawing
 
1. A monitoring management and control system based on panoramic big data, comprising:
an imaging device,
a credential of the imaging device,
a memory,
a networking device, and
a processing unit, wherein:
the imaging device is configured to detect a first object and determine an approximate location of the first object and a reliability value of the approximate location of the first object based on the credential of the imaging device, the memory, the networking device, and the processing unit,
the credential of the imaging device is configured to be a topology of a surface that is associated with the first object or a proximity between the imaging device and the first object,
the memory is configured to store a predefined location of a second object,
the imaging device is configured to determine whether the predefined location of the second object matches the approximate location of the first object within a predefined margin of error,
when there is a mismatch greater than the predefined margin of error, the first object and distributed image elements of the approximate location of the first object are stored,
when there is a match within the predefined margin of error, the imaging device determines an exact location of the imaging device and sets an estimated position according to image elements stored in the memory,
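The match/mismatch decision above can be sketched as follows. The coordinate representation, the distance metric, and the function name are assumptions for illustration; the claim fixes only that the two locations must agree within a predefined margin of error.

```python
import numpy as np

def match_within_margin(predefined, approximate, margin):
    """Compare the second object's predefined location with the first
    object's approximate location.  Hypothetical 2-D coordinates and
    Euclidean distance; the claim does not fix either choice."""
    delta = np.asarray(predefined, dtype=float) - np.asarray(approximate, dtype=float)
    return np.linalg.norm(delta) <= margin

# Within the margin: the imaging device would refine its exact location.
assert match_within_margin((10.0, 4.0), (10.3, 4.2), margin=0.5)
# Outside the margin: the object and its image elements would be stored.
assert not match_within_margin((10.0, 4.0), (12.0, 4.0), margin=0.5)
```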
the processing unit is configured to execute instructions stored in the memory to perform a method for collecting data by the imaging device, comprising:
generating encoded data from image data;
generating a plurality of encoded retinal images from the encoded data; and
generating dimension reduced data from the plurality of encoded retinal images,
generating the dimension reduced data comprises generating feature data for a plurality of retinal image regions from the plurality of encoded retinal images,
the feature data comprises components,
the components are divided based on values associated with different regions of the plurality of retinal image regions from the plurality of encoded retinal images,
the generating the dimension reduced data comprises applying a dimension reduction algorithm to the plurality of encoded retinal images,
the dimension reduction algorithm selects a subset of features of the encoded retinal images for a specific camera or monitor and ignores other features of the encoded retinal images for a task of the specific camera or monitor,
the generating encoded data from the image data comprises dimensionally reducing the image data,
generating the dimension reduced data from the plurality of encoded retinal images comprises an additional dimension reduction operation,
the processing unit receives the image data from a camera or a monitor,
the feature data comprises motion data corresponding to each of the plurality of retinal image regions,
the motion data comprises speed data corresponding to each of the plurality of retinal image regions,
the feature data comprises optical flow data corresponding to each of the plurality of retinal image regions,
the generating the plurality of encoded retinal images comprises applying a trained algorithm to the encoded data,
the trained algorithm comprises a convolutional neural network,
the additional dimension reduction operation comprises additionally compressing the encoded data that is already dimensionally reduced relative to the image data, and
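The two-stage reduction recited above (encode the image data, extract per-region feature data, then apply an additional dimension reduction that keeps only a task-relevant subset of features) can be sketched as below. All function names, shapes, and the choice of average pooling and SVD-based reduction are illustrative assumptions; the claim does not specify particular algorithms.

```python
import numpy as np

def encode(image):
    """First reduction: generate encoded data that is dimensionally
    reduced relative to the image data (here, 2x2 average pooling)."""
    h, w = image.shape
    return image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def region_features(encoded, grid=4):
    """Generate feature data for a grid*grid set of retinal image
    regions; one component (the mean intensity) per region."""
    h, w = encoded.shape
    rh, rw = h // grid, w // grid
    return np.array([[encoded[i*rh:(i+1)*rh, j*rw:(j+1)*rw].mean()
                      for j in range(grid)] for i in range(grid)])

def reduce_features(features, k=2):
    """Additional reduction: keep only the top-k principal directions
    and ignore the remaining features (a stand-in for the task-specific
    feature selection recited in the claim)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

image = np.random.default_rng(0).random((64, 64))
encoded = encode(image)                    # 32x32: reduced vs. the 64x64 image
reduced = reduce_features(region_features(encoded))
print(encoded.shape, reduced.shape)        # (32, 32) (4, 2)
```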
the processing unit is configured to execute instructions stored in the memory to perform a method for collecting K sets of data from the feature data, comprising:
combining the K sets of data to form an X*Y matrix, and putting the X*Y matrix into formula (1) to adjust an angle of the imaging device,
Sit(Xi,Yj)=E[(Xi−E(Xi))*(Yj−E(Yj))]  (1),
wherein Sit represents a coordinate value, E represents an expected value, Xi is the i-th component of the feature data extracted from column X, Yj is the j-th component of the feature data extracted from row Y, and a matrix is obtained,
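Formula (1) has the form of a covariance: the expectation of the product of two mean-centered components. A minimal sketch, assuming each of the K sets of data is one column of the combined matrix and that formula (1) is evaluated for every pair of columns (the claim's column/row indexing is not fully specified):

```python
import numpy as np

def formula_1_entry(x, y):
    """One entry of formula (1): E[(Xi - E(Xi)) * (Yj - E(Yj))]."""
    return np.mean((x - x.mean()) * (y - y.mean()))

def formula_1(M):
    """Apply formula (1) to every pair of columns of the combined
    matrix M, yielding a covariance-style matrix."""
    cols = M.T
    n = len(cols)
    return np.array([[formula_1_entry(cols[i], cols[j]) for j in range(n)]
                     for i in range(n)])

rng = np.random.default_rng(1)
M = rng.random((6, 3))     # 6 observations of K = 3 sets of feature data
S = formula_1(M)
# Under this reading, the result is the population covariance matrix.
assert np.allclose(S, np.cov(M.T, bias=True))
```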
putting the matrix into formula (2) for calculating a rotation vector to obtain an adjusted parameter value, the rotation vector, and a translation vector, wherein the rotation vector is a compact representation of a rotation matrix, and the rotation vector is a 1×3 row vector, and
[formula (2), printed as an image in the original: the conversion from the rotation vector r to the rotation matrix R]
obtaining the rotation matrix R through formula (2), wherein in formula (2), r is the rotation vector, a direction of the rotation vector is a rotation axis, and a modulus of the rotation vector is an angle of rotation around the rotation axis, or
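The description of formula (2) (axis = direction of r, angle = modulus of r) matches the standard Rodrigues conversion from a rotation vector to a rotation matrix, so a sketch of that conversion is shown below. The original image of formula (2) is not reproduced; this is the textbook formula, not necessarily the patent's exact notation.

```python
import numpy as np

def rodrigues(r):
    """Rotation vector (1x3) -> rotation matrix R.  The direction of r
    is the rotation axis; the norm of r is the rotation angle."""
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)              # zero rotation
    k = r / theta                     # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])  # cross-product matrix [k]x
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

R = rodrigues([0.0, 0.0, np.pi / 2])          # 90 degrees about the z-axis
assert np.allclose(R @ [1, 0, 0], [0, 1, 0])  # x-axis maps to y-axis
```

(OpenCV exposes the same conversion as `cv2.Rodrigues`, which is commonly used when calibrating camera angles of this kind.)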
obtaining a rotation vector T through formula (3) when the rotation matrix is already known before obtaining the rotation vector,
[formula (3), printed as an image in the original: the recovery of the vector T from the known rotation matrix]
wherein the rotation vector T is a parameter for adjusting the monitoring management and control system.
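The inverse direction of formula (3), recovering a rotation vector from a known rotation matrix, can be sketched with the standard inverse-Rodrigues relations (angle from the trace, axis from the antisymmetric part). The original image of formula (3) is not reproduced, and this sketch handles only the non-degenerate case 0 < theta < pi:

```python
import numpy as np

def inverse_rodrigues(R):
    """Known rotation matrix -> rotation vector (non-degenerate case).
    The returned vector's direction is the axis; its norm is the angle."""
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)  # trace(R) = 1 + 2 cos(theta)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])      # 90 degrees about the z-axis
assert np.allclose(inverse_rodrigues(R), [0.0, 0.0, np.pi / 2])
```

This recovered vector plays the role of the adjustment parameter for the monitoring management and control system described in the claim.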