CPC H04N 23/661 (2023.01) [G01S 19/42 (2013.01); G08B 13/19641 (2013.01); G08B 13/1966 (2013.01); G08B 13/19682 (2013.01); G08B 13/19684 (2013.01); G08C 17/02 (2013.01); G16H 40/67 (2018.01); H04N 5/28 (2013.01); H04N 7/18 (2013.01); H04N 7/181 (2013.01); H04N 7/183 (2013.01); H04N 7/185 (2013.01); H04N 21/21805 (2013.01); H04N 21/234363 (2013.01); H04N 21/2365 (2013.01); H04N 21/2385 (2013.01); H04N 21/25825 (2013.01); H04N 21/2662 (2013.01); H04N 21/4621 (2013.01); H04N 23/50 (2023.01); H04N 23/55 (2023.01); H04N 23/617 (2023.01); H04N 23/66 (2023.01); G01S 19/19 (2013.01); G08C 2201/93 (2013.01); H04N 23/51 (2023.01); H04W 80/02 (2013.01)] | 24 Claims |
1. A first video camera, comprising:
a lens;
an image sensor configured to generate first image data from light propagating through the lens;
at least one non-audio data sensor configured to produce first non-audio sensor data associated with the first video camera;
a wireless connection protocol device; and
a processor, comprising:
a video encoder, and
memory,
wherein the processor is configured to:
receive the first image data from the image sensor,
receive the first non-audio sensor data from the at least one non-audio data sensor,
generate at least one encoded video data stream using the video encoder, wherein a data type of the first non-audio sensor data is different from a data type of the at least one encoded video data stream,
send, using the wireless connection protocol device, the at least one encoded video data stream by wireless transmission to a first remote computing device, wherein the first remote computing device is connected to a first data storage medium, and wherein the first remote computing device is configured to store the at least one encoded video data stream on the first data storage medium as a first file,
combine the first non-audio sensor data with the at least one encoded video data stream to form a combined video stream,
communicate at least part of the combined video stream to the memory, wherein the at least one encoded video data stream is stored as a first track and the first non-audio sensor data is stored as a second track that is distinct from the first track, and
generate time-synchronizing data,
wherein the time-synchronizing data is used to synchronize the first track with the second track,
wherein the first video camera is configured as a media server that enables access to the combined video stream.
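The claim recites a combined stream in which the encoded video is stored as a first track, the non-audio sensor data as a distinct second track, and time-synchronizing data aligns the two. A minimal sketch of such a two-track container with timestamp-based synchronization, under the assumption that per-sample timestamps on a shared clock serve as the time-synchronizing data (all names here are hypothetical illustrations, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    track_id: int       # 1 = encoded video, 2 = non-audio sensor data
    timestamp_ms: int   # shared-clock timestamp (the time-synchronizing data)
    payload: bytes

@dataclass
class CombinedStream:
    """Two-track container: video as track 1, sensor data as track 2."""
    samples: list = field(default_factory=list)

    def add_video(self, ts_ms: int, data: bytes) -> None:
        self.samples.append(Sample(1, ts_ms, data))

    def add_sensor(self, ts_ms: int, data: bytes) -> None:
        self.samples.append(Sample(2, ts_ms, data))

    def track(self, track_id: int) -> list:
        # Each track is recoverable independently, in presentation order.
        return sorted((s for s in self.samples if s.track_id == track_id),
                      key=lambda s: s.timestamp_ms)

    def synchronized_pairs(self, tolerance_ms: int = 50) -> list:
        """Pair each video sample with the nearest-in-time sensor sample."""
        sensor = self.track(2)
        pairs = []
        for v in self.track(1):
            nearest = min(sensor, default=None,
                          key=lambda s: abs(s.timestamp_ms - v.timestamp_ms))
            if nearest and abs(nearest.timestamp_ms - v.timestamp_ms) <= tolerance_ms:
                pairs.append((v, nearest))
        return pairs
```

In a real implementation the two tracks would more likely live in a standard container (e.g. separate `trak` boxes of an ISO Base Media/MP4 file, each with its own timescale), with the camera's media-server role exposing that file over a streaming protocol; this sketch only illustrates the track separation and timestamp alignment the claim describes.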