CPC G06F 40/58 (2020.01) [G06F 3/014 (2013.01); G06F 3/017 (2013.01); G06F 3/0346 (2013.01); G06V 10/26 (2022.01); G06V 20/20 (2022.01); G06V 40/113 (2022.01); G06V 40/28 (2022.01)] — 20 Claims

1. A method comprising:
accessing a first image generated by a first camera of a first augmented reality device and a second image generated by a second camera of a second augmented reality device, the first image and the second image depicting a hand gesture of a user of the first augmented reality device;
synchronizing the first augmented reality device with the second augmented reality device by mapping first image metadata of the first image with second image metadata of the second image;
generating synchronized information that comprises the first image metadata, the first image, first pose data of the first augmented reality device, the second image metadata, the second image, and second pose data of the second augmented reality device;
in response to synchronizing, distributing one or more processes of a sign language recognition system between the first augmented reality device and the second augmented reality device by sharing the synchronized information between the first augmented reality device and the second augmented reality device, wherein the one or more processes are performed on a corresponding augmented reality device;
collecting results from the one or more processes from the first augmented reality device and from the second augmented reality device; and
displaying, in near real-time in a first display of the first augmented reality device or in a second display of the second augmented reality device, text indicating a sign language translation of the hand gesture based on the results.
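The claimed pipeline (timestamp-based synchronization of two devices' frames, distribution of recognition stages between the devices, collection of results, and display of the translated text) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class names (`Frame`, `ARDevice`), the specific stages (`segment_hand`, `classify_gesture`), the 50 ms tolerance, and the fixed gesture label are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    timestamp_ms: int   # capture time, used here as the image metadata
    pixels: list        # placeholder for image data
    pose: tuple         # device pose at capture time

@dataclass
class ARDevice:
    name: str
    frames: list = field(default_factory=list)

def synchronize(dev_a, dev_b, tolerance_ms=50):
    """Map first-image metadata to second-image metadata by pairing
    frames whose timestamps fall within `tolerance_ms` of each other."""
    pairs = []
    for fa in dev_a.frames:
        match = min(dev_b.frames, default=None,
                    key=lambda fb: abs(fb.timestamp_ms - fa.timestamp_ms))
        if match and abs(match.timestamp_ms - fa.timestamp_ms) <= tolerance_ms:
            pairs.append((fa, match))
    return pairs

def segment_hand(frame):
    # hypothetical stage run on the first device
    return {"hand_region": "bbox", "pose": frame.pose}

def classify_gesture(frame):
    # hypothetical stage run on the second device
    return {"gesture": "HELLO", "pose": frame.pose}

def recognize(dev_a, dev_b):
    """Distribute the stages across the two devices over the
    synchronized information, then collect the results."""
    results = []
    for fa, fb in synchronize(dev_a, dev_b):
        results.append((segment_hand(fa), classify_gesture(fb)))
    # collected gesture labels would drive the displayed translation text
    return [cls["gesture"] for _, cls in results]

a = ARDevice("signer", [Frame(1000, [], (0, 0, 0))])
b = ARDevice("viewer", [Frame(1020, [], (1, 0, 0))])
print(recognize(a, b))  # -> ['HELLO']
```

The 20 ms offset between the two frames falls inside the assumed tolerance, so the frames are paired and a single gesture label is collected; a real system would replace the stub stages with vision models and render the resulting text in the device display.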