US 12,307,764 B2
Augmented reality translation of sign language classifier constructions
Charles E. Beller, Baltimore, MD (US); Zachary A. Silverstein, Austin, TX (US); Jeremy R. Fox, Georgetown, TX (US); and Clement Decrop, Arlington, VA (US)
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (US)
Filed by INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (US)
Filed on May 26, 2021, as Appl. No. 17/303,313.
Prior Publication US 2022/0383025 A1, Dec. 1, 2022
Int. Cl. G06V 20/20 (2022.01); G06T 11/00 (2006.01); G06T 13/00 (2011.01); G06V 40/20 (2022.01)
CPC G06V 20/20 (2022.01) [G06T 11/00 (2013.01); G06T 13/00 (2013.01); G06V 40/28 (2022.01)] 20 Claims
OG exemplary drawing
 
1. A method for translating a classifier construction into a graphical representation, the method comprising:
observing a classifier handshape by an augmented reality device;
analyzing the observed classifier handshape according to an object recognition algorithm to determine a contextual meaning of the classifier handshape;
converting the contextual meaning of the observed classifier handshape into a graphical representation, wherein converting the contextual meaning into the graphical representation further comprises retrieving a generated image or video imagery from a repository, and based on one or more additional adjectival modifiers of a term in a conversation and based on a context of the conversation associated with the observed classifier handshape, automatically modifying the generated image or video imagery with parameters matching the contextual meaning of the one or more additional adjectival modifiers from the conversation; and
displaying the graphical representation, wherein displaying the graphical representation further comprises superimposing the graphical representation in the augmented reality device.
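 
For readers approaching the claim from an implementation angle, the following is a minimal, non-authoritative sketch in Python of the four recited steps: observe a classifier handshape, analyze it to determine a contextual meaning, convert that meaning into a graphical representation modified by adjectival modifiers from the conversation, and superimpose the result in an augmented reality view. Every name here (ClassifierRecognizer, ImageryRepository, ConsoleARDisplay, translate_classifier_construction, and the modifier-to-parameter mapping) is a hypothetical illustration, not the patented implementation or any real SDK.

# Hypothetical sketch of the claimed pipeline; all classes and functions are
# invented for illustration and do not name any real library or device API.

from dataclasses import dataclass, field


@dataclass
class ContextualMeaning:
    term: str                                      # e.g. "vehicle" for the ASL "3" classifier handshape
    modifiers: list = field(default_factory=list)  # adjectival modifiers drawn from the conversation


class ClassifierRecognizer:
    """Stand-in for an object-recognition model over the observed handshape."""

    def analyze(self, frame, conversation_context):
        # A real system would run hand-pose estimation plus a model trained on
        # sign-language classifier constructions; here a fixed meaning is
        # returned so the data flow stays visible.
        return ContextualMeaning(term="vehicle",
                                 modifiers=conversation_context.get("modifiers", []))


class ImageryRepository:
    """Stand-in repository of generated images or video imagery keyed by term."""

    def retrieve(self, term):
        return {"asset": f"{term}.png", "params": {}}


def convert_to_graphic(meaning, repository):
    """Retrieve base imagery and modify it with parameters matching the
    contextual meaning of the adjectival modifiers (the conversion step)."""
    graphic = repository.retrieve(meaning.term)
    for modifier in meaning.modifiers:
        # Purely illustrative mapping of modifiers onto rendering parameters.
        if modifier in ("red", "blue", "green"):
            graphic["params"]["tint"] = modifier
        elif modifier in ("fast", "slow"):
            graphic["params"]["animation_speed"] = modifier
    return graphic


class ConsoleARDisplay:
    """Trivial stand-in for the augmented reality display surface."""

    def superimpose(self, graphic):
        print(f"Superimposing {graphic['asset']} with params {graphic['params']}")


def translate_classifier_construction(frame, conversation_context, ar_display):
    """End-to-end flow mirroring the observe, analyze, convert, display steps."""
    recognizer = ClassifierRecognizer()
    repository = ImageryRepository()

    meaning = recognizer.analyze(frame, conversation_context)  # observe + analyze
    graphic = convert_to_graphic(meaning, repository)          # convert
    ar_display.superimpose(graphic)                            # display / superimpose
    return graphic


if __name__ == "__main__":
    context = {"modifiers": ["red", "fast"]}
    translate_classifier_construction(frame=None,
                                      conversation_context=context,
                                      ar_display=ConsoleARDisplay())

In practice the recognizer and the display would be backed by the augmented reality device's camera and rendering stack; the sketch only fixes the data flow the claim recites.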