US 11,893,980 B2
Electronic apparatus and control method thereof
Sichen Jin, Suwon-si (KR); Kwangyoun Kim, Seoul (KR); Sungsoo Kim, Suwon-si (KR); Junmo Park, Suwon-si (KR); Dhairya Sandhyana, Suwon-si (KR); and Changwoo Han, Suwon-si (KR)
Assigned to SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
Appl. No. 17/430,614
Filed by SAMSUNG ELECTRONICS CO., LTD., Suwon-si (KR)
PCT Filed Jun. 22, 2021, PCT No. PCT/KR2021/007818
§ 371(c)(1), (2) Date Aug. 12, 2021,
PCT Pub. No. WO2022/169038, PCT Pub. Date Aug. 11, 2022.
Claims priority of application No. 10-2021-0017815 (KR), filed on Feb. 8, 2021.
Prior Publication US 2023/0360645 A1, Nov. 9, 2023
Int. Cl. G10L 15/183 (2013.01); H04N 21/488 (2011.01); G06V 10/20 (2022.01); G10L 15/26 (2006.01)
CPC G10L 15/183 (2013.01) [G06V 10/255 (2022.01); G10L 15/26 (2013.01); H04N 21/4884 (2013.01)] 15 Claims
OG exemplary drawing
 
1. An electronic apparatus comprising:
a communication interface configured to receive content comprising image data and speech data;
a memory configured to store a language contextual model trained with relevance between words;
a display; and
a processor configured to:
extract an object and a character included in the image data,
identify an object name of the object and the character,
generate a bias keyword list comprising an image-related word that is associated with the image data, based on the identified object name and the identified character,
convert the speech data to a text based on the bias keyword list and the language contextual model, and
control the display to display the text that is converted from the speech data, as a caption.
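The pipeline recited in claim 1 (extract objects and on-screen characters, build a bias keyword list, then bias the speech-to-text conversion toward those keywords) can be illustrated with a minimal sketch. This is not the patented implementation: the detector outputs, the token-overlap boost, and the n-best rescoring scheme below are hypothetical stand-ins for the claimed object recognition, language contextual model, and biased decoding.

```python
def build_bias_keyword_list(object_names, characters):
    # Combine detected object names and recognized on-screen text
    # into a single set of image-related bias keywords.
    keywords = set()
    for name in object_names:
        keywords.add(name.lower())
    for text in characters:
        for token in text.lower().split():
            keywords.add(token)
    return keywords

def rescore(hypotheses, bias_keywords, boost=2.0):
    # hypotheses: list of (text, base_score) pairs, e.g. an ASR n-best
    # list scored by a language model. A shallow-fusion-style bonus is
    # added for each bias keyword that appears in a hypothesis, so
    # image-related words are favored when converting speech to text.
    best_text, best_score = None, float("-inf")
    for text, score in hypotheses:
        tokens = set(text.lower().split())
        biased = score + boost * len(tokens & bias_keywords)
        if biased > best_score:
            best_text, best_score = text, biased
    return best_text

# Hypothetical inputs: object detection finds "pizza" and "oven";
# character recognition reads "MARIO'S PIZZERIA" from the image.
bias = build_bias_keyword_list(["pizza", "oven"], ["MARIO'S PIZZERIA"])
nbest = [("a slice of pete's a day", -3.0),
         ("a slice of pizza a day", -3.4)]
caption = rescore(nbest, bias)
# The acoustically weaker but image-consistent hypothesis
# "a slice of pizza a day" wins after biasing.
```

Without the bias list, the higher-scoring but image-inconsistent hypothesis would be displayed; the keyword boost is what ties the caption to the on-screen content, which is the core of the claimed method.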