US 11,741,314 B2
Method and system for generating dynamic text responses for display after a search
Huy Q. Tran, Westminster, CA (US); Vlad Zarney, Calabasas, CA (US); Kapil Chaudhry, Cerritos, CA (US); Douglas T. Kuriki, Brea, CA (US); Todd T. Tran, West Covina, CA (US); David K. Homan, Torrance, CA (US); An T. Lam, Alhambra, CA (US); Michael E. Yan, Redondo Beach, CA (US); and Ashley B. Tarnow, Playa Del Rey, CA (US)
Assigned to DIRECTV, LLC, El Segundo, CA (US)
Filed by DIRECTV, LLC, El Segundo, CA (US)
Filed on Nov. 19, 2020, as Appl. No. 16/952,702.
Application 16/952,702 is a continuation of application No. 16/048,918, filed on Jul. 30, 2018, granted, now 10,878,200.
Application 16/048,918 is a continuation of application No. 13/832,874, filed on Mar. 15, 2013, granted, now 10,067,934, issued on Sep. 4, 2018.
Claims priority of provisional application 61/768,163, filed on Feb. 22, 2013.
Prior Publication US 2021/0073477 A1, Mar. 11, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. G06F 17/00 (2019.01); G06F 40/40 (2020.01); G06F 16/632 (2019.01); G06F 16/332 (2019.01); G06F 16/9032 (2019.01); H04N 21/422 (2011.01); H04N 21/439 (2011.01); H04N 21/482 (2011.01); H04N 21/4147 (2011.01); H04N 21/433 (2011.01); H04N 21/472 (2011.01); H04N 21/4722 (2011.01); H04N 21/488 (2011.01); H04N 21/61 (2011.01); G06F 3/16 (2006.01); G08C 17/02 (2006.01); H04N 21/4415 (2011.01); H04N 21/475 (2011.01); G10L 15/26 (2006.01); G06F 3/04842 (2022.01); H04N 21/41 (2011.01); G06F 3/0482 (2013.01); G06F 3/0488 (2022.01); G10L 15/22 (2006.01); G10L 15/06 (2013.01); G10L 15/30 (2013.01); H04N 21/458 (2011.01); H04N 21/222 (2011.01)
CPC G06F 40/40 (2020.01) [G06F 3/0482 (2013.01); G06F 3/0488 (2013.01); G06F 3/04842 (2013.01); G06F 3/165 (2013.01); G06F 3/167 (2013.01); G06F 16/3325 (2019.01); G06F 16/3329 (2019.01); G06F 16/632 (2019.01); G06F 16/90332 (2019.01); G08C 17/02 (2013.01); G10L 15/22 (2013.01); G10L 15/26 (2013.01); H04N 21/4147 (2013.01); H04N 21/41265 (2020.08); H04N 21/4222 (2013.01); H04N 21/42203 (2013.01); H04N 21/42204 (2013.01); H04N 21/42209 (2013.01); H04N 21/42222 (2013.01); H04N 21/42224 (2013.01); H04N 21/4334 (2013.01); H04N 21/4394 (2013.01); H04N 21/4398 (2013.01); H04N 21/4415 (2013.01); H04N 21/475 (2013.01); H04N 21/4722 (2013.01); H04N 21/4753 (2013.01); H04N 21/47214 (2013.01); H04N 21/4828 (2013.01); H04N 21/4882 (2013.01); H04N 21/6143 (2013.01); H04N 21/6175 (2013.01); G08C 2201/31 (2013.01); G10L 15/063 (2013.01); G10L 15/30 (2013.01); G10L 2015/0638 (2013.01); H04N 21/2221 (2013.01); H04N 21/4583 (2013.01); H04N 21/47211 (2013.01)] 18 Claims
OG exemplary drawing
 
1. A method comprising:
forming, by a processing system including a processor, a last merged context object from a first intent object and a combination of all prior related search requests after a previous context switch, wherein the first intent object includes text of a first audible request;
receiving, by the processing system, a second audible request;
generating, by the processing system, a second intent object from the second audible request;
using, by the processing system, a state vector machine to determine whether a new context switch has occurred based on whether the last merged context object and the second intent object are related; and
responsive to determining that the new context switch has occurred, performing, by the processing system, a subsequent content search based on the last merged context object and the second intent object.
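The claimed method can be loosely illustrated in code. The sketch below is a hypothetical rendering only: every class and function name (`IntentObject`, `MergedContextObject`, `is_related`, `content_search`, `handle_second_request`) is invented for illustration and does not appear in the patent, and a simple topic-equality check stands in for the claimed state vector machine's relatedness determination.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# All identifiers below are hypothetical illustrations, not names from the patent.

@dataclass
class IntentObject:
    """Holds the recognized text of an audible request plus a coarse topic label."""
    text: str
    topic: str

@dataclass
class MergedContextObject:
    """Combines all related requests received since the previous context switch."""
    intents: List[IntentObject] = field(default_factory=list)

    def merge(self, intent: IntentObject) -> None:
        self.intents.append(intent)

    @property
    def topic(self) -> Optional[str]:
        return self.intents[-1].topic if self.intents else None

def is_related(context: MergedContextObject, intent: IntentObject) -> bool:
    # Stand-in for the state vector machine's relatedness test:
    # here, simply whether the coarse topics match.
    return context.topic == intent.topic

def content_search(*objects) -> str:
    # Placeholder for the subsequent content search; it just
    # concatenates the request text it was handed.
    parts = []
    for obj in objects:
        if isinstance(obj, MergedContextObject):
            parts.extend(i.text for i in obj.intents)
        else:
            parts.append(obj.text)
    return " | ".join(parts)

def handle_second_request(last_merged: MergedContextObject,
                          second_intent: IntentObject):
    """Decide whether a new context switch occurred and act accordingly."""
    switched = not is_related(last_merged, second_intent)
    if switched:
        # Claim step: on a context switch, perform the subsequent content
        # search based on the last merged context object and the new intent.
        results = content_search(last_merged, second_intent)
    else:
        # Otherwise fold the related request into the running merged context.
        last_merged.merge(second_intent)
        results = content_search(last_merged)
    return switched, results
```

For example, a "movies" context followed by a "sports" request would be flagged as a context switch, while a second "movies" request would simply be merged into the running context.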