US 10,594,747 C1 (12,353rd)
System and method for augmented and virtual reality
Samuel A. Miller, Hollywood, FL (US)
Filed by Magic Leap, Inc., Plantation, FL (US)
Assigned to MAGIC LEAP, INC., Ft. Lauderdale, FL (US)
Reexamination Request No. 90/014,989, Mar. 25, 2022.
Reexamination Certificate for Patent 10,594,747, issued Mar. 17, 2020, Appl. No. 16/673,880, Nov. 4, 2019.
Application 16/673,880 is a continuation of application No. 16/261,352, filed on Jan. 29, 2019, granted, now 10,469,546.
Application 16/261,352 is a continuation of application No. 15/920,201, filed on Mar. 13, 2018, abandoned.
Application 15/920,201 is a continuation of application No. 15/238,657, filed on Aug. 16, 2016, granted, now 10,021,149.
Application 15/238,657 is a continuation of application No. 14/965,169, filed on Dec. 10, 2015, abandoned.
Application 14/965,169 is a continuation of application No. 14/514,115, filed on Oct. 14, 2014, abandoned.
Application 14/514,115 is a continuation of application No. 13/663,466, filed on Oct. 29, 2012, granted, now 9,215,293.
Claims priority of provisional application 61/552,941, filed on Oct. 28, 2011.
Ex Parte Reexamination Certificate issued on Aug. 14, 2023.
Int. Cl. H04L 65/401 (2022.01); A63F 13/35 (2014.01); A63F 13/92 (2014.01); G06F 16/954 (2019.01); H04L 67/131 (2022.01); G06F 3/01 (2006.01); G06T 19/00 (2011.01); H04L 67/02 (2022.01); H04L 69/14 (2022.01)
CPC H04L 65/4015 (2013.01) [A63F 13/35 (2014.09); A63F 13/92 (2014.09); G06F 3/013 (2013.01); G06F 3/016 (2013.01); G06F 3/017 (2013.01); G06F 16/954 (2019.01); G06T 19/006 (2013.01); H04L 67/02 (2013.01); H04L 67/131 (2022.05); A63F 2300/1093 (2013.01); A63F 2300/577 (2013.01); A63F 2300/695 (2013.01); A63F 2300/8082 (2013.01); H04L 69/14 (2013.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1-4 and 6 are determined to be patentable as amended.
Claims 5 and 7, dependent on an amended claim, are determined to be patentable.
New claims 8-32 are added and determined to be patentable.
1. A system for interacting with a virtual world comprising virtual world data, the system comprising:
a first user device operatively coupled to a computer network comprising one or more computing devices, the one or more computing devices comprising: one or more processors, and memory storing instructions which, when executed by the one or more processors, cause the one or more processors to process a first portion of the virtual world data;
[ wherein the first user device comprises a head-wearable sensor configured to detect whether a first physical object is in a field of view of a first user;]
wherein the first user device is configured to:
receive, from [ the ] first user, a first input,
transmit the first input to the computer network,
receive, [ via the head-wearable sensor, ] from a local environment of the first user device, a second input [ indicating that the first physical object is in the field of view], and
transmit the second input to the computer network,
wherein one or more of the one or more computing devices are configured to alter the virtual world data, based on at least one of the first input and the second input, [ and based further on semantic information describing the first physical object, ] to produce altered virtual world data,
wherein the first user device is further configured to present virtual content, based on the altered virtual world data, to the first user, and
wherein presenting the virtual content to the first user comprises presenting a visual rendering of the virtual world in a 3D format.
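Read as a data flow, amended claim 1 describes a head-worn client that forwards a user input and a sensor-derived input to networked computing devices, which alter the shared virtual world data based on those inputs and on semantic information about the detected physical object, after which the client renders the result in 3D. The Python sketch below is a minimal illustration of that flow only; every name in it (VirtualWorldServer, SemanticInfo, and so on) is our own invention, not anything disclosed in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticInfo:
    """Stand-in for 'semantic information describing the first physical object'."""
    label: str                                   # e.g. "coffee table"
    properties: dict = field(default_factory=dict)

@dataclass
class VirtualWorldServer:
    """Hypothetical networked computing device holding the virtual world data."""
    world_data: dict = field(default_factory=dict)

    def alter(self, first_input: str, second_input: dict,
              semantics: SemanticInfo) -> dict:
        # Alter the virtual world data based on at least one of the inputs
        # AND on the semantic information, per the amended claim.
        if second_input.get("object_in_fov"):
            self.world_data[semantics.label] = {
                "user_command": first_input,
                "semantics": semantics.properties,
            }
        return self.world_data                   # the 'altered virtual world data'

class UserDevice:
    """Hypothetical first user device with a head-wearable sensor."""
    def __init__(self, server: VirtualWorldServer):
        self.server = server

    def interact(self, first_input: str, object_detected: bool,
                 semantics: SemanticInfo) -> None:
        second_input = {"object_in_fov": object_detected}    # sensor-derived
        altered = self.server.alter(first_input, second_input, semantics)
        self.present_3d(altered)

    def present_3d(self, altered_data: dict) -> None:
        # Placeholder for 'a visual rendering of the virtual world in a 3D format'.
        print("rendering:", altered_data)
```

For example, `UserDevice(VirtualWorldServer()).interact("place lamp", True, SemanticInfo("coffee table"))` exercises the full round trip: both inputs go up, the altered data comes back, and the device renders it.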
2. The system of claim 1, wherein at least one of the first input and the second input comprises audio data, and altering the virtual world data comprises altering the virtual world data based [ further ] on the audio data.
3. The system of claim 1, wherein the first user device comprises a see-through display, and presenting the virtual content to the first user further comprises presenting the virtual content to the first user via the see-through display [ , concurrently with presenting a view of the local environment to the first user via the see-through display].
4. The system of claim 1, wherein the virtual content comprises one or more of visual content, audio content, and haptic content.
6. The system of claim 1, [ wherein the head-wearable sensor comprises a camera and ] wherein the second input comprises one or more of audio data from the local environment and visual data from [ the first ] physical object in the [ field of view].
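Amended claim 6 constrains the second input to audio from the local environment and/or camera imagery of the object in the field of view. A hedged sketch of such a payload, with entirely hypothetical field and parameter names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecondInput:
    """Illustrative claim-6 payload; field names are ours, not the patent's."""
    audio_samples: Optional[bytes] = None    # audio data from the local environment
    camera_frame: Optional[bytes] = None     # visual data from the first physical object
    object_in_fov: bool = False              # head-wearable camera detection result

def build_second_input(mic, camera, detector) -> SecondInput:
    # mic, camera, and detector are assumed device abstractions, not patent terms.
    frame = camera.capture()
    return SecondInput(
        audio_samples=mic.read(),
        camera_frame=frame,
        object_in_fov=detector.object_in_view(frame),
    )
```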
[ 8. The system of claim 1, wherein the first input comprises the semantic information.]
[ 9. The system of claim 8, wherein the first user device is further configured to receive the first input comprising the semantic information from the first user via a physical keypad.]
[ 10. The system of claim 8, wherein the first user device is further configured to receive the semantic information from the first user via a virtual keypad.]
[ 11. The system of claim 8, wherein the first user device is further configured to receive the semantic information from the first user via a wireless connection.]
[ 12. The system of claim 8, wherein the first user device is further configured to present a query to the first user, and wherein the semantic information comprises a response to the query.]
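New claims 8-12 cover how the semantic information reaches the system as part of the first input: typed on a physical or virtual keypad, sent over a wireless connection, or supplied as the answer to a query the device poses. A sketch of the query/response path of claim 12, with a hypothetical `prompt` method standing in for whatever interface the device exposes:

```python
def query_user_for_semantics(device, object_id: str) -> dict:
    # Claim 12: the device presents a query, and the user's response is
    # the semantic information. Claims 9-11 differ only in transport
    # (physical keypad, virtual keypad, wireless connection).
    response = device.prompt(f"What is the object labeled {object_id}?")
    return {"object_id": object_id, "semantics": response}
```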
[ 13. The system of claim 1, wherein the one or more computing devices are further configured to associate the semantic information with the first physical object.]
[ 14. The system of claim 1, wherein the one or more computing devices are further configured to identify a second physical object based on the semantic information.]
[ 15. The system of claim 1, wherein the semantic information comprises a capability of the first physical object.]
[ 16. The system of claim 1, wherein the semantic information comprises a behavior of the first physical object.]
[ 17. The system of claim 1, wherein the semantic information comprises a brand name associated with the first physical object.]
[ 18. The system of claim 17, wherein the first user device is further configured to present an advertisement based on the brand name.]
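Claims 13-18 enumerate what the semantic information can contain once associated with the object: a capability, a behavior, or a brand name that can in turn drive an advertisement. A hypothetical record type capturing those categories; the claim language prescribes no schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectSemantics:
    """Illustrative container for the semantic categories of claims 15-17."""
    capabilities: list = field(default_factory=list)   # claim 15, e.g. ["holds objects"]
    behaviors: list = field(default_factory=list)      # claim 16, e.g. ["swings on hinge"]
    brand: Optional[str] = None                        # claim 17

def advertisement_for(semantics: ObjectSemantics, ad_index: dict) -> Optional[str]:
    # Claim 18: present an advertisement based on the brand name.
    return ad_index.get(semantics.brand) if semantics.brand else None
```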
[ 19. The system of claim 2, wherein the audio data comprises voice input, and the one or more of the one or more computing devices are further configured to alter the virtual world data based on a voice inflection of the voice input.]
[ 20. The system of claim 19, wherein the one or more of the one or more computing devices are further configured to determine an emotion associated with the voice input and alter the virtual world data based on the emotion.]
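Claims 19-20 extend the audio-input variant of claim 2: the computing devices examine the voice inflection of spoken input, infer an emotion, and alter the virtual world data accordingly. The toy classifier below substitutes pitch variance for a real prosody model, purely to make the control flow concrete; the threshold and labels are invented:

```python
import statistics

def classify_emotion(pitch_track: list[float]) -> str:
    # Toy stand-in for emotion inference from voice inflection (claim 20).
    # A real system would use a trained prosody model.
    if len(pitch_track) < 2:
        return "neutral"
    spread = statistics.pstdev(pitch_track)       # inflection ~ pitch variability
    return "excited" if spread > 40.0 else "calm"

def alter_world_for_emotion(world_data: dict, emotion: str) -> dict:
    # Alter the virtual world data based on the emotion, e.g. by adjusting
    # an ambient-mood parameter. The key name is hypothetical.
    world_data["ambient_mood"] = emotion
    return world_data
```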
[ 21. The system of claim 1, wherein the one or more of the one or more computing devices are further configured to recognize the first physical object based on the second input.]
[ 22. The system of claim 21, wherein the one or more of the one or more computing devices are further configured to recognize the first physical object based further on a 2D image.]
[ 23. The system of claim 1, wherein the one or more of the one or more computing devices are further configured to segment a representation of the first physical object in a 3D point cloud.]
[ 24. The system of claim 1, wherein the one or more of the one or more computing devices are further configured to tag one or more points in a 3D point cloud according to the semantic information.]
[ 25. The system of claim 24, wherein the one or more of the one or more computing devices are further configured to apply a point-based algorithm.]
[ 26. The system of claim 25, wherein the one or more of the one or more computing devices are further configured to analyze a pose-tagged image.]
[ 27. The system of claim 1, wherein the one or more of the one or more computing devices are further configured to imbue a world model with the semantic information.]
[ 28. The system of claim 27, wherein the world model comprises a 3D point cloud.]
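Claims 21-28 outline a recognition pipeline: the object is recognized from the second input (optionally against a 2D image), its representation is segmented out of a 3D point cloud, the points are tagged with the semantic information using point-based algorithms and pose-tagged images, and the resulting labels imbue a point-cloud world model. A minimal, assumption-laden illustration of the tagging step only:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Point:
    x: float
    y: float
    z: float
    tag: Optional[str] = None        # semantic label attached per claim 24

@dataclass
class WorldModel:
    """Claim 28: a world model comprising a 3D point cloud."""
    cloud: list = field(default_factory=list)   # list of Point

    def tag_segment(self, indices: list, label: str) -> None:
        # Claims 23-24: tag the points of a segmented object representation
        # according to the semantic information. How 'indices' is produced
        # (segmentation, point-based algorithms, pose-tagged images) is
        # deliberately left abstract here.
        for i in indices:
            self.cloud[i].tag = label
```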
[ 29. The system of claim 1, wherein the first input comprises a facial expression of the first user, and the one or more of the one or more computing devices are further configured to alter the virtual world data based further on the facial expression.]
[ 30. The system of claim 1, wherein the head-wearable sensor comprises a visible light camera.]
[ 31. The system of claim 1, wherein the head-wearable sensor comprises an infrared camera.]
[ 32. The system of claim 1, wherein the head-wearable sensor comprises a structured light sensor.]
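Finally, claims 29-32 round out the input side: the first input may carry the user's facial expression, and the head-wearable sensor may be a visible-light camera, an infrared camera, or a structured light sensor. A small sketch with invented names:

```python
from enum import Enum, auto

class HeadSensorType(Enum):
    """Sensor variants of claims 30-32; the enum names are ours."""
    VISIBLE_LIGHT_CAMERA = auto()     # claim 30
    INFRARED_CAMERA = auto()          # claim 31
    STRUCTURED_LIGHT_SENSOR = auto()  # claim 32

def first_input_with_expression(command: str, expression: str) -> dict:
    # Claim 29: the first input includes a facial expression, a further
    # basis for altering the virtual world data. Keys are illustrative.
    return {"command": command, "facial_expression": expression}
```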