US 11,373,376 C1 (12,998th)
Matching content to a spatial 3D environment
Denys Bastov, Palo Alto, CA (US); Victor Ng-Thow-Hing, Los Altos, CA (US); Benjamin Zaaron Reihardt, San Francisco, CA (US); Leonid Zolotarev, Weston, FL (US); Yannick Pellet, Plantation, FL (US); Aleksei Marchenko, Sunnyvale, CA (US); Brian Everett Meaney, Parkland, FL (US); Marc Coleman Shelton, Fort Lauderdale, FL (US); Megan Ann Geiman, Fort Lauderdale, FL (US); John A. Gotcher, Prosper, TX (US); Matthew Schon Bogue, McKinney, TX (US); Shivakumar Balasubramanyam, Rancho Santa Fe, CA (US); Jeffrey Edward Ruediger, McKinney, TX (US); and David Charles Lundmark, Los Altos, CA (US)
Filed by MAGIC LEAP, INC., Plantation, FL (US)
Reexamination Request No. 90/019,599, Jul. 26, 2024.
Reexamination Certificate for Patent 11,373,376, issued Jun. 28, 2022, Appl. No. 17/142,210, Jan. 5, 2021.
Application 17/142,210 is a continuation of application No. 15/968,673, filed on May 1, 2018, granted, now 10,930,076.
Claims priority of provisional application 62/492,292, filed on May 1, 2017.
Claims priority of provisional application 62/610,108, filed on Dec. 22, 2017.
Claims priority of provisional application 62/644,377, filed on Mar. 16, 2018.
Ex Parte Reexamination Certificate issued on Jul. 28, 2025.
Int. Cl. G06F 3/04815 (2022.01); G06F 3/01 (2006.01); G06T 19/00 (2011.01); G06T 19/20 (2011.01); H04N 21/254 (2011.01); H04N 21/431 (2011.01); H04N 21/81 (2011.01)
CPC G06T 19/006 (2013.01) [G06F 3/011 (2013.01); G06F 3/04815 (2013.01); G06T 19/20 (2013.01); G06T 2207/10028 (2013.01); H04N 21/2542 (2013.01); H04N 21/4312 (2013.01); H04N 21/816 (2013.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1-20 are determined to be patentable as amended.
New claims 21-43 are added and determined to be patentable.
1. A method for matching content to a plurality of surfaces of an environment of the user, the method comprising:
identifying a content element having [ from a plurality of content elements for display via an augmented reality (AR) display device to a user wearing the AR display device in a dynamic physical environment;
determining ] a [ respective ] plurality of different [ element ] attributes [ for the content element, the respective plurality of element attributes including a specific element attribute ] and corresponding to the [ a respective ] plurality of different [ surface ] attributes of each [ surface ] of the [ a ] plurality of surfaces;
determining a [ the respective ] plurality of different [ surface ] attributes of [ the ] each [ surface ] of the plurality of surfaces [ , the respective plurality of surface attributes ] respectively corresponding to the [ respective ] plurality of different [ element ] attributes of the content element;
respectively [ respectively prioritizing each element attribute of one or more element attributes of the respective plurality of element attributes with an attribute priority based at least in part upon the specific element attribute of the respective plurality of element attributes;
determining, based at least in part upon one or more attribute priorities for the one or more element attributes of the respective plurality of element attributes, a final surface from the plurality of surfaces for a display of the content element on the final surface at least by:
respectively prioritizing each content element of the plurality of content elements with an element priority so that a plurality of element priorities respectively corresponds to the plurality of content elements;
respectively prioritizing the each surface of the plurality of surfaces with a surface priority so that a plurality of surface priorities respectively corresponds to the plurality of surfaces;
determining a plurality of candidate surfaces from the plurality of surfaces based at least in part upon the element priorities and the surface priorities at least by ] comparing the [ respective ] plurality of different element attributes of the content element to the [ respective ] plurality of different [ surface ] attributes of [ the ] each [ surface ] of the plurality of surfaces;
[ generating a first reduced set of candidate surfaces at least by disqualifying or filtering out a first candidate surface from the plurality of candidate surfaces based at least in part upon a user attribute pertaining to the user and measured at a time instant by one or more sensors of the display device worn by the user in the dynamic physical environment and a comparison between the user attribute and a specific surface attribute in a first respective set of surface attributes of the first candidate surface;
determining whether or not a second respective set of surface attributes of a second candidate surface includes a disqualifying surface attribute having a disqualifying surface attribute value;
when it is determined that the second respective set of surface attributes includes the disqualifying surface attribute having the disqualifying surface attribute value, generating a second reduced set of candidate surfaces at least by disqualifying or filtering out the second candidate surface from the plurality of candidate surfaces;]
calculating a plurality of scores for the respective plurality [ second reduced set ] of [ candidate ] surfaces based on [ a result of comparing ] the respective comparisons [ plurality of element attributes of the content element to the respective plurality of surface attributes of the each surface of the second reduced set of candidate surfaces] ; [ and]
selecting a [ the final ] surface having the highest score from [ at least ] the [ second reduced set ] plurality of [ candidate ] surfaces;
storing a mapping of the content element to the [ a ] selected surface; and
[ dynamically ] displaying [ , via the AR display device, ] the content element on the selected [ final ] surface to the user [ as perceived by the user via the AR display device] .
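Editorial note: the amended claim-1 flow (attribute comparison, disqualification/filtering, scoring, selection of the final surface) can be sketched as follows. This is an illustration only; the attribute names, the equality-based comparison, the weights, and the scoring rule are assumptions of the sketch, not the claimed implementation.

```python
def score_surface(element_attrs, surface_attrs, weights):
    """Score one candidate surface by weighted agreement between the
    content element's attributes and the surface's attributes."""
    return sum(
        weights[k] * (1.0 if element_attrs[k] == surface_attrs[k] else 0.0)
        for k in element_attrs
    )

def match_element(element_attrs, surfaces, weights, disqualify=None):
    """Filter out disqualified candidate surfaces, score the rest,
    and return the highest-scoring surface with all scores."""
    candidates = {
        name: attrs
        for name, attrs in surfaces.items()
        if disqualify is None or not disqualify(attrs)
    }
    scores = {
        name: score_surface(element_attrs, attrs, weights)
        for name, attrs in candidates.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical usage with made-up attributes.
element = {"orientation": "vertical", "texture": "smooth"}
surfaces = {
    "wall":  {"orientation": "vertical", "texture": "smooth"},
    "floor": {"orientation": "horizontal", "texture": "rough"},
}
weights = {"orientation": 2.0, "texture": 1.0}
best, scores = match_element(element, surfaces, weights)
```

Here the `disqualify` callback stands in for the claim's two filtering steps (the user-attribute comparison and the disqualifying-surface-attribute test), which the claim performs before scoring.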
2. The method of claim 1, wherein the identified content element [ that is identified ] is a 3D [ three-dimensional (3D) ] content element.
3. The method of claim 1, wherein the [ respective ] plurality of different [ element ] attributes of the content element are weighted differently.
4. The method of claim 1, wherein the [ respective ] plurality of different [ element ] attributes of the content element comprise a dot product orientation surface relationship, a texture, and a color.
5. The method of claim 1, [ further comprising determining whether the specific element attribute takes precedence over other element attributes for other content elements based at least in part upon a value of the specific element attribute and a respective element priority for the content element, ] wherein the [ final ] surface on which the content element is displayed to the user is the selected [ for displaying the content element on the final ] surface.
6. The method of claim 1, further comprising comparing the highest score to a threshold score, displaying the content element on either the selected [ final ] surface or a virtual surface based on the comparison [ a result of comparing the highest score to the threshold score] .
7. The method of claim 6, wherein the content element is displayed on the selected [ final ] surface if the threshold [ highest ] score is greater than the threshold score, and displaying the content element on the virtual surface if the threshold [ highest ] score is less than the threshold score.
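Editorial note: the threshold fallback of claims 6 and 7 (display on the final surface when the highest score clears a threshold, otherwise on a virtual surface) reduces to a single comparison. The string `"virtual_surface"` below is a hypothetical stand-in for the claimed virtual surface.

```python
def choose_display_surface(best_surface, highest_score, threshold):
    """Claim-7 style fallback: return the real surface if its score
    is greater than the threshold score, else a virtual surface."""
    if highest_score > threshold:
        return best_surface
    return "virtual_surface"
```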
8. The method of claim 1, further comprising overriding the selected [ final ] surface, and selecting another surface, wherein the [ final ] surface on which the content element is displayed to the user is the other [ another ] surface.
9. The method of claim 1, further comprising moving the displayed content element that is [ displayed on the final surface ] from the [ final ] surface to another surface.
10. The method of claim 1, wherein the displayed content element [ displayed on the final ] surface is moved from the [ final ] surface to the an [ another ] surface via a hand gesture of the user.
11. An augmented reality (AR) display system, comprising: a head-mounted system comprising: one or more sensors, and one or more cameras comprising outward facing cameras; a processor to execute a set of program code instructions; and a memory to hold the set of program code instructions, in which the set of program code instructions comprises program code [ which, when executed by the processor, causes the processor ] to perform [ a set of acts, the set of acts ] comprising:
identifying a content element having [ from a plurality of content elements for display via an augmented reality (AR) display device to a user wearing the AR display device in a dynamic physical environment;]
[ determining ] a [ respective ] plurality of different [ element ] attributes [ for the content element, the respective plurality of element attributes including a specific element attribute ] and corresponding to the [ a respective ] plurality of different [ surface ] attributes of each [ surface ] of the [ a ] plurality of surfaces;
determining a [ the ] plurality of different [ surface ] attributes of [ the ] each [ surface ] of the plurality of surfaces [ , the respective plurality of surface attributes ] respectively corresponding to the [ respective ] plurality of different [ element ] attributes of the content element;
respectively [ respectively prioritizing each element attribute of one or more element attributes of the respective plurality of element attributes with an attribute priority based at least in part upon the specific element attribute of the respective plurality of element attributes;
determining, based at least in part upon one or more attribute priorities for the one or more element attributes of the respective plurality of element attributes, a final surface from the plurality of surfaces for a display of the content element on the final surface at least by:
respectively prioritizing each content element of the plurality of content elements with an element priority so that a plurality of element priorities respectively corresponds to the plurality of content elements;
respectively prioritizing the each surface of the plurality of surfaces with a surface priority so that a plurality of surface priorities respectively corresponds to the plurality of surfaces;
determining a plurality of candidate surfaces from the plurality of surfaces based at least in part upon the element priorities and the surface priorities at least by ] comparing the [ respective ] plurality of different [ element ] attributes of the content element to the [ respective ] plurality of different surface attributes of [ the ] each [ surface ] of the plurality of surfaces; [ generating a first reduced set of candidate surfaces at least by disqualifying or filtering out a first candidate surface from the plurality of candidate surfaces based at least in part upon the user attribute pertaining to a user and measured by the one or more sensors of the AR display device worn by the user in a dynamic physical environment and a comparison between the user attribute and a specific surface attribute in a first respective set of surface attributes of the first candidate surface;
determining whether or not a second respective set of surface attributes of a second candidate surface includes a disqualifying surface attribute having a disqualifying surface attribute value;
when it is determined that the second respective set of surface attributes includes the disqualifying surface attribute having the disqualifying surface attribute value, generating a second reduced set of candidate surfaces at least by disqualifying or filtering out the second candidate surface from the plurality of candidate surfaces;]
calculating a plurality of scores for the respective plurality [ second reduced set ] of [ candidate ] surfaces based on [ a result of comparing ] the respective comparisons [ plurality of element attributes of the content element to the respective plurality of surface attributes of the each surface of the second reduced set of candidate surfaces] ; [ and]
selecting a [ the final ] surface having the highest score from [ at least ] the plurality [ second reduced set ] of [ candidate ] surfaces;
storing a mapping of the content element to the selected [ final ] surface; and
displaying [ , via the AR display device, ] the content element on the selected [ final ] surface to the user [ as perceived by the user via the AR display device] .
12. The system of claim 11, wherein the identified content element [ that is identified ] is a 3D [ three-dimensional (3D) ] content element.
13. The system of claim 11, wherein the [ respective ] plurality of different [ element ] attributes of the content element are weighted differently.
14. The system of claim 11, wherein the [ respective ] plurality of different [ element ] attributes of the content element comprise a dot product orientation surface relationship, a texture, and a color.
15. The system of claim 11, [ the set of program code instructions comprises the program code to further perform: determining whether the specific element attribute takes precedence over other element attributes for other content elements based at least in part upon a value of the specific element attribute and a respective element priority for the content element, ] wherein the [ final ] surface on which the content element is displayed to the user is the selected [ for displaying the content element on the final ] surface.
16. The system of claim 11, wherein the program code further performs comparing the highest score to a threshold score, displaying the content element on either the selected [ final ] surface or a virtual surface based on the comparison [ a result of comparing the highest score to the threshold score] .
17. The system of claim 16, wherein the content element is displayed on the selected [ final ] surface if the threshold [ highest ] score is greater than the threshold score, and displaying the content element on the virtual surface if the threshold [ highest ] score is less than the threshold score.
18. The system of claim 11, wherein the program code further performs overriding the selected [ final ] surface, and selecting another surface, wherein the [ final ] surface on which the content element is displayed to the user is the other [ another ] surface.
19. The system of claim 11, wherein the program code further performs moving the displayed content element [ that is displayed on the final surface ] from the [ final ] surface to another surface.
20. The system of claim 11, wherein the program code further allows the displayed content element [ that is displayed on the final surface ] to be moved from the [ final ] surface to the other [ another ] surface via a hand gesture of the user.
[ 21. The method of claim 1, further comprising:
determining the element priorities for the plurality of content elements at least by:
determining a single element attribute from the respective plurality of element attributes to be an element priority for the content element of the plurality of content elements; and
ordering element entries corresponding to the plurality of content elements in an element data structure into ordered element entries based at least in part upon the element priorities that respectively correspond to the plurality of content elements, wherein the element data structure further stores, according to first corresponding locations of the plurality of content elements in the element data structure, the respective plurality of element attributes and the one or more attribute priorities;
determining the surface priorities that respectively correspond to the plurality of surfaces based at least in part upon the respective plurality of surface attributes of the each surface;
ordering surface entries corresponding to the plurality of surfaces in a surface data structure into ordered surface entries based at least in part upon the surface priorities that respectively correspond to the plurality of surfaces, wherein the surface data structure further stores, according to second corresponding locations in the surface data structure for the plurality of surfaces, the respective plurality of surface attributes for the each surface of the plurality of surfaces; and
associating the each surface of the plurality of surfaces with a respective adjacency parameter.]
[ 22. The method of claim 1, further comprising:
determining the plurality of surfaces and the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon environment data in the dynamic physical environment, determining the plurality of surfaces and the respective plurality of surface attributes comprising:
collecting depth information of the dynamic physical environment from at least one sensor of a plurality of sensors of the AR display device;
determining a set of connected vertices among a set of points in the depth information or the environment data at least by performing a first analysis;
generating a virtual mesh representative of at least a portion of the dynamic physical environment;
determining mesh properties at least by performing a second analysis, wherein the mesh properties are indicative of a common surface or an interpretation of the common surface;
determining the plurality of surfaces based at least in part upon a result of the second analysis; and
determining the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon the mesh properties, a result of the first analysis, or a rotation or a position of the AR display device, wherein
the dynamic physical environment is dynamic in that the dynamic physical environment or one or more objects therein are changing over time or the user wearing the AR display device is changing one or more user attributes including the user attribute over time.]
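Editorial note: claim 22's surface-determination pipeline (depth points → connected vertices → virtual mesh → mesh properties → surfaces) can be loosely illustrated by grouping depth points whose surface normals agree. The normal-quantization grouping below is an editorial substitute for the two claimed analyses, not a reconstruction of them; all names and thresholds are assumptions.

```python
from collections import defaultdict

def extract_surfaces(points_with_normals, round_to=1):
    """Group depth points whose (quantized) normals agree into
    candidate planar surfaces -- a crude stand-in for the mesh
    generation and mesh-property analyses of claim 22."""
    groups = defaultdict(list)
    for point, normal in points_with_normals:
        key = tuple(round(n, round_to) for n in normal)
        groups[key].append(point)
    # Keep only groups large enough to plausibly form a common surface.
    return {key: pts for key, pts in groups.items() if len(pts) >= 3}

# Hypothetical depth samples: three near-coplanar points and one outlier.
points = [
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
    ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
    ((0.0, 1.0, 0.0), (0.02, 0.0, 1.0)),
    ((5.0, 5.0, 5.0), (1.0, 0.0, 0.0)),
]
candidate_surfaces = extract_surfaces(points)
```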
[ 23. The method of claim 22, determining the plurality of surfaces and the respective plurality of surface attributes further comprising:
determining the user attribute of the user, comprising:
determining real-time inertial measurement unit (IMU) data and one or more images both of which are captured by the at least one sensor of the plurality of sensors of the AR display device;
determining a rotation of the AR display device worn by the user based at least in part upon the real-time IMU data;
determining a position of the AR display device relative to the dynamic physical environment based at least in part upon the real-time IMU data and the one or more images; and
determining the user attribute based at least in part upon the rotation of the AR display device and the position of the AR display device, wherein
the plurality of surfaces is determined further based at least in part upon the user attribute or the attribute value thereof, and
the respective plurality of surface attributes for the each surface of the plurality of surfaces is determined further based at least in part upon the user attribute.]
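Editorial note: claim 23 derives the user attribute from the AR display device's rotation and position, which are in turn derived from IMU data and images. A minimal sketch of that last step follows; the yaw convention, the compass bucketing, and the returned fields are illustrative assumptions.

```python
def user_attribute_from_imu(rotation_deg, position):
    """Combine device rotation (e.g., from IMU data) and position
    into a coarse head-pose user attribute. Thresholds are
    illustrative, not claimed values."""
    yaw = rotation_deg[1] % 360  # assume (pitch, yaw, roll) in degrees
    if 315 <= yaw or yaw < 45:
        facing = "north"
    elif yaw < 135:
        facing = "east"
    elif yaw < 225:
        facing = "south"
    else:
        facing = "west"
    return {"facing": facing, "position": position}

# Hypothetical reading: device yawed 90 degrees, standing at (1, 2, 0).
pose = user_attribute_from_imu((0.0, 90.0, 0.0), (1.0, 2.0, 0.0))
```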
[ 24. The method of claim 22, identifying the content element from the plurality of content elements comprising:
receiving and deconstructing a content into at least one content element of the plurality of content elements;
inferring and storing the respective plurality of element attributes for the content element of the plurality of content elements based at least in part upon placement of the at least one content element in the content;
respectively associating the respective plurality of element attributes with the element priorities that respectively correspond to the plurality of content elements; and
ordering attribute entries corresponding to the respective plurality of element attributes in an element data structure into ordered attribute entries based at least in part upon the element priorities, wherein the element data structure further stores the plurality of content elements and the one or more attribute priorities.]
[ 25. The method of claim 24, wherein the respective plurality of element attributes are inferred further based at least in part upon the placement of the at least one content element with respect to one or more other content elements in the content.]
[ 26. The method of claim 24, wherein the respective plurality of element attributes are inferred from one or more tags that pertain to the placement of the at least one content element in the content or are inferred by extracting one or more hints or the one or more tags from the at least one content element.]
[ 27. The method of claim 1, determining the plurality of candidate surfaces from the plurality of surfaces comprising:
identifying a surface data structure for the plurality of surfaces and an element data structure for the plurality of content elements including the content element;
determining whether the content element includes or is associated with a hint;
when the content element is determined to include or to be associated with the hint, searching the surface data structure for displaying the content element based at least in part on a result of analyzing the hint;
determining whether the hint or a pre-defined rule is to be used to match the plurality of content elements to the plurality of surfaces;
determining whether or not the pre-defined rule overrides the hint; and
for the content element, determining the plurality of candidate surfaces from the plurality of surfaces based at least in part on the element priorities, the hint or the pre-defined rule, a first result of determining whether the hint or the pre-defined rule is to be used, and a second result of determining whether or not the pre-defined rule overrides the hint.]
[ 28. The method of claim 27, determining the plurality of candidate surfaces from the plurality of surfaces further comprising:
identifying a first content element that is associated with a highest element priority at least by traversing the element data structure;
identifying one or more first matching surfaces at least by comparing the respective plurality of element attributes of the first content element to the respective plurality of surface attributes of one or more first surfaces of the plurality of surfaces;
identifying a second content element that is associated with a second highest element priority at least by traversing the element data structure;
identifying one or more second matching surfaces at least by comparing the respective plurality of element attributes of the second content element to the respective plurality of surface attributes of one or more second surfaces of the plurality of surfaces; and
determining the plurality of candidate surfaces based at least in part upon the one or more first matching surfaces and the one or more second matching surfaces.]
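Editorial note: claim 28 walks content elements in descending element-priority order and collects matching surfaces for each. A sketch of that traversal follows; the dictionary-based data structures and the caller-supplied `matches` comparison are assumptions standing in for the claimed element and surface data structures.

```python
def candidates_by_priority(elements, surfaces, matches):
    """Traverse content elements from highest to lowest element
    priority (claim 28) and collect, for each, the surfaces whose
    attributes match per the caller-supplied comparison."""
    ordered = sorted(elements, key=lambda e: e["priority"], reverse=True)
    result = {}
    for element in ordered:
        result[element["name"]] = [
            s["name"] for s in surfaces if matches(element, s)
        ]
    return result

# Hypothetical elements and surfaces matched on a single attribute.
elements = [
    {"name": "video", "priority": 2, "orientation": "vertical"},
    {"name": "notes", "priority": 1, "orientation": "horizontal"},
]
surfaces = [
    {"name": "wall",  "orientation": "vertical"},
    {"name": "table", "orientation": "horizontal"},
]
matched = candidates_by_priority(
    elements, surfaces,
    lambda e, s: e["orientation"] == s["orientation"],
)
```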
[ 29. The method of claim 1, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by disambiguating one or more conflicts among two or more candidate surfaces.]
[ 30. The method of claim 1, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by removing and excluding a particular candidate surface from further processing when a score of the particular candidate surface exceeds a threshold.]
[ 31. The method of claim 30, wherein the content element comprises an environment driven content element and is identified from the plurality of content elements based at least in part upon respective scores of a plurality of environment driven content elements after identifying the particular candidate surface in the dynamic physical environment.]
[ 32. The method of claim 1, further comprising:
determining, for the content element, that no surfaces in the plurality of surfaces are compatible with displaying the content element on, based at least in part upon the respective plurality of content element attributes of the content element and the respective plurality of surface attributes of the plurality of surfaces.]
[ 33. The method of claim 1, further comprising:
determining, for the user wearing the AR display device and perceiving a representation of the content element through the AR display device, a first value for the user attribute;
detecting, by the AR display device, a change that updates the first value into a second value for the user attribute;
determining whether the change exceeds a threshold for changes; and
determining whether or not the second value for the user attribute is maintained for over a temporal threshold.]
[ 34. The method of claim 33, further comprising:
when it is determined that the change is smaller than the threshold for changes, or that the second value for the user attribute is maintained for a first temporal duration shorter than the temporal threshold, maintaining the representation of the content element on the final surface.]
[ 35. The method of claim 33, further comprising:
when it is determined that the change is greater than the threshold for changes, and that the second value for the user attribute is maintained for a second temporal duration longer than the temporal threshold,
determining whether or not a new surface in the dynamic physical environment is compatible with changing the representation of the content element onto the new surface based at least in part upon the respective plurality of surface attributes for the new surface.]
[ 36. The method of claim 35, further comprising:
when it is determined that the new surface is compatible with changing the representation of the content element onto the new surface, moving the representation of the content element onto the new surface at least by rendering the content element on the new surface; and
when it is determined that the new surface is incompatible with changing the representation of the content element onto the new surface, creating a virtual surface for the representation of the content element; and
moving the representation of the content element onto the virtual surface at least by rendering the content element on the virtual surface, wherein the user attribute comprises a head-pose.]
[ 37. The method of claim 36, moving the representation comprising:
incrementally rendering the content element through one or more intermediate positions so that the representation of the content element is perceived by the user through the AR display device at the one or more intermediate positions before finally being rendered on the new surface or the virtual surface.]
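Editorial note: claims 33 through 37 together describe a debounced relocation, i.e., move the content only when the head-pose change is both large enough and sustained long enough, and claim 37 adds incremental rendering through intermediate positions. Both pieces are sketched below; the threshold semantics and the linear interpolation are illustrative assumptions.

```python
def should_relocate(delta, change_threshold, held_for, temporal_threshold):
    """Claims 33-35 style debounce: relocate only when the user-attribute
    change exceeds the change threshold AND the new value persists
    beyond the temporal threshold."""
    return delta > change_threshold and held_for > temporal_threshold

def intermediate_positions(start, end, steps):
    """Claim 37 style incremental move: linearly interpolated waypoints
    between the old surface position and the new one."""
    return [
        tuple(s + (e - s) * i / steps for s, e in zip(start, end))
        for i in range(1, steps + 1)
    ]

# Hypothetical move across four render steps.
waypoints = intermediate_positions((0.0, 0.0), (4.0, 0.0), 4)
```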
[ 38. The augmented reality display system of claim 11, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts, the set of acts further comprising:
determining the plurality of surfaces and the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon environment data in the dynamic physical environment, determining the plurality of surfaces and the respective plurality of surface attributes comprising:
collecting depth information of the dynamic physical environment from at least one sensor of the plurality of sensors of the AR display device;
determining a set of connected vertices among a set of points in the depth information or the environment data at least by performing a first analysis;
generating a virtual mesh representative of at least a portion of the dynamic physical environment;
determining mesh properties at least by performing a second analysis, wherein the mesh properties are indicative of a common surface or an interpretation of the common surface;
determining the plurality of surfaces based at least in part upon a result of the second analysis; and
determining the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon the mesh properties, a result of the first analysis, or a rotation or a position of the AR display device, wherein
the dynamic physical environment is dynamic in that the dynamic physical environment or one or more objects therein are changing over time or the user wearing the AR display device is changing one or more user attributes including the user attribute.]
[ 39. The augmented reality display system of claim 38, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts that determines the plurality of surfaces and the respective plurality of surface attributes, the set of acts further comprising:
determining the user attribute of the user, comprising:
determining real-time inertial measurement unit (IMU) data and one or more images both of which are captured by the at least one sensor of the plurality of sensors of the AR display device;
determining a rotation of the AR display device worn by the user based at least in part upon the real-time IMU data;
determining a position of the AR display device relative to the dynamic physical environment based at least in part upon the real-time IMU data and the one or more images; and
determining the user attribute based at least in part upon the rotation of the AR display device and the position of the AR display device, wherein
the plurality of surfaces is determined further based at least in part upon the user attribute or the attribute value thereof, and
the respective plurality of surface attributes for the each surface of the plurality of surfaces is determined further based at least in part upon the user attribute.]
[ 40. The augmented reality display system of claim 38, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts that identifies the content element from the plurality of content elements, the set of acts further comprising:
receiving and deconstructing a content into at least one content element of the plurality of content elements;
inferring and storing the respective plurality of element attributes for the content element of the plurality of content elements based at least in part upon placement of the at least one content element in the content;
respectively associating the respective plurality of element attributes with the element priorities that respectively correspond to the plurality of content elements; and
ordering attribute entries corresponding to the respective plurality of element attributes in an element data structure into ordered attribute entries based at least in part upon the element priorities, wherein the element data structure further stores the plurality of content elements and the one or more attribute priorities.]
[ 41. The augmented reality display system of claim 11, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by disambiguating one or more conflicts among two or more candidate surfaces.]
[ 42. The augmented reality display system of claim 11, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by removing and excluding a particular candidate surface from further processing when a score of the particular candidate surface exceeds a threshold.]
[ 43. The augmented reality display system of claim 11, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts, the set of acts further comprising:
determining, for the user wearing the AR display device and perceiving a representation of the content element through the AR display device, a first value for the user attribute;
detecting, by the AR display device, a change that updates the first value into a second value for the user attribute;
determining whether the change exceeds a threshold for changes;
determining whether or not the second value for the user attribute is maintained for over a temporal threshold; and
incrementally moving the content element from the final surface through one or more intermediate positions to a different surface so that the representation of the content element is perceived by the user through the AR display device at the one or more intermediate positions before finally being rendered on the different surface.]