CPC G06T 19/006 (2013.01) [G06F 3/011 (2013.01); G06F 3/04815 (2013.01); G06T 19/20 (2013.01); G06T 2207/10028 (2013.01); H04N 21/2542 (2013.01); H04N 21/4312 (2013.01); H04N 21/816 (2013.01)] |
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT: |
Claims 1-20 are determined to be patentable as amended. |
New claims 21-43 are added and determined to be patentable. |
1. A method for matching content to a plurality of surfaces of an environment of the user, the method comprising:
identifying a content element
determining ] a [ respective ] plurality of
determining
determining, based at least in part upon one or more attribute priorities for the one or more element attributes of the respective plurality of element attributes, a final surface from the plurality of surfaces for a display of the content element on the final surface at least by:
respectively prioritizing each content element of the plurality of content elements with an element priority so that a plurality of element priorities respectively corresponds to the plurality of content elements;
respectively prioritizing the each surface of the plurality of surfaces with a surface priority so that a plurality of surface priorities respectively corresponds to the plurality of surfaces;
determining a plurality of candidate surfaces from the plurality of surfaces based at least in part upon the element priorities and the surface priorities at least by ] comparing the [ respective ] plurality of
[ generating a first reduced set of candidate surfaces at least by disqualifying or filtering out a first candidate surface from the plurality of candidate surfaces based at least in part upon a user attribute pertaining to the user and measured at a time instant by one or more sensors of the display device worn by the user in the dynamic physical environment and a comparison between the user attribute and a specific surface attribute in a first respective set of surface attributes of the first candidate surface;
determining whether or not a second respective set of surface attributes of a second candidate surface includes a disqualifying surface attribute having a disqualifying surface attribute value;
when it is determined that the second respective set of surface attributes includes the disqualifying surface attribute having the disqualifying surface attribute value, generating a second reduced set of candidate surfaces at least by disqualifying or filtering out the second candidate surface from the plurality of candidate surfaces;]
calculating a plurality of scores for the
selecting
storing a mapping of the content element to
[ dynamically ] displaying [ , via the AR display device, ] the content element on the
|
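For illustration only, the following Python sketch outlines one possible arrangement of the prioritizing, filtering, scoring, and selection steps recited in claim 1; the names ContentElement, Surface, is_disqualified, and score_surface, along with the attribute keys used, are hypothetical and are not drawn from the claims.

```python
from dataclasses import dataclass

@dataclass
class ContentElement:
    element_id: str
    attributes: dict      # e.g. {"aspect_ratio": 1.78}
    priority: int = 0     # element priority (higher = placed first)

@dataclass
class Surface:
    surface_id: str
    attributes: dict      # e.g. {"aspect_ratio": 1.6, "occluded": False, "bearing_deg": 10.0}
    priority: int = 0     # surface priority

def is_disqualified(surface: Surface, user_attributes: dict) -> bool:
    """Filter out a candidate surface using a disqualifying surface attribute
    value (e.g. an occluded surface) or a comparison against a user attribute."""
    if surface.attributes.get("occluded", False):
        return True
    fov = user_attributes.get("field_of_view_deg", 90.0)
    return abs(surface.attributes.get("bearing_deg", 0.0)) > fov / 2

def score_surface(element: ContentElement, surface: Surface) -> float:
    """Compare element attributes to surface attributes and return one score."""
    ratio_gap = abs(element.attributes.get("aspect_ratio", 1.0)
                    - surface.attributes.get("aspect_ratio", 1.0))
    return surface.priority + element.priority - ratio_gap

def choose_final_surface(element, surfaces, user_attributes):
    """Reduce the candidate set, score the remaining candidates, and select the
    surface with the highest score; the caller can store the returned mapping."""
    candidates = [s for s in surfaces if not is_disqualified(s, user_attributes)]
    if not candidates:
        return None                                   # no compatible surface (cf. claim 32)
    scored = {s.surface_id: score_surface(element, s) for s in candidates}
    best = max(candidates, key=lambda s: scored[s.surface_id])
    return best.surface_id, scored[best.surface_id]   # element-to-surface mapping
```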
2. The method of claim 1, wherein the
|
3. The method of claim 1, wherein the [ respective ] plurality of
|
4. The method of claim 1, wherein the [ respective ] plurality of
|
5. The method of claim 1, [ further comprising determining whether the specific element attribute takes precedent over other element attributes for other content elements based at least in part upon a value of the specific element attribute and a respective element priority for the content element, ] wherein the [ final ] surface on which the content element is displayed to the user is
|
6. The method of claim 1, further comprising comparing the highest score to a threshold score, displaying the content element on either the
|
7. The method of claim 6, wherein the content element is displayed on the
|
8. The method of claim 1, further comprising overriding the
|
9. The method of claim 1, further comprising moving the
|
10. The method of claim 1, wherein the
|
11. An augmented reality (AR) display system, comprising: a head-mounted system comprising: one or more sensors, and one or more cameras comprising outward facing cameras; a processor to execute a set of program code instructions; and a memory to hold the set of program code instructions, in which the set of program code instructions comprises program code [ which, when executed by the processor, causes the processor ] to perform [ a set of acts, the set of acts ] comprising:
identifying a content element
[ determining ] a [ respective ] plurality of
determining
determining, based at least in part upon one or more attribute priorities for the one or more element attributes of the respective plurality of element attributes, a final surface from the plurality of surfaces for a display of the content element on the final surface at least by:
respectively prioritizing each content element of the plurality of content elements with an element priority so that a plurality of element priorities respectively corresponds to the plurality of content elements;
respectively prioritizing the each surface of the plurality of surfaces with a surface priority so that a plurality of surface priorities respectively corresponds to the plurality of surfaces;
determining a plurality of candidate surfaces from the plurality of surfaces based at least in part upon the element priorities and the surface priorities at least by ] comparing the [ respective ] plurality of
determining whether or not a second respective set of surface attributes of a second candidate surface includes a disqualifying surface attribute having a disqualifying surface attribute value;
when it is determined that the second respective set of surface attributes includes the disqualifying surface attribute having the disqualifying surface attribute value, generating a second reduced set of candidate surfaces at least by disqualifying or filtering out the second candidate surface from the plurality of candidate surfaces;]
calculating a plurality of scores for the
selecting
storing a mapping of the content element to the
displaying [ , via the AR display device, ] the content element on the
|
12. The system of claim 11, wherein the
|
13. The system of claim 11, wherein the [ respective ] plurality of
|
14. The system of claim 11, wherein the [ respective ] plurality of
|
15. The system of claim 11, [ the set of program code instructions comprises the program code to further perform: determining whether the specific element attribute takes precedent over other element attributes for other content elements based at least in part upon a value of the specific element attribute and a respective element priority for the content element, ] wherein the [ final ] surface on which the content element is displayed to the user is
|
16. The system of claim 11, wherein the program code further performs comparing the highest score to a threshold score, displaying the content element on either the
|
17. The system of claim 16, wherein the content element is displayed on the
|
18. The system of claim 11, wherein the program code further performs overriding the
|
19. The system of claim 11, wherein the program code further performs moving the
|
20. The system of claim 11, wherein the program code further allows the
|
[ 21. The method of claim 1, further comprising:
determining the element priorities for the plurality of content elements at least by:
determining a single element attribute from the respective plurality of element attributes to be an element priority for the content element of the plurality of content elements; and
ordering element entries corresponding to the plurality of content elements in an element data structure into ordered element entries based at least in part upon the element priorities that respectively correspond to the plurality of content elements, wherein the element data structure further stores, according to first corresponding locations of the plurality of content elements in the element data structure, the respective plurality of element attributes and the one or more attribute priorities;
determining the surface priorities that respectively correspond to the plurality of surfaces based at least in part upon the respective plurality of surface attributes of the each surface;
ordering surface entries corresponding to the plurality of surfaces in a surface data structure into ordered surface entries based at least in part upon the surface priorities that respectively correspond to the plurality of surfaces, wherein the surface data structure further stores, according to second corresponding locations in the surface data structure for the plurality of surfaces, the respective plurality of surface attributes for the each surface of the plurality of surfaces; and
associating the each surface of the plurality of surfaces with a respective adjacency parameter.]
|
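As a minimal sketch of the ordered element and surface data structures recited in claim 21, the snippet below sorts entries by priority and attaches a per-surface adjacency parameter; the dictionary field names are assumptions for illustration.

```python
from collections import OrderedDict

def build_element_table(elements):
    """elements: iterable of dicts like {"id", "attributes", "priority"}.
    Returns ordered element entries keyed by element id."""
    ordered = sorted(elements, key=lambda e: -e["priority"])
    return OrderedDict((e["id"], e) for e in ordered)

def build_surface_table(surfaces, adjacency):
    """surfaces: iterable of dicts like {"id", "attributes", "priority"};
    adjacency: mapping of surface id to an adjacency parameter (e.g. neighbour ids)."""
    ordered = sorted(surfaces, key=lambda s: -s["priority"])
    table = OrderedDict()
    for s in ordered:
        entry = dict(s)
        entry["adjacency"] = adjacency.get(s["id"], ())   # per-surface adjacency parameter
        table[s["id"]] = entry
    return table
```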
[ 22. The method of claim 1, further comprising:
determining the plurality of surfaces and the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon environment data in the dynamic physical environment, determining the plurality of surfaces and the respective plurality of surface attributes comprising:
collecting depth information of the dynamic physical environment from at least one sensor of a plurality of sensors of the AR display device;
determining a set of connected vertices among a set of points in the depth information or the environment data at least by performing a first analysis;
generating a virtual mesh representative of at least a portion of the dynamic physical environment;
determining mesh properties at least by performing a second analysis, wherein the mesh properties are indicative of a common surface or an interpretation of the common surface;
determining the plurality of surfaces based at least in part upon a result of the second analysis; and
determining the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon the mesh properties, a result of the first analysis, or a rotation or a position of the AR display device, wherein
the dynamic physical environment is dynamic in that the dynamic physical environment or one or more objects therein are changing over time or the user wearing the AR display device is changing one or more user attributes including the user attribute over time.]
|
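The following sketch illustrates, under simplifying assumptions, the kind of processing recited in claims 22 and 38: depth data is back-projected into a grid of connected vertices (a stand-in for the first analysis), per-cell normals of the implied mesh are computed (a stand-in for the second analysis), and cells with agreeing normals are grouped into candidate surfaces with simple surface attributes. The function names and the normal-clustering heuristic are hypothetical.

```python
import numpy as np

def depth_to_vertices(depth, fx=500.0, fy=500.0):
    """Back-project a depth image into a grid of connected 3-D vertices
    (neighbouring pixels are treated as connected)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2) * depth / fx
    y = (v - h / 2) * depth / fy
    return np.dstack([x, y, depth])                 # (h, w, 3) vertex grid

def face_normals(verts):
    """Per-cell normals of the triangle mesh implied by the vertex grid."""
    du = verts[:, 1:, :] - verts[:, :-1, :]         # horizontal edges
    dv = verts[1:, :, :] - verts[:-1, :, :]         # vertical edges
    n = np.cross(du[:-1], dv[:, :-1])
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9)

def extract_surfaces(depth, normal_tol=0.98):
    """Group mesh cells whose normals agree into candidate planar surfaces and
    report simple surface attributes (cell count and mean normal)."""
    normals = face_normals(depth_to_vertices(depth))
    flat = normals.reshape(-1, 3)
    surfaces, used = [], np.zeros(len(flat), dtype=bool)
    for i in range(len(flat)):
        if used[i]:
            continue
        similar = (flat @ flat[i]) > normal_tol     # cells with a similar normal
        used |= similar
        surfaces.append({"cells": int(similar.sum()),
                         "normal": flat[similar].mean(axis=0).round(3).tolist()})
    return surfaces
```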
[ 23. The method of claim 22, determining the plurality of surfaces and the respective plurality of surface attributes further comprising:
determining the user attribute of the user, comprising:
determining real-time inertial measurement unit (IMU) data and one or more images both of which are captured by the at least one sensor of the plurality of sensors of the AR display device;
determining a rotation of the AR display device worn by the user based at least in part upon the real-time IMU data;
determining a position of the AR display device relative to the dynamic physical environment based at least in part upon the real-time IMU data and the one or more images; and
determining the user attribute based at least in part upon the rotation of the AR display device and the position of the AR display device, wherein
the plurality of surfaces is determined further based at least in part upon the user attribute or the attribute value thereof, and
the respective plurality of surface attributes for the each surface of the plurality of surfaces is determined further based at least in part upon the user attribute.]
|
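A toy sketch of the user-attribute determination in claims 23 and 39 follows: rotation is integrated from real-time IMU data and position is blended from IMU and image-derived estimates. The complementary-filter fusion and the names gyro_rate_yaw and visual_position are illustrative assumptions, not the claimed method.

```python
import math

def integrate_rotation(yaw, gyro_rate_yaw, dt):
    """Integrate a single gyro axis to track head yaw (radians)."""
    return (yaw + gyro_rate_yaw * dt) % (2 * math.pi)

def fuse_position(imu_position, visual_position, alpha=0.8):
    """Blend a dead-reckoned IMU position with a camera-derived position."""
    return tuple(alpha * v + (1 - alpha) * p
                 for p, v in zip(imu_position, visual_position))

def head_pose(yaw, position):
    """The user attribute used for surface determination: rotation plus position."""
    return {"yaw_rad": yaw, "position": position}
```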
[ 24. The method of claim 22, identifying the content element from the plurality of content elements comprising:
receiving and deconstructing a content into at least one content element of the plurality of content elements;
inferring and storing the respective plurality of element attributes for the content element of the plurality of content elements based at least in part upon placement of the at least one content element in the content;
respectively associating the respective plurality of element attributes with the element priorities that respectively correspond to the plurality of content elements; and
ordering attribute entries corresponding to the respective plurality of element attributes in an element data structure into ordered attribute entries based at least in part upon the element priorities, wherein the element data structure further stores the plurality of content elements and the one or more attribute priorities.]
|
[ 25. The method of claim 24, wherein the respective plurality of element attributes are inferred further based at least in part upon the placement of the at least one content element with respect to one or more other content elements in the content.]
|
[ 26. The method of claim 24, wherein the respective plurality of element attributes are inferred from one or more tags that pertain to the placement of the at least one content element in the content or are inferred by extracting one or more hints or the one or more tags from the at least one content element.]
|
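To illustrate claims 24 through 26, the sketch below deconstructs HTML-like content into content elements and infers element attributes from tag type, placement order, and an explicit hint tag; the tag set and the data-ar-hint attribute name are hypothetical examples of such hints.

```python
from html.parser import HTMLParser

class ContentDeconstructor(HTMLParser):
    BLOCK_TAGS = {"img", "video", "p", "h1"}

    def __init__(self):
        super().__init__()
        self.elements, self._index = [], 0

    def handle_starttag(self, tag, attrs):
        if tag not in self.BLOCK_TAGS:
            return
        attrs = dict(attrs)
        self.elements.append({
            "tag": tag,
            "placement_index": self._index,              # placement in the content
            "hint": attrs.get("data-ar-hint"),           # explicit hint, if any
            # Inferred attribute: in this toy example, media elements appearing
            # earlier in the content receive a higher element priority.
            "priority": (2 if tag in ("img", "video") else 1) - 0.01 * self._index,
        })
        self._index += 1

parser = ContentDeconstructor()
parser.feed('<h1>Title</h1><img src="a.png" data-ar-hint="vertical"><p>Body</p>')
# parser.elements now holds ordered element entries with inferred attributes.
```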
[ 27. The method of claim 1, determining the plurality of candidate surfaces from the plurality of surfaces comprising:
identifying a surface data structure for the plurality of surfaces and an element data structure for the plurality of content elements including the content element;
determining whether the content element includes or is associated with a hint;
when the content element is determined to include or to be associated with the hint, searching the surface data structure for displaying the content element based at least in part on a result of analyzing the hint;
determining whether the hint or a pre-defined rule is to be used to match the plurality of content elements to the plurality of surfaces;
determining whether or not the pre-defined rule overrides the hint; and
for the content element, determining the plurality of candidate surfaces from the plurality of surfaces based at least in part on the element priorities, the hint or the pre-defined rule, a first result of determining whether the hint or the pre-defined rule is to be used, and a second result of determining whether or not the pre-defined rule overrides the hint.]
|
[ 28. The method of claim 27, determining the plurality of candidate surfaces from the plurality of surfaces further comprising:
identifying a first content element that is associated with a highest element priority at least by traversing the element data structure;
identifying one or more first matching surfaces at least by comparing the respective plurality of element attributes of the first content element to the respective plurality of surface attributes of one or more first surfaces of the plurality of surfaces;
identifying a second content element that is associated with a second highest element priority at least by traversing the element data structure;
identifying one or more second matching surfaces at least by comparing the respective plurality of element attributes of the second content element to the respective plurality of surface attributes of one or more second surfaces of the plurality of surfaces; and
determining the plurality of candidate surfaces based at least in part upon the one or more first matching surfaces and the one or more second matching surfaces.]
|
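The sketch below illustrates the priority-ordered traversal of claims 27 and 28: elements are visited from highest element priority downward, a hint is honored unless a pre-defined rule overrides it, and matching surfaces are collected per element. The matches helper and the rule_overrides_hint flag are assumed for illustration.

```python
def matches(element: dict, surface: dict) -> bool:
    """Toy attribute comparison: element and surface orientation must agree."""
    return element.get("orientation") == surface.get("orientation")

def candidate_surfaces(elements, surfaces, rule_overrides_hint=False):
    """Traverse elements by descending element priority and collect, for each
    element, the surfaces whose attributes match its attributes."""
    ordered = sorted(elements, key=lambda e: -e["priority"])
    candidates = {}
    for element in ordered:
        hint = None if rule_overrides_hint else element.get("hint")
        pool = [s for s in surfaces if hint is None or s.get("kind") == hint]
        candidates[element["id"]] = [s for s in pool if matches(element, s)]
    return candidates
```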
[ 29. The method of claim 1, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by disambiguating one or more conflicts among two or more candidate surfaces.]
|
[ 30. The method of claim 1, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by removing and excluding a particular candidate surface from further processing when a score of the particular candidate surface exceeds a threshold.]
|
[ 31. The method of claim 30, wherein the content element comprises an environment driven content element and is identified from the plurality of content elements based at least in part upon respective scores of a plurality of environment driven content elements after identifying the particular candidate surface in the dynamic physical environment.]
|
[ 32. The method of claim 1, further comprising:
determining, for the content element, that no surface in the plurality of surfaces is compatible with displaying the content element thereon, based at least in part upon the respective plurality of element attributes of the content element and the respective plurality of surface attributes of the plurality of surfaces.]
|
[ 33. The method of claim 1, further comprising:
determining, for the user wearing the AR display device and perceiving a representation of the content element through the AR display device, a first value for the user attribute;
detecting, by the AR display device, a change that updates the first value into a second value for the user attribute;
determining whether the change exceeds a threshold for changes; and
determining whether or not the second value for the user attribute is maintained for longer than a temporal threshold.]
|
[ 34. The method of claim 33, further comprising:
when it is determined that the change is smaller than the threshold for changes, or that the second value for the user attribute is maintained for a first temporal duration shorter than the temporal threshold, maintaining the representation of the content element on the final surface.]
|
[ 35. The method of claim 33, further comprising:
when it is determined that the change is greater than the threshold for changes, and that the second value for the user attribute is maintained for a second temporal duration longer than the temporal threshold,
determining whether or not a new surface in the dynamic physical environment is compatible with changing the representation of the content element onto the new surface based at least in part upon the respective plurality of surface attributes for the new surface.]
|
[ 36. The method of claim 35, further comprising:
when it is determined that the new surface is compatible with changing the representation of the content element onto the new surface, moving the representation of the content element onto the new surface at least by rendering the content element on the new surface; and
when it is determined that the new surface is incompatible with changing the representation of the content element onto the new surface, creating a virtual surface for the representation of the content element; and
moving the representation of the content element onto the virtual surface at least by rendering the content element on the virtual surface, wherein the user attribute comprises a head-pose.]
|
[ 37. The method of claim 36, moving the representation comprising:
incrementally rendering the content element through one or more intermediate positions so that the representation of the content element is perceived by the user through the AR display device at the one or more intermediate positions before being finally rendered on the new surface or the virtual surface.]
|
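Finally, a short sketch of the change-detection and incremental-move behavior of claims 33 through 37 (and claim 43): the element is moved only when the head-pose change exceeds a change threshold and persists past a temporal threshold, and the move then passes through intermediate positions. The threshold values and the linear interpolation are illustrative assumptions.

```python
def should_move(old_value, new_value, held_seconds,
                change_threshold=0.35, temporal_threshold=2.0):
    """Move only if the user-attribute change is large enough and has been
    maintained for longer than the temporal threshold."""
    change = abs(new_value - old_value)
    return change > change_threshold and held_seconds > temporal_threshold

def intermediate_positions(start, end, steps=10):
    """Yield positions between the final surface and the new (or virtual)
    surface so the element is perceived at intermediate positions."""
    for i in range(1, steps + 1):
        t = i / steps
        yield tuple(a + t * (b - a) for a, b in zip(start, end))
```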
[ 38. The augmented reality display system of claim 11, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts, the set of acts further comprising:
determining the plurality of surfaces and the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon environment data in the dynamic physical environment, determining the plurality of surfaces and the respective plurality of surface attributes comprising:
collecting depth information of the dynamic physical environment from at least one sensor of the plurality of sensors of the AR display device;
determining a set of connected vertices among a set of points in the depth information or the environment data at least by performing a first analysis;
generating a virtual mesh representative of at least a portion of the dynamic physical environment;
determining mesh properties at least by performing a second analysis, wherein the mesh properties are indicative of a common surface or an interpretation of the common surface;
determining the plurality of surfaces based at least in part upon a result of the second analysis; and
determining the respective plurality of surface attributes for the each surface of the plurality of surfaces based at least in part upon the mesh properties, a result of the first analysis, or a rotation or a position of the AR display device, wherein
the dynamic physical environment is dynamic in that the dynamic physical environment or one or more objects therein are changing over time or the user wearing the AR display device is changing one or more user attributes including the user attribute.]
|
[ 39. The augmented reality display system of claim 38, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts that determines the plurality of surfaces and the respective plurality of surface attributes, the set of acts further comprising:
determining the user attribute of the user, comprising:
determining real-time inertial measurement unit (IMU) data and one or more images both of which are captured by the at least one sensor of the plurality of sensors of the AR display device;
determining a rotation of the AR display device worn by the user based at least in part upon the real-time IMU data;
determining a position of the AR display device relative to the dynamic physical environment based at least in part upon the real-time IMU data and the one or more images; and
determining the user attribute based at least in part upon the rotation of the AR display device and the position of the AR display device, wherein
the plurality of surfaces is determined further based at least in part upon the user attribute or the attribute value thereof, and
the respective plurality of surface attributes for the each surface of the plurality of surfaces is determined further based at least in part upon the user attribute.]
|
[ 40. The augmented reality display system of claim 38, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts that identifies the content element from the plurality of content elements, the set of acts further comprising:
receiving and deconstructing a content into at least one content element of the plurality of content elements;
inferring and storing the respective plurality of element attributes for the content element of the plurality of content elements based at least in part upon placement of the at least one content element in the content;
respectively associating the respective plurality of element attributes with the element priorities that respectively correspond to the plurality of content elements; and
ordering attribute entries corresponding to the respective plurality of element attributes in an element data structure into ordered attribute entries based at least in part upon the element priorities, wherein the element data structure further stores the plurality of content elements and the one or more attribute priorities.]
|
[ 41. The augmented reality display system of claim 11, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by disambiguating one or more conflicts among two or more candidate surfaces.]
|
[ 42. The augmented reality display system of claim 11, wherein the first reduced set of candidate surfaces or the second reduced set of candidate surfaces is generated further at least by removing and excluding a particular candidate surface from further processing when a score of the particular candidate surface exceeds a threshold.]
|
[ 43. The augmented reality display system of claim 11, wherein the set of program code instructions comprises the program code which, when executed by the processor, causes the processor to perform the set of acts, the set of acts further comprising:
determining, for the user wearing the AR display device and perceiving a representation of the content element through the AR display device, a first value for the user attribute;
detecting, by the AR display device, a change that updates the first value into a second value for the user attribute;
determining whether the change exceeds a threshold for changes;
determining whether or not the second value for the user attribute is maintained for longer than a temporal threshold; and
incrementally moving the content element from the final surface through one or more intermediate positions to a different surface so that the representation of the content element is perceived by the user through the AR display device at the one or more intermediate positions before finally being rendered on the different surface.]
|