US 10,282,907 C1 (12,352nd)
Interacting with a network to transmit virtual image data in augmented or virtual reality systems
Samuel A. Miller, Hollywood, FL (US); and Rony Abovitz, Hollywood, FL (US)
Filed by Magic Leap, Inc., Dania Beach, FL (US)
Assigned to MAGIC LEAP, INC., Plantation, FL (US)
Reexamination Request No. 90/014,999, Apr. 8, 2022.
Reexamination Certificate for Patent 10,282,907, issued May 7, 2019, Appl. No. 14/703,854, May 4, 2015.
Application 14/703,854 is a continuation of application No. 14/205,126, filed on Mar. 11, 2014, granted, now Patent No. 10,629,003.
Claims priority of provisional application 61/776,771, filed on Mar. 11, 2013.
Ex Parte Reexamination Certificate issued on Aug. 14, 2023.
Int. Cl. G06T 19/00 (2011.01); G02B 27/01 (2006.01); G06T 7/73 (2017.01); H04L 67/131 (2022.01); G02B 27/00 (2006.01); G06F 3/01 (2006.01); G06T 1/20 (2006.01); G09G 5/00 (2006.01); H04L 67/10 (2022.01)
CPC G06T 19/006 (2013.01) [G02B 27/0093 (2013.01); G02B 27/017 (2013.01); G02B 27/0172 (2013.01); G06F 3/013 (2013.01); G06F 3/016 (2013.01); G06F 3/017 (2013.01); G06T 1/20 (2013.01); G06T 7/73 (2017.01); G09G 5/006 (2013.01); H04L 67/10 (2013.01); H04L 67/131 (2022.05); G02B 2027/014 (2013.01); G02B 2027/0127 (2013.01); G02B 2027/0138 (2013.01); G02B 2027/0178 (2013.01); G02B 2027/0187 (2013.01); G06T 2207/30201 (2013.01); G06T 2219/024 (2013.01); G09G 2370/02 (2013.01); G09G 2370/20 (2013.01)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1 and 10 are determined to be patentable as amended.
Claims 2-9 and 11-17, dependent on an amended claim, are determined to be patentable.
New claims 18-38 are added and determined to be patentable.
1. A method, comprising:
capturing, by one or more sensors of a wearable display system located in a first physical environment, an image of a real physical object, the real physical object located in a field of view of a first user, wherein
a location in the first physical environment of the real physical object is known to the wearable display system;
provisioning, by the wearable display system and within the field of view of the first user, an interface having a first interface feature and a second interface feature, wherein
the first and second interface features, when activated, respectively configure the wearable display system for respective, different visualization modes of a plurality of visualization modes that includes a blended reality mode, and
the blended reality mode is a combination of a virtual reality mode and an augmented reality mode;
rendering, by the wearable display system and from the captured image of the real physical object, a rendered physical object based at least in part upon a digital model for the real physical object, wherein the rendered physical object appears substantially identical to the real physical object; and
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes via the interface, the rendered physical object at a display location in a display that corresponds to an area of the display through which the real physical object is viewable, wherein
the blended reality mode comprises displaying virtual objects and the rendered physical object in the first physical environment of the first user, and
the rendered physical object is displayed in place of the real physical object in the display of the wearable display system [ ;
transmitting, from the wearable display system to a mobile communication device operated by a third user located in the first physical environment in which the first user is also located, a first data stream for one or more first interactions by the first user with one or more real physical objects in the first physical environment; and
transmitting, from a computing system to the mobile communication device, a second data stream for a second user object as dynamically controlled by a second user at a second physical environment in near real-time] .
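Amended claim 1 recites a capture-render-display loop: a rendered twin of a known real object is drawn over the display area through which that object is viewable, in a blended mode selected through an in-view interface, with two outbound data streams added by amendment. A minimal illustrative sketch of that loop follows; every name (WearableDisplay, RenderedObject, render_physical_object) is a hypothetical placeholder, not the patentee's implementation.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float

    @dataclass
    class RenderedObject:
        source_id: str   # which real physical object this render replaces
        pose: Pose       # same location at which the real object is viewable
        mesh: str        # stand-in for geometry from the digital model

    MODES = ("augmented", "virtual", "blended")  # plurality of visualization modes

    class WearableDisplay:
        """Hypothetical stand-in for the claimed wearable display system."""

        def __init__(self):
            self.mode = "augmented"
            self.frame = []          # objects composited for the current frame

        def select_mode(self, mode: str):
            # the first/second interface features each activate a different mode
            assert mode in MODES
            self.mode = mode

        def render_physical_object(self, obj_id: str, pose: Pose, model: str) -> RenderedObject:
            # rendered from the captured image plus the object's digital model
            return RenderedObject(source_id=obj_id, pose=pose, mesh=model)

        def display(self, rendered: RenderedObject, virtual_objects: list):
            # blended mode: virtual objects AND the rendered twin, the twin
            # drawn in place of the real object at the same display location
            if self.mode == "blended":
                self.frame = [rendered, *virtual_objects]

    hmd = WearableDisplay()
    hmd.select_mode("blended")   # first user picks blended via the interface
    twin = hmd.render_physical_object("mug-1", Pose(0.2, 0.0, -0.5), "mug_model")
    hmd.display(twin, virtual_objects=["floating-label"])
    print(hmd.frame)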
10. An augmented reality display system, comprising:
a camera of a wearable display system to capture an image of a real physical object in a field of view of a first user, wherein a location of the real physical object in a first physical environment in the real world is known to the wearable display system;
a module for processing data, wherein the module is stored in a memory, the module rendering from the captured image of the real physical object a rendered physical object based at least in part upon a digital model for the real physical object, wherein the rendered physical object appears substantially identical to the real physical object;
the wearable display system configured to provision, within the field of view of the first user, an interface having a first interface feature and a second interface feature, wherein the first and second interface features, when activated, respectively configure the wearable display system for respective, different visualization modes of a plurality of visualization modes that includes a blended reality mode, and the blended reality mode is a combination of a virtual reality mode and an augmented reality mode; and
a display of the wearable display system for displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes via the interface, the rendered physical object at a display location in the display that corresponds to an area of the display through which the real physical object is viewable, wherein the blended reality mode comprises displaying virtual objects and the rendered physical object in the first physical environment of the first user, wherein the rendered physical object is displayed in place of the real physical object in the display of the wearable display system [ ;
a mobile communication device operated by a third user located in the first physical environment in which the first user is also located, wherein the wearable display system transmits, to the mobile communication device, a first data stream for one or more first interactions by the first user with one or more real physical objects in the first physical environment; and
a computing system to transmit, to the mobile communication device, a second data stream for a second user object as dynamically controlled by a second user at a second physical environment in near real-time] .
[ 18. The method of claim 1, further comprising:
receiving a first physical interaction by the first user with the rendered physical object; and
re-rendering the rendered physical object into a modified rendered physical object in response to the first physical interaction.]
[ 19. The method of claim 1, further comprising:
receiving a first verbal interaction by the first user with the rendered physical object; and
re-rendering the rendered physical object into a modified rendered physical object in response to the first verbal interaction.]
[ 20. The method of claim 1, further comprising:
receiving a second physical interaction by a component and the first user with the rendered physical object, wherein the second physical interaction comprises a spatial relationship between the component operatively linked to the mobile electronic system and the rendered physical object; and
re-rendering the rendered physical object into a modified rendered physical object in response to the second physical interaction.]
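New claims 18-20 all follow one pattern: an interaction (physical, verbal, or via a linked component) is received, and the rendered physical object is re-rendered into a modified version. A compact sketch of that pattern, with hypothetical event fields:

    # Minimal event handling: an interaction with the rendered twin triggers a
    # re-render into a "modified rendered physical object" (claims 18-20).
    # All names and fields are the editor's invention.

    def re_render(rendered: dict, interaction: dict) -> dict:
        """Return a modified copy of the rendered object for this interaction."""
        modified = dict(rendered)
        if interaction["kind"] == "physical":     # e.g. hand contact (claim 18)
            modified["deformed"] = True
        elif interaction["kind"] == "verbal":     # e.g. a voice command (claim 19)
            modified["label"] = interaction["utterance"]
        elif interaction["kind"] == "component":  # spatial relation to a linked
            modified["proximity_highlight"] = (   # component (claim 20)
                interaction["distance_m"] < 0.1)
        return modified

    twin = {"source_id": "mug-1", "deformed": False}
    twin = re_render(twin, {"kind": "physical"})
    twin = re_render(twin, {"kind": "verbal", "utterance": "make it blue"})
    twin = re_render(twin, {"kind": "component", "distance_m": 0.05})
    print(twin)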
[ 21. The method of claim 1, further comprising:
capturing, by the one or more sensors of the mobile electronic system, a first image of a first real physical object at a first location and at a first time point, the first real physical object located in the field of view of the first user, wherein the first location of the first real physical object is known to the mobile electronic system;
rendering, based at least in part upon the first captured image of the first real physical object, a first rendered physical object that appears substantially identical to the first real physical object;
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes, the first rendered physical object at a first display location in relation to the area for the rendered physical object in the display through which the real physical object and the first rendered physical object are both perceived by the first user, wherein
the first rendered physical object is displayed in place of the first real physical object in the display.]
[ 22. A method, comprising:
capturing, by one or more sensors of a wearable display system located in a first physical environment, an image of a real physical object, the real physical object located in a field of view of a first user, wherein
a location in the first physical environment of the real physical object is known to the wearable display system;
provisioning, by the wearable display system and within the field of view of the first user, an interface having a first interface feature and a second interface feature, wherein
the first and second interface features, when activated, respectively configure the wearable display system for respective, different visualization modes of a plurality of visualization modes that includes a blended reality mode, and
the blended reality mode is a combination of a virtual reality mode and an augmented reality mode;
rendering, by the wearable display system and from the captured image of the real physical object, a rendered physical object based at least in part upon a digital model for the real physical object, wherein the rendered physical object appears substantially identical to the real physical object;
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes via the interface, the rendered physical object at a display location in a display that corresponds to an area of the display through which the real physical object is viewable, wherein
the blended reality mode comprises displaying virtual objects and the rendered physical object in the first physical environment of the first user, and
the rendered physical object is displayed in place of the real physical object in the display of the wearable display system;
capturing, by the one or more sensors of the mobile electronic system, a first image of a first real physical object at a first location and at a first time point, the first real physical object located in the field of view of the first user, wherein the first location of the first real physical object is known to the mobile electronic system;
rendering, based at least in part upon the first captured image of the first real physical object, a first rendered physical object that appears substantially identical to the first real physical object;
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes, the first rendered physical object at a first display location in relation to the area for the rendered physical object in the display through which the real physical object and the first rendered physical object are both perceived by the first user, wherein
the first rendered physical object is displayed in place of the first real physical object in the display;
identifying, by the one or more sensors of the mobile electronic system, a second real physical object at a second location and at a second time point, the second real physical object located within the field of view of the first user, wherein
the second location of the second real physical object is known to the mobile electronic system;
generating an interaction between the second real physical object and the first rendered physical object in relation to the rendered physical object; and
rendering a different first rendered physical object for the first physical object to reflect an effect of the interaction of the second real physical object on the first rendered physical object.]
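The closing steps of claim 22 generate an interaction between a tracked second real physical object and the first rendered physical object, then re-render the latter to reflect the effect. A toy sketch, assuming a distance threshold and field names of the editor's invention:

    # Claim 22's final steps: a tracked second real object (say, the user's
    # hand) reaches the first rendered object, whose render is replaced to
    # reflect the effect of the interaction.

    def effect_of(second_real_pos, first_rendered):
        """Re-render the first rendered object if the real object reaches it."""
        dist = sum((a - b) ** 2 for a, b in zip(second_real_pos,
                                                first_rendered["pos"])) ** 0.5
        if dist < 0.1:                     # contact: generate the interaction
            return {**first_rendered, "tipped_over": True}
        return first_rendered

    rendered_cup = {"pos": (0.4, 0.0, 0.0), "tipped_over": False}
    hand_track = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.38, 0.0, 0.0)]
    for pos in hand_track:                 # second location known per time point
        rendered_cup = effect_of(pos, rendered_cup)
    print(rendered_cup)                    # reflects the hand's effect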
[ 23. The method of claim 1, further comprising:
capturing, by the one or more sensors of the mobile electronic system, a first image of a first real physical object at a first time point, the first real physical object located in the field of view of the first user, wherein a first location of the first real physical object is known to the mobile electronic system;
rendering, from the first captured image of the first real physical object, a first rendered physical object in place of the first real physical object, the first rendered physical object appearing substantially identical to the first real physical object; and
rendering a fully virtual object within the field of view of the first user through the mobile electronic system.]
[ 24. The method of claim 23, further comprising:
generating, by the mobile electronic system, an interaction between the fully virtual object and the first rendered physical object based at least in part upon a physical variable pertaining to the fully virtual object and the first rendered physical object;
re-rendering the first rendered physical object into a first modified rendered physical object in response to the interaction between the fully virtual object and the first rendered physical object; and
displaying to the first user, in the blended reality mode selected by the first user, the first modified rendered physical object at one or more display locations in relation to the rendered physical object in the display that correspond to at least the area of the display through which the rendered physical object, the first modified rendered physical object, and the fully virtual object are viewable by the first user, wherein the first rendered physical object is displayed in place of the first real physical object in the display.]
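Claim 24 conditions the virtual-to-rendered interaction on "a physical variable." One illustrative choice of variable is velocity in a one-dimensional toy simulation; the claim does not prescribe any particular physics model, and all names below are hypothetical:

    # A fully virtual ball reaches the rendered twin; velocity is the physical
    # variable, and the twin is re-rendered to show the interaction's effect.

    def step(virtual: dict, rendered: dict, dt: float):
        virtual["x"] += virtual["v"] * dt
        if abs(virtual["x"] - rendered["x"]) < 0.05:  # contact with the twin
            virtual["v"] = -virtual["v"]              # ball rebounds
            rendered["nudged"] = True                 # twin re-rendered modified

    ball = {"x": 0.0, "v": 1.0}
    twin = {"x": 0.5, "nudged": False}
    for _ in range(100):
        step(ball, twin, dt=0.01)
    print(ball, twin)   # twin reflects the interaction; ball has rebounded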
[ 25. The method of claim 1, further comprising:
scanning and rendering the first user as a first rendered user;
determining a spatial relationship between the first user and the real physical object located at a first geographic location; and
transposing the first rendered user and the rendered physical object to a different mobile electronic system of a different user at a second geographic location while maintaining the spatial relationship.]
[ 26. A method, comprising:
capturing, by one or more sensors of a wearable display system located in a first physical environment, an image of a real physical object, the real physical object located in a field of view of a first user, wherein
a location in the first physical environment of the real physical object is known to the wearable display system;
provisioning, by the wearable display system and within the field of view of the first user, an interface having a first interface feature and a second interface feature, wherein
the first and second interface features, when activated, respectively configure the wearable display system for respective, different visualization modes of a plurality of visualization modes that includes a blended reality mode, and
the blended reality mode is a combination of a virtual reality mode and an augmented reality mode;
rendering, by the wearable display system and from the captured image of the real physical object, a rendered physical object based at least in part upon a digital model for the real physical object, wherein the rendered physical object appears substantially identical to the real physical object;
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes via the interface, the rendered physical object at a display location in a display that corresponds to an area of the display through which the real physical object is viewable, wherein
the blended reality mode comprises displaying virtual objects and the rendered physical object in the first physical environment of the first user, and
the rendered physical object is displayed in place of the real physical object in the display of the wearable display system;
scanning and rendering the first user as a first rendered user;
determining a spatial relationship between the first user and the real physical object located at a first geographic location;
transposing the first rendered user and the rendered physical object to a different mobile electronic system of a different user at a second geographic location while maintaining the spatial relationship;
rendering, by the different mobile electronic system, the first rendered user and the rendered physical object within a different field of view of the different user through the different mobile electronic system based at least in part upon a result of transposing the first rendered user and the rendered physical object and the spatial relationship therebetween; and
identifying, at the different mobile electronic system in the blended reality mode of the plurality of visualization modes, an interaction with the first user or the real physical object by the different user through the different mobile electronic system that is operating in an augmented reality mode of the plurality of visualization modes.]
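Claims 25 and 26 hinge on transposing a rendered user and a rendered object to a remote system "while maintaining the spatial relationship." Treating that relationship as a fixed user-to-object offset (3-vectors standing in for full 6-DoF poses), the re-anchoring can be sketched as:

    # The user->object offset measured at the first geographic location is
    # preserved when both renders are re-anchored at the second location.

    def vec_sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def vec_add(a, b): return tuple(x + y for x, y in zip(a, b))

    # Site A (first geographic location)
    first_user_pos = (1.0, 0.0, 2.0)
    real_object_pos = (1.5, 0.0, 2.5)
    offset = vec_sub(real_object_pos, first_user_pos)  # the spatial relationship

    # Site B (second geographic location): the different system picks an anchor
    remote_anchor = (-3.0, 0.0, 0.0)
    rendered_user_pos = remote_anchor
    rendered_object_pos = vec_add(remote_anchor, offset)  # relationship maintained

    assert vec_sub(rendered_object_pos, rendered_user_pos) == offset
    print(rendered_user_pos, rendered_object_pos)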
[ 27. The method of claim 26, further comprising:
receiving, at the mobile electronic system in the blended reality mode from the different mobile electronic system in the augmented reality mode, a response to the interaction by the first user through the mobile electronic system that is operating in the blended reality mode of the plurality of visualization modes; and
transmitting, from the mobile electronic system to the different mobile electronic system, a data stream for the response to the interaction.]
[ 28. The method of claim 1, wherein the mobile communication device does not have a wearable augmented or virtual reality display and comprises a mobile computer, a tablet, or a smart phone.]
[ 29. The method of claim 28, further comprising:
provisioning augmented reality capabilities to the mobile communication device, provisioning the augmented reality capabilities comprising:
operatively coupling the mobile communication device to a console, the console configured to increase the augmented reality capabilities, wherein the console comprises a plurality of inward-facing image sensors, a plurality of outward-facing image sensors, a haptic feedback device, a localization sensor, an orientation sensor, and a motion or movement sensor that are operatively coupled to the mobile communication device through one or more wired or wireless connections, wherein
the plurality of inward-facing image sensors performs a first set of processes that comprises an eye tracking process,
the plurality of outward-facing image sensors performs a second set of processes that comprises a machine vision process, a localization process, or a registration process, and
the haptic feedback device performs a third set of processes that comprises providing multi-axis feedback for user interaction with the console.]
[ 30. The method of claim 29, wherein provisioning the augmented reality capabilities further comprises:
performing coarse localization and coarse orientation for the console integrated with the mobile communication device using at least the localization sensor and the orientation sensor;
loading, at the console or the mobile communication device from a wireless connected remote computing system, coarse local feature mapping information based at least in part upon the coarse localization and the coarse orientation;
determining fine localization and fine orientation based at least in part upon the coarse local feature mapping information, the coarse localization, and the coarse orientation;
loading, at the console or the mobile communication device, fine local feature mapping information based at least in part upon the fine localization and the fine orientation;
tracking movements or orientation changes of the console; and
updating a first representation of the one or more first interactions by the first user and a second representation of the second user object in a display device of the mobile communication device, which does not have a wearable augmented or virtual reality display, in response to a result of tracking the movements or the orientation changes.]
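Claim 30 orders a coarse-to-fine localization sequence: coarse pose from localization and orientation sensors, coarse feature map, refined fine pose, fine feature map, then tracking. A stubbed sketch of that sequence; the sensor readers and map fetches are hypothetical placeholders, not a real mapping API:

    # Coarse-to-fine localization as ordered in claim 30; all stubs invented.

    def read_gps():     return (26.07, -80.15)          # coarse localization
    def read_compass(): return 182.0                    # coarse orientation, deg

    def fetch_map(kind, pose):                          # coarse/fine feature maps
        return {"kind": kind, "around": pose, "features": ["corner-7", "door-2"]}

    def match_features(coarse_pose, features):          # visual refinement step
        (lat, lon), heading = coarse_pose
        return ((lat + 1e-5, lon - 1e-5), heading + 0.4)  # fine pose

    coarse_pose = (read_gps(), read_compass())
    coarse_map = fetch_map("coarse", coarse_pose)
    fine_pose = match_features(coarse_pose, coarse_map["features"])
    fine_map = fetch_map("fine", fine_pose)             # loaded for tracking

    for frame in range(3):                              # track + update display
        heading_delta = 0.1                             # stubbed IMU motion
        fine_pose = (fine_pose[0], fine_pose[1] + heading_delta)
        print("frame", frame, "pose", fine_pose)        # updates representations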
[ 31. The method of claim 1, wherein
the computing system comprises a wearable display system that includes functionalities of presenting contents to the first user in the plurality of visualization modes that comprises an augmented reality mode, a virtual reality mode, and the blended reality mode that is a combination of the augmented reality mode and the virtual reality mode, and
the computing system comprises a laptop computer, a desktop computer, or a gaming console.]
[ 32. A method, comprising:
capturing, by one or more sensors of a wearable display system located in a first physical environment, an image of a real physical object, the real physical object located in a field of view of a first user, wherein
a location in the first physical environment of the real physical object is known to the wearable display system;
provisioning, by the wearable display system and within the field of view of the first user, an interface having a first interface feature and a second interface feature, wherein
the first and second interface features, when activated, respectively configure the wearable display system for respective, different visualization modes of a plurality of visualization modes that includes a blended reality mode, and
the blended reality mode is a combination of a virtual reality mode and an augmented reality mode;
rendering, by the wearable display system and from the captured image of the real physical object, a rendered physical object based at least in part upon a digital model for the real physical object, wherein the rendered physical object appears substantially identical to the real physical object;
displaying to the first user, in the blended reality mode selected by the first user from the plurality of visualization modes via the interface, the rendered physical object at a display location in a display that corresponds to an area of the display through which the real physical object is viewable, wherein
the blended reality mode comprises displaying virtual objects and the rendered physical object in the first physical environment of the first user, and
the rendered physical object is displayed in place of the real physical object in the display of the wearable display system;
in real-time or near real-time between the wearable display system and a display system operated by a second user located in a second physical environment remote from the first physical environment, transmitting, to the display system, a first data stream representing a first interaction by the first user with the rendered physical object with respect to one or more real physical objects in the first physical environment, wherein the display system operated by the second user comprises or is operatively coupled to a desktop display monitor or a television set; and
receiving, at the wearable display system, a second data stream representing a first response by the second user to the first interaction.]
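Claim 32 pairs two near-real-time streams: interactions flowing from the wearable to the second user's desktop display system, and responses flowing back. In-memory queues stand in for the network link below, and the message fields are the editor's assumptions:

    # Two opposing near-real-time streams (claim 32); queues replace the network.
    import queue

    to_desktop = queue.Queue()   # first data stream: wearable -> display system
    to_wearable = queue.Queue()  # second data stream: desktop -> wearable

    # Wearable side: first user interacts with the rendered twin
    to_desktop.put({"type": "interaction", "user": "first",
                    "target": "mug-1", "action": "pick_up"})

    # Desktop side (monitor / television set): second user responds
    event = to_desktop.get()
    to_wearable.put({"type": "response", "user": "second",
                     "in_reply_to": event["action"], "action": "wave"})

    print(to_wearable.get())     # wearable renders the second user's response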
[ 33. The method of claim 32, further comprising:
identifying a physical feature on the real physical object;
extending the physical feature with at least a function at least by:
detecting the first interaction by the first user with the physical feature;
upon detection of the interaction, rendering a virtual interface or panel in relation to the physical feature;
detecting a second interaction by the first user with the virtual interface or panel; and
performing, by the wearable display system, an operation to accomplish the function in response to the second interaction; and
rendering, in real-time or near real-time within the field of view of the first user through the wearable display system, a user object that represents and is dynamically controlled by the second user via the display system operatively coupled to or comprising the desktop display monitor or the television set in the second physical environment that is remote from the first physical environment perceived by the first user with the wearable display system.]
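Claim 33 extends a physical feature with a function through two detected interactions: touching the feature renders a virtual panel, and interacting with the panel performs the function. A minimal state sketch with hypothetical names:

    # Touch the physical feature -> virtual panel appears; tap the panel ->
    # the wearable display system performs the extended function (claim 33).

    class FeaturePanel:
        def __init__(self, feature_id: str, function):
            self.feature_id = feature_id
            self.function = function
            self.visible = False

        def on_feature_touch(self):             # first interaction: show panel
            self.visible = True

        def on_panel_tap(self, control: str):   # second interaction: perform
            if self.visible:
                return self.function(control)

    panel = FeaturePanel("thermostat-dial", lambda c: f"set temperature: {c}")
    panel.on_feature_touch()
    print(panel.on_panel_tap("21C"))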
[ 34. The method of claim 32, further comprising:
creating a digital world for at least a portion of the first physical environment, the digital world including at least the rendered physical object, the user object representing the second user, and a separate user object representing the first user in the digital world; and
sharing the digital world between the wearable display system in the first physical environment and the display system in the second physical environment.]
[ 35. The method of claim 34, further comprising:
receiving, at the wearable display system, the first interaction by the first user with one or more real physical objects in the physical environment in which the first user is located, the one or more real physical objects different from the real physical object for which the rendered physical object is created; and
transmitting, in near real-time from the wearable display system to the display system, the first data stream for the first interaction by the first user with the one or more real physical objects different from the rendered physical object in the first physical environment for rendering the first interaction in the desktop display monitor, wherein
at least a portion of the digital world is shared with the display system having or operatively coupled to the desktop display monitor or the television set and operated by the second user located in the second physical environment that is remote from the first physical environment.]
[ 36. The method of claim 35, further comprising:
receiving, in near real-time at the wearable display system, a second data stream for a response by the second user using the display system to the first interaction by the first user for rendering the response in the wearable display system; and
re-rendering, in near real-time, the user object for the second user using the second data stream to reflect the response by the second user to the first interaction.]
[ 37. The method of claim 36, further comprising:
receiving, in near real-time at the wearable display system, a third data stream for a second interaction by the second user via the user object controlled by the second user via the display system for rendering the second interaction in the wearable display system; and
re-rendering, in near real-time at the wearable display system, the user object for the second user using the third data stream to reflect the second interaction by the second user via the user object that is dynamically controlled by the second user in the second physical environment.]
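Claims 34 through 37 share one digital world between the two sites and apply each incoming data stream as an update that marks a user object for re-rendering. A sketch of that update path, with an invented message schema:

    # Shared digital world (claims 34-37): both sites hold the same object map;
    # each near-real-time message updates one object and flags it for re-render.

    digital_world = {
        "mug-1":       {"kind": "rendered_physical", "pose": (1.5, 0.0, 2.5)},
        "user-first":  {"kind": "user_object", "pose": (1.0, 0.0, 2.0)},
        "user-second": {"kind": "user_object", "pose": (0.0, 0.0, 0.0)},
    }

    def apply_stream(world: dict, message: dict):
        """Apply one incoming message and mark the touched object dirty."""
        obj = world[message["object"]]
        obj["pose"] = message["pose"]   # reflect the remote interaction
        obj["dirty"] = True             # re-render this object next frame

    # the second user's response / second interaction arriving at the wearable
    apply_stream(digital_world, {"object": "user-second", "pose": (0.5, 0.0, 0.1)})
    print(digital_world["user-second"])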
[ 38. The method of claim 1, wherein
the first interface feature, when activated, configures the wearable display system for the blended reality mode of the plurality of visualization modes,
the blended reality mode is a combination of a virtual reality mode and an augmented reality mode of the plurality of visualization modes, and
the second interface feature, when activated, configures the wearable display system for the augmented reality mode of the plurality of visualization modes.]
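Claim 38 fixes the mapping of interface features to modes: the first feature activates the blended mode (the augmented and virtual behaviors combined) and the second activates the augmented reality mode. As a trivial illustration, assuming nothing beyond the claim's own mapping:

    # First interface feature -> blended mode; second -> augmented mode.
    from enum import Enum

    class Mode(Enum):
        AUGMENTED = "augmented"  # real view with virtual overlays
        VIRTUAL = "virtual"      # fully rendered view
        BLENDED = "blended"      # overlays plus rendered twins of real objects

    FEATURE_TO_MODE = {"first": Mode.BLENDED, "second": Mode.AUGMENTED}

    def activate(feature: str) -> Mode:
        return FEATURE_TO_MODE[feature]

    print(activate("first"), activate("second"))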