US 12,272,012 B2
Dynamic mixed reality content in virtual reality
Sarah Tanner Simpson, Mountain View, CA (US); Gregory Smith, San Francisco, CA (US); Jeffrey Witthuhn, Oakland, CA (US); Ying-Chieh Huang, Fremont, CA (US); Shuang Li, San Jose, CA (US); Wenliang Zhao, Belmont, CA (US); Peter Koch, Los Altos, CA (US); Meghana Reddy Guduru, Mountain View, CA (US); Ioannis Pavlidis, Newark, CA (US); Xiang Wei, Fremont, CA (US); Kevin Xiao, San Carlos, CA (US); Kevin Joseph Sheridan, Redwood City, CA (US); Bodhi Keanu Donselaar, London (GB); and Federico Adrian Camposeco Paulsen, Redwood City, CA (US)
Assigned to Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed by Meta Platforms Technologies, LLC, Menlo Park, CA (US)
Filed on May 22, 2023, as Appl. No. 18/321,712.
Application 18/321,712 is a continuation of application No. 17/336,776, filed on Jun. 2, 2021, granted, now Pat. No. 11,676,348.
Prior Publication US 2023/0290089 A1, Sep. 14, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06T 19/00 (2011.01); G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06V 20/20 (2022.01); G06V 40/10 (2022.01)
CPC G06T 19/006 (2013.01) [G06T 7/11 (2017.01); G06T 7/70 (2017.01); G06V 20/20 (2022.01); G06V 40/10 (2022.01); G06T 2207/30196 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A method comprising, by a mobile computing device:
capturing one or more images of a first user wearing a virtual reality (VR) display device in a real-world environment;
receiving, from a VR system of the VR display device, a VR rendering of a VR environment, wherein the VR rendering is from the perspective of the mobile computing device with respect to the VR display device;
generating, in real-time responsive to capturing the one or more images, a first mixed reality (MR) rendering of the first user in the VR environment, wherein the first MR rendering of the first user is based on a compositing of the one or more images of the first user and the VR rendering;
receiving, by the mobile computing device, an indication of a user interaction with one or more elements of the VR environment in the first MR rendering; and
generating, in real-time responsive to the indication of the user interaction with the one or more elements, a second MR rendering of the first user in the VR environment, wherein the one or more elements have been modified according to the user interaction.
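To make the compositing step of Claim 1 concrete, the sketch below shows one plausible way a mobile device could blend a captured image of the first user with a VR rendering received from the headset to produce an MR frame. This is an illustrative sketch only, not the patented implementation: it assumes the device already has a person-segmentation mask for the captured frame and that the VR rendering was produced from the phone's viewpoint, and all function and variable names (e.g., composite_mr_frame, person_mask) are hypothetical.

```python
# Illustrative sketch of per-frame MR compositing, assuming:
#   (a) a camera frame of the user in the real-world environment,
#   (b) a person-segmentation mask for that frame, and
#   (c) a VR rendering received from the headset, rendered from the
#       mobile device's perspective with respect to the VR display device.
# All names below are assumptions, not the claimed implementation.

import numpy as np


def composite_mr_frame(camera_frame: np.ndarray,
                       person_mask: np.ndarray,
                       vr_render: np.ndarray) -> np.ndarray:
    """Blend the segmented user over the VR rendering to form one MR frame.

    camera_frame: HxWx3 uint8 image of the user captured by the mobile device.
    person_mask:  HxW float mask in [0, 1]; 1 where the user is visible.
    vr_render:    HxWx3 uint8 VR rendering from the mobile device's perspective.
    """
    alpha = person_mask[..., None].astype(np.float32)           # HxWx1 blend weights
    blended = alpha * camera_frame + (1.0 - alpha) * vr_render  # per-pixel composite
    return blended.astype(np.uint8)


if __name__ == "__main__":
    h, w = 480, 640
    # Stand-ins for the captured image, the segmentation, and the received VR frame.
    camera_frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    vr_render = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    person_mask = np.zeros((h, w), dtype=np.float32)
    person_mask[100:400, 200:440] = 1.0  # pretend the user occupies this region

    mr_frame = composite_mr_frame(camera_frame, person_mask, vr_render)
    print(mr_frame.shape)  # (480, 640, 3): one mixed-reality frame
```

Re-running the same blend after the headset sends an updated VR rendering (with the interacted-with elements modified) would yield the second MR rendering recited in the claim; only the vr_render input changes between the first and second frames in this sketch.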