US 11,738,275 B2
Virtual reality presentation of real world space
Mohammed Khan, San Mateo, CA (US); Miao Li, San Mateo, CA (US); and Ken Miyaki, San Mateo, CA (US)
Assigned to Sony Interactive Entertainment LLC, San Mateo, CA (US)
Filed by Sony Interactive Entertainment LLC, San Mateo, CA (US)
Filed on Jul. 20, 2021, as Appl. No. 17/381,172.
Application 17/381,172 is a continuation of application No. 16/787,897, filed on Feb. 11, 2020, granted, now 11,065,551, issued on Jul. 20, 2021.
Application 16/787,897 is a continuation of application No. 15/901,845, filed on Feb. 21, 2018, granted, now 10,556,185, issued on Feb. 11, 2020.
Claims priority of provisional application 62/566,266, filed on Sep. 29, 2017.
Prior Publication US 2021/0346811 A1, Nov. 11, 2021
This patent is subject to a terminal disclaimer.
Int. Cl. A63F 13/86 (2014.01); A63F 13/53 (2014.01); G06T 19/00 (2011.01); A63F 13/5255 (2014.01); A63F 13/211 (2014.01); A63F 13/77 (2014.01); A63F 13/63 (2014.01); G06T 15/20 (2011.01); A63F 13/213 (2014.01); A63F 13/215 (2014.01); A63F 13/216 (2014.01); A63F 13/25 (2014.01); A63F 13/26 (2014.01); A63F 13/35 (2014.01); A63F 13/428 (2014.01); A63F 13/44 (2014.01); A63F 13/525 (2014.01); A63F 13/61 (2014.01); A63F 13/79 (2014.01); G06T 11/60 (2006.01); H04N 13/117 (2018.01); H04N 13/344 (2018.01); G06F 3/01 (2006.01)
CPC A63F 13/86 (2014.09) [A63F 13/211 (2014.09); A63F 13/213 (2014.09); A63F 13/215 (2014.09); A63F 13/216 (2014.09); A63F 13/25 (2014.09); A63F 13/26 (2014.09); A63F 13/35 (2014.09); A63F 13/428 (2014.09); A63F 13/44 (2014.09); A63F 13/525 (2014.09); A63F 13/5255 (2014.09); A63F 13/53 (2014.09); A63F 13/61 (2014.09); A63F 13/63 (2014.09); A63F 13/77 (2014.09); A63F 13/79 (2014.09); G06T 11/60 (2013.01); G06T 15/20 (2013.01); G06T 19/006 (2013.01); A63F 2300/8082 (2013.01); G06F 3/012 (2013.01); H04N 13/117 (2018.05); H04N 13/344 (2018.05)] 17 Claims
OG exemplary drawing
 
1. A method for delivering a virtual reality (VR) presentation of a real world space to a remote user via a head mounted display (HMD), comprising:
identifying a viewing location within the real world space for the user, the viewing location being mapped to a real world capture system in the real world space;
receiving a video stream of the real world space from the real world capture system, the video stream including a plurality of images captured by one or more cameras of the real world capture system, the plurality of images being presented in the HMD from a perspective associated with the viewing location;
reskinning a real world object in the real world space by overlaying a graphical content element in place of image data associated with the real world object; and
sending the video stream with the reskinning to the HMD for presentation;
wherein the overlaying includes measuring real world distances between the real world capture system and vertices of the real world object.
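
The overlay step recited in the claim can be pictured with a short sketch. The Python example below (using NumPy) is illustrative only and is not the patented implementation: the pinhole projection, the axis-aligned bounding-box overlay, and the names vertex_distances, project_to_image, and reskin_frame are all assumptions made for illustration. It measures straight-line distances from the capture system to the object's vertices, projects those vertices into the captured frame, and writes a graphical content element over the image region occupied by the real world object.

    # Illustrative sketch only -- not the patented implementation.
    import numpy as np

    def vertex_distances(capture_pos, vertices):
        # Real world distances from the capture system to each object vertex.
        return np.linalg.norm(vertices - capture_pos, axis=1)

    def project_to_image(vertices, capture_pos, focal_px, center):
        # Simple pinhole projection: camera at capture_pos looking along +Z.
        rel = vertices - capture_pos
        u = focal_px * rel[:, 0] / rel[:, 2] + center[0]
        v = focal_px * rel[:, 1] / rel[:, 2] + center[1]
        return np.stack([u, v], axis=1)

    def reskin_frame(frame, object_vertices, capture_pos, focal_px, graphic):
        # Overlay the graphical content element in place of the pixels covered
        # by the real world object (axis-aligned bounding box of its projection).
        h, w = frame.shape[:2]
        pts = project_to_image(object_vertices, capture_pos, focal_px, (w / 2, h / 2))
        x0, y0 = np.clip(pts.min(axis=0).astype(int), 0, [w - 1, h - 1])
        x1, y1 = np.clip(pts.max(axis=0).astype(int), 0, [w - 1, h - 1])
        if x1 <= x0 or y1 <= y0:
            return frame  # object projects outside the visible frame
        # Nearest-neighbour resize of the graphic to the bounding box.
        gy = np.linspace(0, graphic.shape[0] - 1, y1 - y0).astype(int)
        gx = np.linspace(0, graphic.shape[1] - 1, x1 - x0).astype(int)
        frame[y0:y1, x0:x1] = graphic[gy][:, gx]
        return frame

    # Hypothetical usage: a billboard 5 m in front of the capture system is
    # reskinned with a solid-colour graphic in a 640x480 captured frame.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    billboard = np.array([[-1.0, -0.5, 5.0], [1.0, -0.5, 5.0],
                          [1.0, 0.5, 5.0], [-1.0, 0.5, 5.0]])
    capture_pos = np.zeros(3)
    graphic = np.full((64, 128, 3), (0, 200, 0), dtype=np.uint8)
    print("vertex distances (m):", vertex_distances(capture_pos, billboard))
    frame = reskin_frame(frame, billboard, capture_pos, 500.0, graphic)

In this sketch the measured capture-system-to-vertex distances determine where the vertices project in the frame, and therefore the size and placement of the overlaid graphical content element; a production system would presumably also track the object across frames and handle occlusion.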