US 12,450,829 B2
Virtual reality control system
Jeong Hwoan Choi, Yongin-si (KR); Jong Hyun Yuk, Seoul (KR); Chul Kwon, Seoul (KR); Young Moon Lee, Seoul (KR); and Seung Buem Back, Yongin-si (KR)
Assigned to SKONEC ENTERTAINMENT CO., LTD., Seoul (KR)
Filed by SKONEC ENTERTAINMENT CO., LTD., Seoul (KR)
Filed on Nov. 16, 2023, as Appl. No. 18/511,636.
Application 18/511,636 is a continuation of application No. 18/511,531, filed on Nov. 16, 2023, granted, now 12,322,040.
Application 18/511,531 is a continuation of application No. PCT/KR2023/015619, filed on Oct. 11, 2023.
Claims priority of application No. 10-2022-0189365 (KR), filed on Dec. 29, 2022.
Prior Publication US 2024/0221303 A1, Jul. 4, 2024
Int. Cl. G06T 17/00 (2006.01); G02B 27/01 (2006.01); G06F 3/01 (2006.01); G06T 7/246 (2017.01); G06T 7/73 (2017.01); G06T 19/00 (2011.01); G09B 5/02 (2006.01); G09G 3/00 (2006.01)
CPC G06T 17/00 (2013.01) [G02B 27/0172 (2013.01); G06F 3/012 (2013.01); G06T 7/246 (2017.01); G06T 7/73 (2017.01); G06T 19/003 (2013.01); G09G 3/001 (2013.01); G06T 2207/30196 (2013.01); G06T 2207/30204 (2013.01); G06T 2219/024 (2013.01); G09B 5/02 (2013.01); G09G 2354/00 (2013.01)] 12 Claims
OG exemplary drawing
 
1. A method of operating a virtual reality control system, comprising:
acquiring, by a physical layer, first sensor data on a head-mounted display (HMD), worn on a body of a user and outputting an image, through an optical camera sensor installed in a large space;
acquiring second sensor data on a tracking device, which is worn on the body of the user, through the optical camera sensor;
acquiring, by a data handling layer, data for a content layer from the first sensor data and the second sensor data;
transmitting, by the data handling layer, the data for the content layer to the content layer; and
transmitting, by a presentation layer, image output data including a character corresponding to the user to the HMD based on content change information;
wherein the acquiring of the data for the content layer includes:
acquiring first virtual location information indicating a virtual location of the character based on the first sensor data;
acquiring second virtual location information indicating a virtual location of a specific part of the character corresponding to a part of the body of the user based on the second sensor data;
determining whether an area corresponding to the second virtual location information among a plurality of divided location areas for at least part of the user's body, which can be formed based on the first virtual location information, corresponds to a preset correction target area;
changing at least part of the second virtual location information based on the determination that the area corresponding to the second virtual location information is the correction target area; and
generating the data for the content layer based on the first virtual location information and the changed second virtual location information,
wherein a display of the character based on the second virtual location information and a display of the character based on the changed second virtual location information are distinguishable.
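The correction step recited in the claim (dividing the space around the character into location areas based on the first virtual location information, testing whether the tracked body part falls in a preset correction target area, and changing the second virtual location information when it does) can be sketched as follows. This is an illustrative assumption only: the area-division scheme, the `behind_torso` label, the 0.3 m threshold, and all function names are hypothetical and are not drawn from the patent.

```python
# Hypothetical sketch of the claim's correction step. The area labels,
# the 0.3 m threshold, and the clamping rule are illustrative
# assumptions, not the patented implementation.

CORRECTION_TARGET_AREAS = {"behind_torso"}  # assumed preset correction target area


def classify_area(head_pos, part_pos):
    """Assign the tracked body part (second virtual location information)
    to one of the location areas formed around the character's virtual
    location (first virtual location information)."""
    dz = part_pos[2] - head_pos[2]
    # Assumed scheme: more than 0.3 m behind the head counts as
    # "behind_torso"; everything else is "front".
    return "behind_torso" if dz < -0.3 else "front"


def correct_part_location(head_pos, part_pos):
    """Change the part's virtual location if its area is a preset
    correction target area; otherwise pass it through unchanged.
    Returns (location, changed) so rendering can distinguish the
    corrected display from the uncorrected one."""
    area = classify_area(head_pos, part_pos)
    if area in CORRECTION_TARGET_AREAS:
        # Assumed correction: clamp the part to the area boundary.
        corrected = (part_pos[0], part_pos[1], head_pos[2] - 0.3)
        return corrected, True
    return part_pos, False


head = (0.0, 1.7, 0.0)          # character's virtual location (metres)
hand_behind = (0.2, 1.0, -0.6)  # tracked hand, behind the torso
print(correct_part_location(head, hand_behind))  # → ((0.2, 1.0, -0.3), True)
```

The boolean flag returned alongside the location is one way to satisfy the final wherein clause: the content layer can render the character differently depending on whether the second virtual location information was changed.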