US 11,989,356 B2
Method and device for transforming degree of freedom in mulsemedia system
Seung Moon Choi, Pohang-si (KR); Gyeo Re Yun, Daegu (KR); and Sang Yoon Han, Seoul (KR)
Assigned to POSTECH Research and Business Development Foundation, Pohang-si (KR)
Filed by POSTECH RESEARCH AND BUSINESS DEVELOPMENT FOUNDATION, Pohang-si (KR)
Filed on Mar. 25, 2022, as Appl. No. 17/704,790.
Claims priority of application No. 10-2021-0055061 (KR), filed on Apr. 28, 2021; and application No. 10-2022-0037252 (KR), filed on Mar. 25, 2022.
Prior Publication US 2022/0357804 A1, Nov. 10, 2022
Int. Cl. A63F 13/211 (2014.01); A63F 13/25 (2014.01); G06F 3/01 (2006.01); G06F 3/0346 (2013.01); G06T 7/20 (2017.01); G06T 7/70 (2017.01); G06T 11/00 (2006.01)
CPC G06F 3/0346 (2013.01) [A63F 13/211 (2014.09); A63F 13/25 (2014.09); G06F 3/011 (2013.01); G06T 7/20 (2013.01); G06T 7/70 (2017.01); G06T 11/00 (2013.01)] 14 Claims
OG exemplary drawing
 
8. A method of transforming a degree of freedom (DoF) in a multiple sensorial media (mulsemedia) system, the method comprising:
calculating, by a motion proxy calculator, a motion proxy corresponding to a motion of an object, using an object size (I) displayed on a display;
calculating and scaling, by a motion proxy visual velocity scaler, a visual velocity of the motion proxy according to an object-relative perception mode or a subject-relative perception mode; and
transforming, by a transformer, the motion proxy whose visual velocity is scaled into a motion command implementable within a motion range of a motion platform,
wherein the motion proxy q_cam of the object is expressed as
q_cam = A_n p_cam + (w_R(I)/w_T(I)) B_n d_cam,
wherein A_n and B_n denote matrices used to obtain a motion proxy that matches a motion platform of n DoFs,
wherein a size of motion effects with respect to a rotation of the object is adjusted relative to a distance moved in a front direction by (w_R(I)/w_T(I)),
wherein scale factors w_R and w_T are determined according to the object size (I) displayed on the display,
wherein the motion of the object is expressed in p_cam and d_cam, p_cam representing a center position of the object and d_cam representing a unit vector in the front direction with respect to the motion platform, and
wherein in the subject-relative perception mode, the visual velocity of the motion proxy is calculated by scaling a change in a position of the object in a two-dimensional (2D) image of successive image frames and an actual depth direction velocity of the object.
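The motion-proxy expression in the claim can be sketched in code as follows. The matrices A_n and B_n, the scale functions w_R and w_T, and all numeric values below are illustrative assumptions; the patent states only that A_n and B_n match an n-DoF platform and that w_R and w_T depend on the displayed object size I.

```python
import numpy as np

def motion_proxy(p_cam, d_cam, I, A_n, B_n, w_R, w_T):
    """Compute q_cam = A_n p_cam + (w_R(I)/w_T(I)) B_n d_cam.

    p_cam : (3,) center position of the on-screen object
    d_cam : (3,) unit vector in the object's front direction
    I     : object size displayed on the display
    A_n, B_n : (n, 3) matrices matching an n-DoF motion platform
    w_R, w_T : callables mapping object size I to rotation/translation
               scale factors (hypothetical forms; the patent only says
               they are determined by I)
    """
    return A_n @ p_cam + (w_R(I) / w_T(I)) * (B_n @ d_cam)

# Hypothetical example for a 3-DoF platform (identity mixing matrices,
# assumed linear size-dependent weightings)
A3 = np.eye(3)
B3 = np.eye(3)
w_R = lambda I: 1.0 + 0.5 * I
w_T = lambda I: 1.0 + 0.1 * I
q = motion_proxy(np.array([0.1, 0.0, 1.0]),
                 np.array([0.0, 0.0, 1.0]),
                 I=0.4, A_n=A3, B_n=B3, w_R=w_R, w_T=w_T)
```

With these assumed weightings, a larger on-screen object raises w_R(I)/w_T(I) and thus amplifies the rotational component B_n d_cam of the proxy relative to its translational component, which is the size-dependent adjustment the claim describes.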