US 12,361,719 B2
Display apparatus
Masayoshi Michiguchi, Kanagawa (JP); Tatsuto Ryugo, Tokyo (JP); Kenji Okano, Aichi (JP); and Yukiko Kanno, Kyoto (JP)
Assigned to Panasonic Automotive Systems Co., Ltd., Kanagawa (JP)
Filed by Panasonic Automotive Systems Co., Ltd., Kanagawa (JP)
Filed on Dec. 27, 2024, as Appl. No. 19/002,860.
Application 19/002,860 is a continuation of application No. 18/789,914, filed on Jul. 31, 2024, granted, now 12,220,825.
Application 18/789,914 is a continuation of application No. 18/300,012, filed on Apr. 13, 2023, granted, now 12,214,510.
Application 18/300,012 is a continuation of application No. 17/194,595, filed on Mar. 8, 2021, granted, now 11,657,618, issued on May 23, 2023.
Application 17/194,595 is a continuation of application No. 16/263,159, filed on Jan. 31, 2019, granted, now 10,970,562, issued on Apr. 6, 2021.
Application 16/263,159 is a continuation of application No. 14/241,735, granted, now 10,235,575, issued on Mar. 19, 2019, previously published as PCT/JP2012/005321, filed on Aug. 24, 2012.
Claims priority of application No. 2011-184416 (JP), filed on Aug. 26, 2011; and application No. 2011-184419 (JP), filed on Aug. 26, 2011.
Prior Publication US 2025/0124718 A1, Apr. 17, 2025
This patent is subject to a terminal disclaimer.
Int. Cl. G06V 20/56 (2022.01); B25J 9/00 (2006.01); B25J 9/16 (2006.01); B25J 19/02 (2006.01); B60R 1/00 (2022.01); B60R 1/27 (2022.01); G01S 15/931 (2020.01); H04N 5/262 (2006.01); H04N 7/18 (2006.01); H04N 23/90 (2023.01)
CPC G06V 20/56 (2022.01) [B25J 9/161 (2013.01); B25J 9/1666 (2013.01); B25J 9/1679 (2013.01); B25J 9/1694 (2013.01); B25J 19/023 (2013.01); B60R 1/00 (2013.01); B60R 1/27 (2022.01); G01S 15/931 (2013.01); H04N 5/2624 (2013.01); H04N 7/181 (2013.01); H04N 23/90 (2023.01); B25J 9/0003 (2013.01); B60R 2300/105 (2013.01); B60R 2300/20 (2013.01); B60R 2300/301 (2013.01); B60R 2300/607 (2013.01); B60R 2300/802 (2013.01)] 20 Claims
OG exemplary drawing
 
1. A control method for a vehicle, the vehicle comprising:
a body;
a first camera configured to capture first images of first ambient view at a first out portion of the body;
a second camera configured to capture second images of second ambient view at a second out portion of the body;
a third camera configured to capture third images of third ambient view at a third out portion of the body;
a fourth camera configured to capture fourth images of fourth ambient view at a fourth out portion of the body, the first out portion of the body being distinct from the second out portion, the third out portion, or the fourth out portion of the body, the second out portion of the body being distinct from the third out portion or the fourth out portion of the body, the third out portion of the body being distinct from the fourth out portion of the body;
a sensor having a detection area at the second out portion of the body, and configured to detect a three-dimensional object within the detection area; and
a display located in the body,
the control method comprising:
displaying a first one image on a screen of the display, when the sensor does not detect the three-dimensional object at the second out portion of the body; and
displaying a second one image on the screen of the display, the second one image having at least a first region, a second region, a third region, and a fourth region on the screen of the display, the first region corresponding to one of the first images, the second region corresponding to one of the second images, the third region corresponding to one of the third images, the fourth region corresponding to one of the fourth images, the second region of the second one image including at least a part of the detection area of the sensor, the second region of the second one image including at least a part of the three-dimensional object, when the sensor detects the three-dimensional object at the second out portion of the body,
wherein the first one image excludes the second images of the second ambient view at the second out portion of the body, each of the second images including the at least part of the detection area of the sensor, the second images captured by the second camera.
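For readers tracing the control flow recited in claim 1, the following is a minimal, hypothetical sketch of the claimed display-switching behavior: a composite four-region image (including the second camera's view of the sensor's detection area) is shown only when the sensor detects a three-dimensional object, and a first image excluding the second camera's view is shown otherwise. All names (Frame, Camera, ObjectSensor, compose_quad_view, select_display_image) are illustrative assumptions for this sketch; the claim does not prescribe any particular implementation, image layout, or API.

    # Hypothetical sketch of the control method of claim 1.
    # Names and structure are illustrative assumptions, not the patented implementation.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Frame:
        """A single captured image from one camera."""
        pixels: bytes
        source: str  # e.g. "first", "second", "third", "fourth"


    class Camera:
        """One of the four cameras, each covering a distinct out portion of the body."""

        def __init__(self, name: str):
            self.name = name

        def capture(self) -> Frame:
            # Placeholder: a real system would read a frame from the imager here.
            return Frame(pixels=b"", source=self.name)


    class ObjectSensor:
        """Sensor with a detection area at the second out portion of the body."""

        def detect_object(self) -> bool:
            # Placeholder: True when a three-dimensional object is present
            # within the detection area.
            return False


    def compose_quad_view(frames: List[Frame]) -> Frame:
        """Builds the 'second one image': one screen with first, second, third,
        and fourth regions, each corresponding to one camera image."""
        return Frame(pixels=b"".join(f.pixels for f in frames), source="quad")


    def select_display_image(cameras: List[Camera], sensor: ObjectSensor) -> Frame:
        """Selects what to display, per the claimed switching:
        - object detected -> composite image whose second region includes at
          least part of the detection area and of the detected object;
        - no object       -> a first image that excludes the second camera's images."""
        if sensor.detect_object():
            return compose_quad_view([cam.capture() for cam in cameras])
        # First one image: excludes the second images captured by the second camera.
        return cameras[0].capture()

Under these assumptions, the essential behavior is that the second camera's images, which cover the sensor's detection area, appear on the screen only while the sensor reports a three-dimensional object at the second out portion of the body.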