US 11,900,234 B2
System and method for designing efficient super resolution deep convolutional neural networks by cascade network training, cascade network trimming, and dilated convolutions
Haoyu Ren, San Diego, CA (US); Mostafa El-Khamy, San Diego, CA (US); and Jungwon Lee, San Diego, CA (US)
Assigned to Samsung Electronics Co., Ltd.
Filed by Samsung Electronics Co., Ltd., Gyeonggi-do (KR)
Filed on Aug. 31, 2020, as Appl. No. 17/007,739.
Application 17/007,739 is a continuation of application No. 15/655,557, filed on Jul. 20, 2017, granted, now Pat. No. 10,803,378.
Claims priority of provisional application 62/471,816, filed on Mar. 15, 2017.
Prior Publication US 2020/0401870 A1, Dec. 24, 2020
Int. Cl. G06N 20/00 (2019.01); G06N 3/04 (2023.01); G06N 3/08 (2023.01); G06N 3/082 (2023.01); G06N 3/045 (2023.01)
CPC G06N 3/04 (2013.01) [G06N 3/045 (2023.01); G06N 3/08 (2013.01); G06N 3/082 (2013.01); G06N 20/00 (2019.01)] 26 Claims
OG exemplary drawing
 
1. An apparatus for generating a convolutional neural network (CNN), the apparatus comprising:
one or more non-transitory computer-readable media; and
at least one processor which, when executing instructions stored on the one or more non-transitory computer-readable media, performs the steps of:
starting cascade training on the CNN; and
inserting one or more layers into the CNN where a training error converges for each stage of the cascade training, until the training error is less than a threshold.
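The cascade-training loop of claim 1 can be illustrated with a minimal sketch. Everything here is illustrative: the patent does not prescribe an API, so the names (`cascade_train`, `train_stage`), the convergence test (change in error below a small delta), and the stage/layer bookkeeping are all assumptions made for this example, not the claimed implementation.

```python
def cascade_train(initial_layers, train_stage, error_threshold,
                  convergence_delta=1e-3, layers_per_insert=1, max_stages=50):
    """Hypothetical sketch of cascade training per claim 1: train the
    current CNN; when a stage's training error converges, insert one or
    more layers; repeat until the error is less than `error_threshold`.

    `train_stage(num_layers)` is an assumed callback that trains the
    current network and returns its training error.
    """
    num_layers = initial_layers
    prev_error = float("inf")
    for stage in range(max_stages):
        error = train_stage(num_layers)   # train the current network
        if error < error_threshold:       # stop: target error reached
            return num_layers, error
        # Convergence heuristic (an assumption): the stage has converged
        # when the error improvement falls below `convergence_delta`.
        if prev_error - error < convergence_delta:
            num_layers += layers_per_insert   # insert layer(s) into the CNN
            prev_error = float("inf")         # start a new training stage
        else:
            prev_error = error
    return num_layers, error
```

For instance, with a toy `train_stage` whose error shrinks as layers are added (e.g. `lambda n: 1.0 / n`), the loop keeps inserting layers at each converged stage until the error drops below the threshold.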