US 11,928,574 B2
Neural architecture search with factorized hierarchical search space
Mingxing Tan, Newark, CA (US); Quoc Le, Sunnyvale, CA (US); Bo Chen, Pasadena, CA (US); Vijay Vasudevan, Los Altos Hills, CA (US); and Ruoming Pang, New York, NY (US)
Assigned to GOOGLE LLC, Mountain View, CA (US)
Filed by Google LLC, Mountain View, CA (US)
Filed on Jan. 13, 2023, as Appl. No. 18/154,321.
Application 18/154,321 is a continuation of application No. 17/495,398, filed on Oct. 6, 2021, abandoned.
Application 17/495,398 is a continuation of application No. 16/258,927, filed on Jan. 28, 2019, granted, now Pat. No. 11,531,861, issued on Dec. 20, 2022.
Claims priority of provisional application 62/756,254, filed on Nov. 6, 2018.
Prior Publication US 2023/0244904 A1, Aug. 3, 2023
This patent is subject to a terminal disclaimer.
Int. Cl. G06N 3/04 (2023.01); G06F 17/15 (2006.01); G06N 3/044 (2023.01); G06N 3/045 (2023.01); G06N 3/084 (2023.01); G06N 20/10 (2019.01)
CPC G06N 3/04 (2013.01) [G06F 17/15 (2013.01); G06N 3/044 (2023.01); G06N 3/084 (2013.01); G06N 20/10 (2019.01); G06N 3/045 (2023.01)] 20 Claims
OG exemplary drawing
 
1. A computing system, comprising:
one or more processors; and
one or more non-transitory computer-readable media that store:
a machine-learned convolutional neural network; and
instructions that, when executed by the one or more processors, cause the computing system to employ the machine-learned convolutional neural network to process input image data to output an inference;
wherein the machine-learned convolutional neural network comprises a plurality of convolutional blocks arranged in a sequence one after the other;
wherein the plurality of convolutional blocks comprise two or more convolutional blocks that each perform an inverted bottleneck convolution to produce an output; and
wherein at least two of the two or more convolutional blocks apply convolutional kernels that have different respective kernel sizes to respectively perform the inverted bottleneck convolution.
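For orientation, the architecture recited in claim 1 is a convolutional network built from a sequence of inverted bottleneck blocks (expand with a 1×1 convolution, apply a depthwise convolution, then project back down with a 1×1 convolution), where at least two blocks use depthwise kernels of different sizes. The following is a minimal PyTorch sketch of that structure; the class name, expansion ratio, channel widths, strides, and the particular kernel sizes (3 and 5) are illustrative assumptions, not limitations taken from the patent.

```python
# Minimal sketch of a sequence of inverted bottleneck blocks in which at
# least two blocks apply depthwise kernels of different sizes.
# All names, channel counts, and the expansion ratio are illustrative.
import torch
import torch.nn as nn


class InvertedBottleneck(nn.Module):
    """Expand (1x1) -> depthwise conv (per-block kernel size) -> project (1x1)."""

    def __init__(self, in_ch, out_ch, kernel_size, expand_ratio=6, stride=1):
        super().__init__()
        mid_ch = in_ch * expand_ratio
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 expansion to a wider intermediate representation
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # depthwise convolution; the kernel size differs across blocks
            nn.Conv2d(mid_ch, mid_ch, kernel_size, stride=stride,
                      padding=kernel_size // 2, groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back down (the "bottleneck")
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out


# Blocks arranged in a sequence one after the other; two of the inverted
# bottleneck blocks apply different kernel sizes (3x3 vs. 5x5).
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),            # stem convolution
    InvertedBottleneck(16, 16, kernel_size=3),            # 3x3 depthwise
    InvertedBottleneck(16, 24, kernel_size=5, stride=2),  # 5x5 depthwise
    InvertedBottleneck(24, 24, kernel_size=3),
)

features = net(torch.randn(1, 3, 224, 224))  # process input image data
print(features.shape)
```

In a complete model of the kind the claim contemplates, such a feature extractor would be followed by a head (for example, pooling and a linear layer) to output an inference; that head is omitted here for brevity.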