Last updated (UTC): 2025-08-18.

# Prebuilt search spaces

A Neural Architecture Search search space is key to achieving good performance.
It defines all of the potential architectures or parameters to explore and search.
Neural Architecture Search provides a set of default search spaces in the
[`search_spaces.py`](https://github.com/google/vertex-ai-nas/blob/main/search_spaces.py)
file:

- `Mnasnet`
- `Efficientnet_v2`
- `Nasfpn`
- `Spinenet`
- `Spinenet_v2`
- `Spinenet_mbconv`
- `Spinenet_scaling`
- `Randaugment_detection`
- `Randaugment_segmentation`
- `AutoAugmentation_detection`
- `AutoAugmentation_segmentation`

| **Note:** We publish detailed verification results only for the MnasNet and SpineNet search spaces, in the [MnasNet classification notebook](https://github.com/google/vertex-ai-nas/blob/main/notebooks/vertex_nas_classification_tfvision.ipynb) and the [SpineNet object detection notebook](https://github.com/google/vertex-ai-nas/blob/main/notebooks/vertex_nas_detection_tfvision.ipynb). The code for the remaining search spaces is based on publications, but we don't publish verification results for those search spaces.
| These should be used *as an example only and not for benchmarking*.

In addition, we also provide the following search space examples:

- [Lidar search space for 3D point clouds](https://github.com/google/vertex-ai-nas/blob/main/notebooks/vertex_nas_pointpillars_tfvision.ipynb)
- [PyTorch 3D medical image segmentation search space example](https://github.com/google/vertex-ai-nas/blob/main/third_party/medical_3d/medical_3d_search_spaces.py)
- [PyTorch MnasNet search space example](https://github.com/google/vertex-ai-nas/blob/main/notebooks/vertex_pytorch_mnasnet_tutorial.ipynb)

The Lidar notebook publishes its verification results in the notebook itself.
The rest of the PyTorch search space code is to be used *as an example only
and not for benchmarking*.

Each of these search spaces has a specific use case:

- The MnasNet search space is used for image classification and object detection tasks and is based on the [MobileNetV2](https://arxiv.org/abs/1801.04381) architecture.
- The EfficientNetV2 search space is used for object detection tasks. EfficientNetV2 adds new operations, such as Fused-MBConv. See the [EfficientNetV2 paper](https://arxiv.org/abs/2104.00298) for more details.
- The NAS-FPN search space is typically used for object detection.
  You can find a detailed description in the [NAS-FPN section](#nas-fpn).
- The SpineNet family of search spaces includes `spinenet`, `spinenet_v2`,
  `spinenet_mbconv`, and `spinenet_scaling`.
  These are typically used for object detection as well.
  You can find a detailed description of SpineNet in the
  [SpineNet section](#spinenet).

  - `spinenet` is the base search space in this family, offering both *residual* and *bottleneck* block candidates during the search.
  - `spinenet_v2` is a smaller version of `spinenet` that offers only *bottleneck* block candidates during the search, which can help with faster convergence.
  - `spinenet_mbconv` is a version of `spinenet` for mobile platforms that uses *mbconv* block candidates during the search.
  - `spinenet_scaling` is typically used *after* a good architecture has been found with the `spinenet` search space, to scale that architecture up or down to meet latency requirements. This search is done over settings such as image size, number of filters, filter size, and number of block repeats.
- The [RandAugment](#randaugment) and [AutoAugment](#autoaugment) search
  spaces let you search for the optimum
  data augmentation operations for detection and segmentation, respectively.
  Note: Data augmentation is typically searched for *after* a good model has
  already been found.
  You can find a detailed description of data augmentation in the
  [data augmentation section](#data-augmentation).

- The [Lidar search space for 3D point clouds](https://github.com/google/vertex-ai-nas/blob/main/notebooks/vertex_nas_pointpillars_tfvision.ipynb)
  shows an end-to-end search over the featurizer, backbone, decoder, and detection head.

- The [PyTorch 3D medical image segmentation search space example](https://github.com/google/vertex-ai-nas/blob/main/third_party/medical_3d/medical_3d_search_spaces.py)
  shows a search over the UNet encoder and UNet decoder.

Most of the time, these default search spaces are sufficient.
However, if needed, you can [customize](https://github.com/google/vertex-ai-nas/blob/main/third_party/tutorial/vertex_tutorial2.md)
the existing search spaces, or add a new one as required, using the PyGlove
library. See the [example code that specifies the NAS-FPN search space](https://github.com/google/vertex-ai-nas/blob/main/nas_architecture/tunable_nasfpn_search_space.py#L87).

MnasNet and EfficientNetV2 search spaces
----------------------------------------

The MnasNet and EfficientNetV2 search spaces define different `backbone` building
options, such as `ConvOps`, `KernelSize`, and `ChannelSize`. The `backbone` can be
used for different tasks, such as classification and detection.

NAS-FPN search space
--------------------

The NAS-FPN search space defines the search
space in the FPN layers that connect different levels of features for object
detection, as shown in the following figure.

SpineNet search space
---------------------

The SpineNet search space enables searching
for a backbone with scale-permuted intermediate features and cross-scale
connections. It achieves state-of-the-art one-stage object detection performance
on COCO with 60% less computation, and it outperforms ResNet-FPN counterparts by 6%
AP.
The following figure shows the connections of backbone layers in the searched
SpineNet-49 architecture.

Data augmentation search space
------------------------------

After the best architecture has been found, you can also search for the
best data augmentation policy. Data augmentation can further improve the
accuracy of the previously searched architecture.

The Neural Architecture Search platform provides RandAugment and AutoAugment
search spaces for two tasks: (a) `randaugment_detection` for object detection
and (b) `randaugment_segmentation` for segmentation. The search internally
chooses among a list of augmentation operations, such as auto-contrast, shear,
or rotation, to be applied to the training data.

### RandAugment search space

The RandAugment search space is configured by two parameters: (a) `N`,
the number of successive augmentation operations applied to
an image, and (b) `M`, the magnitude shared by *all* of those operations. For
example, the following image shows N=2 operations (shear and
contrast) applied to an image with different magnitudes M.

For a given value of N, the operations are picked at random from the
operation bank. The augmentation search finds the best values of `N` and `M` for
the training job at hand. The search doesn't use a proxy task and therefore
runs the training jobs to the end.

### AutoAugment search space

The AutoAugment search space lets you search for the `choice`, `magnitude`, and
`probability` of operations to optimize your model training.
Unlike RandAugment, the AutoAugment search space lets you search for the choice
of operations in the policy.
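To make the difference between the two augmentation search spaces concrete, the following is a minimal, hypothetical Python sketch of what one *candidate policy* from each space looks like. It is not the platform's actual API; the operation names, magnitude range, and probability values are illustrative assumptions only.

```python
import random

# Illustrative operation bank; the platform's real operation bank differs.
OPS = ["auto_contrast", "shear", "rotate", "contrast", "brightness"]

def sample_randaugment_policy(rng):
    """RandAugment candidate: only N and M are searched.

    The N operations themselves are drawn at random from the operation
    bank at training time, so they are not part of the policy.
    """
    return {
        "N": rng.choice([1, 2, 3]),       # number of successive operations
        "M": rng.choice(range(1, 11)),    # one magnitude shared by ALL ops
    }

def sample_autoaugment_policy(rng, num_ops=2):
    """AutoAugment candidate: choice, magnitude, and probability are
    searched independently for each operation slot in the policy."""
    return [
        {
            "op": rng.choice(OPS),                      # which operation
            "magnitude": rng.choice(range(1, 11)),      # per-op magnitude
            "probability": rng.choice([0.2, 0.5, 0.8]), # per-op apply chance
        }
        for _ in range(num_ops)
    ]
```

In a real search, the controller (rather than `random`) proposes these candidates and scores each one by training with it; the sketch only shows that a RandAugment candidate is two scalars, while an AutoAugment candidate additionally fixes which operations are used.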