Core ML Model Size Calculator
Estimate your Core ML .mlpackage size and App Store binary impact from parameter count and numeric precision.
Quick presets
Enter the parameter count in millions, e.g. ResNet-50 ≈ 25, GPT-2 ≈ 124, Mistral-7B ≈ 7000.
Use Core ML Tools for Int8 palettization or Int4 linear quantization.
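A minimal sketch of that compression workflow, assuming the `coremltools.optimize.coreml` API introduced in Core ML Tools 7; the model path is a placeholder, and the exact config arguments should be verified against the current Core ML Tools documentation:

```python
# Hedged sketch: compressing a Core ML model with the Core ML Tools
# optimize API. "MyModel.mlpackage" is a placeholder path.
import coremltools as ct
import coremltools.optimize.coreml as cto

model = ct.models.MLModel("MyModel.mlpackage")

# Int8 palettization: cluster each weight tensor into a 256-entry
# lookup table, so each parameter is stored as a 1-byte index.
config = cto.OptimizationConfig(
    global_config=cto.OpPalettizerConfig(mode="kmeans", nbits=8)
)
model_int8 = cto.palettize_weights(model, config)
model_int8.save("MyModel_int8.mlpackage")

# Linear quantization (Int8 by default; newer coremltools releases
# also accept an int4 dtype -- check the current docs).
config = cto.OptimizationConfig(
    global_config=cto.OpLinearQuantizerConfig(mode="linear_symmetric")
)
model_quant = cto.linear_quantize_weights(model, config)
model_quant.save("MyModel_quant.mlpackage")
```

Both calls return a new compressed model; the original Float16/Float32 weights are left untouched on disk until you save over them.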
Estimated .mlpackage size
21.0 MB
50% smaller than Float32 (42.0 MB)
Includes ~10% overhead for model weights index, metadata, and .mlpackage structure.
Precision comparison — 10M parameters
| Precision | Estimated size | Bytes / param | vs Float32 |
|---|---|---|---|
| Float32 (default) | 42.0 MB | 4 B | baseline |
| Float16 | 21.0 MB | 2 B | −50% |
| Int8 (quantized) | 10.5 MB | 1 B | −75% |
| Int4 (quantized) | 5.2 MB | 0.5 B | −88% |
Integration recommendation
Good range for app bundling. Suitable for Neural Engine inference on A14+.
How this calculator works
The formula: size (MB) = params_in_millions × 10⁶ × bytes_per_param × 1.10 ÷ 1,048,576
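The formula above can be sketched directly; note that dividing by 1,048,576 (1024²) means the calculator reports binary megabytes:

```python
def mlpackage_size_mb(params_millions: float, bytes_per_param: float,
                      overhead: float = 0.10) -> float:
    """Estimate .mlpackage size in MB from parameter count.

    Mirrors the calculator's formula:
    params × 10⁶ × bytes_per_param × 1.10 ÷ 1,048,576
    """
    raw_bytes = params_millions * 1e6 * bytes_per_param
    return raw_bytes * (1 + overhead) / 1_048_576

# Reproduce the 10M-parameter precision comparison:
for name, bpp in [("Float32", 4), ("Float16", 2), ("Int8", 1), ("Int4", 0.5)]:
    print(f"{name}: {mlpackage_size_mb(10, bpp):.1f} MB")
# → Float32: 42.0 MB, Float16: 21.0 MB, Int8: 10.5 MB, Int4: 5.2 MB
```

Adjust the `overhead` argument upward (0.15–0.25) for complex multi-model or transformer pipelines, per the note below.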
| Precision | Bytes / parameter | Apple availability |
|---|---|---|
| Float32 | 4 bytes | All Core ML versions |
| Float16 | 2 bytes | A11 Bionic and later (Neural Engine) |
| Int8 (palettized/linear) | 1 byte | Core ML 5+ / iOS 15+ |
| Int4 (linear quantization) | 0.5 bytes | Core ML 7+ / iOS 17+ |
The 10% overhead estimate covers the .mlpackage directory structure, weights index, model spec, and metadata files. Actual overhead varies by model architecture; complex pipelines (multi-model, transformer-based, or with embedded preprocessing) may be 15–25% higher. Source: Core ML Tools — Optimizing Models.