
Core ML Model Size Calculator

Estimate your Core ML .mlpackage size and its App Store binary impact from parameter count and numeric precision.

Quick presets

Enter the parameter count in millions, e.g. ResNet-50 = 25, GPT-2 = 124, Mistral-7B = 7000.

Use Core ML Tools for Int8 palettization or linear Int4 quantization.
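As a sketch of what that compression step can look like with the coremltools weight-compression APIs (the model path is hypothetical, and the `dtype="int4"` parameter assumes a recent coremltools release; verify names against the current coremltools documentation):

```python
# Sketch only: compress an existing Core ML model with coremltools'
# optimize APIs. "MyModel.mlpackage" is a hypothetical path.
import coremltools as ct
import coremltools.optimize.coreml as cto

model = ct.models.MLModel("MyModel.mlpackage")

# Int8 palettization: cluster each weight tensor into a 256-entry lookup table.
palettize_config = cto.OptimizationConfig(
    global_config=cto.OpPalettizerConfig(mode="kmeans", nbits=8)
)
model_int8 = cto.palettize_weights(model, palettize_config)
model_int8.save("MyModel_Int8.mlpackage")

# Linear Int4 quantization of the weights.
quantize_config = cto.OptimizationConfig(
    global_config=cto.OpLinearQuantizerConfig(mode="linear_symmetric", dtype="int4")
)
model_int4 = cto.linear_quantize_weights(model, quantize_config)
model_int4.save("MyModel_Int4.mlpackage")
```

Running this requires coremltools and a real .mlpackage on disk; it is shown only to illustrate where palettization versus linear quantization enters the workflow.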

Estimated .mlpackage size (example: 10M parameters at Float16)

21.0 MB

50% smaller than Float32 (42.0 MB)

Includes ~10% overhead for the model weights index, metadata, and .mlpackage structure.

Precision comparison — 10M parameters

| Precision | Estimated size | Bytes / param | vs Float32 |
|---|---|---|---|
| Float32 (default) | 42.0 MB | 4 B | baseline |
| Float16 | 21.0 MB | 2 B | −50% |
| Int8 (quantized) | 10.5 MB | 1 B | −75% |
| Int4 (quantized) | 5.2 MB | 0.5 B | −88% |

Integration recommendation

Good range for app bundling. Suitable for Neural Engine inference on A14+.

How this calculator works

The formula: size_MB = (params × 10⁶ × bytes_per_param × 1.10) ÷ 1,048,576, where params is the parameter count in millions and 1.10 is the ~10% packaging overhead.
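The same arithmetic as a short Python sketch (the function name and the overhead parameter are illustrative, not part of any Core ML API):

```python
def mlpackage_size_mb(params_millions: float,
                      bytes_per_param: float,
                      overhead: float = 1.10) -> float:
    """Estimate .mlpackage size in MB (binary MB: 1,048,576 bytes).

    params_millions: parameter count in millions (e.g. 25 for ResNet-50)
    bytes_per_param: 4 (Float32), 2 (Float16), 1 (Int8), 0.5 (Int4)
    overhead: multiplier for the weights index, metadata, and package structure
    """
    return params_millions * 1e6 * bytes_per_param * overhead / 1_048_576

# Reproduce the comparison table for a 10M-parameter model:
for name, bpp in [("Float32", 4), ("Float16", 2), ("Int8", 1), ("Int4", 0.5)]:
    print(f"{name}: {mlpackage_size_mb(10, bpp):.1f} MB")
# → Float32: 42.0 MB, Float16: 21.0 MB, Int8: 10.5 MB, Int4: 5.2 MB
```

Raising `overhead` to 1.25 models the complex-pipeline case discussed below the tables.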

| Precision | Bytes / parameter | Apple availability |
|---|---|---|
| Float32 | 4 bytes | All Core ML versions |
| Float16 | 2 bytes | A11 Bionic and later (Neural Engine) |
| Int8 (palettized/linear) | 1 byte | Core ML 5+ / iOS 15+ |
| Int4 (linear quantization) | 0.5 bytes | Core ML 7+ / iOS 17+ |

The 10% overhead estimate covers the .mlpackage directory structure, weights index, model spec, and metadata files. Actual overhead varies by model architecture; complex pipelines (multi-model, transformer-based, or with embedded preprocessing) can run 15–25% instead. Source: Core ML Tools — Optimizing Models.

No data is sent to any server. All calculations run in your browser.