Insights / On-Device AI
Apple Foundation Models vs Core ML: Which One to Use
These are not competing frameworks. Foundation Models is a high-level API for Apple's on-device LLM. Core ML is the lower-level runtime that runs any model — including the models that power Foundation Models. Different layers, different use cases.
By Ehsan Azish · 3NSOFTS · March 2026
The architecture relationship
Core ML is the runtime that executes machine learning models on Apple hardware. It handles model loading, the inference pipeline, and hardware dispatch — routing execution to the Neural Engine, GPU, or CPU depending on the model and device.
Apple Foundation Models is a Swift API for a specific model: Apple's on-device language model. Under the hood, that model runs on Core ML and the Neural Engine. Foundation Models abstracts away model loading, prompt formatting, and output parsing — you get a Swift API for text generation.
The analogy: Core ML is the graphics card. Foundation Models is a high-level rendering engine that happens to use that card. You can use the card directly for custom work, or use the rendering engine when it does what you need.
What Foundation Models is for
Foundation Models is for natural language tasks where you want LLM-quality output without training a model: text summarization, structured extraction from unstructured text, question answering, and conversational features.
The framework's key capability for production apps is guided generation via the Generable protocol. You define a Swift struct conforming to Generable and the model generates output constrained to your schema — no string parsing, typed Swift output. This is what makes LLM integration reliable in production.
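A minimal sketch of guided generation, assuming a summarization use case — the struct name, field names, and prompt here are illustrative, not part of any shipping API surface beyond `@Generable`, `@Guide`, and `LanguageModelSession`:

```swift
import FoundationModels

// Hypothetical schema: the model's output is constrained to this shape.
@Generable
struct MeetingSummary {
    @Guide(description: "One-sentence summary of the meeting")
    var summary: String

    @Guide(description: "Action items extracted from the transcript")
    var actionItems: [String]
}

func summarize(_ transcript: String) async throws -> MeetingSummary {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this meeting transcript: \(transcript)",
        generating: MeetingSummary.self
    )
    return response.content  // Typed Swift value — no string parsing
}
```

Because the return type is a plain Swift struct, downstream code gets compile-time guarantees about the output shape rather than a free-form string to parse.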
Constraint: Foundation Models requires iOS 26+ and an A17 Pro chip or later, or any M-series chip. Apps targeting earlier OS versions or older hardware need a fallback path.
In production: offgrid:AI uses an on-device LLM (via llama.cpp Swift bindings) rather than Foundation Models to reach iOS 16+ devices. For apps targeting the current hardware generation, Foundation Models is the correct choice.
What Core ML is for
Core ML is for running any ML model — classification, regression, object detection, NLP, embeddings. Models are packaged in the .mlpackage format, bundled with the app, and loaded at runtime.
The primary use cases: classification where you have training data and need deterministic output, image analysis via the Vision framework, and NLP tasks like intent classification or entity extraction where a smaller specialized model outperforms a general LLM.
In production: Sorto uses Core ML for email classification — a custom trained model that categorizes messages into buckets the user defines. The inference is fast, deterministic, and works offline. A general LLM would be slower, harder to tune, and would produce less consistent bucketing.
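Loading a bundled model with the generic `MLModel` API looks roughly like this — `"EmailClassifier"` and the `"label"` output feature name are placeholders for whatever your compiled model and its schema actually define:

```swift
import Foundation
import CoreML

// Sketch: load a compiled model (.mlmodelc) bundled with the app.
func loadClassifier() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML dispatch to Neural Engine / GPU / CPU

    guard let url = Bundle.main.url(forResource: "EmailClassifier",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}

// Sketch: run a prediction and read a string label from the output.
// The feature name "label" depends on your model's output description.
func predictLabel(model: MLModel, input: MLFeatureProvider) throws -> String? {
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}
```

In practice Xcode generates a typed wrapper class from the `.mlpackage`, which replaces the string-keyed feature access above with strongly typed inputs and outputs.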
Decision matrix
Use the table below as a starting point. The right answer depends on your training data availability, OS version floor, and output determinism requirements.
| Use Case | Recommended |
|---|---|
| Text classification | Core ML (custom model) |
| Natural language generation | Foundation Models |
| Image analysis | Core ML (Vision + Core ML) |
| Chatbot / assistant features | Foundation Models |
| Custom trained model | Core ML |
| Zero training data | Foundation Models |
| iOS 16 support needed | Core ML only |
| Accuracy-critical classification | Core ML |
Can you use both in the same app?
Yes — and for complex AI-native apps, you often should. Sorto uses Core ML for email classification and is experimenting with Foundation Models for message summarization on supported devices. The classification model runs on all devices; the summarization feature falls back gracefully when Foundation Models is unavailable.
The architecture pattern: check SystemLanguageModel.default.isAvailable at runtime. If available, use Foundation Models for the LLM task. If not, degrade to a Core ML model or omit the feature. Never block core functionality on Foundation Models availability.
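The availability gate can be sketched as follows — `summarizeWithCoreML` is a hypothetical stand-in for your own Core ML fallback, and the switch over `SystemLanguageModel.default.availability` is the slightly richer form of the `isAvailable` check, since it also reports why the model is unavailable:

```swift
import FoundationModels

// Hypothetical app-specific fallback using a bundled Core ML model.
func summarizeWithCoreML(_ text: String) -> String? {
    nil  // placeholder
}

func summarize(_ text: String) async -> String? {
    switch SystemLanguageModel.default.availability {
    case .available:
        // Foundation Models path: on-device LLM summarization.
        let session = LanguageModelSession()
        return try? await session.respond(to: "Summarize: \(text)").content
    case .unavailable(let reason):
        // e.g. device not eligible, Apple Intelligence disabled, model downloading.
        print("Foundation Models unavailable: \(reason)")
        return summarizeWithCoreML(text)  // degrade to the Core ML path
    }
}
```

The key design point is that the feature degrades rather than disappears: the caller gets an optional result either way and never blocks on Foundation Models availability.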
Building on-device AI into your iOS app?
The On-Device AI Integration service covers model selection, Core ML pipeline, Foundation Models where applicable, and offline-first data integration.
