Source link : https://tech365.info/ollama-now-runs-quicker-on-macs-because-of-apples-mlx-framework/

Ollama, the popular app for running AI models locally on a computer, has released an update that takes advantage of Apple’s own machine learning framework, MLX. The result is a hefty speed boost on Macs with Apple silicon.

According to Ollama, the new version processes prompts around 1.6 times faster (prefill speed) and nearly doubles the speed at which it generates responses (decode speed). Macs with M5-series chips are said to see the biggest improvements, thanks to Apple’s new GPU Neural Accelerators.

The update also includes smarter memory management, which should make AI-powered coding tools and chat assistants feel noticeably more responsive during extended use.

Ollama says the new performance boost should especially benefit macOS users who run personal assistants like OpenClaw or coding agents like Claude Code, OpenCode, or Codex.

The preview release is available to download as Ollama 0.19 – just make sure you have a Mac with more than 32GB of unified memory to run it. Support is currently limited to Alibaba’s Qwen3.5, but Ollama says support for more AI models is planned.


—-

Author : tech365

Publish date : 2026-03-31 10:36:00

Copyright for syndicated content belongs to the linked Source.

—-

