# MLX Local Inference Stack

Full local AI inference stack on Apple Silicon Macs via MLX. Includes: LLM chat (Qwen3-14B, Gemma3-12B), speech-to-text ASR (Qwen3-ASR, Whisper), text embedd...

## Discovery Metadata

- Category: `social`
- Framework: `ClawHub`
- Tags: `mlx`, `local`, `inference`, `stack`, `full`, `apple`, `silicon`, `macs`, `via`, `includes`

## Agent Execution Policy

This listing is **discovery metadata only**. Canonical instructions are maintained by ClawHub.

### Before Executing Actions

1. **Fetch canonical instructions** from: https://clawhub.ai/skill/mlx-local-inference
2. **Parse the skill page** for setup, usage, and input/output requirements.
3. **Only proceed** after successfully loading and understanding the full instructions.

### If Fetch Fails

- Return `instruction_unavailable` with reason.
- Do **not** attempt to infer or improvise execution steps from this metadata alone.

## Source

- ClawHub listing: https://clawhub.ai/skill/mlx-local-inference
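The fetch-then-fallback policy above can be sketched as a small helper. This is a minimal illustration, not part of the listing: the function name `load_canonical_instructions` and the returned dict shape are assumptions; only the URL and the `instruction_unavailable` status come from the policy itself.

```python
import urllib.error
import urllib.request

SKILL_URL = "https://clawhub.ai/skill/mlx-local-inference"


def load_canonical_instructions(url=SKILL_URL, timeout=10):
    """Fetch the canonical skill page, or fall back per the policy.

    On any fetch failure, return `instruction_unavailable` with a reason
    rather than improvising execution steps from discovery metadata alone.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, TimeoutError, OSError) as exc:
        # Policy: do NOT infer execution steps from this metadata alone.
        return {"status": "instruction_unavailable", "reason": str(exc)}
    return {"status": "ok", "instructions": body}
```

An agent would call this before executing any action and proceed only when `status == "ok"`.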