Here is the article if someone is interested. Measured with a wattmeter.
famstack.dev/guides/mac-m...
Nice. I measured my Mac Studio M1 Max at 8W idle, 30-50W under full LLM inference, 12W average over one week. Our ancient entertainment system draws more on standby. Apple Silicon is great for home server power efficiency, especially in Germany with these crazy energy prices.
I used Ollama. Wanted to switch to LM Studio. But it turns out... it's complicated
famstack.dev/guides/mlx-v...
The local LLM community is quite silent here unfortunately :-/
Everyone still hanging around at X?
#LocalAI #AppleSilicon #Mac #SelfHosted
Reddit:
www.reddit.com/r/LocalLLaMA...
The initial article
famstack.dev/guides/mlx-v...
I am going to update the article with the insights from the community soon
Wow! My MLX vs llama.cpp benchmark hit #9 on r/LocalLLaMA today. Did not expect that.
Takeaway: benchmark your actual scenarios, don't rely on just the tok/s counter in your UI. Ran into a caching bug specific to Qwen 3.5 (35B-A3B) on MLX. Effective tokens/s is what we actually experience.
#MLX #LlamaCpp #Qwen
The 1.67x claim: is that generation speed, or effective throughput including TTFT? I benchmarked MLX vs llama.cpp, and MLX reported 2x faster generation, but effective throughput was actually lower for most workloads because prefill was way slower. What matters is effective tok/s, not just the tok/s counter for generation.
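The gen-speed vs effective-throughput distinction is just arithmetic; here is a minimal sketch with hypothetical numbers (not figures from the benchmark):

```python
# Effective tok/s folds prompt processing (TTFT) into the rate,
# instead of reporting only the generation-phase counter most UIs show.

def generation_tps(gen_tokens: int, gen_seconds: float) -> float:
    """The tok/s number most UIs display: generation phase only."""
    return gen_tokens / gen_seconds

def effective_tps(gen_tokens: int, ttft_seconds: float, gen_seconds: float) -> float:
    """Tokens actually delivered per wall-clock second, prefill included."""
    return gen_tokens / (ttft_seconds + gen_seconds)

# Hypothetical: backend A generates 2x faster but prefills much slower.
a = effective_tps(gen_tokens=500, ttft_seconds=20.0, gen_seconds=10.0)  # 50 tok/s gen
b = effective_tps(gen_tokens=500, ttft_seconds=2.0, gen_seconds=20.0)   # 25 tok/s gen
print(round(a, 1), round(b, 1))  # prints "16.7 22.7": A's 2x gen speed loses overall
```

Run a few of your own prompt/output length mixes through this; long prompts with short answers are where the prefill gap dominates.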
Next thing I buy: A switch for the Bose system. The Mac server is going to save money then 😅
Here is the whole drill-down
famstack.dev/guides/mac-m...
Wattmeter showing 8.5W power consumption by a Mac Studio M1 Max
Bought a wattmeter last week. Measured our ancient Bose 5.1 system in standby: 30 watts 🫥 My Mac Studio M1 Max running 25 Docker containers and local AI inference? 5-7W idle, 11.8W average. Old hardware on standby draws more than a full home server stack.
#selfhosted #homelab #AppleSilicon #localAI
I am going to check it out. Thank you!
I'll let you know!
Hi @getmeos.com bot. How is life? What did you accomplish today? For now, we use Tailscale to give tunneled access to our family server. Planning to connect our local instance to a hosted VPS, though. Just an idea. Maybe we then replicate certain galleries to the remotely accessible instance.
Building a self-hosted home server for my family on a Mac Studio / Mac Mini. Photos, documents, local AI. No cloud, nothing leaves the house. Documenting everything along the way. Follow to join the pain.
#selfhosted #homeserver #localai #privacy