Why I Don't Use Apple
My last Apple device was an iPhone 3GS. It stopped charging with its own cable. I never went back.

My last Apple device was an iPhone 3GS. I loved it at first — it felt premium, I was sold on it, and I played Angry Birds on it constantly. But it lasted less than a year. The phone stopped charging with its own original cable and the wall adapter Apple shipped in the box. Same cable, same adapter, nothing changed — yet the screen would show "This accessory is not supported by this device," and the phone simply wouldn't charge anymore.
The Genius Bar Moment
So I went to an Apple Store and showed them that the charger wasn't working. They wouldn't fix anything — not the phone, not the charger. Instead, they told me to book a Genius Bar appointment and pay another $100 or so on top of an overpriced phone that broke because of Apple's own accessory. It wasn't a dramatic boycott or a philosophical stance. It was a simple question: why should I pay for a device whose own charger stopped working, and then pay again for the company to fix its own mess? I switched to Android and never thought about going back.
The Sales Pitch Is Incredible Though
No one sells technology better than Apple. The keynotes are cinematic, the product shots are flawless, and if you don't dig into the technical details — and most people don't — you'll walk out of the Apple Store believing you've acquired the world's most sophisticated machine. The marketing is the product. The hardware is secondary; the presentation, the pricing, and the perceived value are all engineered to make everything feel premium.
The "Pro" That Isn't
Apple sells laptops starting at $1,699 and scaling past $7,000 with memory and storage soldered in, impossible to upgrade. They call this "Pro" and market it as a revolution. That's not "Pro." That's a subscription with a unibody shell. The upgrade strategy is simple: buy a new machine next year, because you can't swap the GPU, you can't add RAM, and you can't change the storage without replacing the whole computer. A forced upgrade cycle like this turns hardware into electronic waste faster, while upgradeable components save money and extend a machine's lifespan.
Then I Started Building AI Systems
When I started learning how to run large models on my own machine, I needed to understand the silicon in depth — real technical data, not the beautiful numbers from Apple's presentations. Apple's unified memory architecture is genuinely interesting: an M4 Max with 128 GB can hold a 70B-parameter model at Q4 quantization entirely in memory, whereas an RTX 4090 with 24 GB of VRAM can't do it without offloading layers to system RAM, which significantly degrades throughput. But even where Apple Silicon wins on capacity, the GPU ecosystem is the problem. The CUDA stack — PyTorch, DeepSpeed, FlashAttention — is optimized for NVIDIA first, and the toolchain gap is too wide for the work I do.
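The memory argument above comes down to simple arithmetic. Here's a minimal sketch of the rule of thumb I use — the function name and the ~20% overhead factor for KV cache and runtime are my own illustrative assumptions, not measured figures:

```python
# Rough rule of thumb: does a quantized model fit in a given memory pool?
# Assumptions (illustrative): weights take params * bits/8 bytes, plus a
# ~20% overhead factor for KV cache, activations, and runtime buffers.

def fits_in_memory(params_b: float, bits_per_weight: float,
                   mem_gb: float, overhead: float = 1.2) -> bool:
    """True if a model of `params_b` billion parameters, quantized to
    `bits_per_weight` bits, plausibly fits in `mem_gb` gigabytes."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weights_gb * overhead <= mem_gb

# 70B at Q4 (~4 bits/weight): ~35 GB of weights, ~42 GB with overhead.
print(fits_in_memory(70, 4, 128))  # M4 Max, 128 GB unified memory -> True
print(fits_in_memory(70, 4, 24))   # RTX 4090, 24 GB VRAM -> False
```

That ~42 GB footprint is why the M4 Max runs the model natively while the 4090 has to offload layers — and offloading is where the throughput dies.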
The Build I'd Actually Want
If someone handed me a top-spec MacBook Pro, I'd thank them. Then I'd sell it.
The build costs more than a base MacBook Pro. But it costs less than a maxed-out one, and the difference in what I can do with it is enormous:
| Component | Spec |
|---|---|
| CPU | AMD Ryzen 9 9950X |
| GPU | RTX 4090 (24 GB VRAM) |
| RAM | 64 GB DDR5 |
| Storage | 4 TB NVMe Gen4 |
| Cooling | 360mm AIO liquid cooler |
| Upgrade path | Everything |
For AI workloads, tensor core throughput is what matters. The RTX 4090 delivers around 165 TFLOPS at FP16 dense (330 with structured sparsity, but virtually no LLM inference workload uses 2:4 sparsity, so the dense figure is the one that counts) — and those numbers actually move tokens. Any spec from NVIDIA or Apple is just a marketing number until you compare it against a real workload. I can fine-tune models up to ~13B with QLoRA, train smaller models from scratch, and run inference on anything that fits in 24 GB of VRAM. When something doesn't fit, I upgrade the GPU, not the entire machine. I don't need a unified memory pool, macOS, or system magic. I detailed this transition in "From Beelink to Rack."
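The "~13B with QLoRA" figure comes from a similar back-of-envelope calculation. This sketch uses my own illustrative assumptions — base weights at roughly 0.5 bytes per parameter in 4-bit, plus a flat ~6 GB budget for activations, LoRA adapters, optimizer states, and CUDA overhead at modest batch sizes — so treat it as a rough planning tool, not a benchmark:

```python
# Back-of-envelope VRAM estimate for QLoRA fine-tuning.
# Assumptions (illustrative, not measured): 4-bit base weights at
# ~0.5 bytes/param, plus a flat ~6 GB for activations, adapters,
# optimizer states, and CUDA runtime overhead.

def qlora_vram_gb(params_b: float, overhead_gb: float = 6.0) -> float:
    base_weights_gb = params_b * 0.5  # 4-bit quantized base model
    return base_weights_gb + overhead_gb

for size in (7, 13, 70):
    est = qlora_vram_gb(size)
    verdict = "fits" if est <= 24 else "does not fit"
    print(f"{size:>3}B -> ~{est:.1f} GB ({verdict} in 24 GB)")
```

A 13B model lands around 12-13 GB by this estimate, comfortably inside 24 GB; a 70B model blows past it, which is exactly the point where I'd swap the GPU rather than the machine.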
But People Love It
And I get it. My family uses Apple devices — devices I bought myself. The iPhone cameras are genuinely great, even if Samsung, Pixel, and brands like Huawei and Xiaomi have caught up or surpassed them on stills this generation. Retina displays are gorgeous. The ecosystem is seamless if you're all-in. If someone asks me what phone to buy, I won't push them away from Apple — I'll just be straightforward about where it shines and where it doesn't. I simply don't use Apple as my daily driver. That said, some markets are heavily Apple-centric, and you can't ignore that as a developer, so I keep Apple devices around for practical reasons: WebKit testing, Safari QA, Xcode deployment. I know I'm biased. I'm just less enthusiastic about the places where it doesn't excel.
The "Totally Fair" Comparison
Here is my completely unbiased breakdown. I assure you.
| | MacBook Pro / iMac | Custom PC / Laptops (Lenovo, ASUS, etc.) |
|---|---|---|
| Build quality | Beautiful. Genuinely premium. | Varies wildly. Mine has RGB though. That counts. |
| Display | Retina is gorgeous. Designers love it. | You pick your panel. 4K OLED? Ultrawide? Mini-LED? Your call. |
| Upgrade path | Buy a new one next year. | Swap a part. Keep the rest. Desktop and some laptops. |
| RAM | Soldered. Choose wisely at checkout. | Slotted on desktops, often on laptops too. Change anytime. |
| GPU compute | Fine for Figma. | Desktop: fine for fine-tuning up to 13B. Laptop: depends. |
| Repairability | Visit the Genius Bar (expensive) | Desktops: fully repairable. Laptops: ThinkPads are great, others vary. |
| Ecosystem | AirDrop, Handoff, iMessage. Seamless if all-in. | USB-C, Bluetooth, open standards. Less magic, more freedom. |
| Privacy | "What happens on iPhone stays on iPhone" (terms apply) | You control telemetry. Or don't. Your call. |
| Pricing | Starts at $1,699. Maxes at $7,349. For a laptop. | Same budget, multiple times the specs. Or half the budget, same specs. |
| AI workloads | Unified memory is good for large model inference. | CUDA ecosystem is good for everything else. |
| Software support | macOS natively. You can virtualize Windows, but Xcode needs macOS. | Windows, Linux, or both. Run whatever you want. |
| Marketing | World-class. Oscar-worthy keynotes. | None. The specs speak for themselves. |
| Target audience | People who want things to just work. | People who want to know why things work. |
I said I was biased.
The Real Innovation
Apple's real innovation wasn't silicon. It wasn't the M-series chips or the Neural Engine. It was convincing millions of people that a closed, non-upgradeable computer at a premium price is the pinnacle of professional-grade hardware. I understand what Apple Silicon can do, and the benefits of unified memory are real — it's genuinely good engineering. I still wouldn't build a workstation around it, because the CUDA ecosystem gap for the work I do is too wide. It's good technology. It's just not built for my job.